// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hendrik Weisser $
// $Authors: Dilek Dere, Mathias Walzer, Petra Gutenbrunner, Hendrik Weisser, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/SearchEngineBase.h>
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <OpenMS/ANALYSIS/ID/PeptideIndexing.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/METADATA/SpectrumMetaDataLookup.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/SYSTEM/JavaInfo.h>
#include <QProcessEnvironment>
#include <QLockFile>
#include <algorithm>
#include <fstream>
#include <map>
#include <cstddef>
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MSGFPlusAdapter MSGFPlusAdapter
@brief Adapter for the MS-GF+ protein identification (database search) engine.
<CENTER>
<table>
<tr>
<th ALIGN="center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → MSGFPlusAdapter → </td>
<th ALIGN="center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes @n (or another centroiding tool)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
MS-GF+ must be installed before this wrapper can be used. Please make sure that Java and MS-GF+ are working.@n
At the time of writing, MS-GF+ can be downloaded from https://github.com/MSGFPlus/msgfplus/releases.
The following MS-GF+ version is required: <b>MS-GF+ 2019/07/03</b>. Older versions will not work properly, giving
an error: <em>[Error] Invalid parameter: -maxMissedCleavages.</em>
Input spectra for MS-GF+ have to be centroided; profile spectra will raise an error in the adapter.
The first time MS-GF+ is applied to a database (FASTA file), it will index the file contents and
generate a number of auxiliary files in the same directory as the database (e.g. for "db.fasta": "db.canno", "db.cnlap", "db.csarr" and "db.cseq" will be generated).
It is advisable to keep these files for future MS-GF+ searches, to save the indexing step.@n
@note This adapter uses an internal locking mechanism (a file lock) to ensure that MS-GF+ does not attempt to create the database index
in parallel (which would fail badly) when multiple instances of this adapter are run concurrently on the same FASTA database.
After the database has been indexed, multiple MS-GF+ processes (even without this adapter's locking) can use it in parallel.
This adapter supports relative database filenames, which (when not found in the current working directory) are looked up in the directories specified
by 'OpenMS.ini:id_db_dir'.
The adapter works in three steps to generate an idXML file: First, MS-GF+ is run on the input MS data and the sequence database,
producing an mzIdentML (.mzid) output file containing the search results. This file is then converted to a text file (.tsv) using MS-GF+'s "MzIDToTsv" tool.
Finally, the .tsv file is parsed and a result in idXML format is generated.
An optional MS-GF+ configuration file can be added via the '-conf' parameter.
See https://github.com/MSGFPlus/msgfplus/blob/master/docs/examples/MSGFPlus_Params.txt for
an example and consult the MS-GF+ documentation for further details.
Parameters specified in the configuration file are ignored by MS-GF+ if they are also specified on the command line.
This adapter passes all flags which you can set on the command line, so use the configuration file <b>only</b> for parameters which
are not available here (this includes fixed/variable modifications, which are passed on the command line via <code>-mod &lt;file&gt;</code>).
Thus, be very careful that your settings in the '-conf' file actually take effect (try running again without the '-conf' file and test whether the results change).
@note This adapter supports 15N labeling by specifying the 20 AA modifications 'Label:15N(x)' as fixed modifications.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MSGFPlusAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MSGFPlusAdapter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
using namespace OpenMS;
using namespace std;
class MSGFPlusAdapter :
public SearchEngineBase
{
public:
MSGFPlusAdapter() :
SearchEngineBase("MSGFPlusAdapter", "MS/MS database search using MS-GF+.", true),
// parameter choices (the order of the values must be the same as in the MS-GF+ parameters!):
fragment_methods_(ListUtils::create<String>("from_spectrum,CID,ETD,HCD")),
instruments_(ListUtils::create<String>("low_res,high_res,TOF,Q_Exactive")),
protocols_(ListUtils::create<String>("automatic,phospho,iTRAQ,iTRAQ_phospho,TMT,none")),
tryptic_(ListUtils::create<String>("non,semi,fully"))
{
ProteaseDB::getInstance()->getAllMSGFNames(enzymes_);
std::sort(enzymes_.begin(),enzymes_.end());
}
protected:
/// parts of a sequence of the form "K.AAAA.R"
struct SequenceParts
{
char aa_before, aa_after; // may be '\0' if not given
String peptide;
SequenceParts(): aa_before(0), aa_after(0) {}
};
// lists of allowed parameter values:
vector<String> fragment_methods_, instruments_, enzymes_, protocols_, tryptic_;
// primary MS run referenced in the mzML file
StringList primary_ms_run_path_;
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file (MS-GF+ parameter '-s')");
setValidFormats_("in", {"mzML", "mzXML", "mgf", "ms2" });
registerOutputFile_("out", "<file>", "", "Output file", false);
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerOutputFile_("mzid_out", "<file>", "", "Alternative output file (MS-GF+ parameter '-o')\nEither 'out' or 'mzid_out' are required. They can be used together.", false);
setValidFormats_("mzid_out", ListUtils::create<String>("mzid"));
registerInputFile_("executable", "<file>", "MSGFPlus.jar", "The MSGFPlus Java archive file. Provide a full or relative path, or make sure it can be found in your PATH environment.", true, false, {"is_executable"});
registerInputFile_("database", "<file>", "", "Protein sequence database (FASTA file; MS-GF+ parameter '-d'). Non-existing relative filenames are looked up via 'OpenMS.ini:id_db_dir'.", true, false, ListUtils::create<String>("skipexists"));
setValidFormats_("database", ListUtils::create<String>("FASTA"));
registerDoubleOption_("precursor_mass_tolerance", "<value>", 10, "Precursor monoisotopic mass tolerance (MS-GF+ parameter '-t')", false);
registerStringOption_("precursor_error_units", "<choice>", "ppm", "Unit of precursor mass tolerance (MS-GF+ parameter '-t')", false);
setValidStrings_("precursor_error_units", ListUtils::create<String>("Da,ppm"));
registerStringOption_("isotope_error_range", "<range>", "0,1", "Range of allowed isotope peak errors (MS-GF+ parameter '-ti'). Takes into account the error introduced by choosing a non-monoisotopic peak for fragmentation. Combined with 'precursor_mass_tolerance'/'precursor_error_units', this determines the actual precursor mass tolerance. E.g. for experimental mass 'exp' and calculated mass 'calc', '-precursor_mass_tolerance 20 -precursor_error_units ppm -isotope_error_range -1,2' tests '|exp - calc - n * 1.00335 Da| < 20 ppm' for n = -1, 0, 1, 2.", false);
registerStringOption_("fragment_method", "<choice>", fragment_methods_[0], "Fragmentation method ('from_spectrum' relies on spectrum meta data and uses CID as fallback option; MS-GF+ parameter '-m')", false);
setValidStrings_("fragment_method", fragment_methods_);
registerStringOption_("instrument", "<choice>", instruments_[0], "Instrument that generated the data ('low_res'/'high_res' refer to LCQ and LTQ instruments; MS-GF+ parameter '-inst')", false);
setValidStrings_("instrument", instruments_);
registerStringOption_("enzyme", "<choice>", enzymes_[6], "Enzyme used for digestion, or type of cleavage. Note: MS-GF+ does not support blocking rules. (MS-GF+ parameter '-e')", false);
setValidStrings_("enzyme", enzymes_);
registerStringOption_("protocol", "<choice>", protocols_[0], "Labeling or enrichment protocol used, if any (MS-GF+ parameter '-p')", false);
setValidStrings_("protocol", protocols_);
registerStringOption_("tryptic", "<choice>", tryptic_[2], "Level of cleavage specificity required (MS-GF+ parameter '-ntt')", false);
setValidStrings_("tryptic", tryptic_);
registerIntOption_("min_precursor_charge", "<num>", 2, "Minimum precursor ion charge (only used for spectra without charge information; MS-GF+ parameter '-minCharge')", false);
setMinInt_("min_precursor_charge", 1);
registerIntOption_("max_precursor_charge", "<num>", 3, "Maximum precursor ion charge (only used for spectra without charge information; MS-GF+ parameter '-maxCharge')", false);
setMinInt_("max_precursor_charge", 1);
registerIntOption_("min_peptide_length", "<num>", 6, "Minimum peptide length to consider (MS-GF+ parameter '-minLength')", false);
setMinInt_("min_peptide_length", 1);
registerIntOption_("max_peptide_length", "<num>", 40, "Maximum peptide length to consider (MS-GF+ parameter '-maxLength')", false);
setMinInt_("max_peptide_length", 1);
registerIntOption_("matches_per_spec", "<num>", 1, "Number of matches per spectrum to be reported (MS-GF+ parameter '-n')", false);
setMinInt_("matches_per_spec", 1);
registerIntOption_("min_peaks", "<num>", 10, "Minimum number of ions a spectrum must have to be examined", false);
setMinInt_("min_peaks", 10);
registerStringOption_("add_features", "<true/false>", "true", "Output additional features (MS-GF+ parameter '-addFeatures'). This is required by Percolator and hence by default enabled.", false, false);
setValidStrings_("add_features", ListUtils::create<String>("true,false"));
registerIntOption_("max_mods", "<num>", 2, "Maximum number of modifications per peptide. If this value is large, the search may take very long.", false);
setMinInt_("max_mods", 0);
registerIntOption_("max_missed_cleavages", "<num>", -1, "Maximum number of missed cleavages allowed for a peptide to be considered for scoring. (default: -1 meaning unlimited)", false);
setMinInt_("max_missed_cleavages", -1);
registerIntOption_("tasks", "<num>", 0, "(Override the number of tasks to use on the threads; Default: (internally calculated based on inputs))\n"
" More tasks than threads will reduce the memory requirements of the search, but will be slower (how much depends on the inputs).\n"
" 1 <= tasks <= numThreads: will create one task per thread, which is the original behavior.\n"
" tasks = 0: use default calculation - minimum of: (threads*3) and (numSpectra/250).\n"
" tasks < 0: multiply number of threads by abs(tasks) to determine number of tasks (i.e., -2 means \"2 * numThreads\" tasks).\n"
" One task per thread will use the most memory, but will usually finish the fastest.\n"
" 2-3 tasks per thread will use comparably less memory, but may cause the search to take 1.5 to 2 times as long.", false);
vector<String> all_mods;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
registerStringList_("fixed_modifications", "<mods>", {"Carbamidomethyl (C)"}, "Fixed modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'", false);
setValidStrings_("fixed_modifications", all_mods);
registerStringList_("variable_modifications", "<mods>", {"Oxidation (M)"}, "Variable modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'",
false);
setValidStrings_("variable_modifications", all_mods);
registerFlag_("legacy_conversion", "Use the indirect conversion of MS-GF+ results to idXML via export to TSV. Try this only if the default conversion takes too long or uses too much memory.", true);
registerInputFile_("conf", "<file>", "", "Optional MS-GF+ configuration file (passed as '-conf <file>' to MS-GF+). See the documentation for examples. Parameters of the adapter take precedence. Use the conf file only for settings not available here (for example, any fixed/variable modifications in the conf file will be ignored, since they are provided via the '-mod' flag).", false, false);
registerInputFile_("java_executable", "<file>", "java", "The Java executable. Usually Java is on the system PATH. If Java is not found, use this parameter to specify the full path to Java", false, false, {"is_executable"});
registerIntOption_("java_memory", "<num>", 3500, "Maximum Java heap size (in MB)", false);
registerIntOption_("java_permgen", "<num>", 0, "Maximum Java permanent generation space (in MB); only for Java 7 and below", false, true);
// register peptide indexing parameter (with defaults for this search engine) TODO: check if search engine defaults are needed
registerPeptideIndexingParameter_(PeptideIndexing().getParameters());
}
// The following sequence modification methods adjust the sequence stored in the TSV file so that it can be parsed by AASequence.
// Method to cut the amino acids before/after the peptide (cleavage sites) off the sequence.
// The sequences in the TSV file have the format 'K.XXXR.X' (where XXXR is the actual peptide sequence).
// This method returns the sequence split into its three parts (e.g. "K", "XXXR", "X").
struct SequenceParts splitSequence_(const String& sequence)
{
struct SequenceParts parts;
size_t len = sequence.size(), start = 0, count = string::npos;
if (len > 3) // in 'X.Y', which side would we cut off?
{
if (sequence[1] == '.')
{
start = 2;
parts.aa_before = sequence[0];
}
if (sequence[len - 2] == '.')
{
count = len - start - 2;
parts.aa_after = sequence[len - 1];
}
}
parts.peptide = sequence.substr(start, count);
return parts;
}
// Method to rewrite residue-specific N-terminal mass shifts (e.g. pyro-Glu: -18.011 on E, -17.027 on Q)
// that MS-GF+ reports in front of the affected residue: "-17.027QXXXR" becomes "Q-17.027XXXR",
// so that modifySequence_() can bracket the mass shift afterwards.
String modifyNTermAASpecificSequence_(const String& seq)
{
String swap;
string modifiedSequence = seq;
vector<pair<String, char> > massShiftList;
massShiftList.push_back(make_pair("-18.011", 'E'));
massShiftList.push_back(make_pair("-17.027", 'Q'));
for (vector<pair<String, char> >::const_iterator it = massShiftList.begin(); it != massShiftList.end(); ++it)
{
string modMassShift = it->first;
size_t found = modifiedSequence.find(modMassShift);
if (found != string::npos)
{
String tmp = modifiedSequence.substr(0, found + modMassShift.length() + 1);
size_t foundAA = tmp.find_first_of("ABCDEFGHIJKLMNOPQRSTUVWXYZ");
if ((foundAA > found) && (tmp[foundAA] == it->second)) // no AA at the beginning
{
if (found > 0)
{
swap = modifiedSequence.substr(0, found);
}
return swap += *tmp.rbegin() + modMassShift + modifiedSequence.substr(found + modMassShift.length() + 1);
}
}
}
return modifiedSequence;
}
// Method to replace the mass representation of modifications.
// Modifications in the TSV file have the format 'M+15.999'
// After using this method the sequence should look like this: 'M[+15.999]'
String modifySequence_(const String& seq)
{
String modifiedSequence = seq;
size_t found1 = modifiedSequence.find_first_of("+-");
while (found1 != string::npos)
{
modifiedSequence = modifiedSequence.insert(found1, 1, '[');
size_t found2 = modifiedSequence.find_first_of("ABCDEFGHIJKLMNOPQRSTUVWXYZ", found1);
if (found2 != string::npos)
{
modifiedSequence.insert(found2, 1, ']');
found1 = modifiedSequence.find_first_of("+-", found2 + 2);
}
else // last amino acid is modified
{
modifiedSequence = modifiedSequence + ']';
return modifiedSequence;
}
}
return modifiedSequence;
}
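// Illustrative traces of the two methods above (derived from the code, for orientation only):
//   modifyNTermAASpecificSequence_("-17.027QPEPTIDER") -> "Q-17.027PEPTIDER"
//   modifySequence_("M+15.999PEPTIDE")                 -> "M[+15.999]PEPTIDE"
//   modifySequence_("PEPTIDEK+42.011")                 -> "PEPTIDEK[+42.011]" (terminal residue modified)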
// Parse the mzML file and create a mapping from native spectrum ID to RT and precursor m/z:
// RT: not present in the MS-GF+ output.
// m/z: rounded during the conversion to TSV.
void generateInputfileMapping_(map<String, vector<float> >& rt_mapping)
{
String exp_name = getStringOption_("in");
if (!exp_name.empty())
{
PeakMap exp;
// load only MS2 spectra:
FileHandler f;
f.getOptions().addMSLevel(2);
f.getOptions().setFillData(false);
f.loadExperiment(exp_name, exp, {FileTypes::MZML});
exp.getPrimaryMSRunPath(primary_ms_run_path_);
// if no primary run is assigned, the mzML file is the (unprocessed) primary file
if (primary_ms_run_path_.empty())
{
primary_ms_run_path_.push_back(exp_name);
}
for (MSSpectrum& ms : exp)
{
String id = ms.getNativeID(); // expected format: "... scan=#"
if (!id.empty())
{
rt_mapping[id].push_back(ms.getRT());
rt_mapping[id].push_back(ms.getPrecursors()[0].getMZ());
}
}
}
}
// Create one line for the MS-GF+ modifications file ('-mod'), in the format "mass, residue, fix/opt, position, name".
String makeModString_(const String& mod_name, bool fixed=true)
{
const ResidueModification* mod = ModificationsDB::getInstance()->getModification(mod_name);
char residue = mod->getOrigin();
if (residue == 'X')
{
residue = '*'; // terminal mod. without residue specificity
}
String position = mod->getTermSpecificityName();
if (position == "Protein N-term")
{
position = "Prot-N-term";
}
else if (position == "Protein C-term")
{
position = "Prot-C-term";
}
else if (position == "none")
{
position = "any";
}
return String(mod->getDiffMonoMass()) + ", " + residue + (fixed ? ", fix, " : ", opt, ") + position + ", " + mod->getId() + " # " + mod_name;
}
void writeModificationsFile_(const String& out_path, const vector<String>& fixed_mods, const vector<String>& variable_mods, Size max_mods)
{
ofstream output(out_path.c_str());
if (!output)
{
throw Exception::FileNotWritable(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, out_path);
}
output << "# MS-GF+ modifications file written by MSGFPlusAdapter (part of OpenMS)\n"
<< "NumMods=" << max_mods
<< "\n\n# Fixed modifications:\n";
if (fixed_mods.empty())
{
output << "# (none)\n";
}
else
{
for (vector<String>::const_iterator it = fixed_mods.begin(); it != fixed_mods.end(); ++it)
{
output << makeModString_(*it) << "\n";
}
}
output << "\n# Variable modifications:\n";
if (variable_mods.empty())
{
output << "# (none)\n";
}
else
{
for (vector<String>::const_iterator it = variable_mods.begin(); it != variable_mods.end(); ++it)
{
output << makeModString_(*it, false) << "\n";
}
}
}
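// For illustration, a file generated for fixed 'Carbamidomethyl (C)' and variable 'Oxidation (M)'
// with max_mods = 2 would look roughly like this (masses taken from Unimod):
//
//   # MS-GF+ modifications file written by MSGFPlusAdapter (part of OpenMS)
//   NumMods=2
//
//   # Fixed modifications:
//   57.021464, C, fix, any, Carbamidomethyl # Carbamidomethyl (C)
//
//   # Variable modifications:
//   15.994915, M, opt, any, Oxidation # Oxidation (M)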
String describeHit_(const PeptideHit& hit)
{
return "peptide hit with sequence '" + hit.getSequence().toString() +
"', charge " + String(hit.getCharge()) + ", score " +
String(hit.getScore());
}
// Set the MS-GF+ e-value (MS:1002052) as new peptide identification score.
void switchScores_(PeptideIdentification& id)
{
for (PeptideHit& hit : id.getHits())
{
// MS:1002052 == MS-GF spectral E-value
if (!hit.metaValueExists("MS:1002052"))
{
String msg = "Meta value 'MS:1002052' not found for " + describeHit_(hit);
throw Exception::MissingInformation(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, msg);
}
hit.setScore(hit.getMetaValue("MS:1002052"));
}
id.setScoreType("SpecEValue");
id.setHigherScoreBetter(false);
}
/// Create the MS-GF+ database index files (e.g. ".canno") under a file lock, so that concurrent adapter instances do not race; returns false on error.
bool createLockedDBIndex(const String& db_name, const QString java_executable, const QString java_memory, const QString executable)
{
const String db_indexfile = FileHandler::stripExtension(db_name) + ".canno";
const QString lockfile = (db_name + ".lock").toQString();
QLockFile lock_db(lockfile);
OPENMS_LOG_DEBUG << "Checking for db index, using a lock file ..." << std::endl;
if (!lock_db.lock())
{
String msg;
switch (lock_db.error())
{
case QLockFile::NoError:
msg = "The lock was acquired successfully.";
break;
case QLockFile::LockFailedError:
msg = "The lock could not be acquired because another process holds it.";
break;
case QLockFile::PermissionError: // if we cannot create the lock, hopefully no one else can (who runs on different accounts anyway?)
// so we may dare to check for the existence of the index (even though we do not hold the lock right now)
msg = "The lock file could not be created, for lack of permissions in the parent directory.";
if (!File::exists(db_indexfile))
{
OPENMS_LOG_ERROR << msg << " Checking index anyway: No database index found! Please make the directory writable or pre-create a DB index." << std::endl;
return false;
}
OPENMS_LOG_DEBUG << msg << " Checking index anyway: found it!" << std::endl;
return true;
case QLockFile::UnknownError:
msg = "Another error happened; for instance, a full partition prevented writing out the lock file.";
}
OPENMS_LOG_ERROR << "An error occurred while trying to acquire a file lock: " << msg << " using the file '" << lockfile.toStdString()
<< "'.\nPlease check the previous error message and contact OpenMS support if you cannot solve the problem.";
return false;
}
// we have a lock: now check if we need to create a new index (which only one instance should do)
if (!File::exists(db_indexfile))
{
OPENMS_LOG_INFO << "\nNo database index found! Creating index while holding a lock ..." << std::endl;
QStringList process_params; // the actual process is Java, not MS-GF+!
// java -Xmx3500M -cp MSGFPlus.jar edu.ucsd.msjava.msdbsearch.BuildSA -d DatabaseFile
process_params << java_memory
<< "-cp" << executable
<< "edu.ucsd.msjava.msdbsearch.BuildSA"
<< "-d" << db_name.toQString()
<< "-tda" << "0"; // do NOT add & index a reverse DB (i.e. '-tda=2'), since this DB may already contain FW+BW,
// and duplicating again will cause MSGF+ to error with 'too many redundant proteins'
// collect all output since MSGF+ might return 'success' even though it did not like the command arguments (e.g. if the version is too old)
// If no output file is produced, we can print the stderr below.
String proc_stdout, proc_stderr;
TOPPBase::ExitCodes exit_code = runExternalProcess_(java_executable, process_params, proc_stdout, proc_stderr);
if (exit_code != EXECUTION_OK)
{
// if there was something like a segfault, runExternalProcess_ will write a warning about the type of error,
// but not print the output of the program.
OPENMS_LOG_ERROR << "The output of MSGF+'s Index Database Creation was:\nSTDOUT:\n" << proc_stdout << "\nSTDERR:\n" << proc_stderr << endl;
return false;
}
OPENMS_LOG_INFO << " ... done" << std::endl;
}
// free lock, since database index exists at this point
lock_db.unlock();
OPENMS_LOG_DEBUG << "... releasing DB lock" << std::endl;
return true;
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parse parameters
//-------------------------------------------------------------
String in = getRawfileName();
String out = getStringOption_("out");
String mzid_out = getStringOption_("mzid_out");
if (mzid_out.empty() && out.empty())
{
writeLogError_("Error: no output file given (parameter 'out' or 'mzid_out')");
return ILLEGAL_PARAMETERS;
}
const String java_executable = getStringOption_("java_executable");
if (!getFlag_("force"))
{
if (!JavaInfo::canRun(java_executable))
{
writeLogError_("Fatal error: Java is needed to run MS-GF+!");
return EXTERNAL_PROGRAM_NOTFOUND;
}
}
else
{
writeLogWarn_("The installation of Java was not checked.");
}
const QString java_memory = "-Xmx" + QString::number(getIntOption_("java_memory")) + "m";
const QString executable = getStringOption_("executable").toQString();
const String db_name = getDBFilename();
if (!createLockedDBIndex(db_name, java_executable.toQString(), java_memory, executable))
{
OPENMS_LOG_ERROR << "Could not create/verify database index. Aborting ..." << std::endl;
return ExitCodes::INTERNAL_ERROR;
}
vector<String> fixed_mods = getStringList_("fixed_modifications");
vector<String> variable_mods = getStringList_("variable_modifications");
bool no_mods = fixed_mods.empty() && variable_mods.empty();
Int max_mods = getIntOption_("max_mods");
if ((max_mods == 0) && !no_mods)
{
writeLogWarn_("Warning: Modifications are defined ('fixed_modifications'/'variable_modifications'), but the number of allowed modifications is zero ('max_mods'). Is that intended?");
}
// create temporary directory (and modifications file, if necessary):
File::TempDir tmp_dir(debug_level_ >= 2);
String mzid_temp, mod_file;
// always create a temporary mzid file first, even if mzid output is requested via "mzid_out"
// (reason: TOPPAS may pass a filename with wrong extension to "mzid_out", which would cause an error in MzIDToTSVConverter below,
// so we make sure that we have a properly named mzid file for the converter; see https://github.com/OpenMS/OpenMS/issues/1251)
mzid_temp = tmp_dir.getPath() + "msgfplus_output.mzid";
if (!no_mods)
{
mod_file = tmp_dir.getPath() + "msgfplus_mods.txt";
writeModificationsFile_(mod_file, fixed_mods, variable_mods, max_mods);
}
// parameters also used by OpenMS (see idXML creation below):
String enzyme = getStringOption_("enzyme");
double precursor_mass_tol = getDoubleOption_("precursor_mass_tolerance");
String precursor_error_units = getStringOption_("precursor_error_units");
Int min_precursor_charge = getIntOption_("min_precursor_charge");
Int max_precursor_charge = getIntOption_("max_precursor_charge");
// parameters only needed for MS-GF+:
// no need to handle "not found" case - would have given error during parameter parsing:
Int fragment_method_code = ListUtils::getIndex<String>(fragment_methods_, getStringOption_("fragment_method"));
Int instrument_code = ListUtils::getIndex<String>(instruments_, getStringOption_("instrument"));
Int enzyme_code = ProteaseDB::getInstance()->getEnzyme(enzyme)->getMSGFID();
Int protocol_code = ListUtils::getIndex<String>(protocols_, getStringOption_("protocol"));
// protocol code = 0 corresponds to "automatic" (MS-GF+ docu 2017) and "none" (MS-GF+ docu 2013). We keep 0 = "none" for backward compatibility.
if (protocol_code == 5)
{
protocol_code = 0;
}
Int tryptic_code = ListUtils::getIndex<String>(tryptic_, getStringOption_("tryptic"));
QStringList process_params; // the actual process is Java, not MS-GF+!
process_params << java_memory
<< "-jar" << executable
<< "-s" << in.toQString()
<< "-o" << mzid_temp.toQString()
<< "-d" << db_name.toQString()
<< "-t" << QString::number(precursor_mass_tol) + precursor_error_units.toQString()
<< "-ti" << getStringOption_("isotope_error_range").toQString()
<< "-m" << QString::number(fragment_method_code)
<< "-inst" << QString::number(instrument_code)
<< "-e" << QString::number(enzyme_code)
<< "-protocol" << QString::number(protocol_code)
<< "-ntt" << QString::number(tryptic_code)
<< "-minLength" << QString::number(getIntOption_("min_peptide_length"))
<< "-maxLength" << QString::number(getIntOption_("max_peptide_length"))
<< "-minNumPeaks" << QString::number(getIntOption_("min_peaks"))
<< "-minCharge" << QString::number(min_precursor_charge)
<< "-maxCharge" << QString::number(max_precursor_charge)
<< "-maxMissedCleavages" << QString::number(getIntOption_("max_missed_cleavages"))
<< "-n" << QString::number(getIntOption_("matches_per_spec"))
<< "-addFeatures" << QString::number(int((getParam_().getValue("add_features") == "true")))
<< "-tasks" << QString::number(getIntOption_("tasks"))
<< "-thread" << QString::number(getIntOption_("threads"));
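// With default settings, the resulting call looks roughly like this (illustrative file names, enzyme code depends on the chosen enzyme):
//   java -Xmx3500m -jar MSGFPlus.jar -s input.mzML -o msgfplus_output.mzid -d db.fasta
//        -t 10ppm -ti 0,1 -m 0 -inst 0 -e <enzyme code> -protocol 0 -ntt 2
//        -minLength 6 -maxLength 40 -minNumPeaks 10 -minCharge 2 -maxCharge 3
//        -maxMissedCleavages -1 -n 1 -addFeatures 1 -tasks 0 -thread 1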
String conf = getStringOption_("conf");
if (!conf.empty())
{
process_params << "-conf" << conf.toQString();
}
if (!mod_file.empty())
{
process_params << "-mod" << mod_file.toQString();
}
//-------------------------------------------------------------
// execute MS-GF+
//-------------------------------------------------------------
// run MS-GF+ process and create the .mzid file
writeLogInfo_("Running MSGFPlus search...");
// collect all output since MSGF+ might return 'success' even though it did not like the command arguments (e.g. if the version is too old)
// If no output file is produced, we can print the stderr below.
String proc_stdout, proc_stderr;
TOPPBase::ExitCodes exit_code = runExternalProcess_(java_executable.toQString(), process_params, proc_stdout, proc_stderr);
if (exit_code != EXECUTION_OK)
{
// if there was something like a segfault, runExternalProcess_ will write a warning about the type of error,
// but not print the output of the program.
OPENMS_LOG_ERROR << "The output of MSGF+ was:\nSTDOUT:\n" << proc_stdout << "\nSTDERR:\n" << proc_stderr << endl;
return exit_code;
}
//-------------------------------------------------------------
// create idXML output
//-------------------------------------------------------------
if (!out.empty())
{
if (!File::exists(mzid_temp))
{
OPENMS_LOG_ERROR << "MSGF+ failed. Temporary output file '" << mzid_temp << "' was not created.\n"
<< "The output of MSGF+ was:\nSTDOUT:\n" << proc_stdout << "\nSTDERR:\n" << proc_stderr << endl;
return EXTERNAL_PROGRAM_ERROR;
}
vector<ProteinIdentification> protein_ids;
PeptideIdentificationList peptide_ids;
if (getFlag_("legacy_conversion"))
{
// run TSV converter
String tsv_out = tmp_dir.getPath() + "msgfplus_converted.tsv";
int java_permgen = getIntOption_("java_permgen");
process_params.clear();
process_params << java_memory;
if (java_permgen > 0)
{
process_params << "-XX:MaxPermSize=" + QString::number(java_permgen) + "m";
}
process_params << "-cp" << executable << "edu.ucsd.msjava.ui.MzIDToTsv"
<< "-i" << mzid_temp.toQString()
<< "-o" << tsv_out.toQString()
<< "-showQValue" << "1"
<< "-showDecoy" << "1"
<< "-unroll" << "1";
writeLogInfo_("Running MzIDToTSVConverter...");
exit_code = runExternalProcess_(java_executable.toQString(), process_params);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
// initialize map
map<String, vector<float> > rt_mapping;
generateInputfileMapping_(rt_mapping);
// handle the search parameters
ProteinIdentification::SearchParameters search_parameters;
search_parameters.db = db_name;
search_parameters.charges = "+" + String(min_precursor_charge) + "-+" + String(max_precursor_charge);
search_parameters.mass_type = ProteinIdentification::PeakMassType::MONOISOTOPIC;
search_parameters.fixed_modifications = fixed_mods;
search_parameters.variable_modifications = variable_mods;
search_parameters.precursor_mass_tolerance = precursor_mass_tol;
search_parameters.precursor_mass_tolerance_ppm = false;
if (precursor_error_units == "ppm") // convert to Da (at m/z 666: 0.01 Da ~ 15 ppm)
{
search_parameters.precursor_mass_tolerance *= 2.0 / 3000.0;
search_parameters.precursor_mass_tolerance_ppm = true;
}
search_parameters.digestion_enzyme = *(ProteaseDB::getInstance()->getEnzyme(enzyme));
search_parameters.enzyme_term_specificity = static_cast<EnzymaticDigestion::Specificity>(tryptic_code);
// create idXML file
ProteinIdentification protein_id;
protein_id.setPrimaryMSRunPath(primary_ms_run_path_);
DateTime now = DateTime::now();
String date_string = now.getDate();
String identifier = "MS-GF+_" + date_string;
protein_id.setIdentifier(identifier);
protein_id.setDateTime(now);
protein_id.setSearchParameters(search_parameters);
protein_id.setSearchEngineVersion("");
protein_id.setSearchEngine("MSGFPlus");
protein_id.setScoreType(""); // MS-GF+ doesn't assign protein scores
// store all peptide identifications in a map, the key is the scan number
map<int, PeptideIdentification> peptide_identifications;
set<String> prot_accessions;
// iterate over the rows of the TSV file
// columns: #SpecFile, SpecID, ScanNum, FragMethod, Precursor, IsotopeError, PrecursorError(ppm), Charge, Peptide, Protein, DeNovoScore, MSGFScore, SpecEValue, EValue, QValue, PepQValue
// maybe TODO: replace column indexes ("elements[N]") by something more expressive
CsvFile tsvfile(tsv_out, '\t');
for (Size row_count = 1; row_count < tsvfile.rowCount(); ++row_count) // skip header line
{
vector<String> elements;
if (!tsvfile.getRow(row_count, elements))
{
writeLogError_("Error: could not split row " + String(row_count) + " of file '" + tsv_out + "'");
return PARSE_ERROR;
}
int scan_number = 0;
if ((elements[2].empty()) || (elements[2] == "-1"))
{
scan_number = elements[1].suffix('=').toInt();
}
else
{
scan_number = elements[2].toInt();
}
struct SequenceParts parts = splitSequence_(elements[8]);
parts.peptide.substitute(',', '.'); // decimal separator should be dot, not comma
AASequence seq = AASequence::fromString(modifySequence_(modifyNTermAASpecificSequence_(parts.peptide)));
String accession = elements[9];
// @BUG If there's a space before the protein accession in the FASTA file (e.g. "> accession ..."),
// the "Protein" field in the TSV file will be empty, leading to an empty accession and no protein
// reference in the idXML output file! (The mzIdentML output is not affected by this.)
prot_accessions.insert(accession);
PeptideEvidence evidence;
evidence.setProteinAccession(accession);
if ((parts.aa_before == 0) && (parts.aa_after == 0))
{
evidence.setAABefore(PeptideEvidence::UNKNOWN_AA);
evidence.setAAAfter(PeptideEvidence::UNKNOWN_AA);
}
else // if one cleavage site is given, assume the other side is terminal
{
if (parts.aa_before != 0)
{
evidence.setAABefore(parts.aa_before);
}
else
{
evidence.setAABefore(PeptideEvidence::N_TERMINAL_AA);
}
if (parts.aa_after != 0)
{
evidence.setAAAfter(parts.aa_after);
}
else
{
evidence.setAAAfter(PeptideEvidence::C_TERMINAL_AA);
}
}
bool hit_exists = false;
// if the PeptideIdentification doesn't exist yet, a new one will be created:
PeptideIdentification& pep_ident = peptide_identifications[scan_number];
if (!pep_ident.getHits().empty()) // previously existing PeptideIdentification
{
// do we have a peptide hit with this sequence already?
for (PeptideHit& hit : pep_ident.getHits())
{
if (hit.getSequence() == seq) // yes!
{
hit_exists = true;
hit.addPeptideEvidence(evidence);
break;
}
}
}
else // new PeptideIdentification
{
String spec_id = elements[1];
pep_ident.setRT(rt_mapping[spec_id][0]);
pep_ident.setMZ(rt_mapping[spec_id][1]);
pep_ident.setMetaValue("ScanNumber", scan_number);
pep_ident.setScoreType("SpecEValue");
pep_ident.setHigherScoreBetter(false);
pep_ident.setIdentifier(identifier);
}
if (!hit_exists) // add new PeptideHit
{
double score = elements[12].toDouble();
UInt rank = 0; // set to 0 at the moment
Int charge = elements[7].toInt();
PeptideHit hit(score, rank, charge, std::move(seq));
hit.addPeptideEvidence(evidence);
pep_ident.insertHit(hit);
}
}
vector<ProteinHit> prot_hits;
for (const String& accession : prot_accessions)
{
if (accession.empty())
{
continue; // don't write a protein hit without accession (see @BUG above)
}
ProteinHit prot_hit;
prot_hit.setAccession(accession);
prot_hits.push_back(prot_hit);
}
protein_id.setHits(prot_hits);
protein_ids.push_back(protein_id);
// iterate over the map and create a vector of peptide identifications
for (auto& scan_and_id : peptide_identifications)
{
scan_and_id.second.sort();
peptide_ids.push_back(std::move(scan_and_id.second));
}
}
else // no legacy conversion
{
FileHandler().loadIdentifications(mzid_temp, protein_ids, peptide_ids, {FileTypes::MZIDENTML});
// MzID might contain missed_cleavages set to -1 which leads to a crash in PeptideIndexer
for (auto& pid : protein_ids)
{
pid.getSearchParameters().missed_cleavages = 1000; // use a high value (1000 was used in previous MSGF+ version)
pid.getSearchParameters().digestion_enzyme = *(ProteaseDB::getInstance()->getEnzyme(enzyme));
}
// set the MS-GF+ spectral e-value as new peptide identification score
for (auto& pep : peptide_ids)
{
switchScores_(pep);
}
// add missing RTs and FAIMS CVs to peptide IDs
MSExperiment exp;
MzMLFile mzml_file{};
// Load spectrum metadata (not just file metadata) but skip peak data
mzml_file.getOptions().setMetadataOnly(false);
mzml_file.getOptions().setFillData(false);
mzml_file.load(in, exp);
SpectrumMetaDataLookup::addMissingRTsToPeptideIDs(peptide_ids, exp);
// Annotate FAIMS compensation voltage if present
SpectrumMetaDataLookup::addMissingFAIMSToPeptideIDs(peptide_ids, exp);
}
// use OpenMS meta value key
for (PeptideIdentification& pid : peptide_ids)
{
for (PeptideHit& psm : pid.getHits())
{
auto v = psm.getMetaValue("IsotopeError");
// TODO cast to Int!
psm.setMetaValue(Constants::UserParam::ISOTOPE_ERROR, v);
psm.removeMetaValue("IsotopeError");
}
}
// write all (!) parameters as metavalues to the search parameters
if (!protein_ids.empty())
{
DefaultParamHandler::writeParametersToMetaValues(this->getParam_(), protein_ids[0].getSearchParameters(), this->getToolPrefix());
}
// if "reindex" parameter is set to true will perform reindexing
if (auto ret = reindex_(protein_ids, peptide_ids); ret != EXECUTION_OK) return ret;
// get feature set used in percolator
StringList feature_set;
PercolatorFeatureSetHelper::addMSGFFeatures(peptide_ids, feature_set);
protein_ids.front().getSearchParameters().setMetaValue("extra_features", ListUtils::concatenate(feature_set, ","));
FileHandler().storeIdentifications(out, protein_ids, peptide_ids, {FileTypes::IDXML});
}
//-------------------------------------------------------------
// create (move) mzid output
//-------------------------------------------------------------
if (!mzid_out.empty())
{ // move the temporary file to the actual destination:
if (!File::rename(mzid_temp, mzid_out))
{
return CANNOT_WRITE_OUTPUT_FILE;
}
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
MSGFPlusAdapter tool;
return tool.main(argc, argv);
}
///@endcond
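The ppm handling in the search-parameter block above relies on a fixed conversion factor of 2.0 / 3000.0 (i.e. a reference m/z of about 666.67, where 15 ppm corresponds to roughly 0.01 Da). A minimal sketch of that conversion; `ppmToDalton` is a hypothetical helper, not part of the adapter:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper mirroring the adapter's ppm -> Dalton conversion:
// tolerance_Da = tolerance_ppm * 2.0 / 3000.0, which assumes a reference
// m/z of roughly 666.67 (2/3000 = 666.67e-6).
double ppmToDalton(double tol_ppm)
{
  return tol_ppm * 2.0 / 3000.0;
}
```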
// ==========================================================================
// OpenMS/OpenMS: src/topp/MRMTransitionGroupPicker.cpp (C++, 287 lines)
// ==========================================================================
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/KERNEL/MRMTransitionGroup.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/ANALYSIS/TARGETED/TargetedExperiment.h>
// files
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/SYSTEM/File.h>
// interfaces
#include <OpenMS/OPENSWATHALGO/DATAACCESS/ISpectrumAccess.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
// helpers
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/DataAccessHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMTransitionGroupPicker.h>
using namespace std;
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MRMTransitionGroupPicker MRMTransitionGroupPicker
@brief Picks peaks in SRM/MRM chromatograms that belong to the same precursors.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → MRMTransitionGroupPicker →</td>
<th ALIGN = "center"> potential successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathChromatogramExtractor </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_OpenSwathFeatureXMLToTSV </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_MRMMapper </td>
</tr>
</table>
</CENTER>
This tool accepts a set of chromatograms and picks peaks in them, correctly
grouping related transitions from the same precursor together. It performs
the following steps:
- Step 1: find features (peaks) in individual chromatograms
- Step 2: merge these features to consensus features that span multiple chromatograms
Step 1 is performed by smoothing the individual chromatogram and applying the
PeakPickerHiRes.
Step 2 is performed by finding the largest peak overall and using it to
create a feature, which is then propagated through all chromatograms.
This tool will not compute any scores for the picked peaks; to obtain scores, please use @ref TOPP_OpenSwathAnalyzer.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MRMTransitionGroupPicker.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MRMTransitionGroupPicker.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMRMTransitionGroupPicker
: public TOPPBase
{
public:
TOPPMRMTransitionGroupPicker()
: TOPPBase("MRMTransitionGroupPicker", "Picks peaks in SRM/MRM chromatograms.")
{
}
protected:
typedef ReactionMonitoringTransition TransitionType;
typedef TargetedExperiment TargetedExpType;
typedef MRMTransitionGroup<MSChromatogram, TransitionType> MRMTransitionGroupType;
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("tr", "<file>", "", "transition file ('TraML' or 'csv')");
setValidFormats_("tr", ListUtils::create<String>("csv,traML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("featureXML"));
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String &) const override
{
return MRMTransitionGroupPicker().getDefaults();
}
struct MRMGroupMapper
{
typedef std::map<String, std::vector< const TransitionType* > > AssayMapT;
// chromatogram map
std::map<String, int> chromatogram_map;
// Map peptide id
std::map<String, int> assay_peptide_map;
// Group transitions
AssayMapT assay_map;
/// Create the mapping
void doMap(OpenSwath::SpectrumAccessPtr input, TargetedExpType& transition_exp)
{
for (Size i = 0; i < input->getNrChromatograms(); i++)
{
chromatogram_map[input->getChromatogramNativeID(i)] = boost::numeric_cast<int>(i);
}
for (Size i = 0; i < transition_exp.getPeptides().size(); i++)
{
assay_peptide_map[transition_exp.getPeptides()[i].id] = boost::numeric_cast<int>(i);
}
for (Size i = 0; i < transition_exp.getTransitions().size(); i++)
{
assay_map[transition_exp.getTransitions()[i].getPeptideRef()].push_back(&transition_exp.getTransitions()[i]);
}
}
/// Check that all assays have a corresponding chromatogram
bool allAssaysHaveChromatograms()
{
for (AssayMapT::iterator assay_it = assay_map.begin(); assay_it != assay_map.end(); ++assay_it)
{
for (Size i = 0; i < assay_it->second.size(); i++)
{
if (chromatogram_map.find(assay_it->second[i]->getNativeID()) == chromatogram_map.end())
{
return false;
}
}
}
return true;
}
/// Fill up transition group with paired Transitions and Chromatograms
void getTransitionGroup(OpenSwath::SpectrumAccessPtr input, MRMTransitionGroupType& transition_group, String id)
{
transition_group.setTransitionGroupID(id);
// Go through all transitions
for (Size i = 0; i < assay_map[id].size(); i++)
{
// Check first whether we have a mapping (e.g. see -force option)
const TransitionType* transition = assay_map[id][i];
if (chromatogram_map.find(transition->getNativeID()) == chromatogram_map.end())
{
OPENMS_LOG_DEBUG << "Found no matching chromatogram for id " << transition->getNativeID() << std::endl;
continue;
}
OpenSwath::ChromatogramPtr cptr = input->getChromatogramById(chromatogram_map[transition->getNativeID()]);
MSChromatogram chromatogram;
OpenSwathDataAccessHelper::convertToOpenMSChromatogram(cptr, chromatogram);
chromatogram.setMetaValue("product_mz", transition->getProductMZ());
chromatogram.setMetaValue("precursor_mz", transition->getPrecursorMZ());
chromatogram.setNativeID(transition->getNativeID());
// Now add the transition and the chromatogram to the group
transition_group.addTransition(*transition, transition->getNativeID());
transition_group.addChromatogram(chromatogram, chromatogram.getNativeID());
}
}
};
void run_(OpenSwath::SpectrumAccessPtr input,
FeatureMap & output, TargetedExpType& transition_exp, bool force)
{
MRMTransitionGroupPicker trgroup_picker;
Param picker_param = getParam_().copy("algorithm:", true);
trgroup_picker.setParameters(picker_param);
MRMGroupMapper m;
m.doMap(input, transition_exp);
if (!m.allAssaysHaveChromatograms() && !force)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Not all assays could be mapped to chromatograms");
}
// Iterating over all the assays
for (MRMGroupMapper::AssayMapT::iterator assay_it = m.assay_map.begin(); assay_it != m.assay_map.end(); ++assay_it)
{
String id = assay_it->first;
// Create new transition group if there is none for this peptide
MRMTransitionGroupType transition_group;
m.getTransitionGroup(input, transition_group, id);
// Process the transition_group
trgroup_picker.pickTransitionGroup(transition_group);
// Add to output
for (Size i = 0; i < transition_group.getFeatures().size(); i++)
{
MRMFeature mrmfeature = transition_group.getFeatures()[i];
// Prepare the subordinates for the mrmfeature (process all current
// features and then append all precursor subordinate features)
std::vector<Feature> allFeatures = mrmfeature.getFeatures();
for (std::vector<Feature>::iterator f_it = allFeatures.begin(); f_it != allFeatures.end(); ++f_it)
{
f_it->getConvexHulls().clear();
f_it->ensureUniqueId();
}
mrmfeature.setSubordinates(allFeatures); // add all the subfeatures as subordinates
output.push_back(mrmfeature);
}
}
}
ExitCodes main_(int, const char **) override
{
String in = getStringOption_("in");
String out = getStringOption_("out");
String tr_file = getStringOption_("tr");
bool force = getFlag_("force");
auto exp = std::make_shared<PeakMap>();
FileHandler().loadExperiment(in, *exp, {FileTypes::MZML}, log_type_);
TargetedExpType transition_exp;
FileHandler().loadTransitions(tr_file, transition_exp, {FileTypes::TRAML});
FeatureMap output;
OpenSwath::SpectrumAccessPtr input = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(exp);
run_(input, output, transition_exp, force);
output.ensureUniqueId();
if (getFlag_("test"))
{
// if test mode set, add file without path so we can compare it
output.setPrimaryMSRunPath({"file://" + File::basename(in)}, *exp);
}
else
{
output.setPrimaryMSRunPath({in}, *exp);
}
FileHandler().storeFeatures(out, output, {FileTypes::FEATUREXML});
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPMRMTransitionGroupPicker tool;
return tool.main(argc, argv);
}
/// @endcond
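The ID bookkeeping in MRMGroupMapper::doMap() and allAssaysHaveChromatograms() boils down to building a native-ID-to-index map and checking membership. A sketch with plain strings instead of OpenMS types (both helpers are illustrative, not OpenMS API):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Illustrative stand-ins for MRMGroupMapper::doMap() and
// allAssaysHaveChromatograms(): map each chromatogram's native ID to its
// index, then check that every transition's ID has a chromatogram.
std::map<std::string, int> buildChromatogramMap(const std::vector<std::string>& native_ids)
{
  std::map<std::string, int> chrom_map;
  for (std::size_t i = 0; i < native_ids.size(); ++i)
  {
    chrom_map[native_ids[i]] = static_cast<int>(i);
  }
  return chrom_map;
}

bool allMapped(const std::map<std::string, int>& chrom_map,
               const std::vector<std::string>& transition_ids)
{
  for (const auto& id : transition_ids)
  {
    if (chrom_map.find(id) == chrom_map.end()) return false; // no matching chromatogram
  }
  return true;
}
```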
// ==========================================================================
// OpenMS/OpenMS: src/topp/RNPxlXICFilter.cpp (C++, 278 lines)
// ==========================================================================
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <algorithm>
#include <numeric>
#include <iostream>
#include <iomanip>
#include <fstream>
using namespace std;
using namespace OpenMS;
/**
@page TOPP_RNPxlXICFilter RNPxlXICFilter
@brief Filters MS2 spectra based on XIC intensities in control and treatment. Used in RNPxl experiments to reduce candidate spectra.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → RNPxlXICFilter →</td>
<th ALIGN = "center"> potential successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> NuXL </td>
</tr>
</table>
</CENTER>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_RNPxlXICFilter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_RNPxlXICFilter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPRNPxlXICFilter :
public TOPPBase
{
public:
TOPPRNPxlXICFilter() :
TOPPBase("RNPxlXICFilter", "Remove MS2 spectra from treatment based on the fold change between control and treatment.")
{
}
protected:
void registerOptionsAndFlags_() override
{
// input files
registerInputFile_("control", "<file>", "", "input mzML file");
setValidFormats_("control", ListUtils::create<String>("mzML"));
registerInputFile_("treatment", "<file>", "", "input mzML file");
setValidFormats_("treatment", ListUtils::create<String>("mzML"));
registerDoubleOption_("fold_change", "", 2.0, "fold change between XICs", false, false);
registerDoubleOption_("rt_tol", "", 20, "RT tolerance in [s] for finding max peak (whole RT range around RT middle)", false, false);
registerDoubleOption_("mz_tol", "", 10, "m/z tolerance in [ppm] for finding a peak", false, false);
// output files
registerOutputFile_("out", "<file>", "", "output of the treatment file after XIC filtering.");
setValidFormats_("out", ListUtils::create<String>("mzML"));
}
void filterByFoldChange(const PeakMap& exp1, const PeakMap& exp2,
const vector<double>& pc_ms2_rts, const vector<double>& pc_mzs,
const double rttol, const double mztol, double fold_change,
vector<double>& control_XIC_larger,
vector<double>& treatment_XIC_larger,
vector<double>& indifferent_XICs)
{
assert(pc_mzs.size() == pc_ms2_rts.size());
// search for each EIC and add up
for (Size i = 0; i < pc_mzs.size(); ++i)
{
//cerr << "start" << endl;
double pc_ms2_rt = pc_ms2_rts[i];
double pc_mz = pc_mzs[i];
//std::cerr << "Rt" << cm[i].getRT() << " mz: " << cm[i].getMZ() << " R " << cm[i].getMetaValue("rank") << "\n";
double mz_da = mztol * pc_mzs[i] / 1e6; // mz tolerance in Dalton
double rt_start = pc_ms2_rts[i] - rttol / 2.0;
// get area iterator (is MS1 only!) for rt and mz window
PeakMap::ConstAreaIterator it1 = exp1.areaBeginConst(pc_ms2_rt - rttol / 2, pc_ms2_rt + rttol / 2, pc_mz - mz_da, pc_mz + mz_da);
PeakMap::ConstAreaIterator it2 = exp2.areaBeginConst(pc_ms2_rt - rttol / 2, pc_ms2_rt + rttol / 2, pc_mz - mz_da, pc_mz + mz_da);
// determine maximum number of MS1 scans in retention time window
set<double> rts1;
set<double> rts2;
for (; it1 != exp1.areaEndConst(); ++it1)
{
rts1.insert(it1.getRT());
}
for (; it2 != exp2.areaEndConst(); ++it2)
{
rts2.insert(it2.getRT());
}
Size length = std::max(rts1.size(), rts2.size()) / 2.0;
//cout << length << endl;
if (length == 0)
{
cerr << "WARNING: no MS1 scans in retention time window found in both maps (mz: " << pc_mzs[i] << " / rt: " << pc_ms2_rts[i] << ")" << endl;
continue;
}
vector<double> XIC1(length, 0.0);
vector<double> XIC2(length, 0.0);
it1 = exp1.areaBeginConst(pc_ms2_rt - rttol / 2, pc_ms2_rt + rttol / 2, pc_mz - mz_da, pc_mz + mz_da);
it2 = exp2.areaBeginConst(pc_ms2_rt - rttol / 2, pc_ms2_rt + rttol / 2, pc_mz - mz_da, pc_mz + mz_da);
for (; it1 != exp1.areaEndConst(); ++it1)
{
double relative_rt = (it1.getRT() - rt_start) / rttol;
Size bin = relative_rt * (length - 1);
if (bin >= length) // clamp to the last bin before accessing the XIC
{
bin = length - 1;
}
XIC1[bin] += it1->getIntensity();
}
for (; it2 != exp2.areaEndConst(); ++it2)
{
double relative_rt = (it2.getRT() - rt_start) / rttol;
Size bin = relative_rt * (length - 1);
if (bin >= length)
{
bin = length - 1;
}
XIC2[bin] += it2->getIntensity();
}
double total_intensity1 = std::accumulate(XIC1.begin(), XIC1.end(), 0.0);
double total_intensity2 = std::accumulate(XIC2.begin(), XIC2.end(), 0.0);
double ratio = total_intensity2 / (total_intensity1 + 1);
//cout << pc_ms2_rt << "/" << pc_mz << " has ratio: " << ratio << " determined on " << length << " bins" << endl;
if (ratio < 1.0 / fold_change)
{
control_XIC_larger.push_back(pc_ms2_rt);
}
else if (ratio > fold_change)
{
treatment_XIC_larger.push_back(pc_ms2_rt);
}
else
{
indifferent_XICs.push_back(pc_ms2_rt);
continue;
}
/*
for (Size k = 0; k != length; ++k)
{
cout << k << ": " << rt_start + rttol / length * k << ": " << XIC1[k] << " " << XIC2[k] << endl;
}
*/
}
cout << "control larger: " << control_XIC_larger.size() << " treatment larger: " << treatment_XIC_larger.size() << " indifferent: " << indifferent_XICs.size() << endl;
return;
}
ExitCodes main_(int, const char**) override
{
// Parameter parsing
const string control_mzml(getStringOption_("control"));
const string treatment_mzml(getStringOption_("treatment"));
const string out_mzml(getStringOption_("out"));
const double mz_tolerance_ppm = getDoubleOption_("mz_tol");
const double fold_change = getDoubleOption_("fold_change");
const double rt_tolerance_s = getDoubleOption_("rt_tol");
// load experiments
PeakMap exp_control;
FileHandler().loadExperiment(control_mzml, exp_control, {FileTypes::MZML}, log_type_);
PeakMap exp_treatment;
FileHandler().loadExperiment(treatment_mzml, exp_treatment, {FileTypes::MZML}, log_type_);
// extract precursor mz and rts
vector<double> pc_mzs;
vector<double> pc_ms2_rts;
for (Size i = 0; i != exp_treatment.size(); ++i)
{
if (exp_treatment[i].getMSLevel() == 2)
{
if (!exp_treatment[i].getPrecursors().empty())
{
// cout << i << endl;
double pc_mz = exp_treatment[i].getPrecursors()[0].getMZ();
double ms2_rt = exp_treatment[i].getRT(); // use rt of MS2
pc_mzs.push_back(pc_mz);
pc_ms2_rts.push_back(ms2_rt);
}
}
}
vector<double> control_XIC_larger_rts;
vector<double> treatment_XIC_larger_rts;
vector<double> indifferent_XICs_rts;
filterByFoldChange(exp_control, exp_treatment,
pc_ms2_rts, pc_mzs,
rt_tolerance_s, mz_tolerance_ppm, fold_change,
control_XIC_larger_rts,
treatment_XIC_larger_rts,
indifferent_XICs_rts);
PeakMap exp_out = exp_treatment;
exp_out.clear(false); // don't clear meta-data
for (Size i = 0; i != exp_treatment.size(); ++i)
{
Size ms_level = exp_treatment[i].getMSLevel();
if (ms_level == 1)
{
exp_out.addSpectrum(exp_treatment[i]);
continue;
}
else if (ms_level == 2)
{
// determine if pc is in list -> passed
double rt = exp_treatment[i].getRT();
for (Size j = 0; j != treatment_XIC_larger_rts.size(); ++j)
{
if (fabs(rt - treatment_XIC_larger_rts[j]) <= 0.001)
{
exp_out.addSpectrum(exp_treatment[i]);
break;
}
}
}
}
FileHandler().storeExperiment(out_mzml, exp_out, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPRNPxlXICFilter tool;
return tool.main(argc, argv);
}
///@endcond
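The fold-change decision inside filterByFoldChange() is a simple three-way test on the intensity ratio. Isolated as a hypothetical helper:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper mirroring the ratio test in filterByFoldChange():
// ratio is total treatment intensity over (total control intensity + 1).
std::string classifyRatio(double ratio, double fold_change)
{
  if (ratio < 1.0 / fold_change) return "control_XIC_larger";
  if (ratio > fold_change) return "treatment_XIC_larger";
  return "indifferent";
}
```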
// ==========================================================================
// OpenMS/OpenMS: src/topp/SpectraFilterNLargest.cpp (C++, 135 lines)
// ==========================================================================
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer$
// $Authors: $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/FILTERING/NLargest.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <typeinfo>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_SpectraFilterNLargest SpectraFilterNLargest
@brief Keeps only the n largest peaks per spectrum.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → SpectraFilterNLargest →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
</table>
</CENTER>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SpectraFilterNLargest.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SpectraFilterNLargest.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSpectraFilterNLargest :
public TOPPBase
{
public:
TOPPSpectraFilterNLargest() :
TOPPBase("SpectraFilterNLargest", "Keeps only the n largest peaks per spectrum.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file ");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output file ");
setValidFormats_("out", ListUtils::create<String>("mzML"));
// register one section for each algorithm
registerSubsection_("algorithm", "Algorithm parameter subsection.");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
return NLargest().getParameters();
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
//input/output files
String in(getStringOption_("in"));
String out(getStringOption_("out"));
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
//-------------------------------------------------------------
// if meta data arrays are present, remove them and warn
//-------------------------------------------------------------
if (exp.clearMetaDataArrays())
{
writeLogWarn_("Warning: Spectrum meta data arrays cannot be sorted. They are deleted.");
}
//-------------------------------------------------------------
// filter
//-------------------------------------------------------------
Param filter_param = getParam_().copy("algorithm:", true);
writeDebug_("Used filter parameters", filter_param, 3);
NLargest filter;
filter.setParameters(filter_param);
filter.filterPeakMap(exp);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::FILTERING));
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPSpectraFilterNLargest tool;
return tool.main(argc, argv);
}
/// @endcond
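The core of the NLargest filter applied above can be sketched on bare intensity values (keepNLargest is an illustrative helper; the real filter operates on MSSpectrum peaks and preserves their metadata):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Keep only the n most intense values of a spectrum's intensity array,
// dropping everything else. partial_sort moves the n largest values
// (in descending order) to the front before truncating.
std::vector<double> keepNLargest(std::vector<double> intensities, std::size_t n)
{
  if (intensities.size() <= n) return intensities;
  std::partial_sort(intensities.begin(), intensities.begin() + n,
                    intensities.end(), std::greater<double>());
  intensities.resize(n);
  return intensities;
}
```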
// ==========================================================================
// OpenMS/OpenMS: src/topp/XMLValidator.cpp (C++, 170 lines)
// ==========================================================================
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/VALIDATORS/XMLValidator.h>
#include <OpenMS/FORMAT/IdXMLFile.h>
#include <OpenMS/FORMAT/PepXMLFile.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/MzIdentMLFile.h>
#include <OpenMS/FORMAT/ConsensusXMLFile.h>
#include <OpenMS/FORMAT/ParamXMLFile.h>
#include <OpenMS/FORMAT/MzDataFile.h>
#include <OpenMS/FORMAT/MzXMLFile.h>
#include <OpenMS/FORMAT/FeatureXMLFile.h>
#include <OpenMS/FORMAT/TraMLFile.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_XMLValidator XMLValidator
@brief Validates XML files against an XSD schema.
When a schema file is given, the input file is simply validated against the schema.
When no schema file is given, the tool tries to determine the file type and
validates the file against the latest schema version.
@note XML schema files for the %OpenMS XML formats and several other XML
formats can be found in the folder
OpenMS/share/OpenMS/SCHEMAS/
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_XMLValidator.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_XMLValidator.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPXMLValidator :
public TOPPBase
{
public:
TOPPXMLValidator() :
TOPPBase("XMLValidator", "Validates XML files against an XSD schema.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "file to validate");
setValidFormats_("in", ListUtils::create<String>("mzML,mzData,featureXML,mzid,idXML,consensusXML,mzXML,ini,pepXML,traML,xml"));
registerInputFile_("schema", "<file>", "", "schema to validate against.\nIf no schema is given, the file is validated against the latest schema of the file type.", false);
setValidFormats_("schema", ListUtils::create<String>("xsd"));
}
ExitCodes main_(int, const char**) override
{
String in = getStringOption_("in");
String schema = getStringOption_("schema");
bool valid = false;
if (!schema.empty()) //schema explicitly given
{
valid = XMLValidator().isValid(in, schema);
}
else //no schema given
{
//determine input type
FileTypes::Type in_type = FileHandler::getType(in);
if (in_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine input file type and no xsd schema was provided!");
return PARSE_ERROR;
}
cout << endl << "Validating " << FileTypes::typeToName(in_type) << " file";
switch (in_type)
{
case FileTypes::MZDATA:
cout << " against schema version " << MzDataFile().getVersion() << endl;
valid = MzDataFile().isValid(in, std::cerr);
break;
case FileTypes::FEATUREXML:
cout << " against schema version " << FeatureXMLFile().getVersion() << endl;
valid = FeatureXMLFile().isValid(in, std::cerr);
break;
case FileTypes::IDXML:
cout << " against schema version " << IdXMLFile().getVersion() << endl;
valid = IdXMLFile().isValid(in, std::cerr);
break;
case FileTypes::CONSENSUSXML:
cout << " against schema version " << ConsensusXMLFile().getVersion() << endl;
valid = ConsensusXMLFile().isValid(in, std::cerr);
break;
case FileTypes::MZXML:
cout << " against schema version " << MzXMLFile().getVersion() << endl;
valid = MzXMLFile().isValid(in, std::cerr);
break;
case FileTypes::INI:
cout << " against schema version " << ParamXMLFile().getVersion() << endl;
valid = ParamXMLFile().isValid(in, std::cerr);
break;
case FileTypes::PEPXML:
cout << " against schema version " << PepXMLFile().getVersion() << endl;
valid = PepXMLFile().isValid(in, std::cerr);
break;
case FileTypes::MZML:
cout << " against schema version " << MzMLFile().getVersion() << endl;
valid = MzMLFile().isValid(in, std::cerr);
break;
case FileTypes::TRAML:
cout << " against schema version " << TraMLFile().getVersion() << endl;
valid = TraMLFile().isValid(in, std::cerr);
break;
default:
cout << endl << "Aborted: Validation of this file type is not supported!" << endl;
return PARSE_ERROR;
}
}
//Result
if (valid)
{
cout << "Success: the file is valid!" << endl;
return EXECUTION_OK;
}
else
{
cout << "Failed: errors are listed above!" << endl;
return PARSE_ERROR;
}
}
};
int main(int argc, const char** argv)
{
TOPPXMLValidator tool;
return tool.main(argc, argv);
}
/// @endcond
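The switch in main_() effectively defines the set of file types that can be validated without an explicit schema. Reduced to a membership test (validationSupported is an illustrative helper, not OpenMS API):

```cpp
#include <cassert>
#include <set>
#include <string>

// The dispatch switch above, reduced to a set membership test over the
// file types XMLValidator can validate without a user-provided schema.
bool validationSupported(const std::string& type_name)
{
  static const std::set<std::string> supported = {
    "mzData", "featureXML", "idXML", "consensusXML", "mzXML",
    "ini", "pepXML", "mzML", "traML"};
  return supported.count(type_name) > 0;
}
```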
// ==========================================================================
// OpenMS/OpenMS: src/topp/SageAdapter.cpp (C++, 882 lines)
// ==========================================================================
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/SearchEngineBase.h>
#include <OpenMS/ANALYSIS/ID/PeptideIndexing.h>
#include <OpenMS/ANALYSIS/ID/OpenSearchModificationAnalysis.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/PepXMLFile.h>
#include <OpenMS/FORMAT/IdXMLFile.h>
#include <OpenMS/FORMAT/ControlledVocabulary.h>
#include <OpenMS/FORMAT/PercolatorInfile.h>
#include <OpenMS/FORMAT/HANDLERS/IndexedMzMLDecoder.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ResidueDB.h>
#include <OpenMS/CHEMISTRY/ResidueModification.h>
#include <OpenMS/CHEMISTRY/ModifiedPeptideGenerator.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/SYSTEM/File.h>
#include <fstream>
#include <regex>
#include <QStringList>
#include <chrono>
#include <map>
#include <vector>
#include <algorithm>
#include <cmath>
#include <numeric>
#include <boost/math/distributions/normal.hpp>
using namespace OpenMS;
using namespace std;
using boost::math::normal;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SageAdapter SageAdapter
@brief Identifies peptides in MS/MS spectra via sage.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → SageAdapter →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any signal-/preprocessing tool @n (in mzML format)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
@em Sage must be installed before this wrapper can be used.
Only the closed-search identification mode of Sage is supported by this adapter.
Currently, neither the "wide window" (open or DIA) mode nor the "chimeric" mode is supported,
due to limitations in OpenMS' data structures and file formats.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SageAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SageAdapter.html
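
The adapter generates a Sage JSON configuration file from the TOPP parameters by filling tags in an internal template.
A minimal sketch of the resulting configuration is shown below; the values are illustrative defaults only, and the exact
layout depends on the Sage version in use:

```json
{
  "database": {
    "bucket_size": 8192,
    "enzyme": { "missed_cleavages": 2, "min_len": 5, "max_len": 50,
                "cleave_at": "KR", "restrict": "P", "c_terminal": true },
    "static_mods": { "C": 57.021464 },
    "variable_mods": { "M": [15.994915] },
    "generate_decoys": false,
    "decoy_tag": "DECOY_"
  },
  "precursor_tol": { "ppm": [-6.0, 6.0] },
  "fragment_tol": { "ppm": [-20.0, 20.0] },
  "deisotope": false,
  "report_psms": 1
}
```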
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
#define CHRONOSET
class TOPPSageAdapter :
public SearchEngineBase
{
public:
TOPPSageAdapter() :
SearchEngineBase("SageAdapter", "Annotates MS/MS spectra using Sage.", true,
{
{"Michael Lazear",
"Sage: An Open-Source Tool for Fast Proteomics Searching and Quantification at Scale",
"J. Proteome Res. 2023, 22, 11, 3652–3659",
"https://doi.org/10.1021/acs.jproteome.3c00486"}
})
{
}
// Note: Modification analysis functionality has been moved to OpenSearchModificationAnalysis class
protected:
// create a template-based configuration file for sage
// variable values correspond to sage parameter that can be configured via TOPP tool parameter.
// values will be pasted into the config_template at the corresponding tag. E.g. bucket_size at tag ##bucket_size##
static constexpr size_t bucket_size = 8192;
static constexpr size_t min_len = 5;
static constexpr size_t max_len = 50;
static constexpr size_t missed_cleavages = 2;
static constexpr double fragment_min_mz = 200.0;
static constexpr double fragment_max_mz = 2000.0;
static constexpr double peptide_min_mass = 500.0;
static constexpr double peptide_max_mass = 5000.0;
static constexpr size_t min_ion_index = 2;
static constexpr size_t max_variable_mods = 2;
const std::string precursor_tol_unit = "ppm";
static constexpr double precursor_tol_left = -6.0;
static constexpr double precursor_tol_right = 6.0;
const std::string fragment_tol_unit = "ppm";
static constexpr double fragment_tol_left = -10.0;
static constexpr double fragment_tol_right = 10.0;
const std::string isotope_errors = "-1, 3";
const std::string charges_if_not_annotated = "2, 5";
static constexpr size_t min_matched_peaks = 6;
static constexpr size_t report_psms = 1;
static constexpr size_t min_peaks = 15;
static constexpr size_t max_peaks = 150;
std::string config_template = R"(
{
"database": {
"bucket_size": ##bucket_size##,
"enzyme": {
"missed_cleavages": ##missed_cleavages##,
"min_len": ##min_len##,
"max_len": ##max_len##,
##enzyme_details##
},
"fragment_min_mz": ##fragment_min_mz##,
"fragment_max_mz": ##fragment_max_mz##,
"peptide_min_mass": ##peptide_min_mass##,
"peptide_max_mass": ##peptide_max_mass##,
"ion_kinds": ["b", "y"],
"min_ion_index": ##min_ion_index##,
"static_mods": {
##static_mods##
},
"variable_mods": {
##variable_mods##
},
"max_variable_mods": ##max_variable_mods##,
"generate_decoys": false,
"decoy_tag": "##decoy_prefix##"
},
"precursor_tol": {
"##precursor_tol_unit##": [
##precursor_tol_left##,
##precursor_tol_right##
]
},
"fragment_tol": {
"##fragment_tol_unit##": [
##fragment_tol_left##,
##fragment_tol_right##
]
},
"precursor_charge": [
##charges_if_not_annotated##
],
"isotope_errors": [
##isotope_errors##
],
"deisotope": ##deisotope##,
"chimera": ##chimera##,
"predict_rt": ##predict_rt##,
"min_peaks": ##min_peaks##,
"max_peaks": ##max_peaks##,
"min_matched_peaks": ##min_matched_peaks##,
"report_psms": ##report_psms##,
"wide_window": ##wide_window##
}
)";
// formats a single mod entry as sage json entry
String getModDetails(const ResidueModification* mod, const Residue* res)
{
String origin;
if (mod->getTermSpecificity() == ResidueModification::N_TERM)
{
origin += "^";
}
else if (mod->getTermSpecificity() == ResidueModification::C_TERM)
{
origin += "$";
}
else if (mod->getTermSpecificity() == ResidueModification::PROTEIN_N_TERM)
{
origin += "[";
}
else if (mod->getTermSpecificity() == ResidueModification::PROTEIN_C_TERM)
{
origin += "]";
}
if (res != nullptr && res->getOneLetterCode() != "X") // omit letter for "any AA"
{
origin += res->getOneLetterCode();
}
return String("\"") + origin + "\": " + String(mod->getDiffMonoMass());
}
// formats all mod entries into a single multi-line json string
String getModDetailsString(const OpenMS::ModifiedPeptideGenerator::MapToResidueType& mod_map)
{
String mod_details;
for (auto it = mod_map.val.begin(); it != mod_map.val.end(); ++it)
{
const auto& mod = it->first;
const auto& res = it->second;
mod_details += getModDetails(mod, res);
if (std::next(it) != mod_map.val.end())
{
mod_details += ",\n";
}
}
return mod_details;
}
// impute values into config_template
// TODO just iterate over all options??
String imputeConfigIntoTemplate()
{
String config_file = config_template;
config_file.substitute("##bucket_size##", String(getIntOption_("bucket_size")));
config_file.substitute("##min_len##", String(getIntOption_("min_len")));
config_file.substitute("##max_len##", String(getIntOption_("max_len")));
config_file.substitute("##missed_cleavages##", String(getIntOption_("missed_cleavages")));
config_file.substitute("##fragment_min_mz##", String(getDoubleOption_("fragment_min_mz")));
config_file.substitute("##fragment_max_mz##", String(getDoubleOption_("fragment_max_mz")));
config_file.substitute("##peptide_min_mass##", String(getDoubleOption_("peptide_min_mass")));
config_file.substitute("##peptide_max_mass##", String(getDoubleOption_("peptide_max_mass")));
config_file.substitute("##min_ion_index##", String(getIntOption_("min_ion_index")));
config_file.substitute("##max_variable_mods##", String(getIntOption_("max_variable_mods")));
config_file.substitute("##precursor_tol_unit##", getStringOption_("precursor_tol_unit") == "Da" ? "da" : "ppm"); // sage might expect lower-case "da"
config_file.substitute("##precursor_tol_left##", String(getDoubleOption_("precursor_tol_left")));
config_file.substitute("##precursor_tol_right##", String(getDoubleOption_("precursor_tol_right")));
config_file.substitute("##fragment_tol_unit##", getStringOption_("fragment_tol_unit") == "Da" ? "da" : "ppm"); // sage might expect lower-case "da"
config_file.substitute("##fragment_tol_left##", String(getDoubleOption_("fragment_tol_left")));
config_file.substitute("##fragment_tol_right##", String(getDoubleOption_("fragment_tol_right")));
config_file.substitute("##isotope_errors##", getStringOption_("isotope_error_range"));
config_file.substitute("##charges_if_not_annotated##", getStringOption_("charges"));
config_file.substitute("##min_matched_peaks##", String(getIntOption_("min_matched_peaks")));
config_file.substitute("##min_peaks##", String(getIntOption_("min_peaks")));
config_file.substitute("##max_peaks##", String(getIntOption_("max_peaks")));
config_file.substitute("##report_psms##", String(getIntOption_("report_psms")));
config_file.substitute("##deisotope##", getStringOption_("deisotope"));
config_file.substitute("##chimera##", getStringOption_("chimera"));
config_file.substitute("##predict_rt##", getStringOption_("predict_rt"));
config_file.substitute("##decoy_prefix##", getStringOption_("decoy_prefix"));
config_file.substitute("##wide_window##", getStringOption_("wide_window"));
// map the OpenMS enzyme name to Sage's JSON enzyme definition
String enzyme = getStringOption_("enzyme");
String enzyme_details;
if (enzyme == "Trypsin")
{
enzyme_details =
R"("cleave_at": "KR",
"restrict": "P",
"c_terminal": true)";
}
else if (enzyme == "Trypsin/P")
{
enzyme_details =
R"("cleave_at": "KR",
"restrict": null,
"c_terminal": true)";
}
else if (enzyme == "Chymotrypsin")
{
enzyme_details =
R"("cleave_at": "FWYL",
"restrict": "P",
"c_terminal": true)";
}
else if (enzyme == "Chymotrypsin/P")
{
enzyme_details =
R"("cleave_at": "FWYL",
"restrict": null,
"c_terminal": true)";
}
else if (enzyme == "Arg-C")
{
enzyme_details =
R"("cleave_at": "R",
"restrict": "P",
"c_terminal": true)";
}
else if (enzyme == "Arg-C/P")
{
enzyme_details =
R"("cleave_at": "R",
"restrict": null,
"c_terminal": true)";
}
else if (enzyme == "Lys-C")
{
enzyme_details =
R"("cleave_at": "K",
"restrict": "P",
"c_terminal": true)";
}
else if (enzyme == "Lys-C/P")
{
enzyme_details =
R"("cleave_at": "K",
"restrict": null,
"c_terminal": true)";
}
else if (enzyme == "Lys-N")
{
enzyme_details =
R"("cleave_at": "K",
"restrict": null,
"c_terminal": false)";
}
else if (enzyme == "no cleavage")
{
enzyme_details =
R"("cleave_at": "$")";
}
else if (enzyme == "unspecific cleavage")
{
enzyme_details =
R"("cleave_at": "")";
}
else if (enzyme == "glutamyl endopeptidase")
{
enzyme_details =
R"("cleave_at": "E",
"restrict": "E",
"c_terminal":true)";
}
else if (enzyme == "leukocyte elastase")
{
enzyme_details =
R"("cleave_at": "ALIV",
"restrict": null,
"c_terminal":true)";
}
else
{ // without a valid enzyme definition the generated JSON would be malformed
  throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
    "Enzyme '" + enzyme + "' is not supported by the SageAdapter.");
}
config_file.substitute("##enzyme_details##", enzyme_details);
auto fixed_mods = getStringList_("fixed_modifications");
set<String> fixed_unique(fixed_mods.begin(), fixed_mods.end());
fixed_mods.assign(fixed_unique.begin(), fixed_unique.end());
ModifiedPeptideGenerator::MapToResidueType fixed_mod_map = ModifiedPeptideGenerator::getModifications(fixed_mods); // std::unordered_map<const ResidueModification*, const Residue*> val;
String static_mods_details = getModDetailsString(fixed_mod_map);
auto variable_mods = getStringList_("variable_modifications");
set<String> variable_unique(variable_mods.begin(), variable_mods.end());
variable_mods.assign(variable_unique.begin(), variable_unique.end());
ModifiedPeptideGenerator::MapToResidueType variable_mod_map = ModifiedPeptideGenerator::getModifications(variable_mods);
String variable_mods_details = getModDetailsString(variable_mod_map);
// Sage v0.15 and later expect variable modification masses as JSON arrays, e.g. "M": [15.9949]
StringList variable_mods_entries;
if (!variable_mods_details.empty())
{
  variable_mods_details.split(",", variable_mods_entries);
}
String variable_mods_as_arrays;
for (Size i = 0; i < variable_mods_entries.size(); ++i)
{
  const String& entry = variable_mods_entries[i];
  Size colon_pos = entry.find(':');
  if (colon_pos == String::npos) continue; // defensive: skip malformed entries
  if (!variable_mods_as_arrays.empty()) variable_mods_as_arrays += ",";
  variable_mods_as_arrays += entry.prefix(colon_pos + 1) + " [" + entry.substr(colon_pos + 1).trim() + "]";
}
config_file.substitute("##static_mods##", static_mods_details);
config_file.substitute("##variable_mods##", variable_mods_as_arrays);
return config_file;
}
std::tuple<std::string, std::string, std::string> getVersionNumber_(const std::string& multi_line_input)
{
  std::regex version_regex("Version ([0-9]+)\\.([0-9]+)\\.([0-9]+)");
  std::sregex_iterator it(multi_line_input.begin(), multi_line_input.end(), version_regex);
  if (it == std::sregex_iterator()) // no match -> dereferencing 'it' would be undefined behavior
  {
    throw Exception::ParseError(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, multi_line_input, "Could not extract the Sage version from its '--help' output.");
  }
  std::cout << "Found Sage version string: " << it->str() << std::endl;
  return make_tuple(it->str(1), it->str(2), it->str(3)); // major, minor, patch
}
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "Input files separated by blank");
setValidFormats_("in", { "mzML" } );
registerOutputFile_("out", "<file>", "", "Single output file containing all search results.", true, false);
setValidFormats_("out", { "idXML" } );
registerInputFile_("database", "<file>", "", "FASTA file", true, false, {"skipexists"});
setValidFormats_("database", { "FASTA" } );
registerInputFile_("sage_executable", "<executable>",
// choose the default value according to the platform where it will be executed
#ifdef OPENMS_WINDOWSPLATFORM
"sage.exe",
#else
"sage",
#endif
"The Sage executable. Provide a full or relative path, or make sure it can be found in your PATH environment.", true, false, {"is_executable"});
registerStringOption_("decoy_prefix", "<prefix>", "DECOY_", "Prefix on protein accession used to distinguish decoy from target proteins. Decoy proteins in the FASTA file should have this prefix in their accession. NOTE: Decoy suffix is currently not supported by Sage.", false, false);
registerIntOption_("batch_size", "<int>", 0, "Number of files to load and search in parallel. Setting this to 0 (default) uses an automatic value (typically number of CPUs/2). Default: 0", false, false);
registerDoubleOption_("precursor_tol_left", "<double>", -6.0, "Start (left side) of the precursor tolerance window w.r.t. precursor location. This value is relative to the experimental precursor mass and used to define the lower bound of the search window. Must be negative (e.g., -6 ppm means 6 ppm below the observed mass).", false, false);
registerDoubleOption_("precursor_tol_right", "<double>", 6.0, "End (right side) of the precursor tolerance window w.r.t. precursor location. This value is added to the experimental precursor mass to define the upper bound of the search window. Must be positive (e.g., 6 ppm means 6 ppm above the observed mass).", false, false);
registerStringOption_("precursor_tol_unit", "<unit>", "ppm", "Unit of precursor tolerance (ppm or Da)", false, false);
setValidStrings_("precursor_tol_unit", ListUtils::create<String>("ppm,Da"));
registerDoubleOption_("fragment_tol_left", "<double>", -20.0, "Start (left side) of the fragment tolerance window w.r.t. fragment location. This value reduces the experimental fragment mass to define the lower bound of the search window. Must be negative (e.g., -20 ppm means 20 ppm below the observed mass).", false, false);
registerDoubleOption_("fragment_tol_right", "<double>", 20.0, "End (right side) of the fragment tolerance window w.r.t. fragment location. This value is added to the experimental fragment mass to define the upper bound of the search window. Must be positive (e.g., 20 ppm means 20 ppm above the observed mass).", false, false);
registerStringOption_("fragment_tol_unit", "<unit>", "ppm", "Unit of fragment tolerance (ppm or Da)", false, false);
setValidStrings_("fragment_tol_unit", ListUtils::create<String>("ppm,Da"));
// add advanced options
registerIntOption_("min_matched_peaks", "<int>", min_matched_peaks, "Minimum number of b+y ions required to match for PSM to be reported. Default: 6", false, true);
registerIntOption_("min_peaks", "<int>", min_peaks, "Minimum number of peaks required for a spectrum to be considered. Spectra with fewer peaks will be ignored. Default: 15", false, true);
registerIntOption_("max_peaks", "<int>", max_peaks, "Take the top N most intense MS2 peaks to search. Default: 150", false, true);
registerIntOption_("report_psms", "<int>", report_psms, "Number of peptide-spectrum matches (PSMs) to report for each spectrum. The top N scoring PSMs will be reported. Values higher than 1 can be useful for chimeric spectra but may affect downstream statistical analysis. Default: 1", false, true);
registerIntOption_("bucket_size", "<int>", bucket_size, "How many fragments are in each internal mass bucket. Default: 8192 (optimal for high-resolution data). Try increasing it to 32768 or 65536 for low-resolution data. See also: fragment_tol_*", false, true);
registerIntOption_("min_len", "<int>", min_len, "Minimum peptide length (in amino acids). Default: 5", false, true);
registerIntOption_("max_len", "<int>", max_len, "Maximum peptide length (in amino acids). Default: 50", false, true);
registerIntOption_("missed_cleavages", "<int>", missed_cleavages, "Maximum number of missed enzymatic cleavages to allow in peptide generation. Default: 2", false, true);
registerDoubleOption_("fragment_min_mz", "<double>", fragment_min_mz, "Minimum fragment m/z to consider. Fragment ions below this m/z will be ignored. Default: 200.0", false, true);
registerDoubleOption_("fragment_max_mz", "<double>", fragment_max_mz, "Maximum fragment m/z to consider. Fragment ions above this m/z will be ignored. Default: 2000.0", false, true);
registerDoubleOption_("peptide_min_mass", "<double>", peptide_min_mass, "Minimum monoisotopic peptide mass to consider for in silico digestion. Peptides below this mass will be excluded from the search database. Default: 500.0", false, true);
registerDoubleOption_("peptide_max_mass", "<double>", peptide_max_mass, "Maximum monoisotopic peptide mass to consider for in silico digestion. Peptides above this mass will be excluded from the search database. Default: 5000.0", false, true);
registerIntOption_("min_ion_index", "<int>", min_ion_index, "Minimum ion index to consider for preliminary scoring. This parameter controls which fragment ions are used in preliminary scoring. Default: 2 (skips b1/b2/y1/y2 ions, which are often missing or unreliable). Setting this to 1 would only skip b1/y1 ions. Does not affect the final scoring of PSMs.", false, true);
registerIntOption_("max_variable_mods", "<int>", max_variable_mods, "Maximum number of variable modifications allowed per peptide. Default: 2", false, true);
registerStringOption_("isotope_error_range", "<start,end>", isotope_errors, "Range of C13 isotope errors to consider for precursor matching, specified as 'start,end' (e.g., '-1,3'). For a range of '-1,3', Sage will consider all isotope errors from -1 to +3 (i.e., -1, 0, 1, 2, 3). This is useful when the monoisotopic peak may not be selected. Can include negative values. Default: '-1,3'. Note: Searching with isotope errors is slower than using a wider precursor tolerance.", false, true);
registerStringOption_("charges", "<start,end>", charges_if_not_annotated, "Range of precursor charge states to consider if not annotated in the file, specified as 'start,end' (e.g., '2,5'). For a range of '2,5', Sage will consider charge states 2, 3, 4, and 5. This is only used when charge state information is missing from the input file. Default: '2,5'"
, false, true);
//Search Enzyme
vector<String> all_enzymes;
ProteaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("enzyme", "<cleavage site>", "Trypsin", "The enzyme used for peptide digestion.", false, false);
setValidStrings_("enzyme", all_enzymes);
//Modifications
vector<String> all_mods;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
registerStringList_("fixed_modifications", "<mods>", ListUtils::create<String>("Carbamidomethyl (C)", ','), "Fixed modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'", false);
setValidStrings_("fixed_modifications", all_mods);
registerStringList_("variable_modifications", "<mods>", ListUtils::create<String>("Oxidation (M)", ','), "Variable modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'", false);
setValidStrings_("variable_modifications", all_mods);
//FDR and misc
registerDoubleOption_("q_value_threshold", "<double>", 1, "The FDR (False Discovery Rate) threshold for filtering peptides. PSMs with q-values above this threshold will be excluded. Default: 1 (no filtering)", false, false);
registerStringOption_("annotate_matches", "<bool>", "true", "Whether fragment ion matches should be annotated in the output. This provides additional information about which theoretical ions matched experimental peaks. Default: true", false, false);
registerStringOption_("deisotope", "<bool>", "false", "Perform deisotoping and charge state deconvolution on MS2 spectra. Recommended for high-resolution MS2 data. May interfere with TMT-MS2 quantification. Default: false", false, false );
registerStringOption_("chimera", "<bool>", "false", "Enable chimeric spectra search mode. When enabled, multiple peptide identifications can be reported for each MS2 scan, useful for co-fragmenting peptides. Default: false", false, false );
registerStringOption_("predict_rt", "<bool>", "false", "Use retention time prediction model as a feature for machine learning scoring. Note: This is incompatible with label-free quantification (LFQ). Default: false", false, false );
registerStringOption_("wide_window", "<bool>", "false", "Enable wide-window/DIA search mode. When enabled, the precursor_tol parameter is ignored and a dynamic precursor tolerance is used. Default: false", false, false);
registerStringOption_("smoothing", "<bool>", "true", "Whether to smooth the PTM (post-translational modification) mass histogram and pick local maxima. If false, uses raw histogram data. Default: true", false, false);
// register peptide indexing parameter (with defaults for this search engine)
registerPeptideIndexingParameter_(PeptideIndexing().getParameters());
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
// Validate tolerance parameters
double precursor_tol_left = getDoubleOption_("precursor_tol_left");
double precursor_tol_right = getDoubleOption_("precursor_tol_right");
double fragment_tol_left = getDoubleOption_("fragment_tol_left");
double fragment_tol_right = getDoubleOption_("fragment_tol_right");
// Warn if tolerance parameters seem incorrect
if (precursor_tol_left > 0)
{
OPENMS_LOG_WARN << "WARNING: precursor_tol_left is positive (" << precursor_tol_left << "). "
<< "This parameter is used to reduce the experimental mass, so it should typically be negative. "
<< "A positive value will likely produce an incorrect search window." << std::endl;
}
if (precursor_tol_right < 0)
{
OPENMS_LOG_WARN << "WARNING: precursor_tol_right is negative (" << precursor_tol_right << "). "
<< "This parameter is ADDED to the experimental mass, so it should typically be positive. "
<< "A negative value will likely produce an incorrect search window." << std::endl;
}
if (fragment_tol_left > 0)
{
OPENMS_LOG_WARN << "WARNING: fragment_tol_left is positive (" << fragment_tol_left << "). "
<< "This parameter is used to reduce the experimental mass, so it should typically be negative. "
<< "A positive value will likely produce an incorrect search window." << std::endl;
}
if (fragment_tol_right < 0)
{
OPENMS_LOG_WARN << "WARNING: fragment_tol_right is negative (" << fragment_tol_right << "). "
<< "This parameter is ADDED to the experimental mass, so it should typically be positive. "
<< "A negative value will likely produce an incorrect search window." << std::endl;
}
// do this early, to see if Sage is installed
String sage_executable = getStringOption_("sage_executable");
OPENMS_LOG_INFO << "Sage executable: " << sage_executable << std::endl;
String proc_stdout, proc_stderr;
TOPPBase::ExitCodes exit_code = runExternalProcess_(sage_executable.toQString(), QStringList() << "--help", proc_stdout, proc_stderr, "");
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
auto major_minor_patch = getVersionNumber_(proc_stdout);
String sage_version = std::get<0>(major_minor_patch) + "." + std::get<1>(major_minor_patch) + "." + std::get<2>(major_minor_patch);
//-------------------------------------------------------------
// run sage
//-------------------------------------------------------------
StringList input_files = getStringList_("in");
String output_file = getStringOption_("out");
String output_folder = File::path(output_file);
String fasta_file = getStringOption_("database");
int batch = getIntOption_("batch_size");
int threads = getIntOption_("threads");
String decoy_prefix = getStringOption_("decoy_prefix");
// create config
String config = imputeConfigIntoTemplate();
// store config in config_file
OPENMS_LOG_INFO << "Creating temp file name..." << std::endl;
String config_file = File::getTempDirectory() + "/" + File::getUniqueName() + ".json";
OPENMS_LOG_INFO << "Creating Sage config file..." << config_file << std::endl;
ofstream config_stream(config_file.c_str());
config_stream << config;
config_stream.close();
// keep config file if debug mode is set
if (getIntOption_("debug") > 1)
{
String debug_config_file = output_folder + "/" + File::getUniqueName() + ".json";
ofstream debug_config_stream(debug_config_file.c_str());
debug_config_stream << config;
debug_config_stream.close();
}
String annotation_check;
QStringList arguments;
if (getStringOption_("annotate_matches") == "true")
{
arguments << config_file.toQString()
<< "-f" << fasta_file.toQString()
<< "-o" << output_folder.toQString()
<< "--annotate-matches"
<< "--write-pin";
}
else
{
arguments << config_file.toQString()
<< "-f" << fasta_file.toQString()
<< "-o" << output_folder.toQString()
<< "--write-pin";
}
if (batch >= 1) arguments << "--batch-size" << String(batch).toQString();
for (const auto& s : input_files) arguments << s.toQString();
OPENMS_LOG_INFO << "Sage command line: " << sage_executable << " " << arguments.join(' ').toStdString() << std::endl;
//std::chrono lines for testing/writing purposes only!
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
// Set RAYON_NUM_THREADS environment variable to control Sage's thread usage
// Only set if threads > 0; if threads == 0, let Rayon auto-detect (use all CPUs)
std::map<QString, QString> sage_env;
if (threads > 0)
{
sage_env["RAYON_NUM_THREADS"] = String(threads).toQString();
}
// Sage execution with the executable and the arguments StringList
exit_code = runExternalProcess_(sage_executable.toQString(), arguments, "", sage_env);
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
#ifdef CHRONOSET
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::seconds>(end - begin).count() << "[s]" << std::endl;
#endif
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
//-------------------------------------------------------------
// writing IdXML output
//-------------------------------------------------------------
// read the sage output
OPENMS_LOG_INFO << "Reading sage output..." << std::endl;
StringList filenames;
StringList extra_scores = {"ln(-poisson)", "ln(delta_best)", "ln(delta_next)",
"ln(matched_intensity_pct)", "longest_b", "longest_y",
"longest_y_pct", "matched_peaks", "scored_candidates"};
double FDR_threshold = getDoubleOption_("q_value_threshold");
PeptideIdentificationList peptide_identifications = PercolatorInfile::load(
  output_folder + "/results.sage.pin",
  true,
  "ln(hyperscore)", //TODO can we get sage's "sage_discriminant_score" out of the pin? Probably not. Suboptimal!
  extra_scores,
  filenames,
  decoy_prefix,
  FDR_threshold,
  true);
for (auto& id : peptide_identifications)
{
auto& hits = id.getHits();
for (auto& h : hits)
{
for (const auto& meta : extra_scores)
{
if (h.metaValueExists(meta))
{
h.setMetaValue("SAGE:" + meta, h.getMetaValue(meta));
h.removeMetaValue(meta);
}
}
}
}
bool smoothing = (getStringOption_("smoothing") == "true");
// Use shared modification analysis functionality
OpenSearchModificationAnalysis mod_analyzer;
auto modification_summaries = mod_analyzer.analyzeModifications(
peptide_identifications,
0.01, // precursor_mass_tolerance
false, // precursor_mass_tolerance_unit_ppm (0.01 Da)
smoothing,
output_file
);
// remove hits without charge state assigned or charge outside of default range (fix for downstream bugs). TODO: remove if all charges annotated in sage
IDFilter::filterPeptidesByCharge(peptide_identifications, 2, numeric_limits<int>::max());
if (filenames.empty()) filenames = getStringList_("in");
// TODO: allow optional split and create multiple idXMLs one per input file
vector<ProteinIdentification> protein_identifications(1, ProteinIdentification());
writeDebug_("write idXMLFile", 1);
protein_identifications[0].setPrimaryMSRunPath(filenames);
protein_identifications[0].setDateTime(DateTime::now());
protein_identifications[0].setSearchEngine("Sage");
protein_identifications[0].setSearchEngineVersion(sage_version);
DateTime now = DateTime::now();
String identifier("Sage_" + now.get());
protein_identifications[0].setIdentifier(identifier);
for (auto & pid : peptide_identifications)
{
pid.setIdentifier(identifier);
pid.setScoreType("ln(hyperscore)");
pid.setHigherScoreBetter(true);
}
auto& search_parameters = protein_identifications[0].getSearchParameters();
// protein_identifications[0].getSearchParameters().enzyme_term_specificity = static_cast<EnzymaticDigestion::Specificity>(num_enzyme_termini[getStringOption_("num_enzyme_termini")]);
protein_identifications[0].getSearchParameters().db = getStringOption_("database");
// add extra scores for percolator rescoring
vector<String> percolator_features = { "score" };
for (auto s : extra_scores) percolator_features.push_back("SAGE:" + s);
search_parameters.setMetaValue("extra_features", ListUtils::concatenate(percolator_features, ","));
auto enzyme = *ProteaseDB::getInstance()->getEnzyme(getStringOption_("enzyme"));
search_parameters.digestion_enzyme = enzyme; // needed for indexing
search_parameters.enzyme_term_specificity = EnzymaticDigestion::SPEC_FULL;
search_parameters.charges = "2:5"; // probably hard-coded in sage https://github.com/lazear/sage/blob/master/crates/sage/src/scoring.rs#L301
search_parameters.mass_type = ProteinIdentification::PeakMassType::MONOISOTOPIC;
search_parameters.fixed_modifications = getStringList_("fixed_modifications");
search_parameters.variable_modifications = getStringList_("variable_modifications");
search_parameters.missed_cleavages = getIntOption_("missed_cleavages");
search_parameters.fragment_mass_tolerance = (std::fabs(getDoubleOption_("fragment_tol_left")) + std::fabs(getDoubleOption_("fragment_tol_right"))) * 0.5;
search_parameters.precursor_mass_tolerance = (std::fabs(getDoubleOption_("precursor_tol_left")) + std::fabs(getDoubleOption_("precursor_tol_right"))) * 0.5;
search_parameters.precursor_mass_tolerance_ppm = getStringOption_("precursor_tol_unit") == "ppm";
search_parameters.fragment_mass_tolerance_ppm = getStringOption_("fragment_tol_unit") == "ppm";
// write all (!) parameters as metavalues to the search parameters
if (!protein_identifications.empty())
{
DefaultParamHandler::writeParametersToMetaValues(this->getParam_(), protein_identifications[0].getSearchParameters(), this->getToolPrefix());
}
// if "reindex" parameter is set to true: will perform reindexing
if (auto ret = reindex_(protein_identifications, peptide_identifications); ret != EXECUTION_OK) return ret;
map<String,unordered_map<int,String>> file2specnr2nativeid;
for (const auto& mzml : input_files)
{
// TODO stream mzml?
MzMLFile m;
MSExperiment exp;
auto opts = m.getOptions();
opts.setMSLevels({2,3});
opts.setFillData(false);
//opts.setMetadataOnly(true);
m.setOptions(opts);
m.load(mzml, exp);
String nIDType = "";
if (!exp.getSourceFiles().empty())
{
// TODO we could also guess the regex from the first nativeID if it is not stored here
// but I refuse to link to Boost::regex just for this
// Someone has to rework the API first!
nIDType = exp.getSourceFiles()[0].getNativeIDTypeAccession();
}
for (const auto& spec : exp)
{
const String& nID = spec.getNativeID();
int nr = SpectrumLookup::extractScanNumber(nID, nIDType);
if (nr >= 0)
{
auto [it, inserted] = file2specnr2nativeid.emplace(File::basename(mzml), unordered_map<int,String>({{nr,nID}}));
if (!inserted)
{
it->second.emplace(nr,nID);
}
}
}
}
map<Size, String> idxToFile;
StringList fnInRun;
protein_identifications[0].getPrimaryMSRunPath(fnInRun);
Size cnt = 0;
for (const auto& f : fnInRun)
{
idxToFile.emplace(cnt, f);
++cnt;
}
for (auto& id : peptide_identifications)
{
Int64 scanNrAsInt = 0;
try
{ // check if spectrum reference is a string that just contains a number
scanNrAsInt = id.getSpectrumReference().toInt64();
// no exception -> conversion to int was successful. Now lookup full native ID in corresponding file for given spectrum number.
// idxToFile values can be full paths but file2specnr2nativeid keys are basenames, so normalize first
String file_basename = File::basename(idxToFile[id.getMetaValue(Constants::UserParam::ID_MERGE_INDEX)]);
auto file_it = file2specnr2nativeid.find(file_basename);
if (file_it != file2specnr2nativeid.end())
{
id.setSpectrumReference(file_it->second.at(scanNrAsInt));
}
}
catch (...)
{ // spectrum reference is not a plain scan number; keep it unchanged
}
}
// Annotate FAIMS compensation voltage if present in any input file
// Pre-group peptide indices by file for efficient lookup (avoids O(files * peptides))
std::map<Size, std::vector<Size>> file_to_peptide_indices;
for (Size i = 0; i < peptide_identifications.size(); ++i)
{
const auto& pep = peptide_identifications[i];
if (pep.metaValueExists(Constants::UserParam::ID_MERGE_INDEX))
{
file_to_peptide_indices[pep.getMetaValue(Constants::UserParam::ID_MERGE_INDEX)].push_back(i);
}
}
for (const auto& mzml : input_files)
{
// Find file index for this mzML; skip files not referenced in the ID run
Size file_idx = 0;
bool file_idx_found = false;
for (const auto& [idx, fname] : idxToFile)
{
if (File::basename(fname) == File::basename(mzml))
{
file_idx = idx;
file_idx_found = true;
break;
}
}
if (!file_idx_found)
{
continue; // otherwise file_idx = 0 would wrongly match the first file
}
// Skip if no peptides for this file
auto it = file_to_peptide_indices.find(file_idx);
if (it == file_to_peptide_indices.end() || it->second.empty())
{
continue;
}
// Load mzML metadata (no peak data needed)
MzMLFile m;
MSExperiment exp_full;
auto opts = m.getOptions();
opts.setFillData(false);
m.setOptions(opts);
m.load(mzml, exp_full);
// Collect peptide IDs for this file
PeptideIdentificationList file_peptides;
file_peptides.reserve(it->second.size());
for (Size idx : it->second)
{
file_peptides.push_back(peptide_identifications[idx]);
}
// Annotate FAIMS and copy back
SpectrumMetaDataLookup::addMissingFAIMSToPeptideIDs(file_peptides, exp_full);
for (Size i = 0; i < file_peptides.size(); ++i)
{
if (file_peptides[i].metaValueExists(Constants::UserParam::FAIMS_CV))
{
peptide_identifications[it->second[i]].setMetaValue(
Constants::UserParam::FAIMS_CV,
file_peptides[i].getMetaValue(Constants::UserParam::FAIMS_CV));
}
}
}
IdXMLFile().store(output_file, protein_identifications, peptide_identifications);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPSageAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
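The scan-number resolution above builds a per-file table mapping bare scan numbers back to full native IDs. A minimal standalone sketch of that lookup, using plain STL containers instead of the OpenMS types (the names `lookupNativeID`, `ScanToNativeID`, and `FileToScans` are illustrative, not part of the OpenMS API):

```cpp
#include <string>
#include <unordered_map>

// Map a file basename to its scan-number -> native-ID table, mirroring
// file2specnr2nativeid in the adapter above.
using ScanToNativeID = std::unordered_map<int, std::string>;
using FileToScans = std::unordered_map<std::string, ScanToNativeID>;

// Returns the full native ID for (file, scan), or an empty string if either
// the file or the scan number is unknown.
std::string lookupNativeID(const FileToScans& table,
                           const std::string& file_basename, int scan)
{
  auto file_it = table.find(file_basename);
  if (file_it == table.end()) return "";
  auto scan_it = file_it->second.find(scan);
  return scan_it == file_it->second.end() ? "" : scan_it->second;
}
```

The adapter performs exactly this two-level lookup after converting the spectrum reference to an integer, falling back to the original reference when the conversion or lookup fails.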
// Copyright (c) 2002-present, The OpenMS Team -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/Param.h>
#include <OpenMS/KERNEL/MSSpectrum.h>
#include <OpenMS/METADATA/SpectrumSettings.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/CHEMISTRY/ProteaseDigestion.h>
#include <OpenMS/DATASTRUCTURES/ListUtilsIO.h>
#include <OpenMS/ANALYSIS/ID/PeptideIndexing.h>
#include <OpenMS/ANALYSIS/ID/FalseDiscoveryRate.h>
#include <OpenMS/PROCESSING/CALIBRATION/PrecursorCorrection.h>
#include <OpenMS/ANALYSIS/XLMS/OPXLSpectrumProcessingAlgorithms.h>
#include <OpenMS/CHEMISTRY/DecoyGenerator.h>
#include <OpenMS/FEATUREFINDER/FeatureFinderIdentificationAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/MorpheusScore.h>
#include <OpenMS/ANALYSIS/ID/HyperScore.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLDeisotoper.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLModificationsGenerator.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLReport.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLAnnotatedHit.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLAnnotateAndLocate.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLConstants.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLFDR.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLMarkerIonExtractor.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLFragmentAnnotationHelper.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLFragmentIonGenerator.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLParameterParsing.h>
#include <OpenMS/ANALYSIS/NUXL/NuXLPresets.h>
#include <OpenMS/CHEMISTRY/ElementDB.h>
#include <OpenMS/CHEMISTRY/ResidueDB.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ResidueModification.h>
#include <OpenMS/CHEMISTRY/ModifiedPeptideGenerator.h>
// preprocessing and filtering
#include <OpenMS/PROCESSING/CALIBRATION/InternalCalibration.h>
#include <OpenMS/ANALYSIS/ID/SimpleSearchEngineAlgorithm.h>
#include <OpenMS/ANALYSIS/QUANTITATION/KDTreeFeatureMaps.h>
#include <OpenMS/ANALYSIS/ID/PrecursorPurity.h>
#include <OpenMS/PROCESSING/FILTERING/ThresholdMower.h>
#include <OpenMS/PROCESSING/FILTERING/NLargest.h>
#include <OpenMS/PROCESSING/FILTERING/WindowMower.h>
#include <OpenMS/PROCESSING/SCALING/Normalizer.h>
#include <OpenMS/PROCESSING/SCALING/SqrtScaler.h>
#include <OpenMS/COMPARISON/BinnedSpectralContrastAngle.h>
#include <OpenMS/METADATA/SpectrumLookup.h>
#include <OpenMS/CHEMISTRY/TheoreticalSpectrumGenerator.h>
#include <OpenMS/COMPARISON/SpectrumAlignment.h>
#include <OpenMS/FORMAT/IdXMLFile.h>
#include <OpenMS/FORMAT/TextFile.h>
#include <OpenMS/MATH/MathFunctions.h>
#include <OpenMS/MATH/StatisticFunctions.h>
#include <boost/regex.hpp>
#include <boost/math/distributions/binomial.hpp>
#include <boost/math/distributions/normal.hpp>
#include <boost/math/distributions/beta.hpp>
#include <OpenMS/FEATUREFINDER/FeatureFinderMultiplexAlgorithm.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/ML/SVM/SimpleSVM.h>
#include <OpenMS/ANALYSIS/ID/AScore.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/KERNEL/BinnedSpectrum.h>
#include <OpenMS/SYSTEM/File.h>
#include <QtCore/QStringList>
#include <map>
#include <algorithm>
#include <iterator>
#include <cmath>
#ifdef _OPENMP
#include <omp.h>
#define NUMBER_OF_THREADS (omp_get_num_threads())
#else
#define NUMBER_OF_THREADS (1)
#endif
//#define DEBUG_OpenNuXL 1
//#define FILTER_BAD_SCORES_ID_TAGS filter out some good hits
#define CALCULATE_LONGEST_TAG
#define CONSIDER_AA_LOSSES 1
using namespace OpenMS;
using namespace OpenMS::Internal;
using namespace std;
struct NuXLLinearRescore
{
/* @brief Create a single main score using the best and worst hits from peptides and XLs (without considering target/decoy information, to prevent overfitting).
This effectively scales the score to the range [0,1] and allows using the same score for both peptides and XLs without leaking target/decoy information.
*/
static void apply(PeptideIdentificationList& peptide_ids)
{
StringList feature_set; // additional scores considered for score calibration
/*
feature_set
<< "missed_cleavages"
<< "isotope_error"
<< "NuXL:modds"
<< "NuXL:pl_modds"
<< "NuXL:mass_error_p"
<< "NuXL:tag_XLed"
<< "NuXL:tag_unshifted"
<< "NuXL:tag_shifted"
<< "NuXL:isXL"
<< "NuXL:total_MIC"
<< "-ln(poisson)"
<< "nr_candidates";
*/
// TODO: try to make the size of the feature set dependent on the number of samples according to https://academic.oup.com/bioinformatics/article/21/8/1509/249540
feature_set
<< "NuXL:modds"
<< "NuXL:pl_modds"
<< "NuXL:isXL"
<< "NuXL:mass_error_p"
<< "NuXL:tag_XLed"
<< "NuXL:tag_unshifted"
<< "NuXL:tag_shifted"
<< "missed_cleavages"
<< "NuXL:ladder_score"
<< "variable_modifications"; // TODO: eval
//<< "nr_candidates";
///////////////////////////////////////// SVM score recalibration
// find size of minority class and create map with highest score at the top/beginning
map<double, size_t, std::greater<double>> pep;
map<double, size_t, std::greater<double>> XL;
// ignore target/decoy information to prevent overfitting
// build map from cross-link scores and peptide scores to the PeptideIdentification index
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
if (peptide_ids[index].getHits().empty()) continue;
const PeptideHit& ph = peptide_ids[index].getHits()[0]; // get best match to current spectrum
bool is_XL = !(static_cast<int>(ph.getMetaValue("NuXL:isXL")) == 0);
double score = ph.getScore();
if (is_XL)
{
XL[score] = index;
}
else
{
pep[score] = index;
}
}
size_t minority_class = std::min({pep.size(), XL.size()});
cout << "Peptide (target+decoy)\t XL (target+decoy):" << endl;
cout << pep.size() << "\t" << XL.size() << endl;
if (minority_class > 500) minority_class = 500;
// We don't want to use target/decoy information for training the SVM. We roughly approximate true/false by using the top and bottom of the scores
map<double, size_t> pep_top(pep.begin(), std::next(pep.begin(), minority_class/2));
map<double, size_t> pep_bottom(pep.rbegin(), std::next(pep.rbegin(), minority_class/2));
map<double, size_t> XL_top(XL.begin(), std::next(XL.begin(), minority_class/2));
map<double, size_t> XL_bottom(XL.rbegin(), std::next(XL.rbegin(), minority_class/2));
unordered_set<size_t> top_indices, bottom_indices;
for (auto & p : pep_top) top_indices.insert(p.second);
for (auto & p : pep_bottom) bottom_indices.insert(p.second);
for (auto & p : XL_top) top_indices.insert(p.second);
for (auto & p : XL_bottom) bottom_indices.insert(p.second);
pep.clear();
XL.clear();
if (minority_class > 10)
{
SimpleSVM::PredictorMap predictors;
map<Size, double> labels;
// copy all scores in predictors ("score" + all from feature_set).
// Only add labels for top hits (rank = 0)
size_t current_row(0);
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
const vector<PeptideHit>& phits = peptide_ids[index].getHits();
for (size_t psm_rank = 0; psm_rank != phits.size(); ++psm_rank, ++current_row)
{
const PeptideHit& ph = phits[psm_rank];
predictors["score"].push_back(ph.getScore());
predictors["length"].push_back(ph.getSequence().size());
//predictors["charge"].push_back(ph.getCharge()); // did not improve training
for (auto & f : feature_set)
{
double value = ph.getMetaValue(f);
predictors[f].push_back(value);
}
// only add label for training data (rank = 0 and previously selected for training)
if (psm_rank == 0 && top_indices.count(index) != 0)
{
labels[current_row] = 1.0;
}
else if (psm_rank == 0 && bottom_indices.count(index) != 0)
{
labels[current_row] = 0.0;
}
}
}
SimpleSVM svm;
Param svm_param = svm.getParameters();
svm_param.setValue("kernel", "linear");
svm_param.setValue("log2_C", ListUtils::create<double>("-5,-1,1,5,7,11,15"));
svm_param.setValue("log2_p", ListUtils::create<double>("-15,-9,-6,-3.32192809489,0,3.32192809489,6,9,15"));
svm.setParameters(svm_param);
svm.setup(predictors, labels);
vector<SimpleSVM::Prediction> predictions;
OPENMS_LOG_INFO << "Predicting class probabilities:" << endl;
svm.predict(predictions);
std::map<String, double> feature_weights;
svm.getFeatureWeights(feature_weights);
OPENMS_LOG_DEBUG << "Feature weights:" << endl;
for (const auto& m : feature_weights)
{
OPENMS_LOG_DEBUG << "w: " << m.first << "\t" << m.second << endl;
}
OPENMS_LOG_DEBUG << "Feature scaling:" << endl;
auto feature_scaling = svm.getScaling();
for (const auto& m : feature_scaling)
{
OPENMS_LOG_DEBUG << m.first << "\t" << m.second.first << "\t" << m.second.second << endl;
}
size_t psm_index(0);
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
vector<PeptideHit>& phits = peptide_ids[index].getHits();
for (size_t psm_rank = 0; psm_rank != phits.size(); ++psm_rank, ++psm_index)
{
PeptideHit& ph = phits[psm_rank];
ph.setScore(predictions[psm_index].probabilities[1]); // set probability of being a true hit as score
}
}
}
else
{
OPENMS_LOG_INFO << "Not enough data for SVM training." << std::endl;
}
}
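The pseudo-labeling scheme in apply() above selects SVM training examples without touching target/decoy labels: after capping the training set size, the highest-scoring half is treated as "true" and the lowest-scoring half as "false". A minimal standalone sketch of that selection (the function name `pseudoLabels` and the index-based interface are illustrative, not the code above):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Given per-spectrum scores, return (index, label) pairs for a balanced
// pseudo-labeled training set: top half of a capped set -> label 1,
// bottom half -> label 0. Indices refer to the input vector.
std::vector<std::pair<std::size_t, int>>
pseudoLabels(const std::vector<double>& scores, std::size_t cap)
{
  std::vector<std::size_t> order(scores.size());
  for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
  // sort indices by descending score
  std::sort(order.begin(), order.end(),
            [&](std::size_t a, std::size_t b) { return scores[a] > scores[b]; });
  std::size_t half = std::min(cap, scores.size()) / 2;
  std::vector<std::pair<std::size_t, int>> labels;
  for (std::size_t i = 0; i < half; ++i) labels.emplace_back(order[i], 1);
  for (std::size_t i = 0; i < half; ++i)
    labels.emplace_back(order[order.size() - 1 - i], 0);
  return labels;
}
```

The real code additionally keeps peptides and cross-links balanced (minority class size) and only labels rank-0 hits; middle-scoring spectra remain unlabeled and are only predicted on.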
#ifdef SVM_TD_RESCORE
void calculateDiscriminantScoreWithTDInfo()
{
///////////////////////////////////////// SVM score recalibration
// find size of minority class
map<double, size_t> pep_t;
map<double, size_t> pep_d;
map<double, size_t> XL_t;
map<double, size_t> XL_d;
// map scores to index (sorted)
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
if (peptide_ids[index].getHits().empty()) continue;
const PeptideHit& ph = peptide_ids[index].getHits()[0]; // get best match to current spectrum
bool is_target = !ph.isDecoy();
bool is_XL = !(static_cast<int>(ph.getMetaValue("NuXL:isXL")) == 0);
double score = ph.getScore();
if (is_target)
{
if (is_XL)
{
XL_t[score] = index;
}
else
{
pep_t[score] = index;
}
}
else
{
if (is_XL)
{
XL_d[score] = index;
}
else
{
pep_d[score] = index;
}
}
}
size_t minority_class = std::min({pep_t.size(), pep_d.size(), XL_t.size(), XL_d.size()});
cout << "Peptide (target/decoy)\t XL (target/decoy):" << endl;
cout << pep_t.size() << "\t" << pep_d.size() << "\t" << XL_t.size() << "\t" << XL_d.size() << endl;
if (minority_class > 250) minority_class = 250;
// keep only top elements (=highest scoring hits) for all classes
pep_t.erase(pep_t.begin(), next(pep_t.begin(), pep_t.size() - minority_class));
pep_d.erase(pep_d.begin(), next(pep_d.begin(), pep_d.size() - minority_class));
XL_t.erase(XL_t.begin(), next(XL_t.begin(), XL_t.size() - minority_class));
XL_d.erase(XL_d.begin(), next(XL_d.begin(), XL_d.size() - minority_class));
// training indices = index of peptide id that correspond to top scoring hits (of the 4 classes)
set<size_t> training_indices;
for (auto & l : {pep_t, pep_d, XL_t, XL_d})
{
for (auto & i : l) training_indices.insert(i.second);
}
if (minority_class > 100)
{
SimpleSVM::PredictorMap predictors;
map<Size, Int> labels;
// copy all scores in predictors ("score" + all from feature_set).
// Only add labels for balanced training set
size_t current_row(0);
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
const vector<PeptideHit>& phits = peptide_ids[index].getHits();
for (size_t psm_rank = 0; psm_rank != phits.size(); ++psm_rank)
{
const PeptideHit& ph = phits[psm_rank];
bool is_target = !ph.isDecoy();
double score = ph.getScore();
// predictors["score"].push_back(score);
predictors["length"].push_back(ph.getSequence().size());
for (auto & f : feature_set_)
{
double value = ph.getMetaValue(f);
predictors[f].push_back(value);
}
// only add label for training data (rank = 0 and previously selected for training)
if (psm_rank == 0 && training_indices.count(index) > 0)
{
labels[current_row] = is_target;
}
++current_row;
}
}
SimpleSVM svm;
Param svm_param = svm.getParameters();
svm_param.setValue("kernel", "linear");
// svm_param.setValue("kernel", "RBF");
// svm_param.setValue("log2_C", ListUtils::create<double>("0"));
// svm_param.setValue("log2_gamma", ListUtils::create<double>("1"));
svm.setParameters(svm_param);
svm.setup(predictors, labels);
vector<SimpleSVM::Prediction> predictions;
cout << "Predicting class probabilities:" << endl;
svm.predict(predictions);
std::map<String, double> feature_weights;
svm.getFeatureWeights(feature_weights);
/*
cout << "Feature weights:" << endl;
for (const auto& m : feature_weights)
{
cout << m.first << "\t" << m.second << endl;
}
cout << "Feature scaling:" << endl;
auto feature_scaling = svm.getScaling();
for (const auto& m : feature_scaling)
{
cout << m.first << "\t" << m.second.first << "\t" << m.second.second << endl;
}
*/
size_t psm_index(0);
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
vector<PeptideHit>& phits = peptide_ids[index].getHits();
for (size_t psm_rank = 0; psm_rank != phits.size(); ++psm_rank, ++psm_index)
{
PeptideHit& ph = phits[psm_rank];
ph.setScore(predictions[psm_index].probabilities[1]); // set probability of being a target as score
}
peptide_ids[index].sort();
}
// IdXMLFile().store(out_idxml + "_svm.idXML", protein_ids, peptide_ids);
}
}
#endif
};
struct NuXLRTPrediction
{
SimpleSVM svm;
String nucleotides = "CATGUXS";
String amino_acids = "ACDEFGHIKLMNPQRSTVWYkmsty"; // all AAs + modified KMSTY (lowercase)
map<char, double> encodeAAHist_(const AASequence& aa_seq)
{
map<char, double> v;
for (auto& c : aa_seq)
{
char code = c.getOneLetterCode()[0];
if (c.isModified()) code = tolower(code); // mark modified in lower case
v[code] += 1.0;
}
return v;
}
map<char, double> encodeNAHist_(const String& seq)
{
map<char, double> v;
for (auto& c : seq)
{
if (c == '+' || c == '-') break; // e.g.: "UUA-H2O" we are only interested in UUA
v[c] += 1.0;
}
return v;
}
std::tuple<SimpleSVM::PredictorMap, map<size_t, double>> buildPredictorsAndResponseFromIdentifiedFeatures_(const FeatureMap& features)
{
std::cout << "Feature encoding..." << std::endl;
SimpleSVM::PredictorMap x;
map<size_t, double> y;
// encode identification for SVR (mainly AA hist. and length)
size_t index{};
for (const auto& f : features)
{
const auto& pids = f.getPeptideIdentifications();
if (pids.empty()) continue;
const auto& phits = pids[0].getHits();
if (phits.empty()) continue;
const auto& ph = phits[0];
const String& seq = ph.getSequence().toUnmodifiedString();
auto encoded_aas = encodeAAHist_(ph.getSequence());
for (const auto& c : amino_acids)
{
if (auto it = encoded_aas.find(c); it != encoded_aas.end())
{
x[String(c)].push_back(it->second);
x["freq_" + String(c)].push_back(it->second / seq.size());
}
else
{
x[String(c)].push_back(0.0);
x["freq_" + String(c)].push_back(0.0);
}
}
// term AAs
for (const auto& c : amino_acids)
{
if (c == seq[0])
{
x["Nterm_" + String(c)].push_back(1.0);
}
else
{
x["Nterm_" + String(c)].push_back(0.0);
}
}
for (const auto& c : {'R', 'K'})
{
if (c == seq[seq.size() - 1])
{
x["Cterm_" + String(c)].push_back(1.0);
}
else
{
x["Cterm_" + String(c)].push_back(0.0);
}
}
x["AA_length"].push_back(seq.size() / 100.0); // length feature
x["charge"].push_back(f.getCharge());
x["mass"].push_back(f.getCharge() * f.getMZ()); // approx mass
// nucleotide histogram
auto nas = (String)ph.getMetaValue("NuXL:NA", String(""));
auto encoded_nas = encodeNAHist_(nas);
for (const auto& c : nucleotides)
{
if (auto it = encoded_nas.find(c); it != encoded_nas.end())
{
x["NA:"+String(c)].push_back(it->second);
}
else
{
x["NA:"+String(c)].push_back(0.0);
}
}
const double RT = f.getRT();
y[index++] = RT;
}
std::cout << "Feature vector: ";
for (auto f : x) cout << f.first << " (non-zero: " << std::count_if(f.second.begin(), f.second.end(), [](double v) { return v != 0.0; }) << ")" << std::endl;
std::cout << "done..." << std::endl;
return make_tuple(x,y);
}
std::tuple<SimpleSVM::PredictorMap, map<size_t, double>> buildPredictorsAndResponse_(const PeptideIdentificationList& peptides, bool all_hits)
{
std::cout << "Feature encoding..." << std::endl;
SimpleSVM::PredictorMap x;
map<size_t, double> y;
// encode identification for SVR (mainly AA hist. and length)
size_t index{};
for (const auto& pid : peptides)
{
const auto& phits = pid.getHits();
for (const auto& ph : phits)
{
const String& seq = ph.getSequence().toUnmodifiedString();
auto encoded_aas = encodeAAHist_(ph.getSequence());
for (const auto& c : amino_acids)
{
if (auto it = encoded_aas.find(c); it != encoded_aas.end())
{
x[String(c)].push_back(it->second);
x["freq_" + String(c)].push_back(it->second / seq.size());
}
else
{
x[String(c)].push_back(0.0);
x["freq_" + String(c)].push_back(0.0);
}
}
// term AAs
for (const auto& c : amino_acids)
{
if (c == seq[0])
{
x["Nterm_" + String(c)].push_back(1.0);
}
else
{
x["Nterm_" + String(c)].push_back(0.0);
}
}
for (const auto& c : {'R', 'K'})
{
if (c == seq[seq.size() - 1])
{
x["Cterm_" + String(c)].push_back(1.0);
}
else
{
x["Cterm_" + String(c)].push_back(0.0);
}
}
x["AA_length"].push_back(seq.size() / 100.0); // length feature
x["charge"].push_back(ph.getCharge());
x["mass"].push_back(ph.getCharge() * pid.getMZ()); // approx mass
// nucleotide histogram
auto nas = (String)ph.getMetaValue("NuXL:NA", String(""));
auto encoded_nas = encodeNAHist_(nas);
for (const auto& c : nucleotides)
{
if (auto it = encoded_nas.find(c); it != encoded_nas.end())
{
x["NA:"+String(c)].push_back(it->second);
}
else
{
x["NA:"+String(c)].push_back(0.0);
}
}
const double RT = pid.getRT();
y[index++] = RT;
if (!all_hits) break;
}
}
std::cout << "Feature vector: ";
for (auto f : x) cout << f.first << " (non-zero: " << std::count_if(f.second.begin(), f.second.end(), [](double v) { return v != 0.0; }) << ")" << std::endl;
std::cout << "done..." << std::endl;
return make_tuple(x,y);
}
// requires centroided MS1 spectra (MS2 optional); IDs need to be pre-filtered to contain only high-confidence ones
void train(const std::string& spectra_filename,
PeptideIdentificationList peptides,
const vector<ProteinIdentification>& proteins)
{
// detect features for accurate (elution profile apex) retention times
FeatureFinderIdentificationAlgorithm ffid_algo;
MzMLFile mzml;
mzml.getOptions().addMSLevel(1);
mzml.load(spectra_filename, ffid_algo.getMSData());
FeatureMap features;
ffid_algo.run(peptides, proteins, {}, {}, features, FeatureMap(), spectra_filename);
auto [x, y] = buildPredictorsAndResponseFromIdentifiedFeatures_(features);
auto param = svm.getParameters();
param.setValue("kernel", "RBF");
//param.setValue("kernel", "linear");
svm.setParameters(param);
svm.setup(x, y, false); // set up regression and train
}
void annotatePredictions_(const vector<SimpleSVM::Prediction>& preds, PeptideIdentificationList& peptides, bool all_hits)
{
size_t i{};
for (auto& pid : peptides)
{
auto& phits = pid.getHits();
for (auto& ph : phits)
{
double err = preds[i].outcome - pid.getRT();
ph.setMetaValue("RT_error", err);
ph.setMetaValue("RT_predict", preds[i].outcome);
// std::cout << preds[i].label << " " << pid.getRT() << " " << err << std::endl;
++i;
if (!all_hits) break;
}
}
}
// annotates the RT error on all data
void predict(PeptideIdentificationList& peptides)
{
std::cout << "Predicting..." << std::endl;
const bool all_hits{true};
auto [x, y] = buildPredictorsAndResponse_(peptides, all_hits); // also predict for second or worse
vector<SimpleSVM::Prediction> predictions;
svm.predict(x, predictions);
annotatePredictions_(predictions, peptides, all_hits);
}
};
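The residue-histogram encoding used by NuXLRTPrediction (encodeAAHist_) counts residues and marks modified positions with lowercase keys so that, e.g., oxidized methionine is a feature distinct from plain 'M'. A minimal standalone sketch using std::string input with an explicit set of modified positions instead of AASequence (the name `encodeResidueHistogram` is illustrative):

```cpp
#include <cctype>
#include <map>
#include <set>
#include <string>

// Build a residue-count histogram from a peptide sequence; residues at
// modified positions are encoded as lowercase keys.
std::map<char, double> encodeResidueHistogram(const std::string& seq,
                                              const std::set<std::size_t>& modified_positions)
{
  std::map<char, double> hist;
  for (std::size_t i = 0; i < seq.size(); ++i)
  {
    char code = seq[i];
    if (modified_positions.count(i) != 0)
      code = static_cast<char>(std::tolower(static_cast<unsigned char>(code)));
    hist[code] += 1.0;
  }
  return hist;
}
```

In the predictor above, these counts (and their frequencies relative to sequence length) become one SVM feature per symbol in the `amino_acids` alphabet.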
// stores which residues (known to give rise to immonium ions) are in the sequence
struct ImmoniumIonsInPeptide
{
explicit ImmoniumIonsInPeptide(const String& s)
{
for (const char & c : s)
{
switch (c)
{
case 'Y': Y = true; break;
case 'W': W = true; break;
case 'F': F = true; break;
case 'H': H = true; break;
case 'C': C = true; break;
case 'P': P = true; break;
case 'I':
case 'L': L = true; break;
case 'K': K = true; break;
case 'M': M = true; break;
case 'Q': Q = true; break;
case 'E': E = true; break;
default: break;
}
}
}
bool Y = false;
bool W = false;
bool F = false;
bool H = false;
bool C = false;
bool P = false;
bool L = false;
bool K = false;
bool M = false;
bool Q = false;
bool E = false;
};
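The I/L merging in ImmoniumIonsInPeptide reflects that isoleucine and leucine produce the same immonium ion mass and therefore share one flag. A minimal standalone sketch of the same membership test (the function `hasImmoniumResidue` is illustrative, not part of the struct above):

```cpp
#include <string>

// Returns true if a residue known to yield a diagnostic immonium ion occurs
// in the sequence; 'I' is folded into 'L' because they are isobaric.
bool hasImmoniumResidue(const std::string& seq, char residue)
{
  for (char c : seq)
  {
    char r = (c == 'I') ? 'L' : c; // I/L share the leucine immonium flag
    if (r == residue) return true;
  }
  return false;
}
```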
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenNuXL OpenNuXL
@brief Annotate NA to peptide crosslinks in MS/MS spectra.
1. Input files: centroided mzML file or Thermo raw file
2. Data preprocessing:
- (optional, precursor_correction) precursor mass correction
- (optional, autotune) step determines precursor and fragment mass accuracy
- performs a peptide database search for non-XL peptides
- tries to improve results using Percolator
- based on the best identifications (1% FDR)
- (optional, IDFilter) spectra are marked as generated by a confident non-XLed peptide
- new fragment mass tolerance = 4.0 * 68th percentile of median absolute fragment error (identipy)
- (optional, pcrecalibration) correct global fragment error by subtracting the median
2. Configuring XL search
- user configuration
- presets for different protocols (UV, DEB, NM, etc.)
3. Generation of Precursor/MS1 adduct to nucleotides to fragment adduct rules
4. Fragment spectra preprocessing
- Remove zero intensities
- Remove isotopic peaks and determine charge (unknown charge set to z=1)
- Remove interfering peaks from cofragmentation
- Scale intensity by taking the root to reduce impact of high-intensity peaks
- Normalize max intensity of a spectrum to 1.0
- Keep 20 highest-intensity peaks in 100 m/z windows
- Keep max. 400 peaks per spectrum to limit spurious matches to highly charged fragments in the low m/z region
- Calculate and store TIC of filtered spectrum
5. Experimental feature precalculation from spectra
- Precalculate nucleotide tags
- Calculate intensity ranks
- Calculate amino acid tags
6. Generate decoy sequences
- Uses a maximum number of shuffling attempts in order to minimize sequence similarity
- Sequence similarity between target and decoy is calculated as prefix-to-prefix
and prefix-to-suffix overlap (i.e., plain sequence reversal is also discouraged if possible)
7. Search:
For all proteins:
- Digest current protein
- For all peptides of current digest
- Skip peptide if already searched
- Determine potential immonium ions of peptide
- Apply fixed modification(s) to peptide
- Apply variable modifications(s) to peptide to get modification peptidoforms
- For all modified peptide sequences (of current peptide)
-
- For peptides with methionine, also try to score plain precursors - CH4S (NM- and DEB-related methionine cross-link)
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenNuXL.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenNuXL.html
*/
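Step 4 of the preprocessing above keeps only the most intense peaks per m/z window. A minimal standalone sketch of that filter, operating on (m/z, intensity) pairs with plain STL (illustrative; the actual tool uses the OpenMS WindowMower, whose windowing details may differ, e.g. in how window boundaries are anchored):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

using Peak = std::pair<double, double>; // (m/z, intensity)

// Within each window of the given m/z width (anchored at the first remaining
// peak), keep only the top_n most intense peaks; output stays sorted by m/z.
std::vector<Peak> windowMower(std::vector<Peak> peaks, double window, std::size_t top_n)
{
  std::sort(peaks.begin(), peaks.end()); // by ascending m/z
  std::vector<Peak> kept;
  std::size_t begin = 0;
  while (begin < peaks.size())
  {
    double window_end = peaks[begin].first + window;
    std::size_t end = begin;
    while (end < peaks.size() && peaks[end].first < window_end) ++end;
    // select top_n peaks by intensity inside [begin, end)
    std::vector<Peak> win(peaks.begin() + begin, peaks.begin() + end);
    std::sort(win.begin(), win.end(),
              [](const Peak& a, const Peak& b) { return a.second > b.second; });
    if (win.size() > top_n) win.resize(top_n);
    kept.insert(kept.end(), win.begin(), win.end());
    begin = end;
  }
  std::sort(kept.begin(), kept.end()); // restore m/z order
  return kept;
}
```

With window = 100 and top_n = 20 this corresponds to the "20 highest-intensity peaks in 100 m/z windows" step listed above.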
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class OpenNuXL :
public TOPPBase
{
bool fast_scoring_ = true; // fast or all fragment adduct scoring mode
set<char> can_xl_; ///< nucleotides that can form cross-links
public:
OpenNuXL() :
TOPPBase("OpenNuXL", "Annotate RNA/DNA-peptide cross-links in MS/MS spectra.", false)
{
}
static constexpr double MIN_HYPERSCORE = 0.1; // hits with a lower score than this will be neglected (usually 1 or 0 matches)
static constexpr double MIN_TOTAL_LOSS_IONS = 1; // minimum number of matches to unshifted ions
static constexpr double MIN_SHIFTED_IONS = 1; // minimum number of matches to shifted ions (applies to XLs only)
protected:
/// percolator feature set
StringList feature_set_;
bool has_IM_{false}; // whether compatible ion mobility (IM) annotations are present
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file");
setValidFormats_("in", ListUtils::create<String>("mzML,raw"));
registerInputFile_("NET_executable", "<executable>", "", "The .NET framework executable. Only required on Linux and macOS.", false, true, ListUtils::create<String>("skipexists"));
registerInputFile_("ThermoRaw_executable", "<file>", "ThermoRawFileParser.exe", "The ThermoRawFileParser executable.", false, true, ListUtils::create<String>("skipexists"));
registerInputFile_("database", "<file>", "", "The protein database used for identification.");
setValidFormats_("database", ListUtils::create<String>("fasta"));
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerOutputFile_("out_tsv", "<file>", "", "tsv output file", false);
setValidFormats_("out_tsv", ListUtils::create<String>("tsv"));
registerOutputFile_("out_xls", "<file>", "", "XL output file with group q-values calculated at the XL PSM-level. Generated for the highest FDR threshold in report:xlFDR.", false);
setValidFormats_("out_xls", ListUtils::create<String>("idXML"));
registerStringOption_("output_folder", "<folder>", "", "Store intermediate files (and final result) also in this output folder. Convenient for TOPPAS/KNIME/etc. users because these files are otherwise only stored in tmp folders.", false, false);
registerTOPPSubsection_("precursor", "Precursor (Parent Ion) Options");
registerDoubleOption_("precursor:mass_tolerance", "<tolerance>", 6.0, "Precursor mass tolerance (+/- around precursor m/z).", false);
StringList precursor_mass_tolerance_unit_valid_strings;
precursor_mass_tolerance_unit_valid_strings.emplace_back("ppm");
precursor_mass_tolerance_unit_valid_strings.emplace_back("Da");
registerStringOption_("precursor:mass_tolerance_unit", "<unit>", "ppm", "Unit of precursor mass tolerance.", false, false);
setValidStrings_("precursor:mass_tolerance_unit", precursor_mass_tolerance_unit_valid_strings);
registerIntOption_("precursor:min_charge", "<num>", 2, "Minimum precursor charge to be considered.", false, false);
registerIntOption_("precursor:max_charge", "<num>", 5, "Maximum precursor charge to be considered.", false, false);
// consider one before annotated monoisotopic peak and the annotated one
IntList isotopes = {0};
registerIntList_("precursor:isotopes", "<num>", isotopes, "Corrects for mono-isotopic peak misassignments. (E.g.: 1 = prec. may be misassigned to first isotopic peak).", false, false);
registerTOPPSubsection_("fragment", "Fragments (Product Ion) Options");
registerDoubleOption_("fragment:mass_tolerance", "<tolerance>", 20.0, "Fragment mass tolerance (+/- around fragment m/z).", false);
StringList fragment_mass_tolerance_unit_valid_strings;
fragment_mass_tolerance_unit_valid_strings.emplace_back("ppm");
fragment_mass_tolerance_unit_valid_strings.emplace_back("Da");
registerStringOption_("fragment:mass_tolerance_unit", "<unit>", "ppm", "Unit of fragment mass tolerance.", false, false);
setValidStrings_("fragment:mass_tolerance_unit", fragment_mass_tolerance_unit_valid_strings);
registerTOPPSubsection_("modifications", "Modifications Options");
vector<String> all_mods;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
registerStringList_("modifications:fixed", "<mods>", ListUtils::create<String>(""), "Fixed modifications, specified using UniMod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)'.", false);
setValidStrings_("modifications:fixed", all_mods);
registerStringList_("modifications:variable", "<mods>", ListUtils::create<String>("Oxidation (M)"), "Variable modifications, specified using UniMod (www.unimod.org) terms, e.g. 'Oxidation (M)'", false);
setValidStrings_("modifications:variable", all_mods);
registerIntOption_("modifications:variable_max_per_peptide", "<num>", 2, "Maximum number of residues carrying a variable modification per candidate peptide.", false, false);
registerTOPPSubsection_("peptide", "Peptide Options");
registerIntOption_("peptide:min_size", "<num>", 6, "Minimum size a peptide must have after digestion to be considered in the search.", false, true);
registerIntOption_("peptide:max_size", "<num>", 1e6, "Maximum size a peptide may have after digestion to be considered in the search.", false, true);
registerIntOption_("peptide:missed_cleavages", "<num>", 2, "Number of missed cleavages.", false, false);
StringList all_enzymes;
ProteaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("peptide:enzyme", "<cleavage site>", "Trypsin/P", "The enzyme used for peptide digestion.", false);
setValidStrings_("peptide:enzyme", all_enzymes);
registerTOPPSubsection_("report", "Reporting Options");
registerIntOption_("report:top_hits", "<num>", 1, "Maximum number of top scoring hits per spectrum that are reported.", false, true);
registerDoubleOption_("report:peptideFDR", "<num>", 0.01, "Maximum q-value of non-cross-linked peptides. (0 = disabled).", false, true);
registerDoubleList_("report:xlFDR", "<num>", { 0.01, 0.1, 1.0 }, "Maximum q-value of cross-linked peptides at the PSM-level. (0 or 1 = disabled). If multiple values are provided, multiple output files will be created.", false, true);
registerDoubleList_("report:xl_peptidelevel_FDR", "<num>", { 1.00, 1.0, 1.0 }, "Maximum q-value of cross-linked peptides at the peptide-level. (0 or 1 = disabled). Needs to have the same size as the PSM-level FDR list. Filtering is applied together with the corresponding value in the report:xlFDR list.", false, true);
registerInputFile_("percolator_executable", "<executable>",
// choose the default value according to the platform where it will be executed
#ifdef OPENMS_WINDOWSPLATFORM
"percolator.exe",
#else
"percolator",
#endif
"Path to the Percolator executable, e.g. 'percolator.exe'.", false, false, ListUtils::create<String>("skipexists"));
// NuXL specific
registerTOPPSubsection_("NuXL", "NuXL Options");
registerStringOption_("NuXL:presets", "<option>", "none", "Set precursor and fragment adducts from presets (recommended). Custom presets can be defined in a 'nuxl_presets.json' file placed in the share/OpenMS/NUXL/ directory or specified via NuXL:presets_file.", false, false);
registerInputFile_("NuXL:presets_file", "<file>", "", "Optional custom path to nuxl_presets.json file. If not provided, the default file in share/OpenMS/NUXL/ will be used.", false, true);
setValidFormats_("NuXL:presets_file", ListUtils::create<String>("json"));
// append StringLists
String custom_presets_file = getStringOption_("NuXL:presets_file");
StringList all_presets = NuXLPresets::getAllPresetsNames(custom_presets_file);
setValidStrings_("NuXL:presets", all_presets);
// store presets (for visual inspection only) in ini
for (const auto& p : all_presets)
{
if (p == "none") continue;
String subsection_name = "presets:" + p;
registerTOPPSubsection_(subsection_name, "Presets for " + p + " cross-link protocol (Note: changes will be ignored).");
StringList target_nucleotides, mappings, modifications, fragment_adducts;
String can_cross_link;
NuXLPresets::getPresets(p, custom_presets_file, target_nucleotides, mappings, modifications, fragment_adducts, can_cross_link);
registerStringList_(subsection_name + ":target_nucleotides", "", target_nucleotides, "", false, true);
registerStringList_(subsection_name + ":mapping", "", mappings, "", false, true);
registerStringOption_(subsection_name + ":can_cross_link", "", can_cross_link, "", false, true);
registerStringList_(subsection_name + ":modifications", "", modifications, "", false, true);
registerStringList_(subsection_name + ":fragment_adducts", "", fragment_adducts, "", false, true);
}
registerIntOption_("NuXL:length", "", 2, "Maximum oligonucleotide length. 0 = disable search for NA variants.", false);
registerStringOption_("NuXL:sequence", "", "", "Sequence to restrict the generation of oligonucleotide chains. (disabled for empty sequence).", false);
registerStringList_("NuXL:target_nucleotides",
"",
{"A=C10H14N5O7P", "C=C9H14N3O8P", "G=C10H14N5O8P", "U=C9H13N2O9P"},
"format: target nucleotide=empirical formula of nucleoside monophosphate \n e.g. A=C10H14N5O7P, ..., U=C9H13N2O9P, X=C9H13N2O8PS where X represents e.g. tU \n or e.g. Y=C10H14N5O7PS where Y represents tG.",
false,
false);
registerStringList_("NuXL:nt_groups",
"",
{},
"Restrict which nucleotides can co-occur in a precursor adduct to be able to search both RNA and DNA (format: 'AU CG').",
false,
false);
registerStringList_("NuXL:mapping", "", {"A->A", "C->C", "G->G", "U->U"}, "format: source->target e.g. A->A, ..., U->U, U->X.", false, false);
// define if nucleotide can cross-link (produce y,b,a,immonium-ion shifts) in addition to marker ions
registerStringOption_("NuXL:can_cross_link",
"<option>",
"U",
"format: 'U' if only U forms cross-links. 'CATG' if C, A, G, and T form cross-links.",
false,
false);
StringList modifications;
modifications.emplace_back("U:");
modifications.emplace_back("U:-H2O");
modifications.emplace_back("U:-HPO3");
modifications.emplace_back("U:-H3PO4");
// fragment adducts that may occur for every precursor adduct (if chemically feasible, i.e., element counts may not become negative)
StringList fragment_adducts = {"U:C9H10N2O5;U-H3PO4",
"U:C4H4N2O2;U'",
"U:C4H2N2O1;U'-H2O",
"U:C3O;C3O",
"U:C9H13N2O9P1;U",
"U:C9H11N2O8P1;U-H2O",
"U:C9H12N2O6;U-HPO3"
};
registerStringList_("NuXL:fragment_adducts",
"",
fragment_adducts,
"format: [target nucleotide]:[formula] or [precursor adduct]->[fragment adduct formula];[name]: e.g., 'U:C9H10N2O5;U-H3PO4' or 'U:U-H2O->C9H11N2O8P1;U-H2O'.",
false,
false);
registerStringList_("NuXL:modifications", "", modifications, "format: [nucleotide]:[empirical formula], e.g. 'U:', 'U:-H2O', ..., 'U:H2O+PO3'.", false, false);
registerStringOption_("NuXL:scoring", "<method>", "slow", "Scoring algorithm used in prescoring (fast: total-loss only, slow: all losses).", false, false);
setValidStrings_("NuXL:scoring", {"fast", "slow"});
registerStringOption_("NuXL:decoys", "<bool>", "true", "Generate decoys internally (recommended).", false, false);
setValidStrings_("NuXL:decoys", {"true", "false"});
registerIntOption_("NuXL:decoy_factor", "<num>", 1, "Ratio of decoys to targets.", false, true);
registerFlag_("NuXL:CysteineAdduct", "Use this flag if the +152 adduct is expected.", true);
registerFlag_("NuXL:filter_fractional_mass", "Use this flag to filter non-crosslinks by fractional mass.", true);
registerFlag_("NuXL:carbon_labeled_fragments", "Generate fragment shifts assuming full labeling of carbon (e.g. completely labeled U13).", true);
registerFlag_("NuXL:only_xl", "Only search cross-links and ignore non-cross-linked peptides.", true);
registerDoubleOption_("NuXL:filter_small_peptide_mass", "<threshold>", 600.0, "Filter precursors that can only correspond to non-cross-links by mass.", false, true);
registerDoubleOption_("NuXL:marker_ions_tolerance", "<tolerance>", 0.03, "Tolerance used to determine marker ions (Da).", false, true);
registerStringList_("filter", "<list>", {"filter_pc_mass_error", "autotune", "idfilter"}, "Filtering steps applied to results.", false, true);
setValidStrings_("filter", {"filter_pc_mass_error", "impute_decoy_medians", "filter_bad_partial_loss_scores", "autotune", "idfilter", "spectrumclusterfilter", "pcrecalibration", "optimize", "RTpredict"});
registerDoubleOption_("window_size", "<number>", 75.0, "Peak window size for spectrum preprocessing.", false, true);
registerIntOption_("peak_count", "<number>", 20, "Number of peaks retained per peak window.", false, true);
}
void definePercolatorFeatureSet_(const StringList& data_dependent_features)
{
/* default features added in PercolatorAdapter:
* SpecId, ScanNr, ExpMass, CalcMass, mass,
* peplen, charge#min..#max, enzN, enzC, enzInt, dm, absdm
*/
feature_set_
<< "missed_cleavages"
<< "NuXL:mass_error_p"
<< "NuXL:err"
<< "NuXL:total_loss_score"
<< "NuXL:modds"
<< "NuXL:immonium_score"
<< "NuXL:precursor_score"
<< "NuXL:MIC"
<< "NuXL:Morph"
<< "NuXL:total_MIC"
<< "NuXL:ladder_score"
<< "NuXL:sequence_score"
<< "NuXL:total_Morph"
<< "NuXL:total_HS"
<< "NuXL:tag_XLed"
<< "NuXL:tag_unshifted"
<< "NuXL:tag_shifted"
<< "NuXL:aminoacid_max_tag"
<< "NuXL:aminoacid_id_to_max_tag_ratio"
<< "nr_candidates"
<< "-ln(poisson)"
<< "NuXL:explained_peak_fraction"
<< "NuXL:theo_peak_fraction"
<< "NuXL:wTop50"
<< "NuXL:marker_ions_score"
<< "NuXL:partial_loss_score"
<< "NuXL:pl_MIC"
<< "NuXL:pl_err"
<< "NuXL:pl_Morph"
<< "NuXL:pl_modds"
<< "NuXL:pl_pc_MIC"
<< "NuXL:pl_im_MIC"
<< "NuXL:isPhospho"
<< "NuXL:isXL"
<< "NuXL:score"
<< "isotope_error"
<< "variable_modifications"
<< "precursor_intensity_log10"
<< "NuXL:NA_MASS_z0"
<< "NuXL:NA_length"
<< "nucleotide_mass_tags"
<< "n_theoretical_peaks";
// add additional features (e.g., data-dependent features that might not be present in every dataset)
for (const auto& d : data_dependent_features) { feature_set_ << d; }
// one-hot encoding of cross-linked nucleotide
for (const auto& c : can_xl_) { feature_set_ << String("NuXL:XL_" + String(c)); }
}
// bad score, fewer than two matching peaks, or less than 1% explained signal
static bool badTotalLossScore(float hyperScore, float tlss_Morph, float tlss_total_MIC)
{
return (hyperScore < MIN_HYPERSCORE
|| tlss_Morph < MIN_TOTAL_LOSS_IONS + 1.0
|| tlss_total_MIC < 0.01);
}
static bool badPartialLossScore(float tlss_Morph, float plss_Morph, float plss_MIC, float plss_im_MIC, float plss_pc_MIC, float marker_ions_score)
{
if (plss_Morph + tlss_Morph < 5.03) return true; // combined Morpheus-style score below 5 matched peaks + 3% TIC
if (plss_MIC + plss_im_MIC + plss_pc_MIC + marker_ions_score < 0.03) return true;
// if we don't see shifted ladder ions, we need at least some signal in the shifted immonium ions
return (plss_Morph < MIN_SHIFTED_IONS && plss_im_MIC < 0.03);
}
map<String, PrecursorPurity::PurityScores> calculatePrecursorPurities_(const String& in_mzml, double precursor_mass_tolerance, bool precursor_mass_tolerance_unit_ppm) const
{
map<String, PrecursorPurity::PurityScores> purities;
PeakMap tmp_spectra;
// Important: load both MS1 and MS2 for precursor purity annotation
MzMLFile().load(in_mzml, tmp_spectra);
int nMS1 = std::count_if(tmp_spectra.begin(), tmp_spectra.end(), [](MSSpectrum& s){ return s.getMSLevel() == 1; });
OPENMS_LOG_INFO << "Using " << nMS1 << " spectra for precursor purity calculation." << endl;
if (nMS1 != 0)
{
// check if isolation windows are properly annotated and correct them if necessary
checkAndCorrectIsolationWindows_(tmp_spectra);
purities = PrecursorPurity::computePrecursorPurities(tmp_spectra, precursor_mass_tolerance, precursor_mass_tolerance_unit_ppm, true); // true = ignore missing PCs
}
return purities;
}
static double matchOddsScore_(const size_t N, const size_t matches, const double p)
{
if (N == 0) return 0;
const double pscore = boost::math::ibeta(matches + 1, N - matches, p);
if (pscore <= std::numeric_limits<double>::min())
{
cout.precision(17);
OPENMS_LOG_DEBUG << "matches,N,p: " << matches << " " << N << " " << p << "=" << -log10(std::numeric_limits<double>::min()) << endl;
return -log10(std::numeric_limits<double>::min());
}
const double minusLog10p1pscore = -log10(pscore);
return minusLog10p1pscore;
}
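/* The match-odds score above is the negative decadic log of the upper binomial tail
   P(X > matches), which boost::math::ibeta(matches + 1, N - matches, p) evaluates.
   A minimal standalone sketch of the same quantity, using direct summation via
   lgamma instead of Boost (function names here are illustrative; the summation is
   fine for small N but the incomplete beta is more robust numerically): */

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>

// P(X >= k) for a Binomial(N, p) random variable, by direct summation
static double binomialUpperTail(std::size_t N, std::size_t k, double p)
{
  double tail = 0.0;
  for (std::size_t i = k; i <= N; ++i)
  {
    // log of the binomial coefficient via lgamma to avoid overflow
    const double log_coeff = std::lgamma(N + 1.0) - std::lgamma(i + 1.0) - std::lgamma(N - i + 1.0);
    tail += std::exp(log_coeff + i * std::log(p) + (N - i) * std::log1p(-p));
  }
  return tail;
}

// -log10 of the tail P(X > matches), capped like matchOddsScore_
static double matchOddsSketch(std::size_t N, std::size_t matches, double p)
{
  if (N == 0) return 0.0;
  const double pscore = binomialUpperTail(N, matches + 1, p);
  if (pscore <= std::numeric_limits<double>::min())
  {
    return -std::log10(std::numeric_limits<double>::min());
  }
  return -std::log10(pscore);
}
```

/* E.g., one theoretical peak and zero matches with p = 0.5 gives
   -log10(P(X >= 1)) = -log10(0.5); more matches at the same N drive the
   tail probability down and the score up. */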
/*
@param[in] N number of theoretical peaks
@param[in] peaks_in_spectrum number of experimental peaks
@param[in] matched_size number of matched theoretical peaks
static double matchOddsScore_(
const Size& N,
const float fragment_mass_tolerance_Da,
const Size peaks_in_spectrum,
const float mass_range_Da,
const Size matched_size)
{
if (matched_size < 1 || N < 1) { return 0; }
// Nd/w (number of peaks in spectrum * fragment mass tolerance in Da / MS/MS mass range in Da - see phosphoRS)
//const double p = peaks_in_spectrum * fragment_mass_tolerance_Da / mass_range_Da;
const double p = 20.0 / 100.0; // level 20.0 / mz 100.0 (see WindowMower)
const double pscore = boost::math::ibeta(matched_size + 1, N - matched_size, p);
if (pscore <= std::numeric_limits<double>::min()) return -log10(std::numeric_limits<double>::min());
const double minusLog10p1pscore = -log10(pscore);
return minusLog10p1pscore;
}
*/
static void generateTheoreticalMZsZ1_(const AASequence& peptide,
const Residue::ResidueType& res_type,
std::vector<double>& mzs)
{
const Size N = peptide.size();
mzs.resize(N-1);
double mono_weight(Constants::PROTON_MASS_U);
if (res_type == Residue::BIon || res_type == Residue::AIon || res_type == Residue::CIon)
{
if (peptide.hasNTerminalModification())
{
mono_weight += peptide.getNTerminalModification()->getDiffMonoMass();
}
switch (res_type)
{
case Residue::AIon: mono_weight += Residue::getInternalToAIon().getMonoWeight(); break;
case Residue::BIon: mono_weight += Residue::getInternalToBIon().getMonoWeight(); break;
case Residue::CIon: mono_weight += Residue::getInternalToCIon().getMonoWeight(); break;
default: break;
}
for (Size i = 0; i < N-1; ++i) // < N-1: don't add last residue as it is part of precursor
{
mono_weight += peptide[i].getMonoWeight(Residue::Internal);
mzs[i] = mono_weight;
}
}
else // if (res_type == Residue::XIon || res_type == Residue::YIon || res_type == Residue::ZIon)
{
if (peptide.hasCTerminalModification())
{
mono_weight += peptide.getCTerminalModification()->getDiffMonoMass();
}
switch (res_type)
{
case Residue::XIon: mono_weight += Residue::getInternalToXIon().getMonoWeight(); break;
case Residue::YIon: mono_weight += Residue::getInternalToYIon().getMonoWeight(); break;
case Residue::ZIon: mono_weight += Residue::getInternalToZIon().getMonoWeight(); break;
default: break;
}
for (Size i = N-1; i > 0; --i) // > 0: don't add last residue (part of precursor ion)
{
mono_weight += peptide[i].getMonoWeight(Residue::Internal);
mzs[N-1-i] = mono_weight;
}
}
}
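/* For the N-terminal (b-ion) branch above, the z=1 ladder is simply one proton plus
   the cumulative internal residue masses, skipping the last residue (it only occurs
   in the precursor). A minimal sketch with hardcoded monoisotopic residue masses
   (values assumed here for G/A/S; the adapter takes them from the OpenMS residue
   database, and terminal modifications plus the internal-to-ion conversion are
   omitted): */

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// z=1 b-ion ladder: cumulative residue masses plus one proton
static std::vector<double> bIonLadderZ1(const std::vector<double>& residue_masses)
{
  const double PROTON = 1.007276466; // cf. Constants::PROTON_MASS_U
  std::vector<double> mzs(residue_masses.size() - 1);
  double m = PROTON;
  // last residue is skipped: it is only part of the precursor ion
  for (std::size_t i = 0; i + 1 < residue_masses.size(); ++i)
  {
    m += residue_masses[i];
    mzs[i] = m;
  }
  return mzs;
}
```

/* For "GAS" (G = 57.02146, A = 71.03711, S = 87.03203) this yields the b1 and b2
   ions; higher charge states are derived in the scoring code by adding (z-1)
   protons and dividing by z. */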
static double logfactorial_(UInt x)
{
const double dx = static_cast<double>(x);
if (x < 2) { return 0; }
double z(0);
for (double y = 2; y <= dx; ++y) { z += std::log(y); }
return z;
}
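/* logfactorial_ supplies the ln(count!) reward terms of the X!Tandem-style
   hyperscore computed in scorePeptideIons_ below:
   hyperScore = log1p(dot_product) + ln(y!) + ln(b!) + ln(a!).
   A minimal standalone sketch of that combination (names are illustrative): */

```cpp
#include <cassert>
#include <cmath>

// ln(x!) by direct summation, as in logfactorial_ above
static double logFactorial(unsigned x)
{
  if (x < 2) return 0.0; // 0! = 1! = 1
  double z = 0.0;
  for (double y = 2; y <= x; ++y) { z += std::log(y); }
  return z;
}

// hyperscore sketch: matched-intensity dot product damped by log1p,
// rewarded by the factorials of the per-ion-series match counts
static double hyperScoreSketch(double dot_product, unsigned b_count, unsigned y_count, unsigned a_count)
{
  return std::log1p(dot_product) + logFactorial(y_count) + logFactorial(b_count) + logFactorial(a_count);
}
```

/* The factorial terms grow quickly with the number of matched ions, so long
   consecutive ladders dominate over a few intense but isolated matches. */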
// score ions without nucleotide shift
static void scorePeptideIons_(
const PeakSpectrum& exp_spectrum,
const DataArrays::IntegerDataArray& exp_charges,
const vector<double>& total_loss_template_z1_b_ions,
const vector<double>& total_loss_template_z1_y_ions,
const double peptide_mass_without_NA,
const unsigned int pc_charge,
const ImmoniumIonsInPeptide& iip,
const double fragment_mass_tolerance,
const bool fragment_mass_tolerance_unit_ppm,
std::vector<double>& intensity_sum,
std::vector<double>& b_ions,
std::vector<double>& y_ions,
vector<bool>& peak_matched,
float& hyperScore,
float& MIC,
float& Morph,
float& modds,
float& err,
float& pc_MIC,
float& im_MIC,
size_t& n_theoretical_peaks)
{
OPENMS_PRECONDITION(exp_spectrum.size() >= 1, "Experimental spectrum empty.");
OPENMS_PRECONDITION(exp_charges.size() == exp_spectrum.size(), "Error: HyperScore: #charges != #peaks in experimental spectrum.");
OPENMS_PRECONDITION(total_loss_template_z1_b_ions.size() == total_loss_template_z1_y_ions.size(), "b- and y-ion arrays must have same size.");
OPENMS_PRECONDITION(total_loss_template_z1_b_ions.size() > 0, "b- and y-ion arrays must not be empty.");
OPENMS_PRECONDITION(intensity_sum.size() == total_loss_template_z1_b_ions.size(), "Sum array needs to be of same size as b-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == b_ions.size(), "Sum array needs to be of same size as b-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == y_ions.size(), "Sum array needs to be of same size as y-ion array");
OPENMS_PRECONDITION(peak_matched.size() == exp_spectrum.size(), "Peak matched needs to be of same size as experimental spectrum");
OPENMS_PRECONDITION(std::count_if(peak_matched.begin(), peak_matched.end(), [](bool b){return b == true;}) == 0, "Peak matched must be initialized to false");
double dot_product(0.0), b_mean_err(0.0), y_mean_err(0.0);
const Size N = intensity_sum.size();
size_t matches(0);
// maximum charge considered
const unsigned int max_z = std::min(2U, static_cast<unsigned int>(pc_charge - 1));
// match b-ions
for (Size z = 1; z <= max_z; ++z)
{
n_theoretical_peaks += total_loss_template_z1_b_ions.size();
for (Size i = 0; i < total_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (total_loss_template_z1_b_ions[i]
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
dot_product += intensity;
b_mean_err += Math::getPPMAbs(exp_mz, theo_mz);
b_ions[i] += intensity;
++matches;
peak_matched[index] = true;
}
}
}
}
// match a-ions
vector<double> a_ions(b_ions.size(), 0.0);
const double diff2b = -27.994915; // b-ion and a-ion ('CO' mass diff from b- to a-ion)
for (Size z = 1; z <= max_z; ++z)
{
n_theoretical_peaks += total_loss_template_z1_b_ions.size();
for (Size i = 0; i < total_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (total_loss_template_z1_b_ions[i] + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
dot_product += intensity;
a_ions[i] += intensity;
++matches;
peak_matched[index] = true;
}
}
}
}
// match y-ions
for (Size z = 1; z <= max_z; ++z)
{
n_theoretical_peaks += total_loss_template_z1_y_ions.size();
for (Size i = 0; i < total_loss_template_z1_y_ions.size(); ++i)
{
const double theo_mz = (total_loss_template_z1_y_ions[i] + (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
y_mean_err += Math::getPPMAbs(exp_mz, theo_mz);
dot_product += intensity;
y_ions[N-1 - i] += intensity;
++matches;
peak_matched[index] = true;
}
}
}
}
#ifdef CONSIDER_AA_LOSSES
// mark peaks matching AA-related neutral losses as matched so they count toward the explained peak fraction
// b-H2O
for (double diff2b : { -18.010565 } )
{
for (Size z = 1; z <= max_z; ++z)
{
for (Size i = 0; i < total_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (total_loss_template_z1_b_ions[i] + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
const double abs_err_DA = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_DA < max_dist_dalton)
{
peak_matched[index] = true;
}
}
}
}
// y-H2O and y-NH3
for (double diff2b : { -18.010565, -17.026549 } )
{
for (Size z = 1; z <= max_z; ++z)
{
for (Size i = 0; i < total_loss_template_z1_y_ions.size(); ++i)
{
const double theo_mz = (total_loss_template_z1_y_ions[i] + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
const double abs_err_DA = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_DA < max_dist_dalton)
{
peak_matched[index] = true;
}
}
}
}
#endif
// determine b+a and y-ion count
UInt y_ion_count(0), b_ion_count(0), a_ion_count(0);
for (Size i = 0; i != b_ions.size(); ++i)
{
if (b_ions[i] > 0)
{
intensity_sum[i] += b_ions[i];
++b_ion_count;
}
}
for (Size i = 0; i != y_ions.size(); ++i)
{
if (y_ions[i] > 0)
{
intensity_sum[i] += y_ions[i];
++y_ion_count;
}
}
for (Size i = 0; i != a_ions.size(); ++i)
{
if (a_ions[i] > 0)
{
intensity_sum[i] += a_ions[i];
++a_ion_count;
}
}
OPENMS_PRECONDITION(exp_spectrum.getFloatDataArrays()[0].getName() == "TIC", "No TIC stored in spectrum meta data.");
OPENMS_PRECONDITION(exp_spectrum.getFloatDataArrays()[0].size() == 1, "Exactly one TIC expected.");
const double& TIC = exp_spectrum.getFloatDataArrays()[0][0];
if (y_ion_count == 0 && b_ion_count == 0) // Note: don't check for a_ion_count here as this leads to division by zero at err calculation below
{
hyperScore = 0;
MIC = 0;
Morph = 0;
err = fragment_mass_tolerance;
}
else
{
const double bFact = logfactorial_(b_ion_count);
const double aFact = logfactorial_(a_ion_count);
const double yFact = logfactorial_(y_ion_count);
hyperScore = std::log1p(dot_product) + yFact + bFact + aFact;
MIC = std::accumulate(intensity_sum.begin(), intensity_sum.end(), 0.0);
for (auto& i : intensity_sum) { i /= TIC; } // scale intensity sum
MIC /= TIC;
Morph = b_ion_count + y_ion_count + MIC;
err = (y_mean_err + b_mean_err)/(b_ion_count + y_ion_count);
}
static const double M_star_pc_loss = EmpiricalFormula("CH4S").getMonoWeight(); // methionine related loss on precursor (see NuXLAnnotateAndLocate for annotation related code)
vector<double> precursor_losses = {0.0, -18.010565, -17.026548}; // normal, loss of water, loss of ammonia
if (iip.M)
{
precursor_losses.push_back(M_star_pc_loss);
}
// match precursor ions z = 1..pc_charge
double pc_match_count(0);
for (double pc_loss : precursor_losses ) // normal, loss of water, loss of ammonia
{
for (Size z = 1; z <= pc_charge; ++z)
{
const double theo_mz = (peptide_mass_without_NA + pc_loss + z * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
if (exp_z == z && std::abs(theo_mz - exp_mz) < max_dist_dalton)
{
if (!peak_matched[index])
{
const double intensity = exp_spectrum[index].getIntensity();
pc_MIC += intensity;
pc_match_count += 1.0;
++matches;
peak_matched[index] = true;
}
}
++n_theoretical_peaks;
}
}
pc_MIC /= TIC;
pc_MIC += pc_match_count; // Morpheus score
// shifted immonium ions
// lambda to match one peak and sum up in score
auto match_one_peak_z1 = [&](const double& theo_mz, float& score)
{
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
auto index = exp_spectrum.findNearest(theo_mz);
if (exp_charges[index] == 1 &&
std::abs(theo_mz - exp_spectrum[index].getMZ()) < max_dist_dalton) // found peak match
{
if (!peak_matched[index])
{
score += exp_spectrum[index].getIntensity();
++matches;
peak_matched[index] = true;
}
}
++n_theoretical_peaks;
};
// see DOI: 10.1021/pr3007045 A Systematic Investigation into the Nature of Tryptic HCD Spectra
static const double imY = EmpiricalFormula("C8H10NO").getMonoWeight(); // 85%
static const double imW = EmpiricalFormula("C10H11N2").getMonoWeight(); // 84%
static const double imF = EmpiricalFormula("C8H10N").getMonoWeight(); // 84%
static const double imL = EmpiricalFormula("C5H12N").getMonoWeight(); // I/L 76%
static const double imH = EmpiricalFormula("C5H8N3").getMonoWeight(); // 70%
static const double imC = EmpiricalFormula("C2H6NS").getMonoWeight(); // CaC 61%
static const double imK1 = EmpiricalFormula("C5H13N2").getMonoWeight(); // 2%
static const double imP = EmpiricalFormula("C4H8N").getMonoWeight(); //?
static const double imQ = 101.0715; // 52%
static const double imE = 102.0555; // 37%
static const double imM = 104.0534; // 3%
// static const double imN = 87.05584; // 11%
// static const double imD = 88.03986; // 4%
if (iip.Y)
{
match_one_peak_z1(imY, im_MIC);
}
if (iip.W)
{
match_one_peak_z1(imW, im_MIC);
}
if (iip.F)
{
match_one_peak_z1(imF, im_MIC);
}
if (iip.H)
{
match_one_peak_z1(imH, im_MIC);
}
if (iip.C)
{
match_one_peak_z1(imC, im_MIC);
}
if (iip.P)
{
match_one_peak_z1(imP, im_MIC);
}
if (iip.L)
{
match_one_peak_z1(imL, im_MIC);
}
if (iip.K)
{
match_one_peak_z1(imK1, im_MIC);
}
if (iip.M)
{
match_one_peak_z1(imM, im_MIC);
}
if (iip.Q)
{
match_one_peak_z1(imQ, im_MIC);
}
if (iip.E)
{
match_one_peak_z1(imE, im_MIC);
}
im_MIC /= TIC;
// if we only have 1 peak, assume some kind of average error so as not to underestimate the real error too much
err = Morph > 2 ? err : 2.0 * fragment_mass_tolerance * 1e-6 * 1000.0;
//const double p_random_match = exp_spectrum.getFloatDataArrays()[1][0];
const double p_random_match = 1e-3;
OPENMS_PRECONDITION(n_theoretical_peaks > 0, "Error: no theoretical peaks are generated");
modds = matchOddsScore_(n_theoretical_peaks, matches, p_random_match);
}
static void scoreShiftedLadderIons_(
const vector<NuXLFragmentAdductDefinition>& partial_loss_modification,
const vector<double>& partial_loss_template_z1_b_ions,
const vector<double>& partial_loss_template_z1_y_ions,
const double peptide_mass_without_NA,
const unsigned int pc_charge,
const ImmoniumIonsInPeptide& iip,
const double fragment_mass_tolerance,
const bool fragment_mass_tolerance_unit_ppm,
const PeakSpectrum& exp_spectrum,
const DataArrays::IntegerDataArray& exp_charges,
std::vector<double>& intensity_sum,
std::vector<double>& b_ions,
std::vector<double>& y_ions,
std::vector<bool>& peak_matched,
float& plss_hyperScore,
float& plss_MIC,
float& plss_Morph,
float& plss_err,
float& plss_modds,
float& plss_pc_MIC,
float& plss_im_MIC,
size_t& n_theoretical_peaks/*,
bool is_decoy*/)
{
OPENMS_PRECONDITION(exp_spectrum.size() >= 1, "Experimental spectrum empty.");
OPENMS_PRECONDITION(exp_charges.size() == exp_spectrum.size(), "Error: HyperScore: #charges != #peaks in experimental spectrum.");
OPENMS_PRECONDITION(intensity_sum.size() == partial_loss_template_z1_b_ions.size(), "Sum array needs to be of same size as b-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == partial_loss_template_z1_y_ions.size(), "Sum array needs to be of same size as y-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == b_ions.size(), "Sum array needs to be of same size as b-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == y_ions.size(), "Sum array needs to be of same size as y-ion array");
OPENMS_PRECONDITION(partial_loss_template_z1_b_ions.size() == partial_loss_template_z1_y_ions.size(), "b- and y-ion arrays must have same size.");
OPENMS_PRECONDITION(partial_loss_template_z1_b_ions.size() > 0, "b- and y-ion arrays must not be empty.");
auto ambiguous_match = [&](const double& mz, const double z, const String& name)->bool
{
auto it = fragment_adduct2block_if_masses_present.find(name); // get vector of blocked mass lists
if (it != fragment_adduct2block_if_masses_present.end())
{
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
for (auto ml : it->second)
{ // for all blocked mass lists
bool mass_list_matches{true};
for (const double m : ml)
{ // for all masses in current mass list
Size index = exp_spectrum.findNearest(mz - m * z);
const double exp_mz = exp_spectrum[index].getMZ();
const double abs_err_Da = std::fabs(mz - m * z - exp_mz);
if (abs_err_Da >= max_dist_dalton) // no match? then this mass list is not ambiguous (break and continue with next one)
{
mass_list_matches = false;
break;
}
}
if (mass_list_matches) { return true; } // mass list matched every peak -> ambiguous explanation
}
}
return false;
};
double dot_product(0.0), b_mean_err(0.0), y_mean_err(0.0);
const Size N = intensity_sum.size(); // number of bonds = length of peptide - 1
size_t n_theoretical_XL_peaks(0);
size_t matches(0);
// maximum charge considered
const unsigned int max_z = std::min(2U, static_cast<unsigned int>(pc_charge - 1));
const double diff2b = -27.994915; // b-ion and a-ion ('CO' mass diff from b- to a-ion)
// find best matching adducts and charges for b-ions
vector<std::tuple<size_t, size_t, NuXLFragmentAdductDefinition>> matches_z_fa;
for (Size z = 1; z <= max_z; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
size_t z_fa(0);
for (Size i = 0; i < partial_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (partial_loss_template_z1_b_ions[i] + fa.mass
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index]) // not already matched (e.g., by unshifted peak)
{
z_fa++;
}
}
}
if (z_fa != 0)
matches_z_fa.emplace_back(z_fa, z, fa);
}
}
for (Size z = 1; z <= max_z; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
size_t z_fa(0);
for (Size i = 0; i < partial_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (partial_loss_template_z1_b_ions[i] + fa.mass + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index]) // not already matched (e.g., by unshifted peak)
{
z_fa++;
}
}
}
if (z_fa != 0)
matches_z_fa.emplace_back(z_fa, z, fa);
}
}
for (Size z = 1; z <= max_z; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
size_t z_fa(0);
for (Size i = 1; i < partial_loss_template_z1_y_ions.size(); ++i) // Note: start at i=1 (-> y2) as trypsin would otherwise not cut at the cross-linking site
{
const double theo_mz = (partial_loss_template_z1_y_ions[i] + fa.mass
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index]) // not already matched (e.g., by unshifted peak)
{
z_fa++;
}
}
}
if (z_fa != 0)
matches_z_fa.emplace_back(z_fa, z, fa);
}
}
std::sort(matches_z_fa.begin(), matches_z_fa.end(), [](const auto& a, const auto& b)
{
return std::tie(get<0>(a), get<1>(a), get<2>(a)) > std::tie(get<0>(b), get<1>(b), get<2>(b));
}
); // sorts by first element descending
if (matches_z_fa.size() > 3) matches_z_fa.resize(3); // keep top 3 adducts (most peaks matched)
// match b-ions
for (const auto& t : matches_z_fa) // for best 3 adducts
// for (Size z = 1; z <= max_z; ++z)
{
const Size z = get<1>(t);
const NuXLFragmentAdductDefinition & fa = get<2>(t);
// for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
n_theoretical_XL_peaks += partial_loss_template_z1_b_ions.size();
for (Size i = 0; i < partial_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (partial_loss_template_z1_b_ions[i] + fa.mass
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
b_mean_err += Math::getPPMAbs(exp_mz, theo_mz);
dot_product += intensity;
b_ions[i] += intensity;
peak_matched[index] = true;
matches++;
}
}
}
}
}
// match a-ions
vector<double> a_ions(b_ions.size(), 0.0);
for (const auto& t : matches_z_fa) // for best 3 adducts
// for (Size z = 1; z <= max_z; ++z)
{
const Size z = get<1>(t);
const NuXLFragmentAdductDefinition & fa = get<2>(t);
// for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
n_theoretical_XL_peaks += partial_loss_template_z1_b_ions.size();
for (Size i = 0; i < partial_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (partial_loss_template_z1_b_ions[i] + fa.mass + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
dot_product += intensity;
a_ions[i] += intensity;
peak_matched[index] = true;
matches++;
}
}
}
}
}
// match y-ions
for (const auto& t : matches_z_fa) // for best 3 adducts
// for (Size z = 1; z <= max_z; ++z)
{
const Size z = get<1>(t);
const NuXLFragmentAdductDefinition & fa = get<2>(t);
// for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
n_theoretical_XL_peaks += partial_loss_template_z1_y_ions.size() - 1;
for (Size i = 1; i < partial_loss_template_z1_y_ions.size(); ++i) // Note that we start at (i=1 -> y2) as trypsin does not cut at the cross-linking site
{
const double theo_mz = (partial_loss_template_z1_y_ions[i] + fa.mass
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
// iterate over peaks in experimental spectrum in given fragment tolerance around theoretical peak
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
double intensity = exp_spectrum[index].getIntensity();
y_mean_err += Math::getPPMAbs(exp_mz, theo_mz);
dot_product += intensity;
y_ions[N-1 - i] += intensity;
peak_matched[index] = true;
matches++;
}
}
}
}
}
#ifdef CONSIDER_AA_LOSSES
// block peaks matching to AA related neutral losses so they don't get matched to NA shifts
// b-H2O
for (double diff2b : { -18.010565 } )
{
for (Size z = 1; z <= max_z; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
for (Size i = 0; i < partial_loss_template_z1_b_ions.size(); ++i)
{
const double theo_mz = (partial_loss_template_z1_b_ions[i] + fa.mass + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
peak_matched[index] = true;
}
}
}
}
}
}
// match y-ions
// y-H2O and y-NH3
for (double diff2b : { -18.010565, -17.026549 } )
for (Size z = 1; z <= max_z; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
for (Size i = 1; i < partial_loss_template_z1_y_ions.size(); ++i) // Note that we start at (i=1 -> y2) as trypsin does not cut at cross-linking site
{
const double theo_mz = (partial_loss_template_z1_y_ions[i] + fa.mass + diff2b
+ (z-1) * Constants::PROTON_MASS_U) / z;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
peak_matched[index] = true;
}
}
}
}
}
#endif
UInt y_ion_count(0), b_ion_count(0), a_ion_count(0);
double b_sum(0.0);
for (Size i = 0; i != b_ions.size(); ++i)
{
if (b_ions[i] > 0)
{
intensity_sum[i] += b_ions[i];
b_sum += b_ions[i];
++b_ion_count;
}
}
double y_sum(0.0);
for (Size i = 0; i != y_ions.size(); ++i)
{
if (y_ions[i] > 0)
{
intensity_sum[i] += y_ions[i];
y_sum += y_ions[i];
++y_ion_count;
}
}
double a_sum(0.0);
for (Size i = 0; i != a_ions.size(); ++i)
{
if (a_ions[i] > 0)
{
intensity_sum[i] += a_ions[i];
a_sum += a_ions[i];
++a_ion_count;
}
}
OPENMS_PRECONDITION(exp_spectrum.getFloatDataArrays()[0].getName() == "TIC", "No TIC stored in spectrum meta data.");
OPENMS_PRECONDITION(exp_spectrum.getFloatDataArrays()[0].size() == 1, "Exactly one TIC expected.");
const double& TIC = exp_spectrum.getFloatDataArrays()[0][0];
if (y_ion_count == 0 && b_ion_count == 0) //Note: don't check for a_ion_count here as this might lead to division by zero when plss_err is calculated
{
plss_hyperScore = 0;
plss_MIC = 0;
plss_Morph = 0;
plss_err = fragment_mass_tolerance;
}
else
{
const double bFact = logfactorial_(b_ion_count);
const double aFact = logfactorial_(a_ion_count);
const double yFact = logfactorial_(y_ion_count);
plss_hyperScore = std::log1p(dot_product) + yFact + bFact + aFact;
plss_MIC = std::accumulate(intensity_sum.begin(), intensity_sum.end(), 0.0);
for (auto& i : intensity_sum) { i /= TIC; } // scale intensity sum
plss_MIC /= TIC;
plss_Morph = b_ion_count + y_ion_count + plss_MIC;
plss_err = (y_mean_err + b_mean_err)/(b_ion_count + y_ion_count);
}
// match (partially) shifted precursor ions, z = 1..pc_charge
double pc_match_count(0);
for (double pc_loss : {0.0, -18.010565, -17.026549} ) // normal, loss of water, loss of ammonia
{
const double peptide_mass = peptide_mass_without_NA + pc_loss;
for (Size z = 1; z <= pc_charge; ++z)
{
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
const double theo_mz = (peptide_mass + fa.mass + z * Constants::PROTON_MASS_U) / z;
// TODO move out?
auto it = std::find(exp_spectrum.getStringDataArrays()[0].begin(), exp_spectrum.getStringDataArrays()[0].end(), fa.name);
bool has_tag_that_matches_fragmentadduct = (it != exp_spectrum.getStringDataArrays()[0].end());
if (has_tag_that_matches_fragmentadduct && ambiguous_match(theo_mz, z, fa.name)) continue;
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
Size index = exp_spectrum.findNearest(theo_mz);
const double exp_mz = exp_spectrum[index].getMZ();
const Size exp_z = exp_charges[index];
// found peak match
const double abs_err_Da = std::abs(theo_mz - exp_mz);
if (exp_z == z && abs_err_Da < max_dist_dalton)
{
if (!peak_matched[index])
{
const double intensity = exp_spectrum[index].getIntensity();
plss_pc_MIC += intensity;
pc_match_count += 1.0;
peak_matched[index] = true;
matches++;
}
}
++n_theoretical_XL_peaks;
}
}
}
plss_pc_MIC /= TIC;
plss_pc_MIC += pc_match_count; // Morpheus score
////////////////////////////////////////////////////////////////////////////////
// match shifted immonium ions
// lambda to match one peak and sum up in score
auto match_one_peak_z1 = [&](const double& theo_mz, float& score)
{
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? theo_mz * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
auto index = exp_spectrum.findNearest(theo_mz);
if (exp_charges[index] == 1 &&
std::abs(theo_mz - exp_spectrum[index].getMZ()) < max_dist_dalton) // found peak match
{
if (!peak_matched[index])
{
score += exp_spectrum[index].getIntensity();
peak_matched[index] = true;
// matches++;
}
}
// ++n_theoretical_XL_peaks;
};
static const double imY = EmpiricalFormula("C8H10NO").getMonoWeight();
static const double imW = EmpiricalFormula("C10H11N2").getMonoWeight();
static const double imF = EmpiricalFormula("C8H10N").getMonoWeight();
static const double imH = EmpiricalFormula("C5H8N3").getMonoWeight();
static const double imC = EmpiricalFormula("C2H6NS").getMonoWeight();
static const double imP = EmpiricalFormula("C4H8N").getMonoWeight();
static const double imL = EmpiricalFormula("C5H12N").getMonoWeight();
static const double imK1 = EmpiricalFormula("C5H13N2").getMonoWeight();
static const double imK2 = EmpiricalFormula("C5H10N1").getMonoWeight();
// static const double imK3 = EmpiricalFormula("C6H13N2O").getMonoWeight();
static const double imQ = 101.0715;
static const double imE = 102.0555;
static const double imM = 104.0534;
static const double imM_star = EmpiricalFormula("CH5S").getMonoWeight();
for (const NuXLFragmentAdductDefinition & fa : partial_loss_modification)
{
if (iip.Y)
{
match_one_peak_z1(imY + fa.mass, plss_im_MIC);
}
if (iip.W)
{
match_one_peak_z1(imW + fa.mass, plss_im_MIC);
}
if (iip.F)
{
match_one_peak_z1(imF + fa.mass, plss_im_MIC);
}
if (iip.H)
{
match_one_peak_z1(imH + fa.mass, plss_im_MIC);
}
if (iip.C)
{
match_one_peak_z1(imC + fa.mass, plss_im_MIC);
}
if (iip.P)
{
match_one_peak_z1(imP + fa.mass, plss_im_MIC);
}
if (iip.L)
{
match_one_peak_z1(imL + fa.mass, plss_im_MIC);
}
if (iip.K)
{
match_one_peak_z1(imK1 + fa.mass, plss_im_MIC);
// according to A. Stuetzer, mainly observed with C'-NH3 (94.0167 Da)
match_one_peak_z1(imK2 + fa.mass, plss_im_MIC);
// usually only observed without shift (A. Stuetzer)
// TODO: only enable for DNA? gets sometimes matched by chance
// match_one_peak_z1(imK3 + fa.mass, plss_im_MIC);
}
if (iip.M)
{
match_one_peak_z1(imM + fa.mass, plss_im_MIC);
match_one_peak_z1(imM_star + fa.mass, plss_im_MIC); // methionine related immonium ion often seen in DEB and NM
}
if (iip.Q)
{
match_one_peak_z1(imQ + fa.mass, plss_im_MIC);
}
if (iip.E)
{
match_one_peak_z1(imE + fa.mass, plss_im_MIC);
}
}
plss_im_MIC /= TIC;
// if we only have 1 peak, assume some kind of average error so we do not underestimate the real error too much
// plss_err = plss_Morph > 2 ? plss_err : fragment_mass_tolerance;
assert(n_theoretical_XL_peaks != 0);
//const double p_random_match = exp_spectrum.getFloatDataArrays()[1][0];
const double p_random_match = 1e-3;
plss_modds = matchOddsScore_(n_theoretical_XL_peaks, matches, p_random_match);
n_theoretical_peaks += n_theoretical_XL_peaks;
}
/*
* Combine subscores of all-ion scoring.
*/
static float calculateCombinedScore(
const NuXLAnnotatedHit& ah/*,
const bool isXL,
const double nucleotide_mass_tags
*/
//, const double fraction_of_top50annotated
)
{
return ah.modds + ah.pl_modds;
}
static float calculateFastScore(const NuXLAnnotatedHit& ah)
{
return ah.modds;
}
/*
* Score fragments carrying NA adducts
*/
static void scoreXLIons_(
const vector<NuXLFragmentAdductDefinition> &partial_loss_modification,
const ImmoniumIonsInPeptide& iip,
const PeakSpectrum &exp_spectrum,
const double peptide_mass_without_NA,
double fragment_mass_tolerance,
bool fragment_mass_tolerance_unit_ppm,
const vector<double> &partial_loss_template_z1_b_ions,
const vector<double> &partial_loss_template_z1_y_ions,
const PeakSpectrum &marker_ions_sub_score_spectrum_z1,
vector<double>& intensity_sum,
vector<double>& b_ions,
vector<double>& y_ions,
vector<bool>& matched_peaks,
float &partial_loss_sub_score,
float &marker_ions_sub_score,
float &plss_MIC,
float &plss_err,
float &plss_Morph,
float &plss_modds,
float &plss_pc_MIC,
float &plss_im_MIC,
size_t &n_theoretical_peaks,
const PeakSpectrum &all_possible_marker_ion_sub_score_spectrum_z1)
{
(void)marker_ions_sub_score_spectrum_z1; // unused parameter
OPENMS_PRECONDITION(!partial_loss_template_z1_b_ions.empty(), "Empty partial loss spectrum provided.");
OPENMS_PRECONDITION(intensity_sum.size() == partial_loss_template_z1_b_ions.size(), "Sum array needs to be of same size as b-ion array");
OPENMS_PRECONDITION(intensity_sum.size() == partial_loss_template_z1_y_ions.size(), "Sum array needs to be of same size as y-ion array");
const SignedSize& exp_pc_charge = exp_spectrum.getPrecursors()[0].getCharge();
//const double exp_pc_mz = exp_spectrum.getPrecursors()[0].getMZ();
if (!all_possible_marker_ion_sub_score_spectrum_z1.empty())
{
auto const & r = MorpheusScore::compute(fragment_mass_tolerance * 2.0,
fragment_mass_tolerance_unit_ppm,
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
all_possible_marker_ion_sub_score_spectrum_z1,
all_possible_marker_ion_sub_score_spectrum_z1.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX]);
marker_ions_sub_score = r.TIC != 0 ? r.MIC / r.TIC : 0;
// count marker ions
// n_theoretical_peaks += marker_ions_sub_score_spectrum_z1.size();
}
scoreShiftedLadderIons_(
partial_loss_modification,
partial_loss_template_z1_b_ions,
partial_loss_template_z1_y_ions,
peptide_mass_without_NA,
exp_pc_charge,
iip,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
intensity_sum,
b_ions,
y_ions,
matched_peaks,
partial_loss_sub_score,
plss_MIC,
plss_Morph,
plss_err,
plss_modds,
plss_pc_MIC,
plss_im_MIC,
n_theoretical_peaks/*,
is_decoy*/);
#ifdef DEBUG_OpenNuXL
LOG_DEBUG << "scan index: " << scan_index << " achieved score: " << score << endl;
#endif
}
// De novo tagger
class OpenNuXLTagger
{
public:
// initialize tagger with minimum/maximum tag length and +/- tolerance (in Da)
explicit OpenNuXLTagger(float tol = 0.05, size_t min_tag_length = 0, size_t max_tag_length = 65535)
{
tol_ = tol;
min_tag_length_ = min_tag_length;
max_tag_length_ = max_tag_length;
const std::set<const Residue*> aas = ResidueDB::getInstance()->getResidues("Natural19WithoutI");
for (const auto& r : aas)
{
const char letter = r->getOneLetterCode()[0];
const float mass = r->getMonoWeight(Residue::Internal);
mass2aa[mass] = letter;
#ifdef DEBUG_OPENNUXL_TAGGER
OPENMS_LOG_INFO << "Mass: " << mass << "\t" << letter << endl;
#endif
}
min_gap_ = mass2aa.begin()->first - tol;
max_gap_ = mass2aa.rbegin()->first + tol;
#ifdef DEBUG_OPENNUXL_TAGGER
TheoreticalSpectrumGenerator test;
auto param = test.getParameters();
param.setValue("add_first_prefix_ion", "true");
param.setValue("add_abundant_immonium_ions", "false"); // we add them manually for charge 1
param.setValue("add_precursor_peaks", "true");
param.setValue("add_all_precursor_charges", "false"); // we add them manually for every charge
param.setValue("add_metainfo", "false");
param.setValue("add_a_ions", "false");
param.setValue("add_b_ions", "true");
param.setValue("add_c_ions", "false");
param.setValue("add_x_ions", "false");
param.setValue("add_y_ions", "true");
param.setValue("add_z_ions", "false");
test.setParameters(param);
MSSpectrum test_s;
test.getSpectrum(test_s, AASequence::fromString("TESTPEPTIDE"), 1, 1);
OPENMS_LOG_INFO << "should be ESTPEPTIDE:" << getLongestTag(test_s) << endl;
#endif
}
void getTag(const std::vector<float>& mzs, std::set<std::string>& tags) const
{
// start peak
if (min_tag_length_ > mzs.size()) return; // avoid segfault
std::string tag;
for (size_t i = 0; i < mzs.size() - min_tag_length_; ++i)
{
getTag_(tag, mzs, i, tags);
tag.clear();
}
}
// generate tags from mass vector @mzs using the standard residues in ResidueDB
void getTag(const MSSpectrum& spec, std::set<std::string>& tags) const
{
const size_t N = spec.size();
if (N < min_tag_length_) { return; }
// copy to float vector (speed)
std::vector<float> mzs;
mzs.reserve(N);
for (auto const& p : spec) { mzs.push_back(p.getMZ()); }
getTag(mzs, tags);
}
// return the longest tag found in spectrum @spec using the standard residues in ResidueDB
std::string getLongestTag(const MSSpectrum& spec) const
{
std::set<std::string> tags;
getTag(spec, tags);
if (tags.empty()) return "";
//cout << "Ntags:" << tags.size() << endl;
//for (const auto & s: tags) { cout << s << endl; }
const auto longest = std::max_element(tags.cbegin(), tags.cend(),
[](const std::string& lhs, const std::string& rhs) { return lhs.size() < rhs.size(); });
//cout << "longest:" << *longest << endl;
return *longest;
}
// note: this is much more efficient
size_t getLongestTagLength(const MSSpectrum& spec) const
{
// simple DP to detect longest tag
const size_t N = spec.size();
if (N < 2) return 0;
std::vector<float> mzs;
mzs.reserve(N);
for (auto const& p : spec) { mzs.push_back(p.getMZ()); }
std::vector<size_t> max_tag(N, 0); // maximum tag length up to this peak
size_t longest_tag = 0;
for (size_t i = 0; i < N - 1; ++i)
{
for (size_t k = i + 1; k < N; ++k)
{
const double gap = mzs[k] - mzs[i];
if (gap > max_gap_) { break; }
const char aa = getAAByMass_(gap);
if (aa == ' ') { continue; } // can't extend tag to k-th peak
if (max_tag[k] < max_tag[i] + 1) // check if we found a longer tag to k-th peak
{
++max_tag[k]; // update longest tag to this peak
if (longest_tag < max_tag[k]) { longest_tag = max_tag[k]; }
}
}
}
return longest_tag;
}
private:
float min_gap_; //< will be set to smallest residue mass in ResidueDB
float max_gap_; //< will be set to highest residue mass in ResidueDB
float tol_; //< tolerance
size_t min_tag_length_; //< minimum tag length
size_t max_tag_length_; //< maximum tag length
std::map<float, char> mass2aa;
char getAAByMass_(float m) const
{
// fast check for border cases
if (m < min_gap_ || m > max_gap_) return ' ';
auto left = mass2aa.lower_bound(m - tol_);
//if (left == mass2aa.end()) return ' '; // cannot happen, since we checked boundaries above
if (fabs(left->first - m) < tol_) return left->second;
return ' ';
}
void getTag_(std::string & tag, const std::vector<float>& mzs, const size_t i, std::set<std::string>& tags) const
{
const size_t N = mzs.size();
size_t j = i + 1;
// recurse for all peaks in distance < max_gap
while (j < N)
{
if (tag.size() == max_tag_length_) { return; } // maximum tag size reached? - continue with next parent
const float gap = mzs[j] - mzs[i];
if (gap > max_gap_) { return; } // already too far away - continue with next parent
const char aa = getAAByMass_(gap);
#ifdef DEBUG_OPENNUXL_TAGGER
OPENMS_LOG_INFO << i << "\t" << j << "\t" << mzs[i] << "\t" << mzs[j] << "\t" << gap << "\t'" << aa << "'" << endl;
#endif
if (aa == ' ') { ++j; continue; } // can't extend tag
tag += aa;
getTag_(tag, mzs, j, tags);
if (tag.size() >= min_tag_length_) tags.insert(tag);
tag.pop_back(); // remove last char
++j;
}
}
};
struct RankScores
{
double explained_peak_fraction = 0.0;
size_t explained_peaks = 0;
double wTop50 = 0.0;
};
/*
class SmallestElements
{
private:
int max_size_;
public:
priority_queue<size_t, std::vector<size_t>, std::greater<size_t>> pq;
explicit SmallestElements(size_t size):
max_size_(size)
{
}
void tryAdd(size_t v)
{
if ((int)pq.size() < max_size_)
{
pq.push(v);
return;
}
if (v < pq.top())
{
pq.pop(); //get rid of the root
pq.push(v); //priority queue will automatically restructure
}
}
};
*/
RankScores rankScores_(const MSSpectrum& spectrum, vector<bool> peak_matched)
{
if (spectrum.empty()) return {0.0, 0, 1e10};
const double matched = std::accumulate(peak_matched.begin(), peak_matched.end(), 0);
if (matched == 0) return {0.0, 0, 1e10};
RankScores r;
vector<double> matched_ranks;
for (size_t i = 0; i != peak_matched.size(); ++i)
{
if (!peak_matched[i]) { continue; }
matched_ranks.push_back(spectrum.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX][i]);
}
std::sort(matched_ranks.begin(), matched_ranks.end());
double avg_int{};
size_t n_unexplained_greater_avg{};
for (size_t i = 0; i != spectrum.size(); ++i)
{
if (peak_matched[i]) avg_int += spectrum[i].getIntensity() / matched;
}
// cout << "Matched.: " << matched << endl;
// cout << "Avg.: " << avg_int << endl;
for (size_t i = 0; i != spectrum.size(); ++i)
{
if (peak_matched[i] == false && spectrum[i].getIntensity() > avg_int) ++n_unexplained_greater_avg;
}
// number of unexplained peaks with intensity higher than the mean intensity of matched peaks
r.wTop50 = n_unexplained_greater_avg;
r.explained_peaks = matched;
r.explained_peak_fraction = matched / (double)spectrum.size();
return r;
}
/*
RankScores rankScores_(const MSSpectrum& spectrum, vector<bool> peak_matched)
{
double matched = std::accumulate(peak_matched.begin(), peak_matched.end(), 0);
SmallestElements top7ranks(7);
RankScores r;
for (size_t i = 0; i != peak_matched.size(); ++i)
{
if (!peak_matched[i])
{
continue;
}
else
{
const double rank = 1 + spectrum.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX][i]; // ranks start at 0 -> add 1
r.rp += 1.0/matched * log((double)rank);
top7ranks.tryAdd(rank);
}
}
size_t median = peak_matched.size() / 2; // init to number of peaks / 2
for (size_t i = 1; i <= 4; ++i)
{
if (top7ranks.pq.empty()) break;
median = top7ranks.pq.top();
top7ranks.pq.pop();
}
r.wTop50 = median;
r.rp = exp(r.rp - 1.0 / matched * lgamma(matched+1)); // = rp / lowest possible rp given number of matches
return r;
}
*/
static map<String, vector<vector<double>>> fragment_adduct2block_if_masses_present;
static set<double> getSetOfAdductMasses(const NuXLParameterParsing::NucleotideToFragmentAdductMap & nucleotide_to_fragment_adducts)
{
set<double> adduct_mass;
for (const auto & p : nucleotide_to_fragment_adducts)
{
for (const auto & fa : p.second)
{
adduct_mass.insert(fa.mass);
}
}
return adduct_mass;
}
static map<double, map<const Residue*, double>> getMapAAPlusAdductMass(const set<double>& adduct_mass,
const std::string& debug_file)
{
map<double, map<const Residue*, double>> aa_plus_adduct_mass;
auto residues = ResidueDB::getInstance()->getResidues("Natural19WithoutI");
for (const double d : adduct_mass)
{ // for every fragment adduct mass
for (const Residue* r : residues)
{ // calculate mass of adduct bound to an internal residue
double m = d + r->getMonoWeight(Residue::Internal);
aa_plus_adduct_mass[m][r] = d; // mass, residue, shift mass
}
}
// add mass shifts of plain residues (= no adduct)
for (const Residue* r : residues)
{
double m = r->getMonoWeight(Residue::Internal);
aa_plus_adduct_mass[m][r] = 0;
}
if (!debug_file.empty())
{
// output ambiguous masses
ofstream of;
of.open(debug_file);
of << "Ambiguous residues (+adduct) masses that exactly match to other masses." << endl;
of << "Total\tResidue\tAdduct" << endl;
for (auto& m : aa_plus_adduct_mass)
{
double mass = m.first;
if (m.second.size() == 1) continue;
// more than one residue / adduct registered for that mass
for (auto& a : m.second)
{
of << mass << "\t" << a.first->getOneLetterCode() << "\t" << a.second << "\n";
}
}
of.close();
// Calculate background statistics on shifts
OPENMS_LOG_DEBUG << "mass\tresidue\tshift:" << endl;
for (const auto& mra : aa_plus_adduct_mass)
{
double m = mra.first;
const map<const Residue*, double>& residue2adduct = mra.second;
for (auto& r2a : residue2adduct)
{
OPENMS_LOG_DEBUG << m << "\t" << r2a.first->getOneLetterCode() << "\t" << r2a.second << endl;
}
}
}
return aa_plus_adduct_mass;
}
static map<double, set<String>> getAdductMass2Name(const NuXLParameterParsing::NucleotideToFragmentAdductMap& nucleotide_to_fragment_adducts)
{
map<double, set<String>> adduct_mass2adduct_names;
for (const auto & p : nucleotide_to_fragment_adducts)
{
for (const auto & fa : p.second)
{
adduct_mass2adduct_names[fa.mass].insert(fa.name);
}
}
return adduct_mass2adduct_names;
}
static map<double, map<const Residue*, String>> getMapAAPlusAdductMassToResidueToAdductName(const NuXLParameterParsing::NucleotideToFragmentAdductMap& nucleotide_to_fragment_adducts)
{
map<double, map<const Residue*, String>> res_adduct_mass2residue2adduct;
auto residues = ResidueDB::getInstance()->getResidues("Natural19WithoutI");
for (const auto & p : nucleotide_to_fragment_adducts)
{
for (const auto & fa : p.second)
{
for (const Residue* r : residues)
{
double m = fa.mass + r->getMonoWeight(Residue::Internal); // mass of residue + fragment adduct
res_adduct_mass2residue2adduct[m][r] = fa.name; // e.g. 432.1->'L'->'U-H2O'
}
}
}
return res_adduct_mass2residue2adduct;
}
static void getTagToAdduct(
const NuXLParameterParsing::NucleotideToFragmentAdductMap& nucleotide_to_fragment_adducts,
map<String, set<String>>& tag2ADs,
unordered_map<String, unordered_set<String>>& ADs2tag,
const double fragment_mass_tolerance,
const bool fragment_mass_tolerance_unit_ppm)
{
// create map from adduct mass to adduct name (e.g., "U-H2O")
map<double, set<String>> adduct_mass2adduct_names = getAdductMass2Name(nucleotide_to_fragment_adducts);
// create map from residue + adduct to residue to adduct names
map<double, map<const Residue*, String>> res_adduct_mass2residue2adduct = getMapAAPlusAdductMassToResidueToAdductName(nucleotide_to_fragment_adducts);
auto residues = ResidueDB::getInstance()->getResidues("Natural19WithoutI");
// check if 2 AA match 1AA + adduct
for (const Residue* a : residues)
{
double am = a->getMonoWeight(Residue::Internal);
for (const Residue* b : residues)
{ // for all pairs of residues
double bm = b->getMonoWeight(Residue::Internal);
const double abmass = am + bm;
// take 1000 Da as reference mass so we get meaningful Da from ppm
const float tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, abmass + 1000.0) : fragment_mass_tolerance;
// find all (shifted/normal) residues that match to an observed shift
auto left = res_adduct_mass2residue2adduct.lower_bound(abmass - tolerance);
auto right = res_adduct_mass2residue2adduct.upper_bound(abmass + tolerance);
for (; left != right; ++left)
{ // found at least one AA + adduct mass that matches to A+B mass
auto& residues2adductname = left->second;
const String& A = a->getOneLetterCode();
const String& B = b->getOneLetterCode();
String tag = A+B;
// sort AA tag because there is no difference between e.g., "AB" or "BA"
std::sort(tag.begin(), tag.end());
for (auto& r2s : residues2adductname)
{
const String& adduct_name = r2s.second;
OPENMS_LOG_DEBUG << abmass << ":" << tag << "=" << r2s.first->getOneLetterCode() << "+" << adduct_name << endl;
tag2ADs[tag].insert(adduct_name);
ADs2tag[adduct_name].insert(tag);
vector<double> list;
list.push_back(am);
list.push_back(bm);
// for every fragment adduct name, store the amino acid masses that would also match
fragment_adduct2block_if_masses_present[adduct_name].push_back(list);
}
}
}
}
// 2 AA vs adduct (e.g., is it 2 AA or is a fragment ion observed both with and without adduct)
for (const Residue* a : residues)
{
double am = a->getMonoWeight(Residue::Internal);
for (const Residue* b : residues)
{
double bm = b->getMonoWeight(Residue::Internal);
const double abmass = am + bm;
// take 1000 Da as reference mass so we get meaningful Da from ppm
const float tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, abmass + 1000.0) : fragment_mass_tolerance;
auto left = adduct_mass2adduct_names.lower_bound(abmass - tolerance);
auto right = adduct_mass2adduct_names.upper_bound(abmass + tolerance);
for (; left != right; ++left)
{ // found at least one adduct mass that matches to A+B mass
const String& A = a->getOneLetterCode();
const String& B = b->getOneLetterCode();
String tag = A+B;
// sort AA tag because there is no difference between e.g., "AB" or "BA"
std::sort(tag.begin(), tag.end());
for (auto& adduct_name : left->second)
{
OPENMS_LOG_DEBUG << (am + bm) << ":" << tag << "=" << adduct_name << endl;
tag2ADs[tag].insert(adduct_name);
ADs2tag[adduct_name].insert(tag);
vector<double> list;
list.push_back(am);
list.push_back(bm);
fragment_adduct2block_if_masses_present[adduct_name].push_back(list);
}
}
}
}
// The bad cases where one AA matches to one AA + adduct or even one AA to adduct
// This is expected to happen quite a bit in real data.
// 1 (heavy) AA vs 1 (light) AA + adduct
for (const Residue* a : residues)
{
double am = a->getMonoWeight(Residue::Internal);
// take 1000 Da as reference mass so we get meaningful Da from ppm
const float tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, am + 1000.0) : fragment_mass_tolerance;
// find all (shifted/normal) residues that match to an observed shift
auto left = res_adduct_mass2residue2adduct.lower_bound(am - tolerance);
auto right = res_adduct_mass2residue2adduct.upper_bound(am + tolerance);
for (; left != right; ++left)
{ // at least one AA matches another AA + adduct mass
auto& residues2adductname = left->second;
const String& A = a->getOneLetterCode();
for (auto& r2s : residues2adductname)
{
const String& adduct_name = r2s.second;
OPENMS_LOG_DEBUG<< am << ":" << A << "=" << r2s.first->getOneLetterCode() << "+" << adduct_name << endl;
tag2ADs[A].insert(adduct_name);
ADs2tag[adduct_name].insert(A);
vector<double> list;
list.push_back(am);
fragment_adduct2block_if_masses_present[adduct_name].push_back(list);
}
}
}
// 1 AA vs adduct (e.g., worst case an AA matches to an adduct)
for (const Residue* a : residues)
{
double am = a->getMonoWeight(Residue::Internal);
// take 1000 Da as reference mass so we get meaningful Da from ppm
const float tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, am + 1000.0) : fragment_mass_tolerance;
auto left = adduct_mass2adduct_names.lower_bound(am - tolerance);
auto right = adduct_mass2adduct_names.upper_bound(am + tolerance);
for (; left != right; ++left)
{ // at least one adduct mass matches another AA
const String& A = a->getOneLetterCode();
for (auto& adduct_name : left->second)
{
OPENMS_LOG_DEBUG << am << ":" << A << "=" << adduct_name << endl;
tag2ADs[A].insert(adduct_name);
ADs2tag[adduct_name].insert(A);
vector<double> list;
list.push_back(am);
fragment_adduct2block_if_masses_present[adduct_name].push_back(list);
}
}
}
}
static void calculateAATagsOfLength1and2(MSExperiment& exp, const map<String, set<String>>& tag2ADs)
{
// precalculate AA tags of length 1-2nt and store potential conflicting adducts in the spectrum meta data
OpenNuXLTagger tagger(0.03,1,2); // calculate tags of length 1-2nt
for (auto & spec : exp)
{
if (spec.getMSLevel() != 2) continue;
std::set<std::string> tags;
tagger.getTag(spec, tags);
spec.getStringDataArrays().push_back({});
for (std::string s : tags) // map tag to ambiguous fragment adduct and store
{
std::sort(s.begin(), s.end());
if (const auto it = tag2ADs.find(s); it != tag2ADs.end())
{
for (const auto& ad : it->second)
{
spec.getStringDataArrays().back().push_back(ad); // store adduct name
}
}
}
}
}
static void getAdductAndAAPlusAdductMassCountsFromSpectra(
const NuXLParameterParsing::NucleotideToFragmentAdductMap& nucleotide_to_fragment_adducts,
MSExperiment& exp,
map<double, size_t>& adduct_mass_count,
map<double, size_t>& aa_plus_adduct_mass_count,
const double fragment_mass_tolerance,
const bool fragment_mass_tolerance_unit_ppm,
std::string debug_file)
{
// create the set of all fragment adduct masses
set<double> adduct_mass = getSetOfAdductMasses(nucleotide_to_fragment_adducts);
// create map from residue+adduct mass shift to residue + adduct (including no adduct (!))
// e.g. allows checking if an observed mass shift could be induced by a cross-linked fragment
map<double, map<const Residue*, double> > aa_plus_adduct_mass = getMapAAPlusAdductMass(adduct_mass, debug_file);
for (auto & spec : exp)
{
if (spec.getMSLevel() != 2 || spec.empty()) continue;
// copy m/z and charge values to plain vectors for faster access
vector<double> mzs;
vector<double> charges;
for (auto const& p : spec) { mzs.push_back(p.getMZ()); }
for (auto const& p : spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX]) { charges.push_back(p); }
size_t match(0);
size_t in_mass_range(0);
// for all fragment peak pairs of matching charge
for (Size i = 0; i != mzs.size(); ++i)
{
for (Size j = i+1; j < mzs.size(); ++j)
{
if (charges[i] != charges[j]) continue;
double m = mzs[j];
double dm = m - mzs[i];
const float tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, m) : fragment_mass_tolerance;
double mass_delta = dm * charges[i];
// dm already so large that it can't match the largest adduct mass anymore? done
if (mass_delta > *adduct_mass.rbegin() + tolerance) break;
auto left = adduct_mass.lower_bound(mass_delta - tolerance);
if (left == adduct_mass.end()) continue; // not found
++in_mass_range; // mass range of all adduct masses TODO: improve?
// count if distance matches to adduct mass
if (fabs(*left - mass_delta) < tolerance )
{
++match;
++adduct_mass_count[*left]; // note: potentially dangerous because of floating point precision
}
}
}
// count how often a shift matches a residue + adduct mass (including mass 0 for unmodified residue)
size_t aa_plus_adduct_in_mass_range(0);
for (Size i = 0; i != mzs.size(); ++i)
{
for (Size j = i+1; j < mzs.size(); ++j)
{
double m = mzs[j];
double dm = m - mzs[i];
if (charges[i] != charges[j]) continue;
          const double tolerance = fragment_mass_tolerance_unit_ppm ? Math::ppmToMass(fragment_mass_tolerance, m) : fragment_mass_tolerance;
// find all (shifted/normal) residues that match to an observed shift
auto left = aa_plus_adduct_mass.lower_bound((dm * charges[i]) - tolerance);
auto right = aa_plus_adduct_mass.upper_bound((dm * charges[i]) + tolerance);
for (; left != right; ++left)
{
++aa_plus_adduct_in_mass_range;
if (fabs(left->first - (dm * charges[i])) < tolerance )
{
++aa_plus_adduct_mass_count[left->first];
}
}
}
}
spec.getFloatDataArrays().resize(3);
spec.getFloatDataArrays()[2].resize(1);
// spec.getFloatDataArrays()[2][0] = (double)match / (double)in_mass_range; // TODO: this doesn't seem to normalize well for noise etc.
spec.getFloatDataArrays()[2][0] = matchOddsScore_((double)in_mass_range, (double)match, 1e-3); // count all in mass range as trial, match as success, p=0.001
spec.getFloatDataArrays()[2].setName("nucleotide_mass_tags");
}
// reformat to get: amino acid, mass, count statistics for spectra
map<const Residue*, map<double, size_t>> aa2mass2count;
for (const auto& mc : aa_plus_adduct_mass_count)
{
double mass = mc.first;
size_t count = mc.second;
auto it = aa_plus_adduct_mass.lower_bound(mass - 1e-6); // "exact" match
if (it == aa_plus_adduct_mass.end()) continue;
const map<const Residue*, double>& residue2adduct = it->second;
      for (const auto& r2a : residue2adduct)
      {
        const Residue* residue = r2a.first;
        aa2mass2count[residue][mass] = count;
      }
}
    if (!debug_file.empty())
    {
      for (const auto& aa2 : aa2mass2count)
      {
        for (const auto& m2c : aa2.second)
        {
          OPENMS_LOG_DEBUG << aa2.first->getName() << "\t" << m2c.first << "\t" << m2c.second << endl; // aa, mass, count
        }
      }
    }
if (!debug_file.empty())
{
OPENMS_LOG_DEBUG << "Normalized counts per residue:" << endl;
for (const auto& aa2 : aa2mass2count)
{
auto& mass2count = aa2.second;
for (const auto& m2c : mass2count)
{
// normalize by counts
double current_mass = m2c.first;
size_t current_residue_count = m2c.second;
size_t unmodified_residue_count = mass2count.begin()->second;
double frequency_normalized = (double)current_residue_count / unmodified_residue_count;
OPENMS_LOG_DEBUG << aa2.first->getName() << "\t" << current_mass << "\t" << frequency_normalized << endl; // aa mass count
}
}
}
OPENMS_LOG_DEBUG << "Distinct residue + adduct masses (including residues without shift): " << aa_plus_adduct_mass_count.size() << endl;
}
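  // The pairwise mass-shift matching above can be sketched in isolation. Below is a
  // minimal, hypothetical version (all names are illustrative): peaks are assumed
  // singly charged, the tolerance is absolute, and only the nearest candidate mass
  // above dm - tolerance is checked, mirroring the lower_bound logic used above.

```cpp
#include <cmath>
#include <cstddef>
#include <set>
#include <vector>

// Count how many pairwise m/z differences in an m/z-sorted peak list match one of
// the (sorted) adduct masses within an absolute tolerance.
inline size_t countAdductMassMatches(const std::vector<double>& mzs,
                                     const std::set<double>& delta_masses,
                                     const double tolerance)
{
  if (mzs.empty() || delta_masses.empty()) return 0;
  size_t matches = 0;
  for (size_t i = 0; i < mzs.size(); ++i)
  {
    for (size_t j = i + 1; j < mzs.size(); ++j)
    {
      const double dm = mzs[j] - mzs[i];
      // all remaining j give even larger differences -> done with this i
      if (dm > *delta_masses.rbegin() + tolerance) break;
      // smallest candidate mass >= dm - tolerance
      const auto it = delta_masses.lower_bound(dm - tolerance);
      if (it != delta_masses.end() && std::fabs(*it - dm) < tolerance) ++matches;
    }
  }
  return matches;
}
```

  // e.g. with delta_masses = { 18.010565 } (water loss) and peaks at 100.0 and
  // 118.010565 m/z, the single matching pair is counted once.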
static void calculateIntensityRanks(MSExperiment& exp)
{
OPENMS_LOG_INFO << "Calculating ranks..." << endl;
for (auto & spec : exp)
{
if (spec.getMSLevel() != 2) continue;
// initialize original index locations
vector<size_t> idx(spec.size());
std::iota(idx.begin(), idx.end(), 0);
// sort indexes based on comparing intensity values (0 = highest intensity)
sort(idx.begin(), idx.end(),
[&spec](size_t i1, size_t i2) { return spec[i1].getIntensity() > spec[i2].getIntensity(); });
spec.getIntegerDataArrays().resize(NuXLConstants::IA_RANK_INDEX + 1);
spec.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX].clear();
      // position in the array = rank (0 = most intense), value = original peak index
      for (size_t peak_index : idx) { spec.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX].push_back(static_cast<int>(peak_index)); }
spec.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX].setName("intensity_rank");
}
OPENMS_LOG_INFO << " done!" << endl;
}
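  // The rank computation above is the standard argsort idiom (iota + sort with a
  // comparator on the values). A self-contained sketch of the same pattern,
  // detached from MSSpectrum (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Return peak indices ordered by descending intensity:
// result[rank] = index of the peak with that intensity rank (0 = most intense).
inline std::vector<size_t> argsortDescending(const std::vector<float>& intensities)
{
  std::vector<size_t> idx(intensities.size());
  std::iota(idx.begin(), idx.end(), 0); // 0, 1, 2, ...
  std::sort(idx.begin(), idx.end(),
            [&intensities](size_t a, size_t b) { return intensities[a] > intensities[b]; });
  return idx;
}
```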
static void calculateLongestAASequenceTag(MSExperiment& exp)
{
OPENMS_LOG_INFO << "Calculating longest mass tags..." << endl;
OpenNuXLTagger tagger(0.03, 3);
for (auto & spec : exp)
{
if (spec.getMSLevel() != 2) continue;
spec.getIntegerDataArrays().resize(NuXLConstants::IA_DENOVO_TAG_INDEX + 1);
spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX].resize(1);
spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX][0] = 0;
#ifdef CALCULATE_LONGEST_TAG
size_t longest_tag = tagger.getLongestTagLength(spec);
spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX][0] = longest_tag;
//spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX][0] = tagger.getLongestTag(spec).size(); // slow
#endif
spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX].setName("longest_tag");
}
OPENMS_LOG_INFO << " done!" << endl;
}
void calculateNucleotideTags_(PeakMap& exp,
const double fragment_mass_tolerance,
const bool fragment_mass_tolerance_unit_ppm,
const NuXLParameterParsing::NucleotideToFragmentAdductMap& nucleotide_to_fragment_adducts)
{
// check for theoretically ambiguous fragment shifts: AA tags of length 1-2 without adduct that match to two AA + adduct, one AA + adduct, just an adduct
map<String, set<String>> tag2ADs; // AA tags that match adduct names in mass
unordered_map<String, unordered_set<String>> ADs2tag;
getTagToAdduct(nucleotide_to_fragment_adducts, tag2ADs, ADs2tag, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm);
// calculate and annotate in stringdataarray
calculateAATagsOfLength1and2(exp, tag2ADs);
// similar to a mass tagger we look for mass shifts that match to adduct masses or AA+adduct masses
// annotates in spec.getFloatDataArrays()[2] / name "nucleotide_mass_tags";
map<double, size_t> adduct_mass_count;
map<double, size_t> aa_plus_adduct_mass_count;
// Output CSV to same directory as input file
      String input_file = getStringOption_("in");
      String csv_file = File::path(input_file);
      csv_file.ensureLastChar('/');
      csv_file += File::basename(input_file) + ".ambiguous_masses.csv";
getAdductAndAAPlusAdductMassCountsFromSpectra(nucleotide_to_fragment_adducts, exp, adduct_mass_count, aa_plus_adduct_mass_count, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, csv_file);
if (debug_level_ > 0) { OPENMS_LOG_DEBUG << "Total counts per residue:" << endl; }
}
// An interval has start time and end time
struct Interval_
{
double start, end;
};
  // Compares two intervals by starting time (ascending; ties broken by end time).
  static bool IntervalGreater_(const Interval_& a, const Interval_& b)
  {
    return std::tie(a.start, a.end) < std::tie(b.start, b.end);
  }
}
/*
double getAreaOfIntervalUnion_(std::vector<Interval_> i)
{
if (i.empty()) return 0.0;
// sort the intervals in increasing order of start time
std::sort(i.begin(), i.end(), IntervalGreater_);
// create an empty stack of intervals
std::stack<Interval_> s;
// push the first interval to stack
s.push(i[0]);
    // start from the second interval (the first is already on the stack) and merge if necessary
    for (auto it = i.begin() + 1; it != i.end(); ++it)
    {
      const Interval_& interval = *it;
// get interval from stack top
Interval_ top = s.top();
// if current interval is not overlapping with stack top,
// push it to the stack
if (top.end < interval.start)
{
s.push(interval);
}
else if (top.end < interval.end)
{
// merge: update the end time of top
top.end = interval.end;
s.pop();
s.push(top);
}
}
// calculate area
double area{};
while (!s.empty())
{
Interval_ t = s.top();
area += t.end - t.start;
s.pop();
}
return area;
}
*/
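  // The commented-out stack-based union above can also be written as a simple sweep
  // over start-sorted intervals. A minimal sketch (assuming well-formed intervals
  // with start <= end; the function name is illustrative):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Total length covered by the union of possibly overlapping [start, end] intervals.
inline double intervalUnionLength(std::vector<std::pair<double, double>> intervals)
{
  if (intervals.empty()) return 0.0;
  std::sort(intervals.begin(), intervals.end()); // ascending by start, then end
  double area = 0.0;
  double cur_start = intervals[0].first;
  double cur_end = intervals[0].second;
  for (const auto& iv : intervals)
  {
    if (iv.first > cur_end)
    { // gap: close the current merged interval
      area += cur_end - cur_start;
      cur_start = iv.first;
      cur_end = iv.second;
    }
    else
    { // overlap: extend the current merged interval
      cur_end = std::max(cur_end, iv.second);
    }
  }
  area += cur_end - cur_start; // close the last merged interval
  return area;
}
```

  // e.g. [0,2] and [1,3] merge to [0,3]; together with [5,6] the covered length is 4.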
  /**
     @brief Filter spectra to remove noise.

     - Remove zero intensities
     - Remove isotopic peaks and determine charge
     - Filter interfering peaks
     - Scale by root to reduce impact of high-intensity peaks
     - Normalize max intensity to 1.0
     - Set unknown charge to z=1. Otherwise we get a lot of spurious matches
       to highly charged fragments in the low m/z region
     - Keep 20 highest-intensity peaks in 100 m/z windows
     - Keep max. 400 peaks per spectrum
     - Calculate TIC of filtered spectrum
  */
void preprocessSpectra_(PeakMap& exp,
/*
double fragment_mass_tolerance,
bool fragment_mass_tolerance_unit_ppm,
*/
bool single_charge_spectra,
bool annotate_charge,
double window_size,
size_t peakcount,
const std::map<String, PrecursorPurity::PurityScores>& purities)
{
// filter MS2 map
// remove 0 intensities
ThresholdMower threshold_mower_filter;
threshold_mower_filter.filterPeakMap(exp);
#pragma omp parallel for
for (SignedSize exp_index = 0; exp_index < (SignedSize)exp.size(); ++exp_index)
{
MSSpectrum & spec = exp[exp_index];
// sort by mz
spec.sortByPosition();
// deisotope
NuXLDeisotoper::deisotopeAndSingleCharge(spec,
0.01,
false,
1, 3,
false,
2, 10,
single_charge_spectra,
annotate_charge,
false, // no iso peak count annotation
true, // decreasing isotope model
2, // enforce only starting from second peak
true); // add up intensities
}
filterPeakInterference_(exp, purities);
// remove empty spectra as they can cause trouble downstream
auto& sp = exp.getSpectra();
    sp.erase(std::remove_if(sp.begin(), sp.end(), [](const MSSpectrum& s) { return s.empty(); }), sp.end());
/*
SqrtMower sqrt_mower_filter;
sqrt_mower_filter.filterPeakMap(exp);
*/
Normalizer normalizer;
normalizer.filterPeakMap(exp);
// sort by rt
exp.sortSpectra(false);
// filter settings
WindowMower window_mower_filter;
Param filter_param = window_mower_filter.getParameters();
filter_param.setValue("windowsize", window_size, "The size of the sliding window along the m/z axis.");
filter_param.setValue("peakcount", peakcount, "The number of peaks that should be kept.");
filter_param.setValue("movetype", "jump", "Whether sliding window (one peak steps) or jumping window (window size steps) should be used.");
window_mower_filter.setParameters(filter_param);
NLargest nlargest_filter = NLargest(400);
#ifdef DEBUG_OpenNuXL
BinnedSpectrum peak_density(MSSpectrum(), 0.03, false, 0, 0);
#endif
#pragma omp parallel for
for (SignedSize exp_index = 0; exp_index < (SignedSize)exp.size(); ++exp_index)
{
MSSpectrum & spec = exp[exp_index];
// sort by mz
spec.sortByPosition();
if (annotate_charge)
{
// set Unknown charge to z=1. Otherwise we get a lot of spurious matches
// to highly charged fragments in the low m/z region
if (spec.empty()) continue; // TODO: maybe add empty integerdataarray in deisotoping? seems to be missing here
DataArrays::IntegerDataArray& ia = spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX]; // charge array
for (int & z : ia) { if (z == 0) { z = 1; } }
}
#ifdef DEBUG_OpenNuXL
      OPENMS_LOG_DEBUG << "after deisotoping..." << endl;
      OPENMS_LOG_DEBUG << "Fragment m/z, intensities and charges for spectrum: " << exp_index << endl;
      if (!spec.getIntegerDataArrays().empty())
        for (Size i = 0; i != spec.size(); ++i)
          OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << "\t" << spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX][i] << endl;
      OPENMS_LOG_DEBUG << endl;
#endif
// remove noise
window_mower_filter.filterPeakSpectrum(spec);
#ifdef DEBUG_OpenNuXL
      OPENMS_LOG_DEBUG << "after mower..." << endl;
      OPENMS_LOG_DEBUG << "Fragment m/z and intensities for spectrum: " << exp_index << endl;
      for (Size i = 0; i != spec.size(); ++i) OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << endl;
      OPENMS_LOG_DEBUG << "Fragment charges in spectrum: " << exp_index << endl;
      if (!spec.getIntegerDataArrays().empty())
        for (Size i = 0; i != spec.size(); ++i)
          OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << "\t" << spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX][i] << endl;
#endif
nlargest_filter.filterPeakSpectrum(spec);
#ifdef DEBUG_OpenNuXL
      OPENMS_LOG_DEBUG << "after nlargest..." << endl;
      OPENMS_LOG_DEBUG << "Fragment m/z and intensities for spectrum: " << exp_index << endl;
      for (Size i = 0; i != spec.size(); ++i) OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << endl;
      OPENMS_LOG_DEBUG << "Fragment charges in spectrum: " << exp_index << endl;
      if (!spec.getIntegerDataArrays().empty())
        for (Size i = 0; i != spec.size(); ++i)
          OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << "\t" << spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX][i] << endl;
#endif
// sort (nlargest changes order)
spec.sortByPosition();
#ifdef DEBUG_OpenNuXL
      OPENMS_LOG_DEBUG << "after sort..." << endl;
      OPENMS_LOG_DEBUG << "Fragment m/z and intensities for spectrum: " << exp_index << endl;
      for (Size i = 0; i != spec.size(); ++i) OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << endl;
      if (!spec.getIntegerDataArrays().empty())
        for (Size i = 0; i != spec.size(); ++i)
          OPENMS_LOG_DEBUG << spec[i].getMZ() << "\t" << spec[i].getIntensity() << "\t" << spec.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX][i] << endl;
#endif
#ifdef DEBUG_OpenNuXL
BinnedSpectrum bs(spec, 0.03, false, 0, 0);
bs.getBins().coeffs().cwiseMax(1);
#pragma omp critical (peak_density_access)
peak_density.getBins() += bs.getBins();
#endif
// calculate TIC and store in float data array
spec.getFloatDataArrays().clear();
spec.getFloatDataArrays().resize(1);
double TIC = spec.calculateTIC();
spec.getFloatDataArrays()[0].push_back(TIC);
spec.getFloatDataArrays()[0].setName("TIC");
/*
vector<Interval_> is;
const double precursor_mass = spec.getPrecursors()[0].getMZ() * spec.getPrecursors()[0].getCharge();
for (const auto& p : spec)
{
const double mz = p.getMZ();
if (mz > precursor_mass) break; // don't consider peaks after precursor mass
const double tol = fragment_mass_tolerance_unit_ppm ? fragment_mass_tolerance * 1e-6 * mz : fragment_mass_tolerance;
Interval_ a;
a.start = mz - tol;
a.end = mz + tol;
is.push_back(a);
}
spec.getFloatDataArrays().resize(2);
spec.getFloatDataArrays()[1].setName("P_RANDOM_MATCH");
const double area_of_union = getAreaOfIntervalUnion_(is);
const double p_random_match = std::max(area_of_union / precursor_mass, 1e-6); // cap at low value
spec.getFloatDataArrays()[1].resize(1);
//cout << p_random_match << " " << area_of_union << " " << precursor_mass << " " << spec.getNativeID() << endl;
spec.getFloatDataArrays()[1][0] = p_random_match;
*/
}
#ifdef DEBUG_OpenNuXL
ofstream dist_file;
dist_file.open(getStringOption_("in") + ".fragment_dist.csv");
dist_file << "m/z\tfragments" << "\n";
for (double mz = 0; mz < 2500.0; mz+=0.03)
{
dist_file << mz << "\t" << peak_density.getBinIntensity(mz) << "\n";
}
dist_file.close();
#endif
if (debug_level_ > 10) MzMLFile().store("debug_filtering.mzML", exp);
}
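  // The WindowMower step above (movetype "jump") keeps the peakcount most intense
  // peaks per non-overlapping m/z window. A hypothetical free-function sketch of
  // that idea, operating on plain (mz, intensity) pairs sorted by m/z:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Keep at most "peakcount" most intense peaks in each non-overlapping ("jumping")
// m/z window of size "window_size". Input is sorted by m/z; so is the result.
inline std::vector<std::pair<double, double>> jumpingWindowFilter(
  const std::vector<std::pair<double, double>>& peaks,
  const double window_size, const size_t peakcount)
{
  std::vector<std::pair<double, double>> kept;
  size_t i = 0;
  while (i < peaks.size())
  {
    // window starts at the current peak and covers [mz, mz + window_size)
    const double window_end = peaks[i].first + window_size;
    size_t j = i;
    while (j < peaks.size() && peaks[j].first < window_end) ++j;
    // keep the peakcount most intense peaks of this window
    std::vector<std::pair<double, double>> window(peaks.begin() + i, peaks.begin() + j);
    std::sort(window.begin(), window.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    if (window.size() > peakcount) window.resize(peakcount);
    kept.insert(kept.end(), window.begin(), window.end());
    i = j; // jump to the next window
  }
  std::sort(kept.begin(), kept.end()); // restore m/z order
  return kept;
}
```

  // The real WindowMower additionally supports a sliding window ("slide"); this
  // sketch only covers the jumping variant configured above.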
void filterTopNAnnotations_(vector<vector<NuXLAnnotatedHit>>& ahs, const Size top_hits)
{
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (SignedSize scan_index = 0; scan_index < (SignedSize)ahs.size(); ++scan_index)
{
      // sort and keep the n best elements according to score
      const Size topn = std::min(top_hits, ahs[scan_index].size());
std::partial_sort(ahs[scan_index].begin(), ahs[scan_index].begin() + topn, ahs[scan_index].end(), NuXLAnnotatedHit::hasBetterScore);
ahs[scan_index].resize(topn);
ahs[scan_index].shrink_to_fit();
}
}
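  // filterTopNAnnotations_ uses the standard partial_sort top-N idiom. The same
  // pattern in isolation (descending scores; helper name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Keep only the n highest scores (in descending order), as done per scan above.
// partial_sort only orders the first topn elements, which is cheaper than a full sort.
inline void keepTopN(std::vector<double>& scores, const size_t n)
{
  const size_t topn = std::min(n, scores.size());
  std::partial_sort(scores.begin(), scores.begin() + topn, scores.end(),
                    [](double a, double b) { return a > b; });
  scores.resize(topn);
  scores.shrink_to_fit(); // release the unused capacity, as above
}
```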
void rescoreFastHits_(
const PeakMap& exp,
vector<vector<NuXLAnnotatedHit>>& annotated_hits,
const NuXLModificationMassesResult& mm,
const ModifiedPeptideGenerator::MapToResidueType& fixed_modifications,
const ModifiedPeptideGenerator::MapToResidueType& variable_modifications,
Size max_variable_mods_per_peptide,
double fragment_mass_tolerance,
bool fragment_mass_tolerance_unit_ppm,
const NuXLParameterParsing::PrecursorsToMS2Adducts & all_feasible_adducts)
{
TheoreticalSpectrumGenerator partial_loss_spectrum_generator;
auto param = partial_loss_spectrum_generator.getParameters();
param.setValue("add_first_prefix_ion", "true");
param.setValue("add_abundant_immonium_ions", "false"); // we add them manually for charge 1
param.setValue("add_precursor_peaks", "true");
param.setValue("add_all_precursor_charges", "false"); // we add them manually for every charge
param.setValue("add_metainfo", "true");
param.setValue("add_a_ions", "true");
param.setValue("add_b_ions", "true");
param.setValue("add_c_ions", "false");
param.setValue("add_x_ions", "false");
param.setValue("add_y_ions", "true");
param.setValue("add_z_ions", "false");
partial_loss_spectrum_generator.setParameters(param);
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (SignedSize scan_index = 0; scan_index < (SignedSize)annotated_hits.size(); ++scan_index)
{
vector<NuXLAnnotatedHit> new_hits;
// for each PSM of this spectrum
for (Size i = 0; i != annotated_hits[scan_index].size(); ++i)
{
// determine NA on precursor from index in map
auto mod_combinations_it = mm.mod_combinations.begin();
std::advance(mod_combinations_it, annotated_hits[scan_index][i].NA_mod_index); // advance to sum formula at index NA_mod_index
          const auto& NA_adducts = mod_combinations_it->second; // set of all NA adducts for current sum formula (e.g., U-H2O and C-NH3 have the same elemental composition)
          auto NA_adduct_it = NA_adducts.begin();
          for (size_t NA_adduct_amb_index = 0; NA_adduct_amb_index != NA_adducts.size(); ++NA_adduct_amb_index, ++NA_adduct_it)
          { // for all NA adducts with current sum formula (e.g., U-H2O and C-NH3)
const String& precursor_na_adduct = *NA_adduct_it;
const vector<NucleotideToFeasibleFragmentAdducts>& feasible_MS2_adducts = all_feasible_adducts.at(precursor_na_adduct).feasible_adducts;
if (precursor_na_adduct == "none")
{
new_hits.push_back(annotated_hits[scan_index][i]);
}
else
{
// if we have a cross-link, copy PSM information for each cross-linkable nucleotides
for (auto const & c : feasible_MS2_adducts)
{
NuXLAnnotatedHit a(annotated_hits[scan_index][i]);
a.cross_linked_nucleotide = c.first; // nucleotide
a.NA_adduct_amb_index = NA_adduct_amb_index;
new_hits.push_back(a);
}
}
}
}
annotated_hits[scan_index].swap(new_hits);
}
// fill in values of slow scoring so they can be used in percolator
for (Size scan_index = 0; scan_index != annotated_hits.size(); ++scan_index)
{
// for each PSM of this spectrum
for (Size i = 0; i != annotated_hits[scan_index].size(); ++i)
{
NuXLAnnotatedHit& ah = annotated_hits[scan_index][i];
// reconstruct fixed and variable modified peptide sequence (without NA)
const String& unmodified_sequence = ah.sequence.getString();
AASequence aas = AASequence::fromString(unmodified_sequence);
vector<AASequence> all_modified_peptides;
ModifiedPeptideGenerator::applyFixedModifications(fixed_modifications, aas);
ModifiedPeptideGenerator::applyVariableModifications(variable_modifications, aas, max_variable_mods_per_peptide, all_modified_peptides);
AASequence fixed_and_variable_modified_peptide = all_modified_peptides[ah.peptide_mod_index];
double current_peptide_mass_without_NA = fixed_and_variable_modified_peptide.getMonoWeight();
// determine NA on precursor from index in map
auto mod_combinations_it = mm.mod_combinations.begin();
std::advance(mod_combinations_it, ah.NA_mod_index);
        const auto& NA_adducts = mod_combinations_it->second; // set of all NA adducts for current sum formula (e.g., U-H2O and C-NH3 have the same elemental composition)
        auto NA_adduct_it = NA_adducts.begin();
        for (size_t NA_adduct_amb_index = 0; NA_adduct_amb_index != NA_adducts.size(); ++NA_adduct_amb_index, ++NA_adduct_it)
        { // for all NA adducts with current sum formula (e.g., U-H2O and C-NH3)
const String& precursor_na_adduct = *NA_adduct_it;
const vector<NucleotideToFeasibleFragmentAdducts>& feasible_MS2_adducts = all_feasible_adducts.at(precursor_na_adduct).feasible_adducts;
const vector<NuXLFragmentAdductDefinition>& marker_ions = all_feasible_adducts.at(precursor_na_adduct).marker_ions;
const double precursor_na_mass = EmpiricalFormula(mod_combinations_it->first).getMonoWeight();
if (precursor_na_adduct == "none")
{
// const double tags = exp[scan_index].getFloatDataArrays()[1][0];
ah.score = OpenNuXL::calculateCombinedScore(ah/*, false, tags*/);
continue;
}
// determine current nucleotide and associated partial losses
vector<NuXLFragmentAdductDefinition> partial_loss_modification;
for (auto const & nuc_2_adducts : feasible_MS2_adducts)
{
if (nuc_2_adducts.first == ah.cross_linked_nucleotide)
{
partial_loss_modification = nuc_2_adducts.second;
}
}
          // TODO: no need to generate all templates (but it will not make much of a difference,
          // as this is only done a few thousand times during post-scoring). This way,
          // the code is basically the same as in the main scoring loop.
PeakSpectrum partial_loss_template_z1,
partial_loss_template_z2,
partial_loss_template_z3;
partial_loss_spectrum_generator.getSpectrum(partial_loss_template_z1, fixed_and_variable_modified_peptide, 1, 1);
partial_loss_spectrum_generator.getSpectrum(partial_loss_template_z2, fixed_and_variable_modified_peptide, 2, 2);
partial_loss_spectrum_generator.getSpectrum(partial_loss_template_z3, fixed_and_variable_modified_peptide, 3, 3);
PeakSpectrum marker_ions_sub_score_spectrum_z1,
partial_loss_spectrum_z1,
partial_loss_spectrum_z2;
// nucleotide is associated with certain NA-related fragment losses?
if (!partial_loss_modification.empty())
{
// shifted b- / y- / a-ions
// generate shifted_immonium_ions_sub_score_spectrum.empty
NuXLFragmentIonGenerator::generatePartialLossSpectrum(unmodified_sequence,
current_peptide_mass_without_NA,
precursor_na_adduct,
precursor_na_mass,
1,
partial_loss_modification,
partial_loss_template_z1,
partial_loss_template_z2,
partial_loss_template_z3,
partial_loss_spectrum_z1);
NuXLFragmentIonGenerator::generatePartialLossSpectrum(unmodified_sequence,
current_peptide_mass_without_NA,
precursor_na_adduct,
precursor_na_mass,
2, // don't know the charge of the precursor at that point
partial_loss_modification,
partial_loss_template_z1,
partial_loss_template_z2,
partial_loss_template_z3,
partial_loss_spectrum_z2);
}
// add shifted marker ions
marker_ions_sub_score_spectrum_z1.getStringDataArrays().resize(1); // annotation
marker_ions_sub_score_spectrum_z1.getIntegerDataArrays().resize(1); // annotation
NuXLFragmentIonGenerator::addMS2MarkerIons(
marker_ions,
marker_ions_sub_score_spectrum_z1,
marker_ions_sub_score_spectrum_z1.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
marker_ions_sub_score_spectrum_z1.getStringDataArrays()[0]);
const PeakSpectrum& exp_spectrum = exp[scan_index];
float partial_loss_sub_score(0),
marker_ions_sub_score(0),
plss_MIC(0),
plss_err(fragment_mass_tolerance),
plss_Morph(0),
plss_modds(0);
          // TODO: this part is different
postScorePartialLossFragments_( unmodified_sequence.size(),
exp_spectrum,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
partial_loss_spectrum_z1,
partial_loss_spectrum_z2,
marker_ions_sub_score_spectrum_z1,
partial_loss_sub_score,
marker_ions_sub_score,
plss_MIC,
// plss_err,
plss_Morph,
plss_modds);
// fill in missing scores not considered in fast scoring
ah.pl_MIC = plss_MIC;
ah.pl_err = plss_err;
ah.pl_Morph = plss_Morph;
ah.pl_modds = plss_modds;
// add extra matched ion current
// TODO: this differs a bit
ah.total_MIC += plss_MIC + marker_ions_sub_score;
// scores from shifted peaks
ah.marker_ions_score = marker_ions_sub_score;
ah.partial_loss_score = partial_loss_sub_score;
// combined score
// const double tags = exp[scan_index].getFloatDataArrays()[2][0];
ah.score = OpenNuXL::calculateCombinedScore(ah/*, true, tags*/);
}
}
}
}
  /**
     @brief Localization step of the cross-link identification engine.

     1. Generates all fragment adducts based on the attached precursor adduct
     2. Calculates an additive score that considers the presence or absence of evidence for a cross-linking site
     3. Adds additional meta information to each PSM.
  */
void postScoreHits_(const PeakMap& exp,
vector<vector<NuXLAnnotatedHit> >& annotated_XL_hits,
vector<vector<NuXLAnnotatedHit> >& annotated_peptide_hits,
const NuXLModificationMassesResult& mm,
const ModifiedPeptideGenerator::MapToResidueType& fixed_modifications,
const ModifiedPeptideGenerator::MapToResidueType& variable_modifications,
Size max_variable_mods_per_peptide,
double fragment_mass_tolerance,
bool fragment_mass_tolerance_unit_ppm,
const NuXLParameterParsing::PrecursorsToMS2Adducts & all_feasible_adducts)
{
assert(exp.size() == annotated_XL_hits.size());
assert(exp.size() == annotated_peptide_hits.size());
// If we did a (total-loss) only fast scoring, PSMs were not associated with a nucleotide.
// To make the localization code work for both fast and slow (all-shifts) scoring,
// we copy PSMs for every cross-linkable nucleotide present in the precursor.
      // Then we recalculate the XL-specific scores not considered in fast scoring.
if (fast_scoring_)
{
rescoreFastHits_(exp, annotated_XL_hits, mm, fixed_modifications, variable_modifications, max_variable_mods_per_peptide, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, all_feasible_adducts);
rescoreFastHits_(exp, annotated_peptide_hits, mm, fixed_modifications, variable_modifications, max_variable_mods_per_peptide, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, all_feasible_adducts);
}
NuXLAnnotateAndLocate::annotateAndLocate_(exp, annotated_XL_hits, mm, fixed_modifications, variable_modifications, max_variable_mods_per_peptide, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, all_feasible_adducts);
NuXLAnnotateAndLocate::annotateAndLocate_(exp, annotated_peptide_hits, mm, fixed_modifications, variable_modifications, max_variable_mods_per_peptide, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, all_feasible_adducts);
}
void fillSpectrumID_(
const vector<NuXLAnnotatedHit>& ahs,
PeptideIdentification& pi,
const NuXLModificationMassesResult& mm,
const ModifiedPeptideGenerator::MapToResidueType& fixed_modifications,
const ModifiedPeptideGenerator::MapToResidueType& variable_modifications,
const Size max_variable_mods_per_peptide,
const Size scan_index,
const MSSpectrum& spec,
const map<String, PrecursorPurity::PurityScores>& purities,
const vector<size_t>& nr_candidates,
const vector<size_t>& matched_peaks
/*,
const String& can_cross_link*/)
{
pi.setMetaValue("scan_index", static_cast<unsigned int>(scan_index));
pi.setMetaValue("spectrum_reference", spec.getNativeID());
pi.setScoreType("NuXLScore");
pi.setHigherScoreBetter(true);
pi.setRT(spec.getRT());
pi.setMZ(spec.getPrecursors()[0].getMZ());
double precursor_intensity_log10 = log10(1.0 + spec.getPrecursors()[0].getIntensity());
pi.setMetaValue("precursor_intensity_log10", precursor_intensity_log10);
Size charge = spec.getPrecursors()[0].getCharge();
// create full peptide hit structure from annotated hits
vector<PeptideHit> phs = pi.getHits();
for (auto const & ah : ahs)
{
PeptideHit ph;
ph.setCharge(charge);
// get unmodified string
const String & s = ah.sequence.getString();
OPENMS_POSTCONDITION(!s.empty(), "Error: empty sequence in annotated hits.");
AASequence aas = AASequence::fromString(s);
// reapply modifications (because for memory reasons we only stored the index and recreation is fast)
vector<AASequence> all_modified_peptides;
ModifiedPeptideGenerator::applyFixedModifications(fixed_modifications, aas);
ModifiedPeptideGenerator::applyVariableModifications(variable_modifications, aas, max_variable_mods_per_peptide, all_modified_peptides);
// reannotate much more memory heavy AASequence object
AASequence fixed_and_variable_modified_peptide = all_modified_peptides[ah.peptide_mod_index];
ph.setScore(ah.score);
ph.setMetaValue(String("NuXL:score"), ah.score); // important for Percolator feature set because the PeptideHit score might be overwritten by a q-value
// - # of variable mods
// - Phosphopeptide
int is_phospho(0);
int n_var_mods = 0;
for (Size i = 0; i != fixed_and_variable_modified_peptide.size(); ++i)
{
const Residue& r = fixed_and_variable_modified_peptide[i];
if (!r.isModified()) continue;
if (variable_modifications.val.find(r.getModification()) != variable_modifications.val.end())
{
++n_var_mods;
}
if (r.getModification()->getId() == "Phospho") { is_phospho = 1; }
}
auto n_term_mod = fixed_and_variable_modified_peptide.getNTerminalModification();
auto c_term_mod = fixed_and_variable_modified_peptide.getCTerminalModification();
if (n_term_mod != nullptr &&
variable_modifications.val.find(n_term_mod) != variable_modifications.val.end()) ++n_var_mods;
if (c_term_mod != nullptr &&
variable_modifications.val.find(c_term_mod) != variable_modifications.val.end()) ++n_var_mods;
ph.setMetaValue(String("variable_modifications"), n_var_mods);
ph.setMetaValue(String("n_theoretical_peaks"), ah.n_theoretical_peaks);
// determine empirical formula of NA modification from index in map
auto mod_combinations_it = mm.mod_combinations.cbegin();
std::advance(mod_combinations_it, ah.NA_mod_index);
// determine precursor
auto NA_adduct_it = mod_combinations_it->second.cbegin(); // set of all NA adducts for current sum formula (e.g, U-H2O and C-NH3 have same elemental composition)
std::advance(NA_adduct_it, ah.NA_adduct_amb_index);
ph.setMetaValue(String("NuXL:mass_error_p"), ah.mass_error_p);
ph.setMetaValue(String("NuXL:total_loss_score"), ah.total_loss_score);
ph.setMetaValue(String("NuXL:immonium_score"), ah.immonium_score);
ph.setMetaValue(String("NuXL:precursor_score"), ah.precursor_score);
ph.setMetaValue(String("NuXL:marker_ions_score"), ah.marker_ions_score);
ph.setMetaValue(String("NuXL:partial_loss_score"), ah.partial_loss_score);
// total loss and partial loss (pl) related subscores (matched ion current, avg. fragment error, morpheus score)
ph.setMetaValue(String("NuXL:MIC"), ah.MIC);
ph.setMetaValue(String("NuXL:err"), ah.err);
ph.setMetaValue(String("NuXL:Morph"), ah.Morph);
ph.setMetaValue(String("NuXL:modds"), ah.modds);
ph.setMetaValue(String("NuXL:pl_MIC"), ah.pl_MIC);
ph.setMetaValue(String("NuXL:pl_err"), ah.pl_err);
ph.setMetaValue(String("NuXL:pl_Morph"), ah.pl_Morph);
ph.setMetaValue(String("NuXL:pl_modds"), ah.pl_modds);
ph.setMetaValue(String("NuXL:pl_pc_MIC"), ah.pl_pc_MIC);
ph.setMetaValue(String("NuXL:pl_im_MIC"), ah.pl_im_MIC);
ph.setMetaValue(String("NuXL:total_Morph"), ah.Morph + ah.pl_Morph);
ph.setMetaValue(String("NuXL:total_HS"), ah.total_loss_score + ah.partial_loss_score);
ph.setMetaValue(String("NuXL:tag_XLed"), ah.tag_XLed);
ph.setMetaValue(String("NuXL:tag_unshifted"), ah.tag_unshifted);
ph.setMetaValue(String("NuXL:tag_shifted"), ah.tag_shifted);
ph.setMetaValue(String("NuXL:total_MIC"), ah.total_MIC); // fraction of matched ion current from total + partial losses
const String NA = *NA_adduct_it;
ph.setMetaValue(String("NuXL:NA"), NA); // the nucleotide formula e.g., U-H2O
double na_mass_z0 = EmpiricalFormula(mod_combinations_it->first).getMonoWeight(); // NA uncharged mass via empirical formula
// length of oligo
size_t NA_length = NA.find_first_of("+-");
if (NA_length == std::string::npos)
{
if (na_mass_z0 > 0)
{
ph.setMetaValue(String("NuXL:NA_length"), NA.size());
}
else
{
ph.setMetaValue(String("NuXL:NA_length"), 0);
}
}
else
{
ph.setMetaValue(String("NuXL:NA_length"), NA_length);
}
ph.setMetaValue("NuXL:NT", String(ah.cross_linked_nucleotide)); // the cross-linked nucleotide
ph.setMetaValue("NuXL:NA_MASS_z0", na_mass_z0); // NA uncharged mass via empirical formula
ph.setMetaValue("NuXL:isXL", na_mass_z0 > 0);
ph.setMetaValue("NuXL:isPhospho", is_phospho);
ph.setMetaValue("NuXL:best_localization_score", ah.best_localization_score);
if (!ah.localization_scores.empty())
{
ph.setMetaValue("NuXL:localization_scores", ah.localization_scores);
}
else
{
ph.setMetaValue("NuXL:localization_scores", "NA");
}
ph.setMetaValue("NuXL:best_localization", ah.best_localization);
ph.setMetaValue("NuXL:best_localization_position", ah.best_localization_position);
// one-hot encoding of cross-linked nucleotide
for (const auto& c : can_xl_)
{
if (c == ah.cross_linked_nucleotide)
{
ph.setMetaValue(String("NuXL:XL_" + String(c)), 1);
}
else
{
ph.setMetaValue(String("NuXL:XL_" + String(c)), 0);
}
}
// also annotate PI to hit so it is available to percolator
ph.setMetaValue("precursor_intensity_log10", precursor_intensity_log10);
if (!purities.empty())
{
ph.setMetaValue("precursor_purity", purities.at(spec.getNativeID()).signal_proportion);
}
if (has_IM_)
{
ph.setMetaValue("IM", spec.getDriftTime());
}
ph.setMetaValue("nucleotide_mass_tags", (double)spec.getFloatDataArrays()[2][0]);
int maxtag = spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX][0];
ph.setMetaValue("NuXL:aminoacid_max_tag", maxtag);
const double id2maxtag = maxtag == 0 ? 0 : (double)std::max(ah.tag_unshifted, ah.tag_shifted) / (double)maxtag; // longest shift of this peptide vs. longest tag found
ph.setMetaValue("NuXL:aminoacid_id_to_max_tag_ratio", id2maxtag);
ph.setMetaValue("nr_candidates", nr_candidates[scan_index]);
// score the number of matched peaks against a Poisson null model:
// lambda = average matched peaks per candidate, k = matched peaks of this hit;
// ln P(X = k) = k*ln(lambda) - lambda - ln(k!), reported as -ln P (larger = more surprising match)
double lambda = (double)matched_peaks[scan_index]/(double)nr_candidates[scan_index];
double k = (size_t)ah.Morph + (size_t)ah.pl_Morph; // truncate fractional Morph scores to matched peak counts
double ln_poisson = k * std::log(lambda) - lambda - std::lgamma(k + 1);
ln_poisson = !std::isfinite(ln_poisson) ? 315 : -ln_poisson; // cap non-finite results (e.g., lambda == 0) at a large sentinel
ph.setMetaValue("-ln(poisson)", ln_poisson);
ph.setMetaValue("NuXL:explained_peak_fraction", ah.explained_peak_fraction);
ph.setMetaValue("NuXL:theo_peak_fraction", ah.matched_theo_fraction);
ph.setMetaValue("NuXL:wTop50", ah.wTop50);
ph.setPeakAnnotations(ah.fragment_annotations);
ph.setMetaValue("isotope_error", static_cast<int>(ah.isotope_error));
ph.setMetaValue(String("NuXL:ladder_score"), ah.ladder_score);
ph.setMetaValue(String("NuXL:sequence_score"), ah.sequence_score);
ph.setMetaValue(String("CalcMass"), (fixed_and_variable_modified_peptide.getMonoWeight(Residue::Full, charge) + na_mass_z0)/charge); // overwrites CalcMass in PercolatorAdapter
// set the amino acid sequence (for complete loss spectra this is just the variable and modified peptide. For partial loss spectra it additionally contains the loss induced modification)
ph.setSequence(fixed_and_variable_modified_peptide);
ProteaseDigestion pd;
const String enzyme = getStringOption_("peptide:enzyme");
pd.setEnzyme(enzyme);
size_t num_mc = pd.countInternalCleavageSites(aas.toUnmodifiedString());
ph.setMetaValue("missed_cleavages", num_mc);
phs.push_back(ph); // add new hit
}
pi.setHits(phs);
pi.sort();
// assign (unique) ranks
phs = pi.getHits();
for (Size r = 0; r != phs.size(); ++r)
{
phs[r].setRank(static_cast<int>(r));
}
pi.setHits(phs);
}
/**
  1. Reconstruct the original peptide from the memory-efficient structure.
  2. Add additional meta information for the PSM.
*/
void postProcessHits_(const PeakMap& exp,
vector<vector<NuXLAnnotatedHit> >& annotated_XL_hits,
vector<vector<NuXLAnnotatedHit> >& annotated_peptide_hits,
vector<ProteinIdentification>& protein_ids,
PeptideIdentificationList& peptide_ids,
const NuXLModificationMassesResult& mm,
const ModifiedPeptideGenerator::MapToResidueType& fixed_modifications,
const ModifiedPeptideGenerator::MapToResidueType& variable_modifications,
Size max_variable_mods_per_peptide,
const map<String, PrecursorPurity::PurityScores>& purities,
const vector<size_t>& nr_candidates,
const vector<size_t>& matched_peaks
/*,
const String& can_cross_link*/)
{
assert(annotated_XL_hits.size() == annotated_peptide_hits.size());
SignedSize hit_count = static_cast<SignedSize>(annotated_XL_hits.size());
for (SignedSize scan_index = 0; scan_index < hit_count; ++scan_index)
{
const MSSpectrum& spec = exp[scan_index];
vector<NuXLAnnotatedHit>& ahs_XL = annotated_XL_hits[scan_index];
vector<NuXLAnnotatedHit>& ahs_peptide = annotated_peptide_hits[scan_index];
if (ahs_XL.empty() && ahs_peptide.empty()) continue;
// create empty PeptideIdentification object and fill meta data
peptide_ids.push_back(PeptideIdentification());
if (!ahs_XL.empty())
{
fillSpectrumID_(
ahs_XL,
peptide_ids.back(), // append hits
mm,
fixed_modifications,
variable_modifications,
max_variable_mods_per_peptide,
scan_index,
spec,
purities,
nr_candidates,
matched_peaks
/*,
can_cross_link*/);
}
if (!ahs_peptide.empty())
{
fillSpectrumID_(
ahs_peptide,
peptide_ids.back(), // append hits
mm,
fixed_modifications,
variable_modifications,
max_variable_mods_per_peptide,
scan_index,
spec,
purities,
nr_candidates,
matched_peaks
/*,
can_cross_link*/);
}
}
// hits have rank and are sorted by score
map<String, Size> sequence_is_topPSM;
map<String, set<int>> sequence_charges; // of top PSM
map<String, Size> sequence_is_XL;
map<String, Size> sequence_is_peptide;
for (const auto & pid : peptide_ids)
{
if (pid.getHits().empty()) continue;
const auto & top_hit = pid.getHits()[0];
const String& unmodified_sequence = top_hit.getSequence().toUnmodifiedString();
++sequence_is_topPSM[unmodified_sequence];
sequence_charges[unmodified_sequence].insert(top_hit.getCharge());
if (static_cast<int>(top_hit.getMetaValue("NuXL:isXL")) == 1)
{
++sequence_is_XL[unmodified_sequence];
}
else
{
++sequence_is_peptide[unmodified_sequence];
}
}
for (auto & pid : peptide_ids)
{
for (auto & ph : pid.getHits())
{
const String& unmodified_sequence = ph.getSequence().toUnmodifiedString();
if (sequence_is_topPSM.find(unmodified_sequence) != sequence_is_topPSM.end())
{
ph.setMetaValue("CountSequenceIsTop", sequence_is_topPSM[unmodified_sequence]);
ph.setMetaValue("CountSequenceCharges", sequence_charges[unmodified_sequence].size());
ph.setMetaValue("CountSequenceIsXL", sequence_is_XL[unmodified_sequence]);
ph.setMetaValue("CountSequenceIsPeptide", sequence_is_peptide[unmodified_sequence]);
}
}
}
/*
// one hot encoding of adduct
set<string> identified_adducts;
for (auto & pid : peptide_ids)
{
for (auto & ph : pid.getHits())
{
identified_adducts.insert(ph.getMetaValue("NuXL:NA"));
}
}
for (auto& s : identified_adducts)
{
feature_set_ << String("NuXL:MS1Adduct_") + s;
}
for (auto & pid : peptide_ids)
{
for (auto & ph : pid.getHits())
{
string adduct = ph.getMetaValue("NuXL:NA");
for (auto& s : identified_adducts)
{
size_t one_hot = (adduct == s) ? 1 : 0;
ph.setMetaValue(String("NuXL:MS1Adduct_") + s, one_hot);
}
}
}
*/
// protein identifications (leave as is...)
protein_ids = vector<ProteinIdentification>(1);
protein_ids[0].setDateTime(DateTime::now());
protein_ids[0].setSearchEngine("OpenNuXL");
protein_ids[0].setSearchEngineVersion(VersionInfo::getVersion());
ProteinIdentification::SearchParameters search_parameters;
search_parameters.db = getStringOption_("database");
search_parameters.charges = String(getIntOption_("precursor:min_charge")) + ":" + String(getIntOption_("precursor:max_charge"));
search_parameters.fixed_modifications = getStringList_("modifications:fixed");
search_parameters.variable_modifications = getStringList_("modifications:variable");
search_parameters.missed_cleavages = getIntOption_("peptide:missed_cleavages");
search_parameters.fragment_mass_tolerance = getDoubleOption_("fragment:mass_tolerance");
search_parameters.precursor_mass_tolerance = getDoubleOption_("precursor:mass_tolerance");
search_parameters.precursor_mass_tolerance_ppm = getStringOption_("precursor:mass_tolerance_unit") == "ppm";
search_parameters.fragment_mass_tolerance_ppm = getStringOption_("fragment:mass_tolerance_unit") == "ppm";
search_parameters.digestion_enzyme = *ProteaseDB::getInstance()->getEnzyme(getStringOption_("peptide:enzyme"));
search_parameters.setMetaValue("feature_extractor", "TOPP_PSMFeatureExtractor");
search_parameters.setMetaValue("extra_features", ListUtils::concatenate(feature_set_, ","));
protein_ids[0].setSearchParameters(search_parameters);
}
void mapPrecursorMassesToScans(const Int min_precursor_charge,
const Int max_precursor_charge,
const IntList &precursor_isotopes,
const double small_peptide_mass_filter_threshold,
const Size peptide_min_size,
const PeakMap & spectra,
multimap<double, pair<Size, int>> & multimap_mass_2_scan_index) const
{
Size fractional_mass_filtered(0), small_peptide_mass_filtered(0);
for (MSExperiment::ConstIterator s_it = spectra.begin(); s_it != spectra.end(); ++s_it)
{
int scan_index = s_it - spectra.begin();
vector<Precursor> precursor = s_it->getPrecursors();
// there should be only one precursor, and the MS2 should contain at least a few peaks to be considered (e.g., one per amino acid in the peptide)
if (precursor.size() == 1 && s_it->size() >= peptide_min_size)
{
int precursor_charge = precursor[0].getCharge();
if (precursor_charge < min_precursor_charge
|| precursor_charge > max_precursor_charge)
{
continue;
}
double precursor_mz = precursor[0].getMZ();
// map (corrected) precursor mass to spectra
for (int i : precursor_isotopes)
{
double precursor_mass = (double) precursor_charge * precursor_mz - (double) precursor_charge * Constants::PROTON_MASS_U;
// corrected for monoisotopic misassignments of the precursor annotation
if (i != 0) { precursor_mass -= i * Constants::C13C12_MASSDIFF_U; }
if (getFlag_("NuXL:filter_fractional_mass"))
{
if (precursor_mass < 1750.0 && precursor_mass - floor(precursor_mass) < 0.2)
{
fractional_mass_filtered++;
continue;
}
}
if (precursor_mass < small_peptide_mass_filter_threshold)
{
small_peptide_mass_filtered++;
continue;
}
multimap_mass_2_scan_index.insert(make_pair(precursor_mass, make_pair(scan_index, i)));
}
}
}
}
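// Worked example for the precursor mass calculation above (illustrative values):
// m/z = 500.0 at charge 2 gives a neutral mass of 2*500.0 - 2*1.00728 ≈ 997.99 Da;
// with precursor_isotopes = {0, 1}, the spectrum is additionally registered at
// ≈ 996.98 Da to recover monoisotopic misassignments (C13-C12 mass difference ≈ 1.00336 Da).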
// calculate PSMs using total loss scoring (no NA-shifted fragments) - used in fast scoring
static void addPSMsTotalLossScoring_(
const PeakSpectrum& exp_spectrum,
const StringView sequence,
const Size & mod_pep_idx,
const Size & na_mod_idx,
const double & current_peptide_mass,
const double & current_peptide_mass_without_NA,
const double & exp_pc_mass,
const ImmoniumIonsInPeptide & iip,
const int & isotope_error,
const vector<double> & total_loss_template_z1_b_ions,
const vector<double> & total_loss_template_z1_y_ions,
const boost::math::normal & gaussian_mass_error,
const double & fragment_mass_tolerance,
const bool & fragment_mass_tolerance_unit_ppm,
vector<NuXLAnnotatedHit> & annotated_hits,
#ifdef _OPENMP
omp_lock_t & annotated_hits_lock,
#endif
const Size& report_top_hits)
{
const int & exp_pc_charge = exp_spectrum.getPrecursors()[0].getCharge();
float total_loss_score(0),
tlss_MIC(0),
tlss_err(1.0),
tlss_Morph(0),
tlss_modds(0),
pc_MIC(0),
im_MIC(0);
size_t n_theoretical_peaks(0);
vector<double> intensity_sum(total_loss_template_z1_b_ions.size(), 0.0);
vector<double> b_ions(total_loss_template_z1_b_ions.size(), 0.0);
vector<double> y_ions(total_loss_template_z1_b_ions.size(), 0.0);
vector<bool> peak_matched(exp_spectrum.size(), false);
scorePeptideIons_(
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
total_loss_template_z1_b_ions,
total_loss_template_z1_y_ions,
current_peptide_mass_without_NA,
exp_pc_charge,
iip,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
intensity_sum,
b_ions,
y_ions,
peak_matched,
total_loss_score,
tlss_MIC,
tlss_Morph,
tlss_modds,
tlss_err,
pc_MIC,
im_MIC,
n_theoretical_peaks
);
const double tlss_total_MIC = tlss_MIC + im_MIC + (pc_MIC - floor(pc_MIC));
// early-out if super bad score
if (badTotalLossScore(total_loss_score, tlss_Morph, tlss_total_MIC)) { return; }
const double mass_error_ppm = (current_peptide_mass - exp_pc_mass) / exp_pc_mass * 1e6;
const double mass_error_score = pdf(gaussian_mass_error, mass_error_ppm) / pdf(gaussian_mass_error, 0.0);
// add peptide hit
NuXLAnnotatedHit ah;
ah.mass_error_p = mass_error_score;
ah.sequence = sequence; // copy StringView
ah.peptide_mod_index = mod_pep_idx;
ah.total_loss_score = total_loss_score;
ah.MIC = tlss_MIC;
ah.err = tlss_err;
ah.Morph = tlss_Morph;
ah.modds = tlss_modds;
ah.immonium_score = im_MIC;
ah.precursor_score = pc_MIC;
ah.total_MIC = tlss_total_MIC;
ah.NA_mod_index = na_mod_idx;
ah.isotope_error = isotope_error;
ah.n_theoretical_peaks = n_theoretical_peaks;
auto range = make_pair(intensity_sum.begin(), intensity_sum.end());
ah.ladder_score = ladderScore_(range) / (double)intensity_sum.size();
range = longestCompleteLadder_(intensity_sum.begin(), intensity_sum.end());
if (range.second != range.first) // see Mascot Percolator paper
{
ah.sequence_score = ladderScore_(range) / (double)intensity_sum.size();
}
// simple combined score in fast scoring:
ah.score = calculateFastScore(ah);
#ifdef DEBUG_OpenNuXL
OPENMS_LOG_DEBUG << "best score in pre-score: " << ah.score << endl;
#endif
#ifdef _OPENMP
omp_set_lock(&(annotated_hits_lock));
#endif
{
annotated_hits.emplace_back(move(ah));
// prevent the vector from growing indefinitely (memory), but don't shrink it on every insertion
if (annotated_hits.size() >= 2 * report_top_hits)
{
std::partial_sort(annotated_hits.begin(), annotated_hits.begin() + report_top_hits, annotated_hits.end(), NuXLAnnotatedHit::hasBetterScore);
annotated_hits.resize(report_top_hits);
}
}
#ifdef _OPENMP
omp_unset_lock(&(annotated_hits_lock));
#endif
}
// check for misannotation (absolute m/z instead of offset) and correct
void checkAndCorrectIsolationWindows_(MSExperiment& e) const
{
int isolation_windows_reannotated(0);
int isolation_windows_reannotation_error(0);
for (MSSpectrum & s : e)
{
if (s.getMSLevel() == 2 && s.getPrecursors().size() == 1)
{
Precursor& p = s.getPrecursors()[0];
if (p.getIsolationWindowLowerOffset() > 100.0 && p.getIsolationWindowUpperOffset() > 100.0)
{
// in most cases lower and upper offset contain the absolute values.
// if that is the case we use those
double left = -(p.getIsolationWindowLowerOffset() - p.getMZ());
double right = p.getIsolationWindowUpperOffset() - p.getMZ();
if (left > 0.0 && right > 0.0)
{
p.setIsolationWindowLowerOffset(left);
p.setIsolationWindowUpperOffset(right);
}
else // in some files from PD the target m/z is sometimes outside the isolation window (bug?)
{
double half_w = (left + right) / 2.0; // half the true window width (upper_abs - lower_abs == left + right); re-center on the target m/z
left = p.getMZ() - half_w;
right = p.getMZ() + half_w;
p.setIsolationWindowLowerOffset(left);
p.setIsolationWindowUpperOffset(right);
isolation_windows_reannotation_error++;
}
isolation_windows_reannotated++;
}
}
}
if (isolation_windows_reannotated > 0)
{
OPENMS_LOG_WARN << "Isolation window format was incorrect. Reannotated " << isolation_windows_reannotated << " precursor isolation windows." << endl;
if (isolation_windows_reannotation_error > 0)
{
OPENMS_LOG_WARN << "Reannotation failed for " << isolation_windows_reannotation_error
<< " precursor isolation windows because the target m/z was outside the window boundaries." << endl;
}
}
}
// returns iterator on start of longest non-zero sequence and end on one-after non-zero sequence (complete ladder)
template<class Iterator>
static pair<Iterator, Iterator> longestCompleteLadder_(Iterator b, Iterator e)
{
int max_l = 0;
Iterator best_start(b);
for (auto i = b; i != e;) // iterate once over vector
{
for (; i != e && *i <= 0.0; ++i) {}; // skip zeros
if (i == e) // end?
{
return make_pair(best_start, best_start + max_l);
}
int l = 0;
Iterator start(i);
for (; i != e && *i > 0.0; ++i) { ++l; } // count sequence of non-zeros
if (l > max_l) // longer sequence found?
{
best_start = start;
max_l = l;
}
if (i == e) // end?
{
return make_pair(best_start, best_start + max_l);
}
}
return make_pair(best_start, best_start + max_l);
}
template<class Iterator>
static float ladderScore_(pair<Iterator, Iterator> p)
{
float MIC(0);
int count(0);
for (; p.first != p.second; ++p.first)
{
if (*p.first > 0.0)
{
MIC += *p.first;
++count;
}
}
return count + MIC; // Morph score of matched (complete / partial) ladder
}
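// Worked example (illustrative intensities): for intensity_sum = {0, 0.2, 0.3, 0, 0.5},
// longestCompleteLadder_ returns the run {0.2, 0.3} (two consecutive matched ions), and
// ladderScore_ on that range yields 2 + 0.5 = 2.5 (matched-ion count plus summed intensity).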
String convertRawFile_(const String& in, bool no_peak_picking = false)
{
writeLogInfo_("RawFileReader reading tool. Copyright 2016 by Thermo Fisher Scientific, Inc. All rights reserved");
String net_executable = getStringOption_("NET_executable");
TOPPBase::ExitCodes exit_code;
QStringList arguments;
String out = in + ".mzML";
// skip the conversion if the output file already exists and is not empty
if (!File::empty(out)) { return out; }
#ifdef OPENMS_WINDOWSPLATFORM
if (net_executable.empty())
{ // default on Windows: if no mono executable is set use the "native" .NET one
arguments << String("-i=" + in).toQString()
<< String("--output_file=" + out).toQString()
<< String("-f=2").toQString() // indexedMzML
<< String("-e").toQString(); // ignore instrument errors
if (no_peak_picking) { arguments << String("--noPeakPicking").toQString(); }
exit_code = runExternalProcess_(getStringOption_("ThermoRaw_executable").toQString(), arguments);
}
else
{ // use e.g., mono
arguments << getStringOption_("ThermoRaw_executable").toQString()
<< String("-i=" + in).toQString()
<< String("--output_file=" + out).toQString()
<< String("-f=2").toQString()
<< String("-e").toQString();
if (no_peak_picking) { arguments << String("--noPeakPicking").toQString(); }
exit_code = runExternalProcess_(net_executable.toQString(), arguments);
}
#else
// default on Mac, Linux: use mono
net_executable = net_executable.empty() ? "mono" : net_executable;
arguments << getStringOption_("ThermoRaw_executable").toQString()
<< String("-i=" + in).toQString()
<< String("--output_file=" + out).toQString()
<< String("-f=2").toQString()
<< String("-e").toQString();
if (no_peak_picking) { arguments << String("--noPeakPicking").toQString(); }
exit_code = runExternalProcess_(net_executable.toQString(), arguments);
#endif
if (exit_code != ExitCodes::EXECUTION_OK)
{
OPENMS_LOG_ERROR << "File conversion from RAW file to mzML failed." << endl;
}
else
{
OPENMS_LOG_INFO << "Raw file successfully converted to mzML." << endl;
OPENMS_LOG_INFO << "Please delete it if not needed anymore." << endl;
}
return out;
}
// data structure to store the longest tag in the unshifted/shifted sequence and the tag spanning the XL position
struct XLTags
{
size_t tag_unshifted = 0;
size_t tag_shifted = 0;
size_t tag_XLed = 0; // tag that contains the transition from unshifted to shifted
};
XLTags getLongestABYLadderWithShift(
const vector<double>& ab,
const vector<double>& y,
const vector<double>& ab_xl,
const vector<double>& y_xl)
{
OPENMS_PRECONDITION(ab.size() == y.size(), "b and y ion arrays must have same size");
OPENMS_PRECONDITION(ab_xl.size() == y_xl.size(), "cross-linked b and y ion arrays must have same size");
XLTags tags;
const int n = (int)ab.size();
// calculate longest consecutive unshifted / shifted sequence and longest sequence spanning unshifted + shifted residues
vector<int> runAB(n, 0);
size_t run(0);
size_t max_ab_run(0);
for (int l = 0; l != n; ++l)
{
if (ab[l] == 0) { run = 0; continue; }
++run;
runAB[l] = run;
if (run > max_ab_run) max_ab_run = run;
}
// runAB[i] now contains current run length e.g.: 000123400100 for prefix ions
vector<int> runY(n, 0);
run = 0;
size_t max_y_run(0);
for (int l = (int)n - 1; l >= 0; --l)
{
if (y[l] == 0) { run = 0; continue; }
++run;
runY[l] = run;
if (run > max_y_run) max_y_run = run;
}
// runY[i] now contains current run length e.g.: 000432100100 for suffix ions
tags.tag_unshifted = std::max(max_ab_run, max_y_run);
const size_t n_xl = ab_xl.size();
if (n_xl != 0)
{
OPENMS_PRECONDITION(n_xl == n, "xl and non-xl arrays need to have same size");
// for XL we calculate the runs in reverse order so we can later quickly calculate the maximum run
// through non-cross-linked and cross-linked ions
vector<int> runAB_XL(n_xl, 0);
run = 0;
size_t max_ab_shifted(0);
for (int x = (int)n_xl - 1; x >= 0; --x) // note the reverse order
{
if (ab_xl[x] == 0) { run = 0; continue; }
++run;
runAB_XL[x] = run;
if (run > max_ab_shifted) max_ab_shifted = run;
}
// max_ab_shifted contains longest run of shifted ab ions
// runAB_XL[i] now contains the longest run in X starting at position i e.g.: 00003210000 for prefix ions
vector<int> runY_XL(n_xl, 0);
run = 0;
size_t max_y_shifted(0);
for (int x = 0; x != (int)n_xl; ++x)
{
if (y_xl[x] == 0) { run = 0; continue; }
++run;
runY_XL[x] = run;
if (run > max_y_shifted) max_y_shifted = run;
}
// runY_XL[i] now contains the longest run in X starting at position i e.g.: 00001230000 for suffix ions
tags.tag_shifted = std::max(max_ab_shifted, max_y_shifted);
size_t maximum_ab_tag_length(0);
// calculate maximum tag that spans linear intensities and at least one XLed amino acid for prefix ions
for (Size i = 0; i < n_xl - 1; ++i)
{
if (runAB[i] == 0 || runAB_XL[i + 1] == 0) continue; // must have one cross-linked amino acid next to non-cross-linked amino acid
const size_t tag_length = runAB[i] + runAB_XL[i + 1]; // tag length if cross-link is introduced at amino acid i+1
if (tag_length > maximum_ab_tag_length) maximum_ab_tag_length = tag_length;
}
size_t maximum_y_tag_length(0);
// same for suffix ions
for (Size i = 0; i < n_xl - 1; ++i)
{
if (runY_XL[i] == 0 || runY[i + 1] == 0) continue; // must have one cross-linked amino acid next to non-cross-linked amino acid
const size_t tag_length = runY_XL[i] + runY[i + 1]; // tag length with cross-linked part and non-cross-linked
if (tag_length > maximum_y_tag_length) maximum_y_tag_length = tag_length;
}
tags.tag_XLed = std::max(maximum_ab_tag_length, maximum_y_tag_length);
}
return tags;
}
XLTags getLongestLadderWithShift(const vector<double>& intL, const vector<double>& intXL)
{
// calculate longest consecutive unshifted / shifted sequence and longest sequence spanning unshifted + shifted residues
XLTags tags;
vector<int> prefixRunL(intL.size(), 0);
size_t run(0);
for (int l = 0; l != (int)intL.size(); ++l)
{
if (intL[l] == 0) { run = 0; continue; }
++run;
prefixRunL[l] = run;
if (run > tags.tag_unshifted) tags.tag_unshifted = run;
}
// tags.tag_unshifted contains longest run
// prefixRunL[i] now contains current run length e.g.: 000123400100 for prefix ions
vector<int> suffixRunL(intL.size(), 0);
run = 0;
for (int l = (int)intL.size() - 1; l >= 0; --l)
{
if (intL[l] == 0) { run = 0; continue; }
++run;
suffixRunL[l] = run;
}
// suffixRunL[i] now contains current run length e.g.: 000432100100 for suffix ions
if (!intXL.empty())
{
// for XL we calculate the runs in reverse order so we can later quickly calculate the maximum run
// through non-cross-linked and cross-linked ions
vector<int> prefixRunX(intXL.size(), 0);
run = 0;
for (int x = (int)intXL.size() - 1; x >= 0; --x) // note the reverse order
{
if (intXL[x] == 0) { run = 0; continue; }
++run;
prefixRunX[x] = run;
if (run > tags.tag_shifted) tags.tag_shifted = run;
}
// tags.tag_shifted contains longest run
// prefixRunX[i] now contains the longest run in X starting at position i e.g.: 00003210000 for prefix ions
vector<int> suffixRunX(intXL.size(), 0);
run = 0;
for (int x = 0; x != (int)intXL.size(); ++x)
{
if (intXL[x] == 0) { run = 0; continue; }
++run;
suffixRunX[x] = run;
}
// suffixRunX[i] now contains the longest run in X starting at position i e.g.: 00001230000 for suffix ions
size_t maximum_tag_length(0);
// calculate maximum tag that spans linear intensities and at least one XLed amino acid for prefix ions
for (Size i = 0; i < intXL.size() - 1; ++i)
{
if ( prefixRunL[i] == 0 || prefixRunX[i + 1] == 0) continue; // must have one cross-linked amino acid next to non-cross-linked amino acid
const size_t tag_length = prefixRunL[i] + prefixRunX[i + 1]; // tag length if cross-link is introduced at amino acid i+1
if (tag_length > maximum_tag_length) maximum_tag_length = tag_length;
}
// same for suffix ions
for (Size i = 0; i < intXL.size() - 1; ++i)
{
if (suffixRunX[i] == 0 || suffixRunL[i + 1] == 0) continue; // must have one cross-linked amino acid next to non-cross-linked amino acid
const size_t tag_length = suffixRunX[i] + suffixRunL[i + 1]; // tag length with cross-linked part and non-cross-linked
if (tag_length > maximum_tag_length) maximum_tag_length = tag_length;
}
tags.tag_XLed = maximum_tag_length;
}
return tags;
}
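// Worked example (illustrative values): intL = {1, 1, 0} (linear ion intensities) and
// intXL = {0, 1, 1} (shifted ion intensities) give prefixRunL = {1, 2, 0} and
// prefixRunX = {0, 2, 1} (computed in reverse); joining them at i = 0 yields
// prefixRunL[0] + prefixRunX[1] = 1 + 2 = 3, so tag_XLed = 3 while
// tag_unshifted = tag_shifted = 2.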
ExitCodes correctPrecursors(MSExperiment& ms_centroided)
{
//-------------------------------------------------------------
// HighRes Precursor Mass Correction
//-------------------------------------------------------------
std::vector<double> deltaMZs, mzs, rts;
std::set<Size> corrected_to_highest_intensity_peak = PrecursorCorrection::correctToHighestIntensityMS1Peak(
ms_centroided,
0.01, // check if we can estimate this from data (here it is given in m/z not ppm)
false, // is ppm = false
deltaMZs,
mzs,
rts
);
writeLogInfo_("Info: Corrected " + String(corrected_to_highest_intensity_peak.size()) + " precursors.");
if (!deltaMZs.empty())
{
vector<double> deltaMZs_ppm, deltaMZs_ppmabs;
for (Size i = 0; i != deltaMZs.size(); ++i)
{
deltaMZs_ppm.push_back(Math::getPPM(mzs[i], mzs[i] + deltaMZs[i]));
deltaMZs_ppmabs.push_back(Math::getPPMAbs(mzs[i], mzs[i] + deltaMZs[i]));
}
double median = Math::median(deltaMZs_ppm.begin(), deltaMZs_ppm.end());
double MAD = Math::MAD(deltaMZs_ppm.begin(), deltaMZs_ppm.end(), median);
double median_abs = Math::median(deltaMZs_ppmabs.begin(), deltaMZs_ppmabs.end());
double MAD_abs = Math::MAD(deltaMZs_ppmabs.begin(), deltaMZs_ppmabs.end(), median_abs);
writeLogInfo_("Precursor correction to highest intensity peak:\n median delta m/z = "
+ String(median) + " ppm MAD = " + String(MAD)
+ "\n median delta m/z (abs.) = " + String(median_abs)
+ " ppm MAD = " + String(MAD_abs));
}
FeatureMap features;
{
MSExperiment e(ms_centroided); // FFM seems to delete passed spectra
FeatureFinderMultiplexAlgorithm algorithm;
Param p = algorithm.getParameters();
p.setValue("algorithm:labels", ""); // label-free
p.setValue("algorithm:charge", "2:5");
p.setValue("algorithm:rt_typical", 30.0);
p.setValue("algorithm:rt_band", 3.0); // max 3 seconds shifts between isotopic traces
p.setValue("algorithm:rt_min", 4.0);
p.setValue("algorithm:spectrum_type", "centroid");
algorithm.setParameters(p);
algorithm.run(e, true);
features = algorithm.getFeatureMap();
writeLogInfo_("Detected peptides: " + String(features.size()));
}
set<Size> correct_to_nearest_feature = PrecursorCorrection::correctToNearestFeature(
features,
ms_centroided,
20.0,
0.01,
false,
true,
false,
false,
3,
10);
writeLogInfo_("Precursor correction to feature:\n successful in = "
+ String(correct_to_nearest_feature.size()) + " cases.");
return EXECUTION_OK;
}
void optimizeFDR(PeptideIdentificationList& peptide_ids)
{
size_t most_XLs{0};
double best_p{1}, best_q{1};
double max_rt = 0.01;
double max_pl_modds = 0.01;
double max_modds = 0.01;
double max_mass_error_p = 0.01;
// double max_wTop50 = 0;
// double max_length = 0;
PeptideIdentificationList pids{peptide_ids};
for (auto& pid : pids)
{
if (pid.getRT() > max_rt) max_rt = pid.getRT();
auto hits = pid.getHits();
for (auto& h : hits)
{
// if (h.getSequence().size() > max_length) max_length = h.getSequence().size();
if ((double)h.getMetaValue("NuXL:pl_modds") > max_pl_modds) max_pl_modds = h.getMetaValue("NuXL:pl_modds");
if ((double)h.getMetaValue("NuXL:modds") > max_modds) max_modds = h.getMetaValue("NuXL:modds");
if ((double)h.getMetaValue("NuXL:mass_error_p") > max_mass_error_p) max_mass_error_p = h.getMetaValue("NuXL:mass_error_p");
// if ((double)h.getMetaValue("NuXL:wTop50") > max_wTop50) max_wTop50 = h.getMetaValue("NuXL:wTop50");
}
}
for (double q = 0.0; q < 1.01; q = q + 0.1)
for (double p = 0.0; p < 1.01; p = p + 0.1)
{
PeptideIdentificationList pids{peptide_ids};
for (auto& pid : pids)
{
auto hits = pid.getHits();
for (auto& h : hits)
{
const double pl_modds = (double)h.getMetaValue("NuXL:pl_modds") / max_pl_modds;
const double modds = (double)h.getMetaValue("NuXL:modds") / max_modds;
const double pc_err = (double)h.getMetaValue("NuXL:mass_error_p") / max_mass_error_p;
// const double wTop50 = (double)h.getMetaValue("NuXL:wTop50") / max_wTop50;
// const double length = (double)h.getSequence().size() / max_length;
const double w1 = (1.0 - p) * modds + p * pl_modds;
const double w2 = (1.0 - q) * w1 + q * pc_err;
// const double w2 = (1.0 - q) * w1 - q * length;
// const double w2 = (1.0 - q) * w1 - q * wTop50;
h.setScore(w2);
}
pid.setHits(hits);
pid.sort();
// iterate over peptide hits and set rank
vector<PeptideHit>& phs = pid.getHits();
for (Size r = 0; r != phs.size(); ++r)
{
phs[r].setRank(static_cast<int>(r));
}
}
NuXLFDR fdr(1);
PeptideIdentificationList pep_pi, xl_pi;
fdr.calculatePeptideAndXLQValueAtPSMLevel(pids, pep_pi, xl_pi);
IDFilter::keepNBestHits(xl_pi, 1);
IDFilter::filterHitsByScore(pep_pi, 0.01); // 1% peptide FDR, TODO: pROC
IDFilter::filterHitsByScore(xl_pi, 0.1); // 10% XL FDR, TODO: pROC
IDFilter::removeEmptyIdentifications(xl_pi);
IDFilter::removeEmptyIdentifications(pep_pi);
//cout << "p/q: " << p << "/" << q << " most XLs: " << most_XLs << " current: " << xl_pi.size() << endl;
if (xl_pi.size() + pep_pi.size() > most_XLs)
{
most_XLs = xl_pi.size() + pep_pi.size();
best_p = p;
best_q = q;
OPENMS_LOG_DEBUG << "found better p/q: " << p << "/" << q << " most: " << most_XLs << " current: " << xl_pi.size() << endl;
}
}
// apply best weighting
for (auto& pid : peptide_ids)
{
auto hits = pid.getHits();
for (auto& h : hits)
{
const double pl_modds = (double)h.getMetaValue("NuXL:pl_modds") / max_pl_modds;
const double modds = (double)h.getMetaValue("NuXL:modds") / max_modds;
const double pc_err = (double)h.getMetaValue("NuXL:mass_error_p") / max_mass_error_p;
// const double length = (double)h.getSequence().size() / max_length;
// const double wTop50 = (double)h.getMetaValue("NuXL:wTop50") / max_wTop50;
const double w1 = (1.0 - best_p) * modds + best_p * pl_modds;
const double w2 = (1.0 - best_q) * w1 + best_q * pc_err;
// const double w2 = (1.0 - best_q) * w1 + best_q * length;
// const double w2 = (1.0 - best_q) * w1 - best_q * wTop50;
h.setScore(w2);
}
pid.setHits(hits);
pid.sort();
}
}
std::tuple<IMFormat, DriftTimeUnit> getMS2IMType(const MSExperiment& spectra)
{
IMFormat IM_format = IMTypes::determineIMFormat(spectra);
DriftTimeUnit IM_unit = DriftTimeUnit::NONE;
if (IM_format == IMFormat::MULTIPLE_SPECTRA)
{
OPENMS_LOG_INFO << "Ion Mobility annotated at the spectrum level." << std::endl;
auto im_it = std::find_if_not(spectra.begin(), spectra.end(),
[](const MSSpectrum& s)
{ // skip non-MS2 spectra and spectra without DriftTime annotation
if (s.getMSLevel() != 2)
return true;
return s.getDriftTimeUnit() == DriftTimeUnit::NONE;
});
if (im_it != spectra.end())
{
IM_unit = im_it->getDriftTimeUnit();
}
}
else if (IM_format == IMFormat::NONE)
{
OPENMS_LOG_INFO << "No Ion Mobility annotated at the spectrum level." << std::endl;
}
else if (IM_format == IMFormat::CONCATENATED)
{
OPENMS_LOG_INFO << "Concatenated Ion Mobility not supported. IM values need to be annotated at the spectrum level." << std::endl;
}
else if (IM_format == IMFormat::MIXED)
{
OPENMS_LOG_INFO << "Mixed Ion Mobility not supported. IM values need to be annotated at the spectrum level." << std::endl;
}
return make_tuple(IM_format, IM_unit);
}
void convertVSSCToCCS(MSExperiment& spectra)
{ // confirmed values with alpha and MaxQuant
OPENMS_LOG_INFO << "Converting 1/k0 to CCS values." << std::endl;
constexpr double bruker_CCS_coef = 1059.62245; // constant coefficient for Bruker in the Mason-Schamp equation
constexpr double IM_N2_gas_mass = 28.0; // like in alpha code
for (auto& s : spectra)
{
const double IM = s.getDriftTime();
const double mz = s.getPrecursors()[0].getMZ();
const double charge = s.getPrecursors()[0].getCharge();
const double mass = mz * charge;
const double reduced_mass = mass * IM_N2_gas_mass / (mass + IM_N2_gas_mass);
const double CCS = IM * charge * bruker_CCS_coef / std::sqrt(reduced_mass); // Mason-Schamp equation
s.setDriftTime(CCS);
}
}
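// Worked example (illustrative values): 1/k0 = 1.0 Vs/cm^2 at m/z 500 and charge 2
// gives mass = 1000 Da, reduced mass = 1000*28/(1000+28) ≈ 27.24, and
// CCS ≈ 1.0 * 2 * 1059.62245 / sqrt(27.24) ≈ 406 A^2.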
void filterPeakInterference_(PeakMap& spectra, const map<String, PrecursorPurity::PurityScores>& purities, double fragment_mass_tolerance = 20.0, bool fragment_mass_tolerance_unit_ppm = true)
{
double filtered_peaks_count{0};
size_t filtered_spectra{0};
for (auto& s : spectra)
{
unordered_set<size_t> idx_to_remove;
auto it = purities.find(s.getNativeID());
if (it != purities.end())
{
for (const auto& interfering_peak : it->second.interfering_peaks)
{
const double max_dist_dalton = fragment_mass_tolerance_unit_ppm ? interfering_peak.getMZ() * fragment_mass_tolerance * 1e-6 : fragment_mass_tolerance;
auto pos = s.findNearest(interfering_peak.getMZ(), max_dist_dalton, max_dist_dalton);
if (pos != -1)
{
idx_to_remove.insert(pos);
}
}
vector<size_t> idx_to_keep; // inverse
for (size_t i = 0; i != s.size(); ++i)
{ // add indices we don't want to remove
if (idx_to_remove.find(i) == idx_to_remove.end()) idx_to_keep.push_back(i);
}
filtered_peaks_count += idx_to_remove.size();
s.select(idx_to_keep);
}
++filtered_spectra;
}
OPENMS_LOG_INFO << "Filtered out " << filtered_peaks_count << " peaks in total that matched to precursor interference." << endl;
if (filtered_spectra > 0) OPENMS_LOG_INFO << " On average " << filtered_peaks_count / (double)filtered_spectra << " peaks per MS2." << endl;
}
ExitCodes main_(int, const char**) override
{
ProgressLogger progresslogger;
progresslogger.setLogType(log_type_);
// Parameter: Input
FileHandler fh;
FileTypes::Type in_type = fh.getType(getStringOption_("in"));
String in_mzml;
if (in_type == FileTypes::MZML)
{
in_mzml = getStringOption_("in");
}
else if (in_type == FileTypes::RAW)
{
in_mzml = convertRawFile_(getStringOption_("in"));
}
String out_idxml = getStringOption_("out");
String in_db = getStringOption_("database");
String out_xl_idxml = getStringOption_("out_xls");
// create extra output directory if set
String extra_output_directory = getStringOption_("output_folder");
if (!extra_output_directory.empty())
{
// convert path to absolute path
extra_output_directory = File::absolutePath(extra_output_directory);
// create directory if not present
if (!File::exists(extra_output_directory))
{
File::makeDir(extra_output_directory);
}
}
//
Int min_precursor_charge = getIntOption_("precursor:min_charge");
Int max_precursor_charge = getIntOption_("precursor:max_charge");
double precursor_mass_tolerance = getDoubleOption_("precursor:mass_tolerance");
double fragment_mass_tolerance = getDoubleOption_("fragment:mass_tolerance");
bool generate_decoys = getStringOption_("NuXL:decoys") == "true";
Size decoy_factor = static_cast<Size>(getIntOption_("NuXL:decoy_factor"));
StringList filter = getStringList_("filter");
bool filter_pc_mass_error = find(filter.begin(), filter.end(), "filter_pc_mass_error") != filter.end();
bool impute_decoy_medians = find(filter.begin(), filter.end(), "impute_decoy_medians") != filter.end();
bool filter_bad_partial_loss_scores = find(filter.begin(), filter.end(), "filter_bad_partial_loss_scores") != filter.end();
bool autotune = find(filter.begin(), filter.end(), "autotune") != filter.end();
bool idfilter = find(filter.begin(), filter.end(), "idfilter") != filter.end();
bool spectrumclusterfilter = find(filter.begin(), filter.end(), "spectrumclusterfilter") != filter.end();
bool pcrecalibration = find(filter.begin(), filter.end(), "pcrecalibration") != filter.end();
bool optimize = find(filter.begin(), filter.end(), "optimize") != filter.end();
bool RTpredict = find(filter.begin(), filter.end(), "RTpredict") != filter.end();
if (pcrecalibration)
{ // recalibrate data and store in file {input_file_name}_pc.mzML
MSExperiment e;
MzMLFile().load(in_mzml, e);
correctPrecursors(e);
in_mzml = FileHandler::stripExtension(in_mzml) + "_pc.mzML";
OPENMS_LOG_INFO << "Writing calibrated file to: " << in_mzml << endl;
MzMLFile().store(in_mzml, e);
}
InternalCalibration ic; // only filled if pcrecalibration is set and there are enough calibrants
// autotune (only works if non-XL peptides present)
set<String> skip_peptide_spectrum;
double global_fragment_error(0);
if (autotune || idfilter)
{
SimpleSearchEngineAlgorithm sse;
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
Param p = sse.getParameters();
p.setValue("precursor:mass_tolerance", precursor_mass_tolerance);
p.setValue("precursor:mass_tolerance_unit", getStringOption_("precursor:mass_tolerance_unit"));
p.setValue("fragment:mass_tolerance", fragment_mass_tolerance);
p.setValue("fragment:mass_tolerance_unit", getStringOption_("fragment:mass_tolerance_unit"));
auto var_mods = ListUtils::create<std::string>(getStringList_("modifications:variable"));
if (find(var_mods.begin(), var_mods.end(), "Phospho (S)") == var_mods.end()) { var_mods.push_back("Phospho (S)"); }
if (find(var_mods.begin(), var_mods.end(), "Phospho (T)") == var_mods.end()) { var_mods.push_back("Phospho (T)"); }
if (find(var_mods.begin(), var_mods.end(), "Phospho (Y)") == var_mods.end()) { var_mods.push_back("Phospho (Y)"); }
if (find(var_mods.begin(), var_mods.end(), "Oxidation (M)") == var_mods.end()) { var_mods.push_back("Oxidation (M)"); }
auto fixed_mods = ListUtils::create<std::string>(getStringList_("modifications:fixed"));
p.setValue("modifications:fixed", fixed_mods);
p.setValue("modifications:variable", var_mods);
p.setValue("modifications:variable_max_per_peptide", 2);
p.setValue("peptide:missed_cleavages", 2);
p.setValue("precursor:isotopes", IntList{0, 1});
p.setValue("decoys", generate_decoys ? "true" : "false");
p.setValue("enzyme", getStringOption_("peptide:enzyme"));
p.setValue("annotate:PSM",
vector<string>{
Constants::UserParam::FRAGMENT_ERROR_MEDIAN_PPM_USERPARAM,
Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM,
Constants::UserParam::MATCHED_PREFIX_IONS_FRACTION,
Constants::UserParam::MATCHED_SUFFIX_IONS_FRACTION
});
sse.setParameters(p);
OPENMS_LOG_INFO << "Running autotune..." << endl;
sse.search(in_mzml, in_db, prot_ids, pep_ids);
if (RTpredict)
{
NuXLRTPrediction rt_pred;
auto peptides = pep_ids;
FalseDiscoveryRate().apply(peptides);
IDFilter::filterHitsByScore(peptides, 0.05);
IDFilter::removeDecoyHits(peptides);
IDFilter::keepBestPerPeptide(peptides, true, true, 1);
rt_pred.train(in_mzml, peptides, prot_ids);
rt_pred.predict(pep_ids);
// add RT prediction as extra feature for percolator
auto search_parameters = prot_ids[0].getSearchParameters();
String new_features = (String)search_parameters.getMetaValue("extra_features") + String(",RT_error,RT_predict");
search_parameters.setMetaValue("extra_features", new_features);
prot_ids[0].setSearchParameters(search_parameters);
}
// try to run Percolator
{
vector<ProteinIdentification> perc_prot_ids;
PeptideIdentificationList perc_pep_ids;
const String percolator_executable = getStringOption_("percolator_executable");
bool sufficient_PSMs_for_score_recalibration = pep_ids.size() > 1000;
if (!percolator_executable.empty() && sufficient_PSMs_for_score_recalibration) // only try to call percolator if we have some PSMs
{
String perc_in = out_idxml;
perc_in.substitute(".idXML", "_sse_perc_in.idXML");
IdXMLFile().store(perc_in, prot_ids, pep_ids);
// run percolator on idXML
String perc_out = out_idxml;
perc_out.substitute(".idXML", "_sse_perc_out.idXML");
String weights_out = out_idxml;
weights_out.substitute(".idXML", "_sse_perc.weights");
QStringList process_params;
process_params << "-in" << perc_in.toQString()
<< "-out" << perc_out.toQString()
<< "-percolator_executable" << percolator_executable.toQString()
<< "-train_best_positive"
<< "-score_type" << "q-value"
<< "-post_processing_tdc"
<< "-weights" << weights_out.toQString()
// << "-nested_xval_bins" << "3"
;
if (getStringOption_("peptide:enzyme") == "Lys-C")
{
process_params << "-enzyme" << "lys-c";
}
TOPPBase::ExitCodes exit_code = runExternalProcess_(QString("PercolatorAdapter"), process_params);
if (exit_code != EXECUTION_OK)
{
OPENMS_LOG_WARN << "Score recalibration failed in IDFilter. Using original results." << endl;
}
else
{
// load back idXML
IdXMLFile().load(perc_out, perc_prot_ids, perc_pep_ids);
// generate filtered results
IDFilter::keepNBestHits(perc_pep_ids, 1);
IDFilter::removeUnreferencedProteins(perc_prot_ids, perc_pep_ids);
}
}
OPENMS_LOG_INFO << "Filtering ..." << endl;
IDFilter::filterHitsByScore(perc_pep_ids, 0.01); // 1% PSM-FDR
IDFilter::removeEmptyIdentifications(perc_pep_ids);
OPENMS_LOG_INFO << "Peptide PSMs at 1% FDR: " << perc_pep_ids.size() << endl;
// ID-filter part for linear peptides
if (idfilter)
{
for (const auto& pi : perc_pep_ids)
{
skip_peptide_spectrum.insert((String)pi.getMetaValue("spectrum_reference")); // get native id
}
}
if (spectrumclusterfilter)
{
Size skipped_similar_spectra(0);
// load MS2 map
PeakMap spectra;
MzMLFile f;
f.setLogType(log_type_);
PeakFileOptions options;
options.clearMSLevels();
options.addMSLevel(2);
f.getOptions() = options;
f.load(in_mzml, spectra);
spectra.sortSpectra(true);
SpectrumLookup lookup;
lookup.readSpectra(spectra);
// build kdtree
Param p;
p.setValue("rt_tol", 60.0);
p.setValue("mz_tol", precursor_mass_tolerance);
p.setValue("mz_unit", "ppm");
FeatureMap fmap;
for (Size i = 0; i != spectra.size(); ++i)
{
const MSSpectrum& s = spectra[i];
Feature feat;
feat.setMZ(s.getPrecursors()[0].getMZ());
feat.setRT(s.getRT());
feat.setMetaValue("native_id", s.getNativeID());
fmap.push_back(feat);
}
vector<FeatureMap> fmaps;
fmaps.push_back(std::move(fmap));
KDTreeFeatureMaps kdtree(fmaps, p);
// filter all coeluting MS2 with high spectral similarity to identified one
for (const auto& pi : perc_pep_ids)
{
String this_native_id = (String)pi.getMetaValue("spectrum_reference");
std::vector<Size> result_indices;
// find neighbors
double m = Math::ppmToMass(precursor_mass_tolerance, pi.getMZ());
kdtree.queryRegion(pi.getRT() - 60.0, pi.getRT() + 60.0, pi.getMZ() - m, pi.getMZ() + m, result_indices);
if (result_indices.size() > 1)
{
for (Size ix : result_indices)
{
auto f = kdtree.feature(ix);
const String other_native_id = f->getMetaValue("native_id");
// skip self-comparison and already identified spectra
if (this_native_id == other_native_id || skip_peptide_spectrum.count(other_native_id) > 0) continue;
const MSSpectrum& this_spec = spectra[lookup.findByNativeID(this_native_id)];
const MSSpectrum& other_spec = spectra[lookup.findByNativeID(other_native_id)];
BinnedSpectrum bs1 (this_spec, BinnedSpectrum::DEFAULT_BIN_WIDTH_LOWRES, false, 1, BinnedSpectrum::DEFAULT_BIN_OFFSET_LOWRES);
BinnedSpectrum bs2 (other_spec, BinnedSpectrum::DEFAULT_BIN_WIDTH_LOWRES, false, 1, BinnedSpectrum::DEFAULT_BIN_OFFSET_LOWRES);
const float contrast_angle = BinnedSpectralContrastAngle()(bs1, bs2);
if (contrast_angle > 0.9)
{
skip_peptide_spectrum.insert(other_native_id);
skipped_similar_spectra++;
}
}
}
}
OPENMS_LOG_INFO << "Excluded coeluting precursors with high spectral similarity: " << skipped_similar_spectra << endl;
}
}
////////// end percolator part
OPENMS_LOG_INFO << "Calculating FDR..." << endl;
FalseDiscoveryRate fdr;
fdr.apply(pep_ids);
OPENMS_LOG_INFO << "Filtering ..." << endl;
IDFilter::filterHitsByScore(pep_ids, 0.01); // 1% PSM-FDR
IDFilter::removeEmptyIdentifications(pep_ids);
OPENMS_LOG_INFO << "Peptide PSMs at 1% FDR (no percolator): " << pep_ids.size() << endl;
if (pep_ids.size() > 100)
{
vector<double> median_fragment_error_ppm_abs;
vector<double> median_fragment_error_ppm;
vector<double> precursor_error_ppm;
double mean_prefix_ions_fraction{}, mean_suffix_ions_fraction{};
for (const auto& pi : pep_ids)
{
const PeptideHit& ph = pi.getHits()[0];
if (ph.metaValueExists(Constants::UserParam::MATCHED_PREFIX_IONS_FRACTION))
{
mean_prefix_ions_fraction += (double)ph.getMetaValue(Constants::UserParam::MATCHED_PREFIX_IONS_FRACTION);
}
if (ph.metaValueExists(Constants::UserParam::MATCHED_SUFFIX_IONS_FRACTION))
{
mean_suffix_ions_fraction += (double)ph.getMetaValue(Constants::UserParam::MATCHED_SUFFIX_IONS_FRACTION);
}
if (ph.metaValueExists(Constants::UserParam::FRAGMENT_ERROR_MEDIAN_PPM_USERPARAM))
{
double fragment_error = (double)ph.getMetaValue(Constants::UserParam::FRAGMENT_ERROR_MEDIAN_PPM_USERPARAM);
median_fragment_error_ppm_abs.push_back(fabs(fragment_error));
median_fragment_error_ppm.push_back(fragment_error);
}
if (ph.metaValueExists(Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM))
{
precursor_error_ppm.push_back((double)ph.getMetaValue(Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM));
}
}
sort(median_fragment_error_ppm_abs.begin(), median_fragment_error_ppm_abs.end());
sort(median_fragment_error_ppm.begin(), median_fragment_error_ppm.end());
sort(precursor_error_ppm.begin(), precursor_error_ppm.end());
// use the 68th percentile of the absolute fragment errors (~1 sigma for Gaussian errors, as in IdentiPy); 4x that value becomes the new tolerance
double new_fragment_mass_tolerance = 4.0 * median_fragment_error_ppm_abs[median_fragment_error_ppm_abs.size() * 0.68];
global_fragment_error = median_fragment_error_ppm[median_fragment_error_ppm.size() * 0.5]; // median of all fragment errors
double left_precursor_mass_tolerance = precursor_error_ppm[precursor_error_ppm.size() * 0.005];
double median_precursor_mass_tolerance = precursor_error_ppm[precursor_error_ppm.size() * 0.5];
double right_precursor_mass_tolerance = precursor_error_ppm[precursor_error_ppm.size() * 0.995];
mean_suffix_ions_fraction /= (double) pep_ids.size();
mean_prefix_ions_fraction /= (double) pep_ids.size();
OPENMS_LOG_INFO << "Mean prefix/suffix ions fraction: " << mean_prefix_ions_fraction << "/" << mean_suffix_ions_fraction << endl;
if (autotune)
{
fragment_mass_tolerance = new_fragment_mass_tolerance; // set new fragment mass tolerance
}
OPENMS_LOG_INFO << "New fragment mass tolerance (ppm): " << new_fragment_mass_tolerance << endl;
OPENMS_LOG_INFO << "Global fragment mass shift (ppm): " << global_fragment_error << endl;
OPENMS_LOG_INFO << "Estimated precursor mass tolerance (ppm): " << left_precursor_mass_tolerance << "\t" << median_precursor_mass_tolerance << "\t" << right_precursor_mass_tolerance << endl;
}
else
{
OPENMS_LOG_INFO << "autotune: too few non-cross-linked peptides found. Will keep parameters as-is." << endl;
}
if (pcrecalibration)
{
ic.setLogType(log_type_);
ic.fillCalibrants(pep_ids, precursor_mass_tolerance);
if (global_fragment_error != 0)
{
PeakMap spectra;
MzMLFile f;
f.load(in_mzml, spectra);
spectra.sortSpectra(true);
for (auto & s : spectra)
{
if (s.getMSLevel() != 2) continue;
for (auto & p : s)
{
p.setMZ(p.getMZ() - Math::ppmToMass(global_fragment_error, p.getMZ())); // correct global fragment error
}
}
f.store(in_mzml, spectra); // TODO: do we really want to overwrite this?
}
}
}
OPENMS_LOG_INFO << "IDFilter excludes " << skip_peptide_spectrum.size() << " spectra." << endl;
String out_tsv = getStringOption_("out_tsv");
fast_scoring_ = (getStringOption_("NuXL:scoring") == "fast");
// true positives: assumed gaussian distribution of mass error
// with sigma^2 = precursor_mass_tolerance
boost::math::normal gaussian_mass_error(0.0, sqrt(precursor_mass_tolerance));
bool precursor_mass_tolerance_unit_ppm = (getStringOption_("precursor:mass_tolerance_unit") == "ppm");
IntList precursor_isotopes = getIntList_("precursor:isotopes");
bool fragment_mass_tolerance_unit_ppm = (getStringOption_("fragment:mass_tolerance_unit") == "ppm");
double marker_ions_tolerance = getDoubleOption_("NuXL:marker_ions_tolerance");
double small_peptide_mass_filter_threshold = getDoubleOption_("NuXL:filter_small_peptide_mass");
StringList fixedModNames = getStringList_("modifications:fixed");
set<String> fixed_unique(fixedModNames.begin(), fixedModNames.end());
Size peptide_min_size = getIntOption_("peptide:min_size");
if (fixed_unique.size() != fixedModNames.size())
{
OPENMS_LOG_WARN << "duplicate fixed modification provided." << endl;
return ILLEGAL_PARAMETERS;
}
StringList varModNames = getStringList_("modifications:variable");
set<String> var_unique(varModNames.begin(), varModNames.end());
if (var_unique.size() != varModNames.size())
{
OPENMS_LOG_WARN << "duplicate variable modification provided." << endl;
return ILLEGAL_PARAMETERS;
}
ModifiedPeptideGenerator::MapToResidueType fixed_modifications = ModifiedPeptideGenerator::getModifications(fixedModNames);
ModifiedPeptideGenerator::MapToResidueType variable_modifications = ModifiedPeptideGenerator::getModifications(varModNames);
Size max_variable_mods_per_peptide = getIntOption_("modifications:variable_max_per_peptide");
size_t report_top_hits = (size_t)getIntOption_("report:top_hits");
double peptide_FDR = getDoubleOption_("report:peptideFDR");
DoubleList XL_FDR = getDoubleList_("report:xlFDR");
if (XL_FDR.empty()) XL_FDR.push_back(1.0); // needs at least one entry - otherwise fails because no XL result file can be written to out_xls
DoubleList XL_peptidelevel_FDR = getDoubleList_("report:xl_peptidelevel_FDR");
if (XL_FDR.size() != XL_peptidelevel_FDR.size())
{
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"q-value list for PSMs and peptides differ in size.",
String(XL_FDR.size()) + "!=" + String(XL_peptidelevel_FDR.size()));
}
// determine maximum FDR threshold used to create result files
double xl_fdr_max{1.0};
if (!XL_FDR.empty())
{
xl_fdr_max = *std::max_element(XL_FDR.begin(), XL_FDR.end());
}
StringList nt_groups = getStringList_("NuXL:nt_groups");
// read list of nucleotides that can directly cross-link
// these are responsible for shifted fragment ions. Their fragment adducts thus determine which shifts will be observed on a-, b-, and y-ions
StringList modifications;
StringList fragment_adducts;
String can_cross_link;
// string format: target,formula e.g. "A=C10H14N5O7P", ..., "U=C10H14N5O7P", "X=C9H13N2O8PS" where X represents tU
StringList target_nucleotides;
// string format: source->target e.g. "A->A", ..., "U->U", "U->X"
StringList mappings;
////////////////////////////////////////////////////////////////////
// set user configuration or from presets
bool isRNA{false};
if (getStringOption_("NuXL:presets") == "none")
{
target_nucleotides = getStringList_("NuXL:target_nucleotides");
mappings = getStringList_("NuXL:mapping");
modifications = getStringList_("NuXL:modifications");
fragment_adducts = getStringList_("NuXL:fragment_adducts");
can_cross_link = getStringOption_("NuXL:can_cross_link");
for (const auto& t : target_nucleotides) // TODO: improve; also checking the formula would be safer
{
if (t.hasPrefix("U") || t.hasPrefix("u"))
{
isRNA = true;
}
else if (t.hasPrefix("T") || t.hasPrefix("t"))
{
isRNA = false;
}
}
}
else
{ // set from presets
String p = getStringOption_("NuXL:presets");
String custom_presets_file = getStringOption_("NuXL:presets_file");
NuXLPresets::getPresets(p, custom_presets_file, target_nucleotides, mappings, modifications, fragment_adducts, can_cross_link);
// set if DNA or RNA preset
if (p.hasSubstring("RNA"))
{
isRNA = true;
}
else if (p.hasSubstring("DNA"))
{
isRNA = false;
}
}
// convert string to set
for (const auto& c : can_cross_link) { can_xl_.insert(c); } // sort and make unique
String sequence_restriction = getStringOption_("NuXL:sequence");
Int max_nucleotide_length = getIntOption_("NuXL:length");
bool cysteine_adduct = getFlag_("NuXL:CysteineAdduct");
/////////////////////////////////////////////////////////////////////////////
// Generate rules mapping precursor/MS1 adducts via nucleotides to fragment adducts
// generate mapping from empirical formula to mass and empirical formula to (one or more) precursor adducts
NuXLModificationMassesResult mm;
if (max_nucleotide_length != 0)
{
mm = NuXLModificationsGenerator::initModificationMassesNA(
target_nucleotides,
nt_groups,
can_xl_,
mappings,
modifications,
sequence_restriction,
cysteine_adduct,
max_nucleotide_length);
}
if (!getFlag_("NuXL:only_xl"))
{
mm.formula2mass[""] = 0; // insert "null" modification; otherwise peptides without NA would not be searched
mm.mod_combinations[""].insert("none");
}
// parse tool parameter and generate all fragment adducts
// first, we determine which fragment adducts can be generated from a single nucleotide (that has no losses)
NuXLParameterParsing::NucleotideToFragmentAdductMap nucleotide_to_fragment_adducts = NuXLParameterParsing::getTargetNucleotideToFragmentAdducts(fragment_adducts);
// calculate all feasible fragment adducts from all possible precursor adducts
NuXLParameterParsing::PrecursorsToMS2Adducts all_feasible_fragment_adducts = NuXLParameterParsing::getAllFeasibleFragmentAdducts(mm, nucleotide_to_fragment_adducts, can_xl_, true, isRNA);
std::vector<NuXLFragmentAdductDefinition> all_possible_marker_ion_masses = NuXLParameterParsing::getMarkerIonsMassSet(all_feasible_fragment_adducts);
PeakSpectrum all_possible_marker_ion_sub_score_spectrum_z1;
// add shifted marker ions of charge 1
all_possible_marker_ion_sub_score_spectrum_z1.getStringDataArrays().resize(1); // annotation
all_possible_marker_ion_sub_score_spectrum_z1.getIntegerDataArrays().resize(1); // annotation
NuXLFragmentIonGenerator::addMS2MarkerIons(
all_possible_marker_ion_masses,
all_possible_marker_ion_sub_score_spectrum_z1,
all_possible_marker_ion_sub_score_spectrum_z1.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
all_possible_marker_ion_sub_score_spectrum_z1.getStringDataArrays()[0]);
NuXLFDR fdr(report_top_hits);
// load MS2 map
PeakMap spectra;
MzMLFile f;
f.setLogType(log_type_);
map<String, PrecursorPurity::PurityScores> purities = calculatePrecursorPurities_(in_mzml, precursor_mass_tolerance, precursor_mass_tolerance_unit_ppm);
// define percolator feature set
StringList data_dependent_features; // percolator features that only exist e.g., if MS1 spectra were present
if (!purities.empty()) data_dependent_features << "precursor_purity";
PeakFileOptions options;
options.clearMSLevels();
options.addMSLevel(2);
f.getOptions() = options;
f.load(in_mzml, spectra);
spectra.sortSpectra(true);
for (const auto& s : spectra) { if (!s.getIntegerDataArrays().empty()) throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Input spectra must not contain integer data arrays."); }
// determine IM format (if any)
auto [IM_format, IM_unit] = getMS2IMType(spectra);
has_IM_ = (IM_unit != DriftTimeUnit::NONE);
if (has_IM_)
{
OPENMS_LOG_INFO << "Adding Ion Mobility to feature set." << std::endl;
data_dependent_features << "IM";
}
// convert 1/k0 to CCS
if (IM_unit == DriftTimeUnit::VSSC)
{
convertVSSCToCCS(spectra);
}
// all data dependent features (IM available or not, precursor intensities from MS1 available etc.) are known. We can define percolator features.
definePercolatorFeatureSet_(data_dependent_features);
// only executed if we have a pre-search with enough calibrants
if (ic.getCalibrationPoints().size() > 1)
{
MZTrafoModel::MODELTYPE md = MZTrafoModel::MODELTYPE::LINEAR;
bool use_RANSAC = true;
Size RANSAC_initial_points = (md == MZTrafoModel::MODELTYPE::LINEAR) ? 2 : 3;
Math::RANSACParam p(RANSAC_initial_points, 70, 10, 30, true); // TODO: check defaults (taken from tool)
MZTrafoModel::setRANSACParams(p);
// these limits are a little loose, but should prevent grossly wrong models without burdening the user with yet another parameter.
MZTrafoModel::setCoefficientLimits(25.0, 25.0, 0.5);
IntList ms_level = {1};
double rt_chunk = 300.0; // 5 minutes covered by each linear model
String qc_residual_path, qc_residual_png_path;
if (!ic.calibrate(spectra, ms_level, md, rt_chunk, use_RANSAC,
10.0,
5.0,
"",
"",
qc_residual_path,
qc_residual_png_path,
"Rscript"))
{
OPENMS_LOG_WARN << "\nCalibration failed. See error message above!" << std::endl;
}
}
progresslogger.startProgress(0, 1, "Filtering spectra...");
const double window_size = getDoubleOption_("window_size");
const size_t peak_count = getIntOption_("peak_count");
preprocessSpectra_(spectra,
// fragment_mass_tolerance,
// fragment_mass_tolerance_unit_ppm,
false, // keep charge as is
true, window_size, peak_count, purities); // annotate charge
progresslogger.endProgress();
progresslogger.startProgress(0, 1, "Calculate Nucleotide Tags...");
calculateNucleotideTags_(spectra, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, nucleotide_to_fragment_adducts);
progresslogger.endProgress();
// calculate peak intensity ranks and store in spec.getIntegerDataArrays()[NuXLConstants::IA_RANK_INDEX] / name "intensity_rank"
progresslogger.startProgress(0, 1, "Calculate intensity ranks...");
calculateIntensityRanks(spectra);
progresslogger.endProgress();
// calculate longest AA sequence tag and annotate its length in spec.getIntegerDataArrays()[NuXLConstants::IA_DENOVO_TAG_INDEX][0]
progresslogger.startProgress(0, 1, "Calculate AA Tags...");
calculateLongestAASequenceTag(spectra);
progresslogger.endProgress();
// build multimap of precursor mass to scan index (and perform some mass and length based filtering)
progresslogger.startProgress(0, 1, "Mapping precursors to scan...");
using MassToScanMultiMap = multimap<double, pair<Size, int>>;
MassToScanMultiMap multimap_mass_2_scan_index; // map precursor mass to scan index and (potential) isotopic misassignment
mapPrecursorMassesToScans(min_precursor_charge,
max_precursor_charge,
precursor_isotopes,
small_peptide_mass_filter_threshold,
peptide_min_size,
spectra,
multimap_mass_2_scan_index);
progresslogger.endProgress();
// preallocate storage for PSMs
vector<size_t> nr_candidates(spectra.size(), 0);
vector<size_t> matched_peaks(spectra.size(), 0);
vector<vector<NuXLAnnotatedHit> > annotated_XLs(spectra.size(), vector<NuXLAnnotatedHit>());
for (auto & a : annotated_XLs) { a.reserve(2 * report_top_hits); }
vector<vector<NuXLAnnotatedHit> > annotated_peptides(spectra.size(), vector<NuXLAnnotatedHit>());
for (auto & a : annotated_peptides) { a.reserve(2 * report_top_hits); }
#ifdef _OPENMP
// locking is done at the spectrum level to ensure good parallelisation
vector<omp_lock_t> annotated_XLs_lock(annotated_XLs.size());
for (size_t i = 0; i != annotated_XLs_lock.size(); i++) { omp_init_lock(&(annotated_XLs_lock[i])); }
vector<omp_lock_t> annotated_peptides_lock(annotated_peptides.size());
for (size_t i = 0; i != annotated_peptides_lock.size(); i++) { omp_init_lock(&(annotated_peptides_lock[i])); }
#endif
// load fasta file
progresslogger.startProgress(0, 1, "Load database from FASTA file...");
FASTAFile fastaFile;
vector<FASTAFile::FASTAEntry> fasta_db;
fastaFile.load(in_db, fasta_db);
progresslogger.endProgress();
// generate decoy protein sequences
if (generate_decoys)
{
progresslogger.startProgress(0, 1, "Generating decoys...");
ProteaseDigestion digestor;
const String enzyme = getStringOption_("peptide:enzyme");
digestor.setEnzyme(enzyme);
digestor.setMissedCleavages(0); // for decoy generation disable missed cleavages
// append decoy proteins
const size_t old_size = fasta_db.size();
for (size_t i = 0; i != old_size; ++i)
{
FASTAFile::FASTAEntry e = fasta_db[i];
std::vector<AASequence> output;
digestor.digest(AASequence::fromString(e.sequence), output);
// generate decoy peptides from current digest
e.sequence = "";
for (const auto & aas : output)
{
if (aas.size() <= 2) { e.sequence += aas.toUnmodifiedString(); continue; }
// static const DecoyGenerator dg;
// e.sequence += dg.reversePeptides(aas, enzyme).toUnmodifiedString();
DecoyGenerator dg; // important: create inside the loop with a fixed seed; otherwise the same peptide in different proteins would yield different decoys -> many more decoys than targets
dg.setSeed(4711);
for (Size d = 0; d < decoy_factor; ++d) // decoy_factor = how many decoys to generate per target
{
e.sequence += dg.shufflePeptides(aas, enzyme).toUnmodifiedString();
}
}
e.identifier = "DECOY_" + e.identifier;
fasta_db.push_back(e);
}
// randomize the order of targets and decoys to avoid a global
// bias in cases where a target and a decoy have the same score (we always keep the first best-scoring one)
Math::RandomShuffler r{4711};
r.portable_random_shuffle(fasta_db.begin(),fasta_db.end());
progresslogger.endProgress();
}
// set up enzyme
const Size missed_cleavages = getIntOption_("peptide:missed_cleavages");
ProteaseDigestion digestor;
digestor.setEnzyme(getStringOption_("peptide:enzyme"));
digestor.setMissedCleavages(missed_cleavages);
progresslogger.startProgress(0, (Size)(fasta_db.end() - fasta_db.begin()), "Scoring peptide models against spectra...");
// lookup for processed peptides. must be defined outside of omp section and synchronized
set<StringView> processed_petides;
// set minimum size of peptide after digestion
Size min_peptide_length = (Size)getIntOption_("peptide:min_size");
Size max_peptide_length = (Size)getIntOption_("peptide:max_size");
Size count_proteins(0), count_peptides(0);
Size count_decoy_peptides(0), count_target_peptides(0);
#ifdef _OPENMP
#pragma omp parallel for schedule(guided)
#endif
for (SignedSize fasta_index = 0; fasta_index < (SignedSize)fasta_db.size(); ++fasta_index) // iterate over all proteins from the fasta file
{
#ifdef _OPENMP
#pragma omp atomic
#endif
++count_proteins;
IF_MASTERTHREAD
{
progresslogger.setProgress((SignedSize)count_proteins);
}
vector<StringView> current_digest;
auto const & current_fasta_entry = fasta_db[fasta_index];
bool is_decoy = current_fasta_entry.identifier.size() > 5 && current_fasta_entry.identifier[5] == '_'; // faster check than current_fasta_entry.identifier.hasPrefix("DECOY_")
// digest the protein into peptides (filter out too short or too long peptides)
digestor.digestUnmodified(current_fasta_entry.sequence, current_digest, min_peptide_length, max_peptide_length);
// for each peptide of the current digest
for (auto cit = current_digest.begin(); cit != current_digest.end(); ++cit)
{
bool already_processed = false;
#ifdef _OPENMP
#pragma omp critical (processed_peptides_access)
#endif
{
// skip peptide (and all modified variants) if already processed
if (processed_petides.find(*cit) != processed_petides.end())
{
already_processed = true;
}
else
{
processed_petides.insert(*cit);
}
}
if (already_processed) { continue; }
#ifdef _OPENMP
#pragma omp atomic
#endif
++count_peptides;
if (is_decoy)
{
#ifdef _OPENMP
#pragma omp atomic
#endif
++count_decoy_peptides;
}
else
{
#ifdef _OPENMP
#pragma omp atomic
#endif
++count_target_peptides;
}
const String unmodified_sequence = cit->getString();
// only process peptides without ambiguous amino acids (placeholder / any amino acid)
if (unmodified_sequence.find_first_of("XBZ") != std::string::npos) continue;
// determine which residues might give rise to an immonium ion
ImmoniumIonsInPeptide iip(unmodified_sequence);
// create an AASequence object from the peptide sequence and modify it to contain static modification
AASequence aas = AASequence::fromString(unmodified_sequence);
ModifiedPeptideGenerator::applyFixedModifications(fixed_modifications, aas);
// use the peptide with fixed modification(s) to generate all variably modified versions of the peptide (e.g., with Oxidation (M))
vector<AASequence> all_modified_peptides;
ModifiedPeptideGenerator::applyVariableModifications(variable_modifications, aas, max_variable_mods_per_peptide, all_modified_peptides);
for (SignedSize mod_pep_idx = 0; mod_pep_idx < (SignedSize)all_modified_peptides.size(); ++mod_pep_idx)
{ // for all (modified) peptide sequences in the digest of the current protein
const AASequence& fixed_and_variable_modified_peptide = all_modified_peptides[mod_pep_idx];
double current_peptide_mass_without_NA = fixed_and_variable_modified_peptide.getMonoWeight();
// template m/z values for the theoretical b- and y-ion series (charge 1); generated once per peptide and reused below
vector<double> total_loss_template_z1_b_ions, total_loss_template_z1_y_ions;
// spectrum containing additional peaks for sub scoring
PeakSpectrum marker_ions_sub_score_spectrum;
// iterate over all NA sequences, calculate peptide mass, and generate complete loss spectrum only once as this can potentially be reused
Size NA_mod_index = 0;
// for the current variably and statically modified peptide, apply all precursor adducts (RNA/DNA oligos with/without losses)
for (std::map<String, double>::const_iterator na_mod_it = mm.formula2mass.begin();
na_mod_it != mm.formula2mass.end();
++na_mod_it, ++NA_mod_index)
{
const double precursor_na_mass = na_mod_it->second;
const double current_peptide_mass = current_peptide_mass_without_NA + precursor_na_mass; // add NA mass
// determine MS2 precursors that match to the current peptide mass
MassToScanMultiMap::const_iterator low_it, up_it;
if (precursor_mass_tolerance_unit_ppm) // ppm
{
low_it = multimap_mass_2_scan_index.lower_bound(current_peptide_mass - current_peptide_mass * precursor_mass_tolerance * 1e-6);
up_it = multimap_mass_2_scan_index.upper_bound(current_peptide_mass + current_peptide_mass * precursor_mass_tolerance * 1e-6);
}
else // Dalton
{
low_it = multimap_mass_2_scan_index.lower_bound(current_peptide_mass - precursor_mass_tolerance);
up_it = multimap_mass_2_scan_index.upper_bound(current_peptide_mass + precursor_mass_tolerance);
}
if (low_it == up_it) { continue; } // no matching precursor in data
// add peaks for b- and y- ions with charge 1 (sorted by m/z)
// total / complete loss spectra are generated for fast and (slow) full scoring
// only create the complete loss spectrum (= without RNA/DNA) once as this is rather costly and only needs to be done once per peptide
if (total_loss_template_z1_b_ions.empty())
{
generateTheoreticalMZsZ1_(fixed_and_variable_modified_peptide, Residue::ResidueType::BIon, total_loss_template_z1_b_ions);
generateTheoreticalMZsZ1_(fixed_and_variable_modified_peptide, Residue::ResidueType::YIon, total_loss_template_z1_y_ions);
}
// retrieve NA adduct name
auto mod_combinations_it = mm.mod_combinations.begin();
std::advance(mod_combinations_it, NA_mod_index);
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// all ion scoring
//
if (!fast_scoring_)
{
const auto& NA_adducts = mod_combinations_it->second; // set of all NA adducts for the current sum formula (e.g., U-H2O and C-NH3 have the same elemental composition)
auto NA_adduct_it = mod_combinations_it->second.begin();
for (size_t NA_adduct_amb_index = 0; NA_adduct_amb_index != NA_adducts.size(); ++NA_adduct_amb_index, ++NA_adduct_it)
{ // for all NA adducts with the current sum formula (e.g., U-H2O and C-NH3)
const String& precursor_na_adduct = *NA_adduct_it;
if (precursor_na_adduct == "none")
{
// score peptide without NA (same method as fast scoring)
for (auto l = low_it; l != up_it; ++l) // iterate on a copy so low_it stays valid for the other adducts of this formula
{
const Size & scan_index = l->second.first;
const PeakSpectrum & exp_spectrum = spectra[scan_index];
// count candidate for spectrum
#pragma omp atomic
++nr_candidates[scan_index];
//const double exp_pc_mass = l->first;
const int & isotope_error = l->second.second;
const int & exp_pc_charge = exp_spectrum.getPrecursors()[0].getCharge();
float total_loss_score(0),
tlss_MIC(0),
tlss_err(0),
tlss_Morph(0),
tlss_modds(0),
pc_MIC(0),
im_MIC(0);
size_t n_theoretical_peaks(0);
vector<double> intensity_linear(total_loss_template_z1_b_ions.size(), 0.0);
vector<double> b_ions(total_loss_template_z1_b_ions.size(), 0.0);
vector<double> y_ions(total_loss_template_z1_b_ions.size(), 0.0);
vector<bool> peak_matched(exp_spectrum.size(), false);
scorePeptideIons_(
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
total_loss_template_z1_b_ions,
total_loss_template_z1_y_ions,
current_peptide_mass_without_NA,
exp_pc_charge,
iip,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
intensity_linear,
b_ions,
y_ions,
peak_matched,
total_loss_score,
tlss_MIC,
tlss_Morph,
tlss_modds,
tlss_err,
pc_MIC,
im_MIC,
n_theoretical_peaks
);
const double tlss_total_MIC = tlss_MIC + im_MIC + (pc_MIC - floor(pc_MIC));
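// Note (assumption): pc_MIC appears to encode auxiliary information in its integer part
// (set in scorePeptideIons_); pc_MIC - floor(pc_MIC) therefore extracts only the
// fractional matched-ion-current contribution for the total MIC.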
// total_loss_score = total_loss_score - 0.22 * (double)cit->size();
if (badTotalLossScore(total_loss_score, tlss_Morph, tlss_total_MIC)) { continue; }
const double mass_error_ppm = (current_peptide_mass - l->first) / l->first * 1e6;
const double mass_error_score = pdf(gaussian_mass_error, mass_error_ppm) / pdf(gaussian_mass_error, 0.0);
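// The ratio pdf(g, x) / pdf(g, 0) normalizes the Gaussian density so that a perfect
// precursor match scores 1.0 and larger ppm errors decay towards 0; assuming a
// zero-mean Gaussian, this equals exp(-x^2 / (2 * sigma^2)).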
// add peptide hit
NuXLAnnotatedHit ah;
ah.NA_adduct_amb_index = NA_adduct_amb_index; // store index of the entry in the set of ambiguous precursor adducts
ah.mass_error_p = mass_error_score;
ah.sequence = *cit; // copy StringView
ah.peptide_mod_index = mod_pep_idx;
ah.MIC = tlss_MIC;
ah.err = tlss_err;
ah.Morph = tlss_Morph;
ah.modds = tlss_modds;
ah.total_loss_score = total_loss_score;
ah.immonium_score = im_MIC;
ah.precursor_score = pc_MIC;
ah.total_MIC = tlss_total_MIC;
ah.NA_mod_index = NA_mod_index;
ah.isotope_error = isotope_error;
auto range = make_pair(intensity_linear.begin(), intensity_linear.end());
ah.ladder_score = ladderScore_(range) / (double)intensity_linear.size();
range = longestCompleteLadder_(intensity_linear.begin(), intensity_linear.end());
if (range.second != range.first)
{
ah.sequence_score = ladderScore_(range) / (double)intensity_linear.size();
}
RankScores rankscores = rankScores_(exp_spectrum, peak_matched);
ah.explained_peak_fraction = rankscores.explained_peak_fraction;
if (rankscores.explained_peaks > 0) ah.matched_theo_fraction = rankscores.explained_peaks / (float)n_theoretical_peaks;
ah.wTop50 = rankscores.wTop50;
// do we have at least one ladder peak
// const XLTags longest_tags = getLongestLadderWithShift(intensity_linear, vector<double>());
const XLTags longest_tags = getLongestABYLadderWithShift(b_ions, y_ions, vector<double>(), vector<double>());
#ifdef FILTER_BAD_SCORES_ID_TAGS
if (longest_tags.tag_unshifted == 0) continue;
#endif
ah.tag_XLed = longest_tags.tag_XLed;
ah.tag_unshifted = longest_tags.tag_unshifted;
ah.tag_shifted = longest_tags.tag_shifted;
// combined score
//const double tags = exp_spectrum.getFloatDataArrays()[2][0];
ah.n_theoretical_peaks = n_theoretical_peaks;
// count matched peaks for spectrum
#pragma omp atomic
matched_peaks[scan_index] += (size_t)ah.Morph;
ah.score = OpenNuXL::calculateCombinedScore(ah/*false, tags*/);
//ah.score = OpenNuXL::calculateFastScore(ah); // TODO: check whether the fast score would work here as well
#ifdef DEBUG_OpenNuXL
OPENMS_LOG_DEBUG << "best score in pre-score: " << ah.score << endl;
#endif
#ifdef _OPENMP
omp_set_lock(&(annotated_peptides_lock[scan_index]));
#endif
{
annotated_peptides[scan_index].emplace_back(move(ah));
// prevent vector from growing indefinitely (memory) but don't shrink the vector every time
if (annotated_peptides[scan_index].size() >= 2 * report_top_hits)
{
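// partial_sort suffices here: only the best report_top_hits entries need to be
// ordered; the unsorted tail is discarded by the resize below.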
std::partial_sort(annotated_peptides[scan_index].begin(), annotated_peptides[scan_index].begin() + report_top_hits, annotated_peptides[scan_index].end(), NuXLAnnotatedHit::hasBetterScore);
annotated_peptides[scan_index].resize(report_top_hits);
}
}
#ifdef _OPENMP
omp_unset_lock(&(annotated_peptides_lock[scan_index]));
#endif
}
}
else // score peptide with NA MS1 adduct
{
// generate all partial loss spectra (excluding the complete loss spectrum) merged into one spectrum
// get NA fragment shifts in the MS2 (based on the precursor RNA/DNA)
auto const & all_NA_adducts = all_feasible_fragment_adducts.at(precursor_na_adduct);
const vector<NucleotideToFeasibleFragmentAdducts>& feasible_MS2_adducts = all_NA_adducts.feasible_adducts;
// get marker ions
const vector<NuXLFragmentAdductDefinition>& marker_ions = all_NA_adducts.marker_ions;
PeakSpectrum marker_ions_sub_score_spectrum_z1;
// add shifted marker ions of charge 1
marker_ions_sub_score_spectrum_z1.getStringDataArrays().resize(1); // annotation
marker_ions_sub_score_spectrum_z1.getIntegerDataArrays().resize(1); // charges
NuXLFragmentIonGenerator::addMS2MarkerIons(
marker_ions,
marker_ions_sub_score_spectrum_z1,
marker_ions_sub_score_spectrum_z1.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
marker_ions_sub_score_spectrum_z1.getStringDataArrays()[0]);
//cout << "'" << precursor_na_adduct << "'" << endl;
//OPENMS_POSTCONDITION(!feasible_MS2_adducts.empty(),
// String("FATAL: No feasible adducts for " + precursor_na_adduct).c_str());
// Do we have nucleotide-specific fragmentation adducts for the current NA adduct on the precursor?
// If so, generate spectra for the shifted ion series and score individually for every nucleotide.
for (auto const & nuc_2_adducts : feasible_MS2_adducts)
{
// determine current nucleotide and associated partial losses
const char& cross_linked_nucleotide = nuc_2_adducts.first;
const vector<NuXLFragmentAdductDefinition>& partial_loss_modification = nuc_2_adducts.second;
// e.g., a precursor adduct of T-C4H5N3O1 is not feasible as it would lead to a negative nucleotide count (N(-1)).
// Such cases should be filtered out during generation of feasible fragment adducts;
// this is just a safeguard to prevent regressions.
assert(!partial_loss_modification.empty());
if (partial_loss_modification.empty()) OPENMS_LOG_ERROR << "Empty partial loss modification" << endl;
// nucleotide is associated with certain NA-related fragment losses?
vector<double> partial_loss_template_z1_bions, partial_loss_template_z1_yions;
if (!partial_loss_modification.empty())
{
generateTheoreticalMZsZ1_(fixed_and_variable_modified_peptide, Residue::BIon, partial_loss_template_z1_bions);
generateTheoreticalMZsZ1_(fixed_and_variable_modified_peptide, Residue::YIon, partial_loss_template_z1_yions);
}
for (auto l = low_it; l != up_it; ++l) // iterate on a copy so low_it stays valid for the next nucleotide
{
const Size & scan_index = l->second.first;
const PeakSpectrum & exp_spectrum = spectra[scan_index];
//////////////////////////////////////////
// ID-Filter
if (skip_peptide_spectrum.find(exp_spectrum.getNativeID()) != skip_peptide_spectrum.end()) { continue; }
#pragma omp atomic
++nr_candidates[scan_index]; // count candidate for spectrum
const int & isotope_error = l->second.second;
float tlss_MIC(0),
tlss_err(1.0),
tlss_Morph(0),
tlss_modds(0),
partial_loss_sub_score(0),
marker_ions_sub_score(0),
total_loss_score(0),
pc_MIC(0),
im_MIC(0);
size_t n_theoretical_peaks(0);
const int & exp_pc_charge = exp_spectrum.getPrecursors()[0].getCharge();
vector<double> intensity_linear(total_loss_template_z1_b_ions.size(), 0.0);
vector<bool> peak_matched(exp_spectrum.size(), false);
vector<double> b_ions(total_loss_template_z1_b_ions.size(), 0.0); // b & a ions
vector<double> y_ions(total_loss_template_z1_b_ions.size(), 0.0);
scorePeptideIons_(
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
total_loss_template_z1_b_ions,
total_loss_template_z1_y_ions,
current_peptide_mass_without_NA,
exp_pc_charge,
iip,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
intensity_linear,
b_ions,
y_ions,
peak_matched,
total_loss_score,
tlss_MIC,
tlss_Morph,
tlss_modds,
tlss_err,
pc_MIC,
im_MIC,
n_theoretical_peaks
);
const double tlss_total_MIC = tlss_MIC + im_MIC + (pc_MIC - floor(pc_MIC));
if (badTotalLossScore(total_loss_score, tlss_Morph, tlss_total_MIC)) { continue; }
vector<double> intensity_xls(total_loss_template_z1_b_ions.size(), 0.0);
vector<double> b_xl_ions(b_ions.size(), 0.0);
vector<double> y_xl_ions(b_ions.size(), 0.0);
float plss_MIC(0),
plss_err(fragment_mass_tolerance),
plss_Morph(0),
plss_modds(0),
plss_pc_MIC(0),
plss_im_MIC(0);
scoreXLIons_(partial_loss_modification,
iip,
exp_spectrum,
current_peptide_mass_without_NA,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
partial_loss_template_z1_bions,
partial_loss_template_z1_yions,
marker_ions_sub_score_spectrum_z1,
intensity_xls,
b_xl_ions,
y_xl_ions,
peak_matched,
partial_loss_sub_score,
marker_ions_sub_score,
plss_MIC,
plss_err,
plss_Morph,
plss_modds,
plss_pc_MIC,
plss_im_MIC,
n_theoretical_peaks,
all_possible_marker_ion_sub_score_spectrum_z1 // for better MIC calculation
);
const double total_MIC = tlss_MIC + im_MIC + (pc_MIC - floor(pc_MIC)) + plss_MIC + (plss_pc_MIC - floor(plss_pc_MIC)) + plss_im_MIC + marker_ions_sub_score;
// decreases number of hits (especially difficult cases - but also number of false discoveries)
if (filter_bad_partial_loss_scores && badPartialLossScore(tlss_Morph, plss_Morph, plss_MIC, plss_im_MIC, (plss_pc_MIC - floor(plss_pc_MIC)), marker_ions_sub_score))
{
continue;
}
const double mass_error_ppm = (current_peptide_mass - l->first) / l->first * 1e6;
const double mass_error_score = pdf(gaussian_mass_error, mass_error_ppm) / pdf(gaussian_mass_error, 0.0);
// add peptide hit
NuXLAnnotatedHit ah;
ah.NA_adduct_amb_index = NA_adduct_amb_index; // store index of the entry in the set of ambiguous precursor adducts
ah.mass_error_p = mass_error_score;
ah.sequence = *cit; // copy StringView
ah.peptide_mod_index = mod_pep_idx;
/*
/////////////////////////////////////////////////////////////////////////////// test recalculate hyperscore on merged XL/non-XL ladders
size_t y_ion_count = std::count_if(y_ions.begin(), y_ions.end(), [](double d) { return d > 1e-6; });
size_t b_ion_count = std::count_if(b_ions.begin(), b_ions.end(), [](double d) { return d > 1e-6; });
size_t dot_product = std::accumulate(intensity_xls.begin(), intensity_xls.end(), 0.0);
dot_product = std::accumulate(intensity_linear.begin(), intensity_linear.end(), dot_product);
const double yFact = logfactorial_(y_ion_count);
const double bFact = logfactorial_(b_ion_count);
partial_loss_sub_score = log1p(dot_product) + yFact + bFact;
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
*/
ah.total_loss_score = total_loss_score;
ah.MIC = tlss_MIC;
ah.immonium_score = im_MIC;
ah.precursor_score = pc_MIC;
ah.err = tlss_err;
ah.Morph = tlss_Morph;
ah.modds = tlss_modds;
ah.pl_MIC = plss_MIC;
ah.pl_err = plss_err;
ah.pl_Morph = plss_Morph;
ah.pl_modds = plss_modds;
ah.pl_pc_MIC = plss_pc_MIC;
ah.pl_im_MIC = plss_im_MIC;
ah.cross_linked_nucleotide = cross_linked_nucleotide;
ah.total_MIC = total_MIC;
// scores from shifted peaks
ah.marker_ions_score = marker_ions_sub_score;
ah.partial_loss_score = partial_loss_sub_score;
ah.NA_mod_index = NA_mod_index;
ah.isotope_error = isotope_error;
auto range = make_pair(intensity_linear.begin(), intensity_linear.end());
ah.ladder_score = ladderScore_(range) / (double)intensity_linear.size();
range = longestCompleteLadder_(intensity_linear.begin(), intensity_linear.end());
if (range.second != range.first)
{
ah.sequence_score = ladderScore_(range) / (double)intensity_linear.size();
}
RankScores rankscores = rankScores_(exp_spectrum, peak_matched);
ah.explained_peak_fraction = rankscores.explained_peak_fraction;
if (rankscores.explained_peaks > 0) ah.matched_theo_fraction = rankscores.explained_peaks / (float)n_theoretical_peaks;
ah.wTop50 = rankscores.wTop50;
// does it have at least one shift from non-cross-linked AA to the neighboring cross-linked one
// const XLTags longest_tags = getLongestLadderWithShift(intensity_linear, intensity_xls);
const XLTags longest_tags = getLongestABYLadderWithShift(b_ions, y_ions, b_xl_ions, y_xl_ions);
#ifdef FILTER_BAD_SCORES_ID_TAGS
if (longest_tags.tag_XLed == 0) { continue; }
#endif
ah.tag_XLed = longest_tags.tag_XLed;
ah.tag_unshifted = longest_tags.tag_unshifted;
ah.tag_shifted = longest_tags.tag_shifted;
// combined score
//const double tags = exp_spectrum.getFloatDataArrays()[2][0];
ah.n_theoretical_peaks = n_theoretical_peaks;
#pragma omp atomic
matched_peaks[scan_index] += (size_t)ah.Morph + (size_t)ah.pl_Morph;
ah.score = OpenNuXL::calculateCombinedScore(ah/*, true,tags*/ );
#ifdef DEBUG_OpenNuXL
OPENMS_LOG_DEBUG << "best score in pre-score: " << ah.score << endl;
#endif
#ifdef _OPENMP
omp_set_lock(&(annotated_XLs_lock[scan_index]));
#endif
{
annotated_XLs[scan_index].emplace_back(move(ah));
// prevent vector from growing indefinitely (memory) but don't shrink the vector every time
if (annotated_XLs[scan_index].size() >= 2 * report_top_hits)
{
std::partial_sort(annotated_XLs[scan_index].begin(), annotated_XLs[scan_index].begin() + report_top_hits, annotated_XLs[scan_index].end(), NuXLAnnotatedHit::hasBetterScore);
annotated_XLs[scan_index].resize(report_top_hits);
}
}
#ifdef _OPENMP
omp_unset_lock(&(annotated_XLs_lock[scan_index]));
#endif
}
} // for every nucleotide in the precursor
}
}
}
else // fast scoring
{
for (auto l = low_it; l != up_it; ++l) // iterate on a copy so low_it is not advanced
{
const Size & scan_index = l->second.first;
const String& precursor_na_adduct = *mod_combinations_it->second.begin(); // for fast scoring it is sufficient to consider any one of the adducts with this mass and sum formula (e.g., C-H3N vs. U-H2O)
MSSpectrum& exp_spectrum = spectra[scan_index];
if (precursor_na_adduct != "none" && skip_peptide_spectrum.find(exp_spectrum.getNativeID()) != skip_peptide_spectrum.end()) continue;
#pragma omp atomic
++nr_candidates[scan_index];
const int & isotope_error = l->second.second;
const double & exp_pc_mass = l->first;
// generate PSMs for spectrum[scan_index] and add them to annotated hits
addPSMsTotalLossScoring_(
spectra[scan_index],
*cit, // string view on unmodified sequence
mod_pep_idx, // index of peptide mod
NA_mod_index, // index of NA mod
current_peptide_mass,
current_peptide_mass_without_NA,
exp_pc_mass,
iip,
isotope_error,
total_loss_template_z1_b_ions,
total_loss_template_z1_y_ions,
gaussian_mass_error,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
annotated_peptides[scan_index],
#ifdef _OPENMP
annotated_peptides_lock[scan_index],
#endif
report_top_hits
);
}
}
}
}
}
} // end: all proteins have been processed
progresslogger.endProgress();
OPENMS_LOG_INFO << "Proteins: " << count_proteins << endl;
OPENMS_LOG_INFO << "Peptides: " << count_peptides << endl;
OPENMS_LOG_INFO << "Peptides (targets): " << count_target_peptides << endl;
OPENMS_LOG_INFO << "Peptides (decoys): " << count_decoy_peptides << endl;
OPENMS_LOG_INFO << "Processed peptides: " << processed_petides.size() << endl;
PeptideIdentificationList peptide_ids;
vector<ProteinIdentification> protein_ids;
progresslogger.startProgress(0, 1, "Post-processing PSMs... (spectra filtering)");
///////////////////////////////////////////////////////////////////////////////////////////////////
// Localization
//
// reload spectra from disk with the same settings as before (important to keep the same spectrum indices)
spectra.clear(true);
f.load(in_mzml, spectra);
spectra.sortSpectra(true);
//auto [IM_format, IM_unit] = getMS2IMType(spectra);
// convert 1/k0 to CCS
if (IM_unit == DriftTimeUnit::VSSC)
{
convertVSSCToCCS(spectra);
}
preprocessSpectra_(spectra,
// fragment_mass_tolerance,
// fragment_mass_tolerance_unit_ppm,
false, // do not generate single-charge copies
true, // annotate charge
window_size, peak_count, purities);
calculateNucleotideTags_(spectra, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, nucleotide_to_fragment_adducts);
calculateIntensityRanks(spectra);
calculateLongestAASequenceTag(spectra);
progresslogger.endProgress();
progresslogger.startProgress(0, 1, "Post-processing PSMs... (localization of cross-links)");
assert(spectra.size() == annotated_XLs.size());
assert(spectra.size() == annotated_peptides.size());
// remove all but top n scoring for localization (usually all but the first one)
filterTopNAnnotations_(annotated_XLs, report_top_hits);
filterTopNAnnotations_(annotated_peptides, report_top_hits);
postScoreHits_(spectra,
annotated_XLs,
annotated_peptides,
mm,
fixed_modifications,
variable_modifications,
max_variable_mods_per_peptide,
fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
all_feasible_fragment_adducts);
progresslogger.endProgress();
progresslogger.startProgress(0, 1, "Post-processing PSMs... (annotation)");
// remove all but top n scoring PSMs again
// Note: this is currently necessary, as postScoreHits_ might reintroduce nucleotide-specific hits for fast scoring
filterTopNAnnotations_(annotated_XLs, report_top_hits);
filterTopNAnnotations_(annotated_peptides, report_top_hits);
postProcessHits_(spectra,
annotated_XLs,
annotated_peptides,
protein_ids,
peptide_ids,
mm,
fixed_modifications,
variable_modifications,
max_variable_mods_per_peptide,
purities,
nr_candidates,
matched_peaks);
progresslogger.endProgress();
protein_ids[0].setPrimaryMSRunPath({"file://" + File::basename(in_mzml)});
// reindex ids
PeptideIndexing indexer;
Param param_pi = indexer.getParameters();
param_pi.setValue("decoy_string_position", "prefix");
param_pi.setValue("enzyme:name", getStringOption_("peptide:enzyme"));
param_pi.setValue("enzyme:specificity", "full");
param_pi.setValue("missing_decoy_action", "silent");
param_pi.setValue("write_protein_sequence", "true");
param_pi.setValue("write_protein_description", "true");
indexer.setParameters(param_pi);
PeptideIndexing::ExitCodes indexer_exit = indexer.run(fasta_db, protein_ids, peptide_ids);
if ((indexer_exit != PeptideIndexing::ExitCodes::EXECUTION_OK) &&
(indexer_exit != PeptideIndexing::ExitCodes::PEPTIDE_IDS_EMPTY))
{
if (indexer_exit == PeptideIndexing::ExitCodes::DATABASE_EMPTY)
{
return INPUT_FILE_EMPTY;
}
else if (indexer_exit == PeptideIndexing::ExitCodes::UNEXPECTED_RESULT)
{
return UNEXPECTED_RESULT;
}
else
{
return UNKNOWN_ERROR;
}
}
StringList meta_values_to_export;
meta_values_to_export.push_back("NuXL:total_loss_score");
meta_values_to_export.push_back("NuXL:partial_loss_score");
meta_values_to_export.push_back("CountSequenceIsTop");
meta_values_to_export.push_back("CountSequenceCharges");
meta_values_to_export.push_back("CountSequenceIsXL");
meta_values_to_export.push_back("CountSequenceIsPeptide");
meta_values_to_export.push_back("NuXL:MIC");
meta_values_to_export.push_back("NuXL:pl_pc_MIC");
meta_values_to_export.push_back("NuXL:pl_MIC");
meta_values_to_export.push_back("nr_candidates");
meta_values_to_export.push_back("-ln(poisson)");
meta_values_to_export.push_back("isotope_error");
if (RTpredict)
{
meta_values_to_export.push_back("RT_predict");
meta_values_to_export.push_back("RT_error");
}
// annotate NuXL related information to hits and create report
vector<NuXLReportRow> csv_rows = NuXLReport::annotate(spectra, peptide_ids, meta_values_to_export, marker_ions_tolerance);
if (generate_decoys)
{
map<double, double, std::greater<double>> map_score2ppm;
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
if (peptide_ids[index].getHits().empty()) continue;
if (!peptide_ids[index].getHits()[0].isDecoy())
{
double ppm_error = peptide_ids[index].getHits()[0].getMetaValue(OpenMS::Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM);
map_score2ppm[peptide_ids[index].getHits()[0].getScore()] = ppm_error;
}
}
// calculate the mean ppm error (overall, and separately for negative and positive errors) from the top-scoring PSMs (at most 1000 considered each)
double mean(0), mean_negative(0), mean_positive(0);
size_t c(0), c_negative(0), c_positive(0);
for (auto it = map_score2ppm.begin(); it != map_score2ppm.end(); ++it)
{
mean += it->second;
++c;
if (c >= 1000) break;
}
if (c != 0) { mean /= c; }
for (auto it = map_score2ppm.begin(); it != map_score2ppm.end(); ++it)
{
if (it->second > 0) continue; // skip positive ppm
mean_negative += it->second;
++c_negative;
if (c_negative >= 1000) break;
}
if (c_negative != 0) { mean_negative /= c_negative; }
for (auto it = map_score2ppm.begin(); it != map_score2ppm.end(); ++it)
{
if (it->second < 0) continue; // skip negative ppm
mean_positive += it->second;
++c_positive;
if (c_positive >= 1000) break;
}
if (c_positive != 0) { mean_positive /= c_positive; }
double sd(0), sd_negative(0), sd_positive(0);
auto it = map_score2ppm.begin();
for (size_t i = 0; i != c; ++i)
{
sd += pow(it->second - mean, 2.0);
if (it->second < 0) sd_negative += pow(it->second - mean_negative, 2.0);
if (it->second > 0) sd_positive += pow(it->second - mean_positive, 2.0);
++it;
}
if (c != 0)
{
sd = sqrt(1.0/static_cast<double>(c) * sd);
if (c_negative != 0)
{
sd_negative = sqrt(1.0/static_cast<double>(c_negative) * sd_negative);
}
if (c_positive != 0)
{
sd_positive = sqrt(1.0/static_cast<double>(c_positive) * sd_positive);
}
OPENMS_LOG_INFO << "mean ppm error: " << mean << " sd: " << sd << " 5*sd: " << 5*sd << " calculated based on " << c << " best ids." << endl;
OPENMS_LOG_INFO << "mean negative ppm error: " << mean_negative << " sd: " << sd_negative << " 5*sd: " << 5*sd_negative << " calculated based on " << c_negative << " best ids." << endl;
OPENMS_LOG_INFO << "mean positive ppm error: " << mean_positive << " sd: " << sd_positive << " 5*sd: " << 5*sd_positive << " calculated based on " << c_positive << " best ids." << endl;
}
if (filter_pc_mass_error && c != 0)
{
// as we are dealing with a very large search space, filter out all PSMs whose absolute precursor mass error deviates from the mean by more than 5 * sd
for (size_t index = 0; index != peptide_ids.size(); ++index)
{
vector<PeptideHit>& phs = peptide_ids[index].getHits();
if (phs.empty()) continue;
auto new_end = std::remove_if(phs.begin(), phs.end(),
[&sd, &mean](const PeptideHit & ph)
{
return fabs((double)ph.getMetaValue(Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM)) - fabs(mean) > 5.0*sd;
});
phs.erase(new_end, phs.end());
}
IDFilter::removeEmptyIdentifications(peptide_ids);
}
map_score2ppm.clear();
if (impute_decoy_medians)
{
OPENMS_LOG_INFO << "Imputing decoy medians." << endl;
// calculate median score of decoys for specific meta value
auto metaMedian = [](const PeptideIdentificationList & peptide_ids, const String& name)->double
{
vector<double> decoy_XL_scores;
for (const auto & pi : peptide_ids)
{
for (const auto & ph : pi.getHits())
{
const bool is_XL = (static_cast<int>(ph.getMetaValue("NuXL:isXL")) != 0);
if (!is_XL) continue; // skip linear peptides as these don't have the XL values set
if (!ph.isDecoy()) continue;
double score = ph.getMetaValue(name);
decoy_XL_scores.push_back(score);
}
}
std::sort(decoy_XL_scores.begin(), decoy_XL_scores.end(), greater<double>());
return Math::median(decoy_XL_scores.begin(), decoy_XL_scores.end());
};
/*
// all medians
auto metaMean = [](const PeptideIdentificationList & peptide_ids, const String name)->double
{
vector<double> decoy_XL_scores;
for (const auto & pi : peptide_ids)
{
for (const auto & ph : pi.getHits())
{
const bool is_XL = !(static_cast<int>(ph.getMetaValue("NuXL:isXL")) == 0);
if (!is_XL) continue; // skip linear peptides as these don't have the XL values set
if (!ph.isDecoy()) continue;
double score = ph.getMetaValue(name);
decoy_XL_scores.push_back(score);
}
}
std::sort(decoy_XL_scores.begin(), decoy_XL_scores.end(), greater<double>());
return Math::mean(decoy_XL_scores.begin(), decoy_XL_scores.end());
};
*/
map<String, double> medians;
for (const String mn : { "NuXL:marker_ions_score", "NuXL:partial_loss_score", "NuXL:pl_MIC", "NuXL:pl_err", "NuXL:pl_Morph", "NuXL:pl_modds", "NuXL:pl_pc_MIC", "NuXL:pl_im_MIC" })
{
medians[mn] = metaMedian(peptide_ids, mn);
OPENMS_LOG_DEBUG << "median(" << mn << "):" << medians[mn] << endl;
//medians[mn] = metaMean(peptide_ids, mn);
}
size_t imputed(0);
for (auto & pi : peptide_ids)
{
for (auto & ph : pi.getHits())
{
const bool is_XL = (static_cast<int>(ph.getMetaValue("NuXL:isXL")) != 0);
if (!is_XL)
{
for (const String mn : { "NuXL:marker_ions_score", "NuXL:partial_loss_score", "NuXL:pl_MIC", "NuXL:pl_err", "NuXL:pl_Morph", "NuXL:pl_modds", "NuXL:pl_pc_MIC", "NuXL:pl_im_MIC" })
{
ph.setMetaValue(mn, medians[mn]); // impute missing with medians
}
++imputed;
}
}
pi.sort();
// iterate over peptide hits and set rank
vector<PeptideHit>& phs = pi.getHits();
for (Size r = 0; r != phs.size(); ++r)
{
phs[r].setRank(static_cast<int>(r));
}
}
OPENMS_LOG_INFO << "Imputed XL features in " << imputed << " linear peptides." << endl;
}
// q-value at PSM level irrespective of class (XL/non-XL)
//fdr.QValueAtPSMLevel(peptide_ids);
if (optimize)
{
OPENMS_LOG_INFO << "Parameter optimization." << endl;
//optimizeFDR(peptide_ids);
NuXLLinearRescore::apply(peptide_ids);
OPENMS_LOG_DEBUG << "done." << endl;
}
/*
PeptideIdentificationList pep_pi, xl_pi;
fdr.calculatePeptideAndXLQValueAtPSMLevel(peptide_ids, pep_pi, xl_pi);
fdr.mergePeptidesAndXLs(pep_pi, xl_pi, peptide_ids);
xl_pi.clear();
pep_pi.clear();
*/
vector<string> positive_weights_features =
{ "NuXL:mass_error_p",
"NuXL:total_loss_score",
"NuXL:modds",
"NuXL:immonium_score",
"NuXL:MIC",
"NuXL:Morph",
"NuXL:total_MIC",
"NuXL:ladder_score",
"NuXL:sequence_score",
"NuXL:total_Morph",
"NuXL:total_HS",
"NuXL:tag_XLed",
"NuXL:tag_unshifted",
"NuXL:tag_shifted",
"NuXL:explained_peak_fraction",
"NuXL:theo_peak_fraction",
"NuXL:marker_ions_score",
"NuXL:partial_loss_score",
"NuXL:pl_MIC",
"NuXL:pl_Morph",
"NuXL:pl_modds",
"NuXL:pl_pc_MIC",
"NuXL:pl_im_MIC",
"NuXL:score" };
vector<string> negative_weights_features =
{ "NuXL:err",
"variable_modifications",
"isotope_error" };
/*Neutral?
<< "NuXL:err"
<< "NuXL:immonium_score"
<< "NuXL:precursor_score"
<< "NuXL:aminoacid_max_tag"
<< "NuXL:aminoacid_id_to_max_tag_ratio"
<< "nr_candidates"
<< "NuXL:wTop50"
<< "NuXL:isPhospho"
<< "NuXL:isXL"
<< "isotope_error"
<< "variable_modifications"
<< "precursor_intensity_log10"
<< "NuXL:NA_MASS_z0"
<< "NuXL:NA_length"
<< "nucleotide_mass_tags"
<< "n_theoretical_peaks";
*/
if (RTpredict)
{
NuXLFDR fdr(1); // 1=keep only top scoring per spectrum
PeptideIdentificationList pep_pi, xl_pi;
fdr.calculatePeptideAndXLQValueAtPSMLevel(peptide_ids, pep_pi, xl_pi);
IDFilter::keepNBestHits(xl_pi, 1);
IDFilter::filterHitsByScore(pep_pi, 0.05);
IDFilter::filterHitsByScore(xl_pi, 0.05);
IDFilter::removeEmptyIdentifications(xl_pi);
IDFilter::removeEmptyIdentifications(pep_pi);
pep_pi.insert(std::end(pep_pi),
std::make_move_iterator(std::begin(xl_pi)),
std::make_move_iterator(std::end(xl_pi)));
NuXLRTPrediction rt_pred;
rt_pred.train(in_mzml, pep_pi, protein_ids); // train on best XLs and best peptides
rt_pred.predict(peptide_ids); // predict on all
// add RT prediction as extra feature for percolator
auto search_parameters = protein_ids[0].getSearchParameters();
String new_features = (String)search_parameters.getMetaValue("extra_features") + String(",RT_error,RT_predict");
search_parameters.setMetaValue("extra_features", new_features);
protein_ids[0].setSearchParameters(search_parameters);
}
// write ProteinIdentifications and PeptideIdentifications to IdXML
IdXMLFile().store(out_idxml, protein_ids, peptide_ids);
// generate filtered results
IDFilter::keepNBestHits(peptide_ids, 1);
IDFilter::removeUnreferencedProteins(protein_ids, peptide_ids);
// split PSMs into XLs and non-XLs but keep only best one of both
OPENMS_LOG_INFO << "Calculating peptide and XL q-values." << endl;
String original_PSM_output_filename(out_idxml);
original_PSM_output_filename.substitute(".idXML", "_");
PeptideIdentificationList pep_pi, xl_pi;
if (extra_output_directory.empty())
{
fdr.calculatePeptideAndXLQValueAndFilterAtPSMLevel(protein_ids,
peptide_ids,
pep_pi,
peptide_FDR,
peptide_FDR, // for now we choose same peptide-level FDR = PSM-level FDR for non-cross-links
xl_pi,
XL_FDR,
XL_peptidelevel_FDR,
original_PSM_output_filename,
decoy_factor);
// copy the XL results (with the highest threshold, i.e. least filtering) to the output
if (!out_xl_idxml.empty())
{
File::copy(original_PSM_output_filename + String::number(xl_fdr_max, 4) + "_XLs.idXML", out_xl_idxml);
}
}
else
{ // use output_folder
String id_xml_out = extra_output_directory;
id_xml_out.ensureLastChar('/');
id_xml_out += File::basename(out_idxml).substitute(".idXML", "_");
fdr.calculatePeptideAndXLQValueAndFilterAtPSMLevel(protein_ids,
peptide_ids,
pep_pi,
peptide_FDR,
peptide_FDR, // for now we choose same peptide-level FDR = PSM-level FDR for non-cross-links
xl_pi,
XL_FDR,
XL_peptidelevel_FDR,
id_xml_out,
decoy_factor);
// copy the XL results (with the highest threshold, i.e. least filtering) to the output
if (!out_xl_idxml.empty())
{
File::copy(id_xml_out + String::number(xl_fdr_max, 4) + "_XLs.idXML", out_xl_idxml);
}
}
OPENMS_LOG_INFO << "done." << endl;
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// score recalibration with percolator
String percolator_executable = getStringOption_("percolator_executable");
bool sufficient_PSMs_for_score_recalibration = (xl_pi.size() + pep_pi.size()) >= 1000;
if (!percolator_executable.empty() && sufficient_PSMs_for_score_recalibration) // only try to call percolator if we have some PSMs
{
// run percolator on idXML
String perc_out = out_idxml;
perc_out.substitute(".idXML", "_perc.idXML");
String weights_out = out_idxml;
weights_out.substitute(".idXML", ".weights");
String pin = out_idxml;
pin.substitute(".idXML", ".tsv");
QStringList process_params;
process_params << "-in" << out_idxml.toQString()
<< "-out" << perc_out.toQString()
<< "-percolator_executable" << percolator_executable.toQString()
<< "-train_best_positive"
<< "-score_type" << "svm"
<< "-unitnorm"
<< "-post_processing_tdc"
// << "-nested_xval_bins" << "3"
<< "-weights" << weights_out.toQString()
<< "-out_pin" << pin.toQString();
if (getStringOption_("peptide:enzyme") == "Lys-C")
{
process_params << "-enzyme" << "lys-c";
}
// process_params << "-out_pout_target" << "merged_target.tab" << "-out_pout_decoy" << "merged_decoy.tab";
OPENMS_LOG_INFO << "Running percolator." << endl;
TOPPBase::ExitCodes exit_code = runExternalProcess_(QString("PercolatorAdapter"), process_params);
OPENMS_LOG_INFO << "done." << endl;
if (exit_code != EXECUTION_OK)
{
OPENMS_LOG_WARN << "Score recalibration failed." << endl;
}
else
{
// load back idXML
IdXMLFile().load(perc_out, protein_ids, peptide_ids);
// generate filtered results
IDFilter::keepNBestHits(peptide_ids, 1);
IDFilter::removeUnreferencedProteins(protein_ids, peptide_ids);
// annotate NuXL related information to hits and create report
vector<NuXLReportRow> csv_rows_percolator = NuXLReport::annotate(spectra, peptide_ids, meta_values_to_export, marker_ions_tolerance);
// save report
if (!out_tsv.empty())
{
TextFile csv_file;
csv_file.addLine(NuXLReportRowHeader().getString("\t", meta_values_to_export));
for (const NuXLReportRow& r : csv_rows_percolator)
{
csv_file.addLine(r.getString("\t"));
}
const String out_percolator_tsv = FileHandler::stripExtension(out_tsv) + "_perc.tsv";
csv_file.store(out_percolator_tsv);
}
PeptideIdentificationList pep_pi, xl_pi;
String percolator_PSM_output_filename(out_idxml);
percolator_PSM_output_filename.substitute(".idXML", "_perc_");
OPENMS_LOG_INFO << "Calculating peptide and XL q-values for percolator results." << endl;
if (extra_output_directory.empty())
{
fdr.calculatePeptideAndXLQValueAndFilterAtPSMLevel(protein_ids,
peptide_ids,
pep_pi,
peptide_FDR,
peptide_FDR, // for now we choose same peptide-level FDR = PSM-level FDR for non-cross-links
xl_pi,
XL_FDR,
XL_peptidelevel_FDR,
percolator_PSM_output_filename,
decoy_factor);
// copy XL results (with highest threshold = little filtering) to output. TODO: first copy would not be needed if percolator succeeds
if (!out_xl_idxml.empty())
{
File::copy(percolator_PSM_output_filename + String::number(xl_fdr_max, 4) + "_XLs.idXML", out_xl_idxml);
}
}
else
{ // use output_folder
String id_xml_out = extra_output_directory;
id_xml_out.ensureLastChar('/');
id_xml_out += File::basename(out_idxml).substitute(".idXML", "_perc_");
fdr.calculatePeptideAndXLQValueAndFilterAtPSMLevel(protein_ids,
peptide_ids,
pep_pi,
peptide_FDR,
peptide_FDR, // for now we choose same peptide-level FDR = PSM-level FDR for non-cross-links
xl_pi,
XL_FDR,
XL_peptidelevel_FDR,
id_xml_out,
decoy_factor);
// copy XL results (with highest threshold=little filtering) to output TODO: first copy would not be needed if percolator succeeds
if (!out_xl_idxml.empty())
{
File::copy(id_xml_out + String::number(xl_fdr_max, 4) + "_XLs.idXML", out_xl_idxml);
}
}
OPENMS_LOG_INFO << "done." << endl;
}
}
else if (!sufficient_PSMs_for_score_recalibration)
{
OPENMS_LOG_WARN << "Too few PSMs for score recalibration. Skipped." << endl;
}
}
else // no decoys
{
// write ProteinIdentifications and PeptideIdentifications to IdXML
IdXMLFile().store(out_idxml, protein_ids, peptide_ids);
// TODO: probably not supported / wrong at this point
}
// save report
if (!out_tsv.empty())
{
TextFile csv_file;
csv_file.addLine(NuXLReportRowHeader().getString("\t", meta_values_to_export));
for (Size i = 0; i != csv_rows.size(); ++i)
{
csv_file.addLine(csv_rows[i].getString("\t"));
}
csv_file.store(out_tsv);
}
#ifdef _OPENMP
// free locks
for (size_t i = 0; i != annotated_XLs_lock.size(); i++) { omp_destroy_lock(&(annotated_XLs_lock[i])); }
for (size_t i = 0; i != annotated_peptides_lock.size(); i++) { omp_destroy_lock(&(annotated_peptides_lock[i])); }
#endif
return EXECUTION_OK;
}
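The branch above filters PSMs at peptide- and XL-level q-value thresholds. As a rough, self-contained illustration of the target-decoy q-value idea behind such filtering (this is a sketch, not the OpenMS implementation; all names are made up): the FDR at a score cutoff is #decoys/#targets among PSMs scoring at least as well, and the q-value is the minimal FDR over all cutoffs that still accept the PSM.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative target-decoy q-value computation (not the OpenMS API).
// Input: (score, is_decoy) pairs; output: q-value per PSM in descending-score order.
std::vector<double> qValues(std::vector<std::pair<double, bool>> psms)
{
  std::sort(psms.begin(), psms.end(),
            [](auto& a, auto& b) { return a.first > b.first; }); // best score first
  std::vector<double> q(psms.size());
  double decoys = 0, targets = 0;
  for (std::size_t i = 0; i < psms.size(); ++i)
  {
    (psms[i].second ? decoys : targets) += 1;
    q[i] = targets > 0 ? decoys / targets : 1.0; // FDR estimate at this cutoff
  }
  // enforce monotonicity from the worst score upwards
  for (std::size_t i = psms.size(); i-- > 1; ) { q[i - 1] = std::min(q[i - 1], q[i]); }
  return q;
}
```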
// only used for fast scoring
static void postScorePartialLossFragments_(const Size peptide_size,
const PeakSpectrum &exp_spectrum,
double fragment_mass_tolerance,
bool fragment_mass_tolerance_unit_ppm,
const PeakSpectrum &partial_loss_spectrum_z1,
const PeakSpectrum &partial_loss_spectrum_z2,
const PeakSpectrum &marker_ions_sub_score_spectrum_z1,
float &partial_loss_sub_score,
float &marker_ions_sub_score,
float &plss_MIC,
//float &plss_err,
float &plss_Morph,
float &plss_modds)
{
OPENMS_PRECONDITION(fragment_mass_tolerance_unit_ppm, "absolute fragment mass tolerances not implemented.");
const SignedSize& exp_pc_charge = exp_spectrum.getPrecursors()[0].getCharge();
if (!marker_ions_sub_score_spectrum_z1.empty())
{
auto const & r = MorpheusScore::compute(fragment_mass_tolerance * 2.0,
fragment_mass_tolerance_unit_ppm,
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
marker_ions_sub_score_spectrum_z1,
marker_ions_sub_score_spectrum_z1.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX]);
marker_ions_sub_score = r.TIC != 0 ? r.MIC / r.TIC : 0;
}
if (!partial_loss_spectrum_z1.empty()) // check if we generated partial loss spectra
{
vector<double> intensity_sum(peptide_size, 0.0);
MSSpectrum const * pl_spec = &partial_loss_spectrum_z1;
if (exp_pc_charge >= 3)
{
pl_spec = &partial_loss_spectrum_z2;
}
partial_loss_sub_score = HyperScore::compute(fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
*pl_spec,
pl_spec->getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
intensity_sum);
auto const & pl_sub_scores = MorpheusScore::compute(fragment_mass_tolerance,
fragment_mass_tolerance_unit_ppm,
exp_spectrum,
exp_spectrum.getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX],
*pl_spec,
pl_spec->getIntegerDataArrays()[NuXLConstants::IA_CHARGE_INDEX]);
plss_MIC = pl_sub_scores.TIC != 0 ? pl_sub_scores.MIC / pl_sub_scores.TIC : 0;
plss_Morph = pl_sub_scores.score;
// if we only have 1 peak, assume some kind of average error so as not to underestimate the real error too much
// plss_err = plss_Morph > 2 ? pl_sub_scores.err_ppm : fragment_mass_tolerance;
//const double p_random_match = exp_spectrum.getFloatDataArrays()[1][0];
const double p_random_match = 1e-3;
plss_modds = matchOddsScore_(pl_spec->size(), (int)plss_Morph, p_random_match);
}
#ifdef DEBUG_OpenNuXL
OPENMS_LOG_DEBUG << "scan index: " << scan_index << " achieved score: " << score << endl;
#endif
}
};
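postScorePartialLossFragments_ above derives several sub-scores from the ratio of matched ion current (MIC) to total ion current (TIC), as computed by MorpheusScore::compute. A minimal self-contained sketch of that ratio, assuming a simple ppm peak-matching rule (illustrative names, not the OpenMS API):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Fraction of experimental ion current explained by theoretical peaks.
// exp_peaks holds (m/z, intensity) pairs; tol_ppm is a relative tolerance.
double micFraction(const std::vector<double>& theo_mz,
                   const std::vector<std::pair<double, double>>& exp_peaks,
                   double tol_ppm)
{
  double tic = 0.0, mic = 0.0;
  for (const auto& [mz, intensity] : exp_peaks)
  {
    tic += intensity;
    for (double t : theo_mz)
    { // matched if within the ppm window around the theoretical peak
      if (std::fabs(mz - t) <= t * tol_ppm * 1e-6) { mic += intensity; break; }
    }
  }
  return tic != 0.0 ? mic / tic : 0.0; // same zero-TIC guard as in the code above
}
```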
map<String, vector<vector<double>>> OpenNuXL::fragment_adduct2block_if_masses_present = {};
int main(int argc, const char** argv)
{
OpenNuXL tool;
return tool.main(argc, argv);
}
/// @endcond
// ---- OpenMS/OpenMS: src/topp/Decharger.cpp ----
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/SYSTEM/StopWatch.h>
#include <OpenMS/ANALYSIS/DECHARGING/FeatureDeconvolution.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/FORMAT/FileHandler.h>
// #include <OpenMS/FORMAT/FeatureXMLFile.h>
// #include <OpenMS/FORMAT/ConsensusXMLFile.h>
#include <OpenMS/KERNEL/ConsensusMap.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_Decharger Decharger
@brief Decharges a feature map by clustering charge variants of a peptide to zero-charge entities.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → Decharger →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderCentroided </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ProteinQuantifier</td>
</tr>
</table>
</CENTER>
The Decharger uses an ILP approach to group charge variants of the same peptide, which
usually occur in ESI ionization mode. The resulting zero-charge peptides, which are defined by RT and mass,
are written to consensusXML. Intensities of charge variants are summed up. The position of the zero charge
variant is the average of all clustered peptides in each dimension (m/z and RT).
It is also possible to include adducted species in the charge ladders (see the 'potential_adducts' parameter).
Via this mechanism the tool can also find pairs/triples/quadruples/... in labeled data (by specifying the mass
tag weight as an adduct). If mass tags induce an RT shift (e.g. deuterium-labeled data), you can specify this in the adduct list as well.
This allows tightening the RT search window, thus reducing false positive results.
This tool is described in the following publication:
Bielow C, Ruzek S, Huber CG, Reinert K. Optimal decharging and clustering of charge ladders generated in ESI-MS. J Proteome Res 2010; 9: 2688.<br>
DOI: 10.1021/pr100177k
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_Decharger.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_Decharger.html
*/
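The clustering described above relies on the fact that all charge variants of one peptide collapse to the same zero-charge (neutral) mass. A self-contained sketch of that relation (illustrative names and a hard-coded proton mass; not the OpenMS API):

```cpp
#include <cassert>
#include <cmath>

// Neutral mass from an observed m/z and charge, assuming protonation only:
// M = z * (m/z - m_proton). Different charge states of the same peptide
// therefore map to (nearly) the same M, which is what gets clustered.
double neutralMass(double mz, int z, double proton_mass = 1.007276466)
{
  return z * (mz - proton_mass);
}
```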
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPDecharger :
virtual public TOPPBase
{
public:
TOPPDecharger() :
TOPPBase("Decharger", "Decharges and merges different feature charge variants of the same peptide.", true,
{ // citation(s), specific for this tool
{ "Bielow C, Ruzek S, Huber CG, Reinert K", "Optimal decharging and clustering of charge ladders generated in ESI-MS", "J Proteome Res 2010; 9: 2688", "10.1021/pr100177k" }
}
)
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file");
setValidFormats_("in", ListUtils::create<String>("featureXML"));
registerOutputFile_("out_cm", "<file>", "", "output consensus map");
registerOutputFile_("out_fm", "<file>", "", "output feature map", false);
registerOutputFile_("outpairs", "<file>", "", "output file", false);
setValidFormats_("out_fm", ListUtils::create<String>("featureXML"));
setValidFormats_("out_cm", ListUtils::create<String>("consensusXML"));
setValidFormats_("outpairs", ListUtils::create<String>("consensusXML"));
addEmptyLine_();
registerSubsection_("algorithm", "Feature decharging algorithm section");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
// there is only one subsection: 'algorithm' (s.a) .. and in it belongs the FeatureDecharger param
FeatureDeconvolution fdc;
Param tmp;
tmp.insert("FeatureDeconvolution:", fdc.getParameters());
return tmp;
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String infile = getStringOption_("in");
String outfile_fm = getStringOption_("out_fm");
String outfile_cm = getStringOption_("out_cm");
String outfile_p = getStringOption_("outpairs");
FeatureDeconvolution fdc;
Param const & dc_param = getParam_().copy("algorithm:FeatureDeconvolution:", true);
writeDebug_("Parameters passed to Decharger", dc_param, 3);
fdc.setParameters(dc_param);
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
writeDebug_("Loading input file", 1);
typedef FeatureMap FeatureMapType;
FeatureMapType map_in, map_out;
FileHandler().loadFeatures(infile, map_in);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
ConsensusMap cm, cm2;
StopWatch a;
a.start();
fdc.compute(map_in, map_out, cm, cm2);
a.stop();
//std::cerr << "took: " << a.getClockTime() << " seconds\n\n\n";
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
writeDebug_("Saving output files", 1);
// Set filename for all column headers
for (auto& header : cm.getColumnHeaders())
{
header.second.filename = infile;
}
for (auto& header : cm2.getColumnHeaders())
{
header.second.filename = infile;
}
//annotate output with data processing info
addDataProcessing_(map_out, getProcessingInfo_(DataProcessing::CHARGE_DECONVOLUTION));
addDataProcessing_(cm, getProcessingInfo_(DataProcessing::CHARGE_DECONVOLUTION));
addDataProcessing_(cm2, getProcessingInfo_(DataProcessing::CHARGE_DECONVOLUTION));
FileHandler().storeConsensusFeatures(outfile_cm, cm, {FileTypes::CONSENSUSXML});
if (!outfile_p.empty())
{
FileHandler().storeConsensusFeatures(outfile_p, cm2, {FileTypes::CONSENSUSXML});
}
if (!outfile_fm.empty())
{
FileHandler().storeFeatures(outfile_fm, map_out, {FileTypes::FEATUREXML});
}
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPDecharger tool;
return tool.main(argc, argv);
}
/// @endcond
// ---- OpenMS/OpenMS: src/topp/QualityControl.cpp ----
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Tom Waschischeck $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/ID/IDConflictResolverAlgorithm.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/FORMAT/MzTab.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
#include <OpenMS/METADATA/MetaInfoInterfaceUtils.h>
#include <OpenMS/QC/Contaminants.h>
#include <OpenMS/QC/FragmentMassError.h>
#include <OpenMS/QC/FWHM.h>
#include <OpenMS/QC/MissedCleavages.h>
#include <OpenMS/QC/Ms2IdentificationRate.h>
#include <OpenMS/QC/MzCalibration.h>
#include <OpenMS/QC/PeptideMass.h>
#include <OpenMS/QC/PSMExplainedIonCurrent.h>
#include <OpenMS/QC/RTAlignment.h>
#include <OpenMS/QC/TIC.h>
#include <OpenMS/QC/Ms2SpectrumStats.h>
#include <OpenMS/QC/MQEvidenceExporter.h>
#include <OpenMS/QC/MQMsmsExporter.h>
#include <OpenMS/QC/MQExporterHelper.h>
#include <map>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_QualityControl QualityControl
@brief Generates an mzTab file from various sources of a pipeline (mainly a ConsensusXML) which can be used for QC plots (e.g. via the R package 'PTXQC').
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=4> → QualityControl →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureLinkerUnlabeledKD (or FLs; for consensusXML)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=3> <a href="https://github.com/cbielow/PTXQC/" target="_blank">PTX-QC</a> </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDMapper (for featureXMLs)</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_InternalCalibration </td>
</tr>
</table>
</CENTER>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_QualityControl.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_QualityControl.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPQualityControl : public TOPPBase
{
public:
TOPPQualityControl()
: TOPPBase("QualityControl", "Computes various QC metrics.\nMany input formats are supported; only the consensusXML is required.\nThe more optional files you provide, the more metrics you get.", true)
{
}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
registerInputFile_("in_cm", "<file>", "", "ConsensusXML input, generated by FeatureLinker.", true);
setValidFormats_("in_cm", {"consensusXML"});
registerInputFileList_("in_raw", "<files>", {}, "MzML input (after InternalCalibration, if available)", false);
setValidFormats_("in_raw", {"mzML"});
registerInputFileList_("in_postFDR", "<files>", {}, "FeatureXMLs after FDR filtering", false);
setValidFormats_("in_postFDR", {"featureXML"});
registerOutputFile_("out", "<file>", "", "Output mzTab with QC information", false);
setValidFormats_("out", { "mzTab" });
registerOutputFile_("out_cm", "<file>", "", "ConsensusXML with QC information (as metavalues)", false);
setValidFormats_("out_cm", { "consensusXML" });
registerOutputFileList_("out_feat", "<files>", {}, "FeatureXMLs with QC information (as metavalues)", false);
setValidFormats_("out_feat", { "featureXML" });
registerTOPPSubsection_("FragmentMassError", "Fragment Mass Error settings");
registerStringOption_("FragmentMassError:unit", "<unit>", "auto", "Unit for mass tolerance. 'auto' uses information from FeatureXML", false);
setValidStrings_("FragmentMassError:unit", std::vector<String>(FragmentMassError::names_of_toleranceUnit, FragmentMassError::names_of_toleranceUnit + (int)FragmentMassError::ToleranceUnit::SIZE_OF_TOLERANCEUNIT));
registerDoubleOption_("FragmentMassError:tolerance", "<double>", 20, "m/z search window for matching peaks in two spectra", false);
registerInputFile_("in_contaminants", "<file>", "", "Proteins considered contaminants", false);
setValidFormats_("in_contaminants", {"fasta"});
registerInputFile_("in_fasta", "<file>", "", "FASTA file used during MS/MS identification (including decoys). If the protein description contains 'GN=...' then gene names will be extracted", false);
setValidFormats_("in_fasta", {"fasta"});
registerInputFileList_("in_trafo", "<file>", {}, "trafoXMLs from MapAligners", false);
setValidFormats_("in_trafo", {"trafoXML"});
registerTOPPSubsection_("MS2_id_rate", "MS2 ID Rate settings");
registerFlag_("MS2_id_rate:assume_all_target", "Forces the metric to run even if target/decoy annotation is missing (accepts all pep_ids as target hits).", false);
// MaxQuant-compatible output
registerTOPPSubsection_("out_txt", "Write MaxQuant-compatible .txt files");
registerOutputDir_("out_txt:directory", "<Path>", "", "If a Path is given, '.txt' files compatible with MaxQuant will be created in this directory. If the directory does not exist, it will be created.", false);
registerFlag_("out_txt:omit_mq_evidence", "Do NOT write the evidence.txt into 'out_txt:directory'?", false);
registerFlag_("out_txt:omit_mq_msms", "Do NOT write the msms.txt into 'out_txt:directory'?", false);
//TODO get ProteinQuantifier output for PRT section
}
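The 'FragmentMassError:tolerance' registered above is interpreted according to 'FragmentMassError:unit'; for ppm, the absolute m/z window scales with the peak position. A self-contained sketch of that conversion (illustrative names, not the OpenMS API):

```cpp
#include <cassert>
#include <cmath>

// Absolute m/z window corresponding to a relative (ppm) tolerance.
double ppmToDa(double mz, double tol_ppm) { return mz * tol_ppm * 1e-6; }

// True if an observed peak lies within the ppm window around the theoretical one.
bool withinTolerance(double mz_obs, double mz_theo, double tol_ppm)
{
  return std::fabs(mz_obs - mz_theo) <= ppmToDa(mz_theo, tol_ppm);
}
```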
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
//
// Read input, check for same length and get that length
QCBase::Status status;
UInt64 number_exps(0);
StringList in_raw = updateFileStatus_(status, number_exps, "in_raw", QCBase::Requires::RAWMZML);
StringList in_postFDR = updateFileStatus_(status, number_exps, "in_postFDR", QCBase::Requires::POSTFDRFEAT);
StringList in_trafo = updateFileStatus_(status, number_exps, "in_trafo", QCBase::Requires::TRAFOALIGN);
// load databases and other single file inputs
String in_contaminants = getStringOption_("in_contaminants");
vector<FASTAFile::FASTAEntry> contaminants;
if (!in_contaminants.empty())
{
FASTAFile().load(in_contaminants, contaminants);
status |= QCBase::Requires::CONTAMINANTS;
}
// the additional FASTA file the user passed, used to annotate gene names & protein names
String fasta_file = getStringOption_("in_fasta");
vector<FASTAFile::FASTAEntry> prot_description;
if(!fasta_file.empty())
{
OPENMS_LOG_INFO << "Loading FASTA ... " << fasta_file << std::endl;
FASTAFile().load(fasta_file, prot_description);
}
ConsensusMap cmap;
String in_cm = getStringOption_("in_cm");
FileHandler().loadConsensusFeatures(in_cm, cmap, {FileTypes::CONSENSUSXML});
for (ConsensusFeature & cf: cmap) // make sure that the first PeptideIdentification of a ConsensusFeature is the one with the highest Score
{
sortVectorOfPeptideIDsbyScore_(cf.getPeptideIdentifications());
}
std::vector<FeatureMap> fmaps;
if (in_postFDR.empty())
{
status |= QCBase::Requires::POSTFDRFEAT;
fmaps = cmap.split(ConsensusMap::SplitMeta::COPY_ALL);
bool is_labeled_cmap = QCBase::isLabeledExperiment(cmap);
if (is_labeled_cmap) // for labeled input (e.g. iTRAQ/TMT/SILAC)
{
OPENMS_LOG_INFO << "Labeled data detected!" << std::endl;
if (number_exps != 1) // no features given, but >1 trafos...
{
throw Exception::Precondition(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, String("More than one mzML or TrafoXML were given, but this is not supported in 'labeled' mode."));
}
// number_exps can remain 1, since we only need to annotate the first FMap with metavalues (the others only have exact copies)
// ...
}
else // unlabeled == LFQ mode
{
OPENMS_LOG_INFO << "Unlabeled data detected in ConsensusXML! This functionality is currently only supported if you also provide the featureXML files!"
<< std::endl;
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
// currently missing:
// - invert RT of all features+their PepIDs to allow RTmetric to work (if TrafoXMLs are provided) -- or even better: delegate this to the RTMetric
// - the SearchParameters are currently taken from the first ProteinIdentification of the FMaps...
// however, during splitting, all ProtID's from the CMap are blindly copied to all FMaps (it should only pick the correct one)...
OPENMS_LOG_INFO << "Unlabeled data detected in ConsensusXML! Data will be extracted from there. If you can, provide the FeatureXML files for potentially more metrics." << std::endl;
if (number_exps != fmaps.size())
{
throw Exception::Precondition(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
String("Number of Maps in the ConsensusMap (") + fmaps.size() +
") does not match length of -in_raw or -in_trafo (" + number_exps + ").");
}
}
}
FeatureMap* fmap;
// mztab writer requires single PIs per CF
// adds 'feature_id' metavalue to all PIs before moving them to remember the uniqueID of the CF
// check for identical IDs of the ConsensusFeatures in Export from MQEvidence_result.txt
IDConflictResolverAlgorithm::resolve(cmap);
//-------------------------------------------------------------
// prot/pepID-identifier --> ms-run-path
//-------------------------------------------------------------
ProteinIdentification::Mapping mp_c(cmap.getProteinIdentifications());
//-------------------------------------------------------------
// Build a PepID Map to later find the corresponding PepID in the CMap
//-------------------------------------------------------------
multimap<String, std::pair<Size, Size>> customID_to_cpepID; // multimap is required because a PepID could be duplicated by IDMapper and appear >=1 in a featureMap
customID_to_cpepID = PeptideIdentification::buildUIDsFromAllPepIDs(cmap);
for (Size i = 0; i < cmap.size(); ++i)
{
// connect CF (stored in PEP section) with its peptides (stored in PSM section) ... they might get separated later by IDConflictResolverAlgorithm
cmap[i].setMetaValue("cf_id", i);
for (auto& pep_id : cmap[i].getPeptideIdentifications())
{
pep_id.setMetaValue("cf_id", i);
}
}
for (auto& pep_id : cmap.getUnassignedPeptideIdentifications())
{
pep_id.setMetaValue("cf_id", -1);
}
// check flags
bool all_target_flag = getFlag_("MS2_id_rate:assume_all_target");
double tolerance_value = getDoubleOption_("FragmentMassError:tolerance");
auto it = std::find(QCBase::names_of_toleranceUnit, QCBase::names_of_toleranceUnit + (int) QCBase::ToleranceUnit::SIZE_OF_TOLERANCEUNIT, getStringOption_("FragmentMassError:unit"));
auto idx = std::distance(QCBase::names_of_toleranceUnit, it);
auto tolerance_unit = QCBase::ToleranceUnit(idx);
// Instantiate the QC metrics
Contaminants qc_contaminants;
FragmentMassError qc_frag_mass_err;
FWHM qc_fwhm;
MissedCleavages qc_missed_cleavages;
Ms2IdentificationRate qc_ms2ir;
MzCalibration qc_mz_calibration;
RTAlignment qc_rt_alignment;
PeptideMass qc_pepmass;
PSMExplainedIonCurrent qc_psm_corr;
TIC qc_tic;
Ms2SpectrumStats qc_ms2stats;
PeakMap exp;
QCBase::SpectraMap spec_map;
// Loop through featuremaps...
PeptideIdentificationList all_new_upep_ids;
String out_txt_dir = getOutputDirOption("out_txt:directory");
const bool write_mq_evidence = !getFlag_("out_txt:omit_mq_evidence");
const bool write_mq_msms = !getFlag_("out_txt:omit_mq_msms");
MQEvidence export_evidence(write_mq_evidence ? out_txt_dir : "");
MQMsms export_msms(write_mq_msms ? out_txt_dir : "");
vector<TIC::Result> tic_results;
for (Size i = 0; i < number_exps; ++i)
{
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
if (i < in_raw.size())
{ // we either have 'n' or 1 mzML ... use the correct one in each iteration
FileHandler().loadExperiment(in_raw[i], exp, {FileTypes::MZML}, log_type_);
spec_map.calculateMap(exp);
}
ProteinIdentification::Mapping mp_f;
FeatureMap fmap_local;
if (!in_postFDR.empty())
{
FileHandler().loadFeatures(in_postFDR[i], fmap_local, {FileTypes::FEATUREXML}, log_type_);
fmap = &fmap_local;
}
else
{
fmap = &(fmaps[i]);
}
for (Feature & f: *fmap) // make sure that the first PeptideIdentification of a Feature is the one with the highest Score
{
sortVectorOfPeptideIDsbyScore_(f.getPeptideIdentifications());
}
mp_f.create(fmap->getProteinIdentifications());
TransformationDescription trafo_descr;
if (!in_trafo.empty())
{
FileHandler().loadTransformations(in_trafo[i], trafo_descr, true, {FileTypes::TRANSFORMATIONXML});
}
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
if (qc_contaminants.isRunnable(status))
{
qc_contaminants.compute(*fmap, contaminants);
}
if (qc_frag_mass_err.isRunnable(status))
{
qc_frag_mass_err.compute(*fmap, exp, spec_map, tolerance_unit, tolerance_value);
}
if (qc_ms2ir.isRunnable(status))
{
qc_ms2ir.compute(*fmap, exp, all_target_flag);
}
if (qc_mz_calibration.isRunnable(status))
{
qc_mz_calibration.compute(*fmap, exp, spec_map);
}
// after qc_mz_calibration, because it calculates 'mass' metavalue
if (qc_missed_cleavages.isRunnable(status))
{
qc_missed_cleavages.compute(*fmap);
}
if (qc_rt_alignment.isRunnable(status))
{ // add metavalues rt_raw & rt_align to all PepIDs
qc_rt_alignment.compute(*fmap, trafo_descr);
}
if (qc_fwhm.isRunnable(status))
{
qc_fwhm.compute(*fmap);
}
if (qc_pepmass.isRunnable(status))
{
qc_pepmass.compute(*fmap);
}
if (qc_psm_corr.isRunnable(status))
{
qc_psm_corr.compute(*fmap, exp, spec_map, tolerance_unit, tolerance_value);
}
if (qc_tic.isRunnable(status))
{
tic_results.push_back(qc_tic.compute(exp));
}
if (qc_ms2stats.isRunnable(status))
{
// copies FWHM metavalue to PepIDs as well
PeptideIdentificationList new_upep_ids = qc_ms2stats.compute(exp, *fmap, spec_map);
// use identifier of CMap for just calculated pepIDs (via common MS-run-path)
const auto& f_runpath = mp_f.runpath_to_identifier.begin()->first; // just get any runpath from fmap
const auto ptr_cmap = mp_c.runpath_to_identifier.find(f_runpath);
if (ptr_cmap == mp_c.runpath_to_identifier.end())
{
OPENMS_LOG_ERROR << "FeatureXML (MS run '" << ListUtils::concatenate(f_runpath, ", ") << "') does not correspond to ConsensusXML (run not found). Check input!\n";
return ILLEGAL_PARAMETERS;
}
for (PeptideIdentification& pep_id : new_upep_ids)
{
pep_id.setIdentifier(ptr_cmap->second);
}
// annotate the RT alignment
if (qc_rt_alignment.isRunnable(status))
{
qc_rt_alignment.compute(new_upep_ids, trafo_descr);
}
// save the just calculated IDs for appending to Cmap later (not now, because the vector might resize and invalidate our PepID*).
all_new_upep_ids.insert(all_new_upep_ids.end(), new_upep_ids.begin(), new_upep_ids.end());
}
StringList out_feat = getStringList_("out_feat");
if (!out_feat.empty())
{
FileHandler().storeFeatures(out_feat[i], *fmap, {FileTypes::FEATUREXML}, log_type_);
}
//-------------------------------------------------------------
// Annotate calculated meta values from FeatureMap to given ConsensusMap
//-------------------------------------------------------------
// copy MetaValues of unassigned PepIDs
addPepIDMetaValues_(fmap->getUnassignedPeptideIdentifications(), customID_to_cpepID, mp_f.identifier_to_msrunpath, cmap);
// copy MetaValues of assigned PepIDs
for (Feature& feature : *fmap)
{
addPepIDMetaValues_(feature.getPeptideIdentifications(), customID_to_cpepID, mp_f.identifier_to_msrunpath, cmap);
}
if (MQExporterHelper::isValid(out_txt_dir))
{
//if the user provided no fastafile, we can try this as a last resort
const auto& cmap_prot_ids = cmap.getProteinIdentifications();
if(!cmap_prot_ids.empty())
{
const String& file_name = cmap_prot_ids[0].getSearchParameters().db;
if(prot_description.empty())
{
fallbackFasta(file_name, prot_description);
}
}
//index the fasta file for constant access
map<String,String> fasta_map {};
indexFasta(prot_description, fasta_map);
if (write_mq_evidence)
{
OPENMS_LOG_INFO << "Exporting FeatureMap for evidence..." << std::endl;
export_evidence.exportFeatureMap(*fmap,cmap,exp,fasta_map);
}
if (write_mq_msms)
{
OPENMS_LOG_INFO << "Exporting FeatureMap for msms..." << std::endl;
export_msms.exportFeatureMap(*fmap,cmap,exp,fasta_map);
}
}
}
// check if all PepIDs of ConsensusMap appeared in a FeatureMap
bool incomplete_features {false};
auto f =
[&incomplete_features](const PeptideIdentification& pep_id)
{
if (!pep_id.getHits().empty() && !pep_id.getHits()[0].metaValueExists("missed_cleavages"))
{
OPENMS_LOG_ERROR << "A PeptideIdentification in the ConsensusXML with sequence " << pep_id.getHits()[0].getSequence().toString()
<< ", RT '" << pep_id.getRT() << "', m/z '" << pep_id.getMZ() << "' and identifier '" << pep_id.getIdentifier()
<< "' does not appear in any of the given FeatureXMLs. Check your input!\n";
incomplete_features = true;
}
};
cmap.applyFunctionOnPeptideIDs(f, true);
if (incomplete_features)
{
return ILLEGAL_PARAMETERS;
}
// add new PeptideIdentifications (for unidentified MS2 spectra)
cmap.getUnassignedPeptideIdentifications().insert(cmap.getUnassignedPeptideIdentifications().end(), all_new_upep_ids.begin(), all_new_upep_ids.end());
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
String out_cm = getStringOption_("out_cm");
if (!out_cm.empty())
{
FileHandler().storeConsensusFeatures(out_cm, cmap, {FileTypes::CONSENSUSXML}, log_type_);
}
String out = getStringOption_("out");
if (!out.empty())
{
MzTab mztab = MzTab::exportConsensusMapToMzTab(cmap, in_cm, true, true, true, true, "QC export from OpenMS");
MzTabMetaData meta = mztab.getMetaData();
qc_tic.addMetaDataMetricsToMzTab(meta, tic_results);
qc_ms2ir.addMetaDataMetricsToMzTab(meta);
mztab.setMetaData(meta);
MzTabFile mztab_out;
mztab_out.store(out, mztab);
}
return EXECUTION_OK;
}
private:
StringList updateFileStatus_(QCBase::Status& status, UInt64& number_exps, const String& port, const QCBase::Requires& req) const
{
// since files are optional, leave function if none are provided by the user
StringList files = getStringList_(port);
if (!files.empty())
{
if (number_exps == 0)
{
number_exps = files.size(); // Number of experiments is determined from the first non-empty file list.
}
if (number_exps != files.size()) // exit if any file list has different length
{
throw(Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, port + ": invalid number of files. Expected " + number_exps + ".\n"));
}
status |= req;
}
return files;
}
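updateFileStatus_ above enforces that all optional input lists which were actually provided share one common length (one entry per experiment). A minimal self-contained sketch of that consistency check (illustrative names, not the OpenMS API):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Returns the common length of all non-empty lists; throws on a mismatch.
std::size_t checkParallelLists(const std::vector<std::vector<std::string>>& lists)
{
  std::size_t n = 0;
  for (const auto& l : lists)
  {
    if (l.empty()) { continue; }      // optional input not provided
    if (n == 0) { n = l.size(); }     // first non-empty list fixes the length
    else if (n != l.size()) { throw std::invalid_argument("invalid number of files"); }
  }
  return n;
}
```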
void sortVectorOfPeptideIDsbyScore_(PeptideIdentificationList& pep_ids)
{
for (PeptideIdentification& pep_id : pep_ids)
{
pep_id.sort(); // sort the PeptideHits of PeptideIdentifications by Score (Best PeptideHit at index 0)
}
std::sort(pep_ids.begin(), pep_ids.end(), [](const PeptideIdentification& a,const PeptideIdentification& b)
{
if (a.empty() || b.empty())
{
return a.empty() > b.empty();
}
return a.getHits()[0].getScore() > b.getHits()[0].getScore(); // sort the PeptideIdentifications by their PeptideHit with the highest Score
});
}
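The comparator in sortVectorOfPeptideIDsbyScore_ above handles two concerns at once: identifications without hits sort via the `a.empty() > b.empty()` trick (placing empty entries first, as in the code above), and the rest descend by best-hit score. A self-contained stand-in with a toy struct (illustrative, not the OpenMS types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Id { bool empty; double score; }; // stand-in for PeptideIdentification

// Mirrors the two-stage comparator above: empties first, then score descending.
void sortByScore(std::vector<Id>& v)
{
  std::sort(v.begin(), v.end(), [](const Id& a, const Id& b)
  {
    if (a.empty || b.empty) { return a.empty > b.empty; }
    return a.score > b.score;
  });
}
```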
void addPepIDMetaValues_(
const PeptideIdentificationList& f_pep_ids,
const multimap<String, pair<Size, Size>>& customID_to_cpepID,
const map<String, StringList>& fidentifier_to_msrunpath,
ConsensusMap& cmap) const
{
for (const PeptideIdentification& f_pep_id : f_pep_ids)
{
// for empty PIs which were created by a metric
if (f_pep_id.getHits().empty())
{
continue;
}
String UID = PeptideIdentification::buildUIDFromPepID(f_pep_id,fidentifier_to_msrunpath);
const auto range = customID_to_cpepID.equal_range(UID);
for (auto it_pep = range.first; it_pep != range.second; ++it_pep) // OMS_CODING_TEST_EXCLUDE
{
// copy all MetaValues that are at PepID level
// copy all MetaValues that are at best Hit level
//TODO check if first = best assumption is met!
Size cf_index = it_pep->second.first; //ConsensusFeature Index
Size pi_index = it_pep->second.second; //PeptideIdentification Index
if (cf_index != Size(-1))
{
cmap[cf_index].getPeptideIdentifications()[pi_index].addMetaValues(f_pep_id);
cmap[cf_index].getPeptideIdentifications()[pi_index].getHits()[0].addMetaValues(f_pep_id.getHits()[0]);
}
else
{
cmap.getUnassignedPeptideIdentifications()[pi_index].addMetaValues(f_pep_id);
cmap.getUnassignedPeptideIdentifications()[pi_index].getHits()[0].addMetaValues(f_pep_id.getHits()[0]);
}
}
}
}
void indexFasta(std::vector<FASTAFile::FASTAEntry>& prot_description, std::map<String, String>& fasta_map)
{
//map the identifier to the description so that we can access the description via the cmap-identifier
for(const auto& entry : prot_description)
{
fasta_map.emplace(entry.identifier, entry.description);
}
}
void fallbackFasta(const String& file_name, std::vector<FASTAFile::FASTAEntry>& prot_description)
{
OPENMS_LOG_INFO << "No FASTA file provided; falling back to the database path from the consensusXML search parameters" << std::endl;
try
{
FASTAFile().load(file_name, prot_description);
}
catch(const Exception::FileNotFound& e)
{
OPENMS_LOG_INFO << e.what() << std::endl;
}
}
};
// the actual main function needed to create an executable
int main(int argc, const char ** argv)
{
TOPPQualityControl tool;
return tool.main(argc, argv);
}
/// @endcond
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg, Andreas Bertsch $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/ID/FalseDiscoveryRate.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_FalseDiscoveryRate FalseDiscoveryRate
@brief Tool to estimate the false discovery rate on peptide and protein level
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → FalseDiscoveryRate →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_CometAdapter (or other ID engines) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_IDFilter </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeptideIndexer </td>
</tr>
</table>
</CENTER>
This TOPP tool calculates the false discovery rate (FDR) for results of target-decoy searches. The FDR calculation can be performed for proteins and/or for peptides (more exactly, peptide spectrum matches).
The false discovery rate is defined as the number of false discoveries (decoy hits) divided by the number of false and correct discoveries (both target and decoy hits) with a score better than a given threshold.
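To make the definition concrete, here is a minimal sketch (not the code of this tool; `fdrAtEachRank` is a made-up helper) that turns a best-to-worst sorted list of target/decoy flags into per-rank FDRs and then monotonic q-values:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch only: given hits sorted from best to worst score, the FDR at each
// rank is #decoys / (#targets + #decoys) accepted up to that rank.
std::vector<double> fdrAtEachRank(const std::vector<bool>& is_decoy_sorted)
{
  std::vector<double> fdr;
  std::size_t decoys = 0;
  for (std::size_t i = 0; i < is_decoy_sorted.size(); ++i)
  {
    if (is_decoy_sorted[i]) ++decoys;
    fdr.push_back(double(decoys) / double(i + 1));
  }
  // q-value: the smallest FDR achievable at this or any more permissive threshold
  for (std::size_t i = fdr.size(); i-- > 1; )
  {
    fdr[i - 1] = std::min(fdr[i - 1], fdr[i]);
  }
  return fdr;
}
```

For the example input {target, target, decoy, target}, the resulting q-values are 0, 0, 0.25, 0.25.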
@ref TOPP_PeptideIndexer must be applied to the search results (idXML file) to index the data and to annotate peptide and protein hits with their target/decoy status.
@note When no decoy hits were found you will get a warning like this:<br>
"FalseDiscoveryRate: #decoy sequences is zero! Setting all target sequences to q-value/FDR 0!"<br>
This should be a serious concern, since it indicates a possible problem with the target/decoy annotation step (@ref TOPP_PeptideIndexer), e.g. due to a misconfigured database.
@note FalseDiscoveryRate only annotates peptides and proteins with their FDR. By setting FDR:PSM or FDR:protein the maximum q-value (e.g., 0.05 corresponds to an FDR of 5%) can be controlled on the PSM and protein level.
Alternatively, FDR filtering can be performed in the @ref TOPP_IDFilter tool by setting score:pep and score:prot to the maximum q-value. After potential filtering, associations are
automatically updated and unreferenced proteins/peptides removed based on the advanced cleanup parameters.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FalseDiscoveryRate.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FalseDiscoveryRate.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFalseDiscoveryRate :
public TOPPBase
{
public:
TOPPFalseDiscoveryRate() :
TOPPBase("FalseDiscoveryRate", "Estimates the false discovery rate on peptide and protein level using decoy searches.")
{
}
protected:
Param getSubsectionDefaults_(const String& /*section*/) const override
{
return FalseDiscoveryRate().getDefaults();
}
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Identifications from searching a target-decoy database.");
setValidFormats_("in", ListUtils::create<String>("idXML"));
registerOutputFile_("out", "<file>", "", "Identifications with annotated FDR");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerStringOption_("PSM", "<FDR level>", "true", "Perform FDR calculation on PSM level", false);
setValidStrings_("PSM", ListUtils::create<String>("true,false"));
registerStringOption_("peptide", "<FDR level>", "false", "Perform FDR calculation on peptide level and annotates it as meta value\n(Note: if set, also calculates FDR/q-value on PSM level.)", false);
setValidStrings_("peptide", ListUtils::create<String>("true,false"));
registerStringOption_("PSM_peptide_base_score", "<score name or type>", "", "Set if you want to choose a different score than the last calculated main score for PSM or peptide level.", false);
registerStringOption_("PSM_peptide_base_score_orientation", "<higher/lower>", "", "In case the score orientation cannot be inferred.", false, true);
setValidStrings_("PSM_peptide_base_score_orientation", ListUtils::create<String>("higher_better,lower_better"));
registerStringOption_("protein", "<FDR level>", "true", "Perform FDR calculation on protein level", false);
setValidStrings_("protein", ListUtils::create<String>("true,false"));
registerStringOption_("proteingroup", "<FDR level>", "false", "Perform FDR calculation on (indist.) protein group level, too. Currently, this will enable protein FDR automatically (since internals need to be in-sync) but will affect the level at which it filters (if enabled).", false);
setValidStrings_("proteingroup", ListUtils::create<String>("true,false"));
registerStringOption_("protein_score", "<type>", "", "The protein score used to calculate the protein FDR. If empty, the main score is used.", false, true);
auto ids = IDScoreSwitcherAlgorithm();
setValidStrings_("protein_score", ids.getScoreNames()); // lists all scores (including PSM only scores)
registerStringOption_("protein_base_score", "<score name or type>", "", "Set if you want to choose a different score than the last calculated main score for protein (group) level.", false);
registerStringOption_("protein_base_score_orientation", "<higher/lower>", "", "In case the score orientation cannot be inferred.", false, true);
setValidStrings_("protein_base_score_orientation", ListUtils::create<String>("higher_better,lower_better"));
registerTOPPSubsection_("FDR", "FDR control");
registerDoubleOption_("FDR:PSM", "<fraction>", 1, "Filter PSMs based on q-value (e.g., 0.05 = 5% FDR, disabled for 1)", false);
setMinFloat_("FDR:PSM", 0);
setMaxFloat_("FDR:PSM", 1);
registerDoubleOption_("FDR:protein", "<fraction>", 1, "Filter proteins based on q-value (e.g., 0.05 = 5% FDR, disabled for 1)", false);
setMinFloat_("FDR:protein", 0);
setMaxFloat_("FDR:protein", 1);
registerTOPPSubsection_("FDR:cleanup", "Cleanup references after FDR control");
registerStringOption_("FDR:cleanup:remove_proteins_without_psms","<choice>", "true",
"Remove proteins without PSMs (due to being decoy or below PSM FDR threshold).", false, true);
setValidStrings_("FDR:cleanup:remove_proteins_without_psms", {"true","false"});
registerStringOption_("FDR:cleanup:remove_psms_without_proteins","<choice>", "true",
"Remove PSMs without proteins (due to being decoy or below protein FDR threshold).", false, true);
setValidStrings_("FDR:cleanup:remove_psms_without_proteins", {"true","false"});
registerStringOption_("FDR:cleanup:remove_spectra_without_psms","<choice>", "true",
"Remove spectra without PSMs (due to being decoy or below protein FDR threshold)."
" Caution: if remove_psms_without_proteins is false, protein level filtering does not propagate.", false, true);
setValidStrings_("FDR:cleanup:remove_spectra_without_psms", {"true","false"});
registerSubsection_("algorithm", "Parameter section for the FDR calculation algorithm");
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
Param alg_param = getParam_().copy("algorithm:", true);
FalseDiscoveryRate fdr;
fdr.setParameters(alg_param);
writeDebug_("Parameters passed to FalseDiscoveryRate", alg_param, 3);
// input/output files
String in = getStringOption_("in");
String out = getStringOption_("out");
const double protein_fdr = getDoubleOption_("FDR:protein");
const double psm_fdr = getDoubleOption_("FDR:PSM");
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeptideIdentificationList pep_ids;
vector<ProteinIdentification> prot_ids;
FileHandler().loadIdentifications(in, prot_ids, pep_ids, {FileTypes::IDXML});
Size n_prot_ids = prot_ids.size();
Size n_prot_hits = IDFilter::countHits(prot_ids);
Size n_pep_ids = pep_ids.size();
Size n_pep_hits = IDFilter::countHits(pep_ids);
bool filter_applied(false);
try
{
bool groups = getStringOption_("proteingroup") == "true";
if (getStringOption_("protein") == "true" || groups)
{
String protein_score = getStringOption_("protein_score");
if (!protein_score.empty())
{
try
{
IDScoreSwitcherAlgorithm::ScoreType score_type = IDScoreSwitcherAlgorithm::toScoreTypeEnum(protein_score);
IDScoreSwitcherAlgorithm switcher;
Size c = 0;
switcher.switchToGeneralScoreType(prot_ids, score_type, c);
}
catch (Exception::MissingInformation& e)
{
IDScoreSwitcherAlgorithm switcher;
auto params = switcher.getParameters();
params.setValue("new_score", protein_score);
params.setValue("new_score_orientation", getStringOption_("protein_base_score_orientation"));
params.setValue("proteins", "true");
switcher.setParameters(params);
Size c = 0;
for (auto& run : prot_ids)
{
switcher.switchScores(run,c);
}
}
}
for (auto& run : prot_ids)
{
if (!run.hasInferenceData() && !getFlag_("force"))
{
throw OpenMS::Exception::MissingInformation(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"It seems like protein inference was not yet performed."
" Calculating Protein FDR is probably not meaningful. To override,"
" use the force flag.");
}
else
{
fdr.applyBasic(run, groups);
if (protein_fdr < 1)
{
if (groups)
{
OPENMS_LOG_INFO << "FDR control: Filtering protein groups..." << endl;
IDFilter::filterGroupsByScore(run.getIndistinguishableProteins(), protein_fdr, run.isHigherScoreBetter());
filter_applied = true;
}
else
{
OPENMS_LOG_INFO << "FDR control: Filtering proteins..." << endl;
IDFilter::filterHitsByScore(run, protein_fdr);
filter_applied = true;
}
}
}
}
}
bool peptide_level_fdr = getStringOption_("peptide") == "true";
bool psm_level_fdr = getStringOption_("PSM") == "true";
if (psm_level_fdr || peptide_level_fdr)
{
fdr.apply(pep_ids, peptide_level_fdr);
// TODO If no decoys are removed in the param settings, we shouldn't need cleanups
// but then all tests need to be changed since cleanup sorts.
//if (alg_param.getValue("add_decoy_peptides").toBool())
//{
// filter_applied = true;
//}
filter_applied = true;
if (psm_fdr < 1)
{
filter_applied = true;
OPENMS_LOG_INFO << "FDR control: Filtering PSMs..." << endl;
IDFilter::filterHitsByScore(pep_ids, psm_fdr);
}
}
}
catch (Exception::MissingInformation& e)
{
OPENMS_LOG_FATAL_ERROR << "FalseDiscoveryRate failed due to missing information:\n"
<< e.what();
return INCOMPATIBLE_INPUT_DATA;
}
if (filter_applied)
{
//remove_proteins_without_psms
if (getStringOption_("FDR:cleanup:remove_proteins_without_psms") == "true")
{
IDFilter::removeUnreferencedProteins(prot_ids, pep_ids);
}
//remove_psms_without_proteins
IDFilter::removeDanglingProteinReferences(pep_ids,
prot_ids,
getStringOption_("FDR:cleanup:remove_psms_without_proteins") == "true");
//remove_spectra_without_psms
if (getStringOption_("FDR:cleanup:remove_spectra_without_psms") == "true")
{
IDFilter::removeEmptyIdentifications(pep_ids);
}
// we want to keep "empty" protein ID runs because they contain search meta data
// update protein groupings if necessary:
for (auto& prot : prot_ids)
{
bool valid = IDFilter::updateProteinGroups(prot.getProteinGroups(),
prot.getHits());
if (!valid)
{
OPENMS_LOG_WARN << "Warning: While updating protein groups, some proteins were removed from groups that are still present. "
<< "The new grouping (especially the group probabilities) may not be completely valid any more."
<< endl;
}
valid = IDFilter::updateProteinGroups(
prot.getIndistinguishableProteins(), prot.getHits());
if (!valid)
{
OPENMS_LOG_WARN << "Warning: While updating indistinguishable protein groups, some proteins were removed from groups that are still present. "
<< "The new grouping (especially the group probabilities) may not be completely valid any more."
<< endl;
}
}
}
// some stats
OPENMS_LOG_INFO << "Before filtering:\n"
<< n_prot_ids << " protein identification(s) with "
<< n_prot_hits << " protein hit(s),\n"
<< n_pep_ids << " peptide identification(s) with "
<< n_pep_hits << " peptide hit(s).\n"
<< "After filtering:\n"
<< prot_ids.size() << " protein identification(s) with "
<< IDFilter::countHits(prot_ids) << " protein hit(s),\n"
<< pep_ids.size() << " peptide identification(s) with "
<< IDFilter::countHits(pep_ids) << " peptide hit(s)." << endl;
OPENMS_LOG_INFO << "Writing filtered output..." << endl;
FileHandler().storeIdentifications(out, prot_ids, pep_ids, {FileTypes::IDXML});
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPFalseDiscoveryRate tool;
return tool.main(argc, argv);
}
/// @endcond
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hendrik Weisser $
// $Authors: Sven Nahnsen, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmPEPMatrix.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmPEPIons.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmBest.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmWorst.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmAverage.h>
#include <OpenMS/ANALYSIS/ID/ConsensusIDAlgorithmRanks.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/FeatureGroupingAlgorithmQT.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <unordered_set>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_ConsensusID ConsensusID
@brief Computes a consensus from results of multiple peptide identification engines.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=4> → ConsensusID →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDPosteriorErrorProbability </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=3> @ref TOPP_PeptideIndexer </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN="center" ROWSPAN=1> @ref TOPP_IDFilter </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN="center" ROWSPAN=1> @ref TOPP_IDMapper </td>
</tr>
</table>
</CENTER>
<B>Reference:</B>
Nahnsen <em>et al.</em>: <a href="https://doi.org/10.1021/pr2002879">Probabilistic consensus scoring improves tandem mass spectrometry peptide identification</a> (J. Proteome Res., 2011, PMID: 21644507).
<B>Algorithms:</B>
ConsensusID offers several algorithms that can aggregate results from multiple peptide identification engines ("search engines") into consensus identifications - typically one per MS2 spectrum. This works especially well for search engines that provide more than one peptide hit per spectrum, i.e. that report not just the best hit, but also a list of runner-up candidates with corresponding scores.
The available algorithms are (see also @ref OpenMS::ConsensusIDAlgorithm and its subclasses):
@li @p PEPMatrix: Scoring based on posterior error probabilities (PEPs) and peptide sequence similarities. This algorithm uses a substitution matrix to score the similarity of sequences not listed by all search engines. It requires PEPs as the scores for all peptide hits.
@li @p PEPIons: Scoring based on posterior error probabilities (PEPs) and fragment ion similarities ("shared peak count"). This algorithm, too, requires PEPs as scores.
@li @p best: For each peptide ID, this uses the best score of any search engine as the consensus score. All peptide IDs must have the same score type.
@li @p worst: For each peptide ID, this uses the worst score of any search engine as the consensus score. All peptide IDs must have the same score type.
@li @p average: For each peptide ID, this uses the average score of all search engines as the consensus score. Again, all peptide IDs must have the same score type.
@li @p ranks: Calculates a consensus score based on the ranks of peptide IDs in the results of different search engines. The final score is in the range (0, 1], with 1 being the best score. The input peptide IDs do not need to have the same score type.
PEPs for search results can be calculated using the @ref TOPP_IDPosteriorErrorProbability tool, which supports a variety of search engines.
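As a toy illustration of the @p best / @p worst / @p average family, the @p average idea can be sketched as follows (an illustration under the assumption of one score per peptide sequence per run; the real algorithms operate on PeptideIdentification objects, and `averageConsensusScore` is a made-up helper):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Sketch: average the scores that each search run assigned to a given
// peptide sequence; runs that did not report the peptide are skipped.
double averageConsensusScore(const std::vector<std::map<std::string, double>>& runs,
                             const std::string& peptide)
{
  double sum = 0.0;
  int n = 0;
  for (const auto& run : runs)
  {
    auto it = run.find(peptide);
    if (it != run.end())
    {
      sum += it->second;
      ++n;
    }
  }
  return (n > 0) ? sum / n : 0.0;
}
```

The @p best and @p worst variants differ only in taking the maximum or minimum instead of the mean.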
@note Important: All protein-level identification results will be lost by applying ConsensusID. (It is unclear how potentially conflicting protein-level results from different search engines should be combined.) If necessary, run the @ref TOPP_PeptideIndexer tool to add protein references for peptides again.
@note Peptides with different post-translational modifications (PTMs), or with different site localizations of the same PTMs, are treated as different peptides by all algorithms. However, a qualification applies for the @p PEPMatrix algorithm: The similarity scoring method used there can only take unmodified peptide sequences into account, so PTMs are ignored during that step. However, the PTMs are not removed from the peptides, and there will be separate results for differently-modified peptides.
<B>File types:</B>
Different input files types are supported:
@li idXML: A file containing multiple identification runs, typically from different search engines. Use @ref TOPP_IDMerger to merge individual idXML files from different search runs into one. During the ConsensusID analysis, the identification results will be grouped according to their originating MS2 spectra, based on retention time and precursor m/z information (see parameters @p rt_delta and @p mz_delta). One consensus identification will be generated for each group. With the per_spectrum flag you can also input multiple idXML files. A consensus will be made per combination of originating mzml file and spectrum_ref.
@li featureXML or consensusXML: Given (consensus) features annotated with peptide identifications from multiple search runs, one consensus identification is created for every annotated feature. Peptide identifications not assigned to features are not considered and will be removed. See @ref TOPP_IDMapper for the task of mapping peptide identifications to feature maps or consensus maps.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>Filtering:</B>
Generally, search results can be filtered according to various criteria using @ref TOPP_IDFilter before (or after) applying this tool. ConsensusID itself offers only a limited number of filtering options that are especially useful in its context (see the @p filter parameter section):
@li @p considered_hits: Limits the number of alternative peptide hits considered per spectrum/feature for each identification run. This helps to reduce runtime, especially for the @p PEPMatrix and @p PEPIons algorithms, which involve costly "all vs. all" comparisons of peptide hits.
@li @p min_support: This allows filtering of peptide hits based on agreement between search engines. Every peptide sequence in the analysis has been identified by at least one search run. This parameter defines which fraction (between 0 and 1) of the remaining search runs must "support" a peptide identification that should be kept. The meaning of "support" differs slightly between algorithms: For @p best, @p worst, @p average and @p rank, each search run supports peptides that it has also identified among its top @p considered_hits candidates. So @p min_support simply gives the fraction of additional search engines that must have identified a peptide. (For example, if there are three search runs, and only peptides identified by at least two of them should be kept, set @p min_support to 0.5.) For the similarity-based algorithms @p PEPMatrix and @p PEPIons, the "support" for a peptide is the average similarity of the most-similar peptide from each (other) search run. (In the context of the JPR publication, this is the average of the similarity scores used in the consensus score calculation for a peptide.)
@li @p count_empty: Typically not all search engines will provide results for all searched MS2 spectra. This parameter determines whether search runs that provided no results should be counted in the "support" calculation; by default, they are ignored.
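The @p min_support arithmetic for the rank-based algorithms can be sketched as follows (a hypothetical helper, not the tool's implementation; empty runs are assumed to be excluded from @p total_runs, cf. @p count_empty):

```cpp
#include <cassert>

// Sketch: for a peptide identified by 'runs_with_peptide' of 'total_runs'
// search runs, its support is the fraction of the *other* runs that also
// identified it. A peptide is kept if this fraction reaches min_support.
double supportFraction(int runs_with_peptide, int total_runs)
{
  if (total_runs <= 1)
  {
    return 1.0; // a single run has no other runs to ask for support
  }
  return double(runs_with_peptide - 1) / double(total_runs - 1);
}
```

This reproduces the example above: with three search runs, a peptide identified by two of them has support 0.5.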
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_ConsensusID.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_ConsensusID.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPConsensusID :
public TOPPBase
{
public:
TOPPConsensusID() :
TOPPBase("ConsensusID", "Computes a consensus of peptide identifications of several identification engines.")
{
}
protected:
String algorithm_; // algorithm for consensus calculation (input parameter)
bool keep_old_scores_;
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<file(s)>", {}, "input file");
setValidFormats_("in", ListUtils::create<String>("idXML,featureXML,consensusXML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("idXML,featureXML,consensusXML"));
addEmptyLine_();
registerDoubleOption_("rt_delta", "<value>", 0.1, "[idXML input only] Maximum allowed retention time deviation between identifications belonging to the same spectrum.", false);
setMinFloat_("rt_delta", 0.0);
registerDoubleOption_("mz_delta", "<value>", 0.1, "[idXML input only] Maximum allowed precursor m/z deviation between identifications belonging to the same spectrum.", false);
setMinFloat_("mz_delta", 0.0);
registerFlag_("per_spectrum", "(only idXML) if set, mapping will be done based on exact matching of originating mzml file and spectrum_ref");
// General algorithm parameters are defined in the abstract base class
// "ConsensusIDAlgorithm", but we can't get them from there because we can't
// instantiate the class. So we get those parameters from a subclass that
// doesn't add any other parameters:
registerTOPPSubsection_("filter", "Options for filtering peptide hits");
registerFullParam_(ConsensusIDAlgorithmBest().getDefaults());
registerStringOption_("algorithm", "<choice>", "PEPMatrix",
"Algorithm used for consensus scoring.\n"
"* PEPMatrix: Scoring based on posterior error probabilities (PEPs) and peptide sequence similarities (scored by a substitution matrix). Requires PEPs as scores.\n"
"* PEPIons: Scoring based on posterior error probabilities (PEPs) and fragment ion similarities ('shared peak count'). Requires PEPs as scores.\n"
"* best: For each peptide ID, use the best score of any search engine as the consensus score. Requires the same score type in all ID runs.\n"
"* worst: For each peptide ID, use the worst score of any search engine as the consensus score. Requires the same score type in all ID runs.\n"
"* average: For each peptide ID, use the average score of all search engines as the consensus. Requires the same score type in all ID runs.\n"
"* ranks: Calculates a consensus score based on the ranks of peptide IDs in the results of different search engines. The final score is in the range (0, 1], with 1 being the best score. No requirements about score types.", false);
setValidStrings_("algorithm", ListUtils::create<String>("PEPMatrix,PEPIons,best,worst,average,ranks"));
// subsections appear in alphabetical (?) order, independent of the order
// in which they were registered:
registerSubsection_("PEPIons", "PEPIons algorithm parameters");
registerSubsection_("PEPMatrix", "PEPMatrix algorithm parameters");
}
Param getSubsectionDefaults_(const String& section) const override
{
Param algo_params;
if (section == "PEPMatrix")
{
algo_params = ConsensusIDAlgorithmPEPMatrix().getDefaults();
}
else // section == "PEPIons"
{
algo_params = ConsensusIDAlgorithmPEPIons().getDefaults();
}
// remove parameters defined in the base class (to avoid duplicates):
algo_params.remove("filter:");
return algo_params;
}
void setProteinIdentifications_(vector<ProteinIdentification>& prot_ids)
{
// modification params are necessary for further analysis tools (e.g. LuciPHOr2)
set<String> fixed_mods_set;
set<String> var_mods_set;
StringList merged_spectra_data;
String engine = prot_ids[0].getSearchEngine();
String version = prot_ids[0].getSearchEngineVersion();
// merge proteins
unordered_set<String> seen_proteins;
vector<ProteinHit> merged_proteins;
for (Size i = 0; i < prot_ids.size(); ++i)
{
for (auto& hit : prot_ids[i].getHits())
{
const auto& iter_inserted = seen_proteins.emplace(hit.getAccession());
if (iter_inserted.second)
{
merged_proteins.push_back(std::move(hit));
}
}
}
for (ProteinIdentification& prot : prot_ids)
{
ProteinIdentification::SearchParameters search_params(prot.getSearchParameters());
std::copy(search_params.fixed_modifications.begin(), search_params.fixed_modifications.end(), std::inserter(fixed_mods_set, fixed_mods_set.end()));
std::copy(search_params.variable_modifications.begin(), search_params.variable_modifications.end(), std::inserter(var_mods_set, var_mods_set.end()));
StringList spectra_data;
prot.getPrimaryMSRunPath(spectra_data);
std::copy(spectra_data.begin(), spectra_data.end(), std::inserter(merged_spectra_data, merged_spectra_data.end()));
}
ProteinIdentification::SearchParameters search_params;
std::vector<String> fixed_mods(fixed_mods_set.begin(), fixed_mods_set.end());
std::vector<String> var_mods(var_mods_set.begin(), var_mods_set.end());
search_params.fixed_modifications = fixed_mods;
search_params.variable_modifications = var_mods;
prot_ids.clear();
prot_ids.resize(1);
prot_ids[0].getHits() = merged_proteins;
prot_ids[0].setDateTime(DateTime::now());
prot_ids[0].setSearchEngine("OpenMS/ConsensusID_" + algorithm_);
prot_ids[0].setSearchEngineVersion(VersionInfo::getVersion());
prot_ids[0].setSearchParameters(search_params);
//TODO for completeness we could in the other algorithms, collect all search engines and put them here
// or maybe put it in a DataProcessing step
//TODO actually this only makes sense if there was only one search engine. (see the alternative
// setProteinIdentificationSettings_)
// best, worst, average can also be used on PEP scores for different search engines. IDPEP does not
// overwrite the search engine (in contrast to PercolatorAdapter)
if (algorithm_ == "best" || algorithm_ == "worst" || algorithm_ == "average")
{
prot_ids[0].setMetaValue("ConsensusIDBaseSearch", engine + String(":") + version);
}
// make file name entries unique
std::sort(merged_spectra_data.begin(), merged_spectra_data.end());
StringList::iterator last = std::unique(merged_spectra_data.begin(), merged_spectra_data.end());
merged_spectra_data.erase(last, merged_spectra_data.end());
prot_ids[0].setPrimaryMSRunPath(merged_spectra_data);
}
tuple<String, String, ProteinIdentification::SearchParameters> getOriginalSearchEngineSettings_(const ProteinIdentification& prot)
{
String engine = prot.getSearchEngine();
const ProteinIdentification::SearchParameters& old_sp = prot.getSearchParameters();
if (engine != "Percolator")
{
return std::tie(engine, prot.getSearchEngineVersion(), old_sp);
}
else
{
String original_SE = "Unknown";
String original_SE_ver = "0.0";
vector<String> mvkeys;
old_sp.getKeys(mvkeys);
for (const String & mvkey : mvkeys)
{
if (mvkey.hasPrefix("SE:"))
{
original_SE = mvkey.substr(3);
original_SE_ver = old_sp.getMetaValue(mvkey);
break; // multiSE percolator before consensusID not allowed; we take first only
}
}
ProteinIdentification::SearchParameters sp{};
for (const String & mvkey : mvkeys)
{
if (mvkey.hasPrefix(original_SE))
{
if (mvkey.hasSuffix("db"))
{
sp.db = old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("db_version"))
{
sp.db_version = old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("taxonomy"))
{
sp.taxonomy = old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("charges"))
{
sp.charges = old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("fixed_modifications"))
{
const String& s = old_sp.getMetaValue(mvkey);
s.split(',', sp.fixed_modifications);
}
else if (mvkey.hasSuffix("variable_modifications"))
{
const String& s = old_sp.getMetaValue(mvkey);
s.split(',', sp.variable_modifications);
}
else if (mvkey.hasSuffix("missed_cleavages"))
{
sp.missed_cleavages = (UInt) old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("fragment_mass_tolerance"))
{
sp.fragment_mass_tolerance = (double) old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("fragment_mass_tolerance_ppm"))
{
sp.fragment_mass_tolerance_ppm = old_sp.getMetaValue(mvkey).toBool();
}
else if (mvkey.hasSuffix("precursor_mass_tolerance"))
{
sp.precursor_mass_tolerance = (double) old_sp.getMetaValue(mvkey);
}
else if (mvkey.hasSuffix("precursor_mass_tolerance_ppm"))
{
sp.precursor_mass_tolerance_ppm = old_sp.getMetaValue(mvkey).toBool();
}
else if (mvkey.hasSuffix("digestion_enzyme"))
{
Protease p = *(ProteaseDB::getInstance()->getEnzyme(old_sp.getMetaValue(mvkey)));
sp.digestion_enzyme = p;
}
else if (mvkey.hasSuffix("enzyme_term_specificity"))
{
sp.enzyme_term_specificity = static_cast<EnzymaticDigestion::Specificity>((int) old_sp.getMetaValue(mvkey));
}
}
}
return std::tie(original_SE, original_SE_ver, sp);
}
}
void setProteinIdentificationSettings_(ProteinIdentification& prot_id,
vector<tuple<String, String, ProteinIdentification::SearchParameters>>& se_ver_settings,
vector<tuple<String, String, vector<pair<String, String>>>>& rescore_ver_settings)
{
// modification params are necessary for further analysis tools (e.g. LuciPHOr2)
set<String> fixed_mods_set;
set<String> var_mods_set;
set<EnzymaticDigestion::Specificity> specs;
double prec_tol_ppm = 0.;
double prec_tol_da = 0.;
double frag_tol_ppm = 0.;
double frag_tol_da = 0.;
int min_chg = 10000;
int max_chg = -10000;
Size mc = 0;
// we sort them to pick the same entries, no matter the order of the inputs
set<String, std::greater<String>> enzymes;
set<String, std::greater<String>> dbs;
// use the first settings as basis (i.e. copy over db and enzyme and tolerance)
// we assume that they are the same or similar
ProteinIdentification::SearchParameters new_sp = get<2>(se_ver_settings[0]);
// First check the rescoring procedure; it should at least be the same tool.
// "" = IDPosteriorProbability. If parts were not rescored at all, they won't have a PEP annotated,
// and the tool will fail in the next step (beginning of algorithm).
// TODO maybe also consolidate/merge those settings. But they are currently only used for reporting.
const auto& final_rescore_ver_setting = rescore_ver_settings[0];
const String& final_rescore_algo = get<0>(final_rescore_ver_setting);
const String& final_rescore_algo_version = get<1>(final_rescore_ver_setting);
for (const auto& rescore_ver_setting : rescore_ver_settings)
{
if (get<0>(rescore_ver_setting) != final_rescore_algo
|| get<1>(rescore_ver_setting) != final_rescore_algo_version)
{
OPENMS_LOG_WARN << "Warning: Trying to use ConsensusID on searches with different rescoring algorithms. " +
get<0>(rescore_ver_setting) + " vs " + final_rescore_algo;
}
}
if (!final_rescore_algo.empty()) new_sp.setMetaValue(final_rescore_algo, final_rescore_algo_version);
for (const auto& s : get<2>(final_rescore_ver_setting))
{
// the metavalue names in s.first already contain the algorithm name. No need to prepend
new_sp.setMetaValue(s.first, s.second);
}
bool allsamese = true;
for (const auto& se_ver_setting : se_ver_settings)
{
allsamese = allsamese &&
(get<0>(se_ver_setting) == get<0>(se_ver_settings[0]) &&
get<1>(se_ver_setting) == get<1>(se_ver_settings[0]));
const ProteinIdentification::SearchParameters& sp = get<2>(se_ver_setting);
const String& SE = get<0>(se_ver_setting);
new_sp.setMetaValue("SE:" + SE, get<1>(se_ver_setting));
new_sp.setMetaValue(SE+":db",sp.db);
new_sp.setMetaValue(SE+":db_version",sp.db_version);
new_sp.setMetaValue(SE+":taxonomy",sp.taxonomy);
new_sp.setMetaValue(SE+":charges",sp.charges);
new_sp.setMetaValue(SE+":fixed_modifications",ListUtils::concatenate(sp.fixed_modifications, ","));
new_sp.setMetaValue(SE+":variable_modifications",ListUtils::concatenate(sp.variable_modifications, ","));
new_sp.setMetaValue(SE+":missed_cleavages",sp.missed_cleavages);
new_sp.setMetaValue(SE+":fragment_mass_tolerance",sp.fragment_mass_tolerance);
new_sp.setMetaValue(SE+":fragment_mass_tolerance_unit",sp.fragment_mass_tolerance_ppm ? "ppm" : "Da");
new_sp.setMetaValue(SE+":precursor_mass_tolerance",sp.precursor_mass_tolerance);
new_sp.setMetaValue(SE+":precursor_mass_tolerance_unit",sp.precursor_mass_tolerance_ppm ? "ppm" : "Da");
new_sp.setMetaValue(SE+":digestion_enzyme",sp.digestion_enzyme.getName());
new_sp.setMetaValue(SE+":enzyme_term_specificity",EnzymaticDigestion::NamesOfSpecificity[sp.enzyme_term_specificity]);
const auto& chg_pair = sp.getChargeRange();
if (chg_pair.first != 0 && chg_pair.first < min_chg)
{
min_chg = chg_pair.first;
}
if (chg_pair.second != 0 && chg_pair.second > max_chg)
{
max_chg = chg_pair.second;
}
if (sp.missed_cleavages > mc )
{
mc = sp.missed_cleavages;
}
if (sp.fragment_mass_tolerance_ppm)
{
if (sp.fragment_mass_tolerance > frag_tol_ppm) frag_tol_ppm = sp.fragment_mass_tolerance;
}
else
{
if (sp.fragment_mass_tolerance > frag_tol_da) frag_tol_da = sp.fragment_mass_tolerance;
}
if (sp.precursor_mass_tolerance_ppm)
{
if (sp.precursor_mass_tolerance > prec_tol_ppm) prec_tol_ppm = sp.precursor_mass_tolerance;
}
else
{
if (sp.precursor_mass_tolerance > prec_tol_da) prec_tol_da = sp.precursor_mass_tolerance;
}
enzymes.insert(sp.digestion_enzyme.getName());
dbs.insert(sp.db);
specs.insert(sp.enzyme_term_specificity);
std::copy(sp.fixed_modifications.begin(), sp.fixed_modifications.end(), std::inserter(fixed_mods_set, fixed_mods_set.end()));
std::copy(sp.variable_modifications.begin(), sp.variable_modifications.end(), std::inserter(var_mods_set, var_mods_set.end()));
}
if (specs.find(EnzymaticDigestion::SPEC_NONE) != specs.end())
{
new_sp.enzyme_term_specificity = EnzymaticDigestion::SPEC_NONE;
}
else if (specs.find(EnzymaticDigestion::SPEC_SEMI) != specs.end())
{
new_sp.enzyme_term_specificity = EnzymaticDigestion::SPEC_SEMI;
}
else if (specs.find(EnzymaticDigestion::SPEC_NONTERM) != specs.end())
{
new_sp.enzyme_term_specificity = EnzymaticDigestion::SPEC_NONTERM;
}
else if (specs.find(EnzymaticDigestion::SPEC_NOCTERM) != specs.end())
{
new_sp.enzyme_term_specificity = EnzymaticDigestion::SPEC_NOCTERM;
}
else if (specs.find(EnzymaticDigestion::SPEC_FULL) != specs.end())
{
new_sp.enzyme_term_specificity = EnzymaticDigestion::SPEC_FULL;
}
std::vector<String> fixed_mods(fixed_mods_set.begin(), fixed_mods_set.end());
std::vector<String> var_mods(var_mods_set.begin(), var_mods_set.end());
new_sp.fixed_modifications = fixed_mods;
new_sp.variable_modifications = var_mods;
String final_enz;
for (const auto& enz : enzymes)
{
if (enz != "unknown_enzyme")
{
// Although the set should be sorted to start with the longest
// versions, this extends "" to Trypsin and e.g. Trypsin to Trypsin/P
if (enz.hasSubstring(final_enz))
{
final_enz = enz;
}
else if (!final_enz.hasSubstring(enz))
{
OPENMS_LOG_WARN << "Warning: Trying to use ConsensusID on searches with incompatible enzymes."
" OpenMS officially supports only one enzyme per search. Using " + final_enz + " to (incompletely)"
" represent the combined run. This might or might not lead to inconsistencies downstream.";
}
}
}
new_sp.digestion_enzyme = *ProteaseDB::getInstance()->getEnzyme(final_enz);
String final_db = *dbs.begin();
String final_db_bn = final_db;
final_db_bn.substitute("\\","/");
final_db_bn = File::basename(final_db_bn);
// we need to copy to substitute anyway
for (auto db : dbs) // OMS_CODING_TEST_EXCLUDE
{
db.substitute("\\","/");
if (File::basename(db) != final_db_bn)
{
OPENMS_LOG_WARN << "Warning: Trying to use ConsensusID on searches with different databases."
" OpenMS officially supports only one database per search. Using " + final_db + " to (incompletely)"
" represent the combined run. This might or might not lead to inconsistencies downstream.";
}
}
new_sp.charges = String(min_chg) + "-" + String(max_chg);
if (prec_tol_da > 0 && prec_tol_ppm > 0)
{
OPENMS_LOG_WARN << "Warning: Trying to use ConsensusID on searches with incompatible "
"precursor tolerance units. Using Da for the combined run.";
}
if (prec_tol_da > 0)
{
new_sp.precursor_mass_tolerance = prec_tol_da;
new_sp.precursor_mass_tolerance_ppm = false;
}
else
{
new_sp.precursor_mass_tolerance = prec_tol_ppm;
new_sp.precursor_mass_tolerance_ppm = true;
}
if (frag_tol_da > 0 && frag_tol_ppm > 0)
{
OPENMS_LOG_WARN << "Warning: Trying to use ConsensusID on searches with incompatible "
"fragment tolerance units. Using Da for the combined run.";
}
if (frag_tol_da > 0)
{
new_sp.fragment_mass_tolerance = frag_tol_da;
new_sp.fragment_mass_tolerance_ppm = false;
}
else
{
new_sp.fragment_mass_tolerance = frag_tol_ppm;
new_sp.fragment_mass_tolerance_ppm = true;
}
new_sp.missed_cleavages = mc;
prot_id.setDateTime(DateTime::now());
prot_id.setSearchEngine("OpenMS/ConsensusID_" + algorithm_);
prot_id.setSearchEngineVersion(VersionInfo::getVersion());
prot_id.setSearchParameters(new_sp);
//TODO for completeness we could in the other algorithms, collect all search engines and put them here
// or maybe put it in a DataProcessing step
if (allsamese)
{
prot_id.setMetaValue("ConsensusIDBaseSearch", get<0>(se_ver_settings[0]) + String(":") + get<1>(se_ver_settings[0]));
}
}
template <typename MapType>
void processFeatureOrConsensusMap_(MapType& input_map,
ConsensusIDAlgorithm* consensus)
{
// Problem with feature data: IDs from multiple spectra may be attached to
// a (consensus) feature, so we may have multiple IDs from the same search
// engine. This means that we can't just use the number of search runs as
// our "baseline" for the number of identifications (parameter
// "number_of_runs")! To work around this, we multiply the number of
// different ID runs with the max. number of times we see the same ID run
// in the annotations of a feature.
map<String, String> runid_to_se;
map<String, Size> id_mapping; // mapping: run ID -> index
Size number_of_runs = input_map.getProteinIdentifications().size();
for (Size i = 0; i < number_of_runs; ++i)
{
const auto& prot = input_map.getProteinIdentifications()[i];
id_mapping[prot.getIdentifier()] = i;
if (keep_old_scores_)
{
runid_to_se[prot.getIdentifier()] = prot.getOriginalSearchEngineName();
}
}
// compute consensus:
for (typename MapType::Iterator map_it = input_map.begin();
map_it != input_map.end(); ++map_it)
{
PeptideIdentificationList& ids = map_it->getPeptideIdentifications();
vector<Size> times_seen(number_of_runs);
for (PeptideIdentification& pep : ids)
{
++times_seen[id_mapping[pep.getIdentifier()]];
}
Size n_repeats = *max_element(times_seen.begin(), times_seen.end());
consensus->apply(ids, runid_to_se, number_of_runs * n_repeats);
}
// create new identification run:
setProteinIdentifications_(input_map.getProteinIdentifications());
// remove outdated information (protein references will be broken):
input_map.getUnassignedPeptideIdentifications().clear();
}
ExitCodes main_(int, const char**) override
{
StringList in = getStringList_("in");
FileTypes::Type in_type = FileHandler::getType(in[0]);
String out = getStringOption_("out");
double rt_delta = getDoubleOption_("rt_delta");
double mz_delta = getDoubleOption_("mz_delta");
keep_old_scores_ = getFlag_("filter:keep_old_scores");
//----------------------------------------------------------------
// set up ConsensusID
//----------------------------------------------------------------
ConsensusIDAlgorithm* consensus;
// general algorithm parameters:
Param algo_params = ConsensusIDAlgorithmBest().getDefaults();
algorithm_ = getStringOption_("algorithm");
if (algorithm_ == "PEPMatrix")
{
consensus = new ConsensusIDAlgorithmPEPMatrix();
// add algorithm-specific parameters:
algo_params.merge(getParam_().copy("PEPMatrix:", true));
}
else if (algorithm_ == "PEPIons")
{
consensus = new ConsensusIDAlgorithmPEPIons();
// add algorithm-specific parameters:
algo_params.merge(getParam_().copy("PEPIons:", true));
}
else if (algorithm_ == "best")
{
consensus = new ConsensusIDAlgorithmBest();
}
else if (algorithm_ == "worst")
{
consensus = new ConsensusIDAlgorithmWorst();
}
else if (algorithm_ == "average")
{
consensus = new ConsensusIDAlgorithmAverage();
}
else // algorithm_ == "ranks"
{
consensus = new ConsensusIDAlgorithmRanks();
}
algo_params.update(getParam_(), false, getGlobalLogDebug()); // update general params.
consensus->setParameters(algo_params);
//----------------------------------------------------------------
// idXML
//----------------------------------------------------------------
if (in_type == FileTypes::IDXML)
{
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
if (getFlag_("per_spectrum"))
{
map<String, unordered_map<String, PeptideIdentificationList>> grouping_per_file;
map<String, unordered_set<String>> seen_proteins_per_file;
map<String, Size> runid_to_old_run_idx;
map<String, String> runid_to_old_se;
// the values (new_run_idx) in mzml_to_new_run_idx correspond to the indices in mzml_to_sesettings
map<String, Size> mzml_to_new_run_idx;
vector<vector<tuple<String, String, ProteinIdentification::SearchParameters>>> mzml_to_sesettings;
vector<vector<tuple<String, String, vector<pair<String,String>>>>> mzml_to_rescoresettings;
for (const auto& infile : in)
{
vector<ProteinIdentification> tmp_prot_ids;
PeptideIdentificationList tmp_pep_ids;
FileHandler().loadIdentifications(infile, tmp_prot_ids, tmp_pep_ids, {FileTypes::IDXML});
Size idx(0);
for (const auto& prot : tmp_prot_ids)
{
runid_to_old_run_idx[prot.getIdentifier()] = idx++;
if (keep_old_scores_)
{
runid_to_old_se[prot.getIdentifier()] = prot.getOriginalSearchEngineName();
}
StringList original_files;
prot.getPrimaryMSRunPath(original_files);
for (auto& f : original_files)
{
std::replace( f.begin(), f.end(), '\\', '/');
f = FileHandler::stripExtension(File::basename(f)); // some SE adapters write full paths, some may use raw
}
if (original_files.size() != 1)
{
//TODO in theory you could also compare the whole StringList (if you want to consensusID
// a whole "merge" of multiple ID files (e.g. fractions)
throw Exception::InvalidValue(
__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Currently only ID runs on exactly one mzML file are supported. "
"Run " + prot.getIdentifier() + " contains no 'file_origin' UserParam (report issue) or has multiple entries in it (avoid merging).", String(original_files.size()));
}
String original_file = original_files[0];
auto iter_inserted = seen_proteins_per_file.emplace(original_file, unordered_set<String>{});
const auto se_ver_settings = getOriginalSearchEngineSettings_(prot);
tuple<String, String, vector<pair<String,String>>> rescore_ver_settings{"","",vector<pair<String,String>>()};
//TODO find a way to get/check IDPEP params.
if (prot.getSearchEngine() == "Percolator")
{
get<0>(rescore_ver_settings) = prot.getSearchEngine();
get<1>(rescore_ver_settings) = prot.getSearchEngineVersion();
const auto& sp = prot.getSearchParameters();
vector<String> mvkeys;
sp.getKeys(mvkeys);
for (const String & mvkey : mvkeys)
{
if (mvkey.hasPrefix("Percolator:"))
{
// we do not cut the tool (here Percolator) prefix since we will use it as is
// in the new params
get<2>(rescore_ver_settings).emplace_back(mvkey, sp.getMetaValue(mvkey));
}
}
}
if (iter_inserted.second)
{
mzml_to_new_run_idx[original_file] = prot_ids.size();
mzml_to_sesettings.emplace_back(vector<tuple<String, String, ProteinIdentification::SearchParameters>>{});
mzml_to_sesettings.back().emplace_back(se_ver_settings);
mzml_to_rescoresettings.emplace_back(vector<tuple<String, String, vector<pair<String,String>>>>{});
mzml_to_rescoresettings.back().emplace_back(rescore_ver_settings);
prot_ids.emplace_back(ProteinIdentification());
prot_ids.back().setIdentifier("ConsensusID for " + original_file);
}
else
{
mzml_to_sesettings[mzml_to_new_run_idx[original_file]].emplace_back(se_ver_settings);
mzml_to_rescoresettings[mzml_to_new_run_idx[original_file]].emplace_back(rescore_ver_settings);
}
for (auto& hit : prot.getHits())
{
auto acciter_inserted = iter_inserted.first->second.emplace(hit.getAccession());
if (acciter_inserted.second)
{
prot_ids[mzml_to_new_run_idx[original_file]].getHits().emplace_back(std::move(hit));
}
}
}
for (auto& pep_id : tmp_pep_ids)
{
StringList original_files;
const ProteinIdentification& old = tmp_prot_ids[runid_to_old_run_idx[pep_id.getIdentifier()]];
old.getPrimaryMSRunPath(original_files); // the size should have been checked during the loop over proteins
for (auto& f : original_files)
{
std::replace( f.begin(), f.end(), '\\', '/');
f = FileHandler::stripExtension(File::basename(f)); // some SE adapters write full paths, some may use raw
}
String original_file = original_files[0];
auto iter_inserted = grouping_per_file.emplace(original_file, unordered_map<String,PeptideIdentificationList>{});
if (pep_id.metaValueExists("spectrum_reference"))
{
String nativeID = pep_id.getSpectrumReference();
auto nativeid_iter_inserted = iter_inserted.first->second.emplace(nativeID, PeptideIdentificationList{});
nativeid_iter_inserted.first->second.emplace_back(std::move(pep_id));
}
}
}
for (auto& file_ref_peps : grouping_per_file)
{
Size new_run_id = mzml_to_new_run_idx[file_ref_peps.first];
ProteinIdentification& to_put = prot_ids[new_run_id];
// Note: we assume that at least one of the inputs had mzML as an extension
// we could keep track of it but IMHO we should not allow raw there at all (just complicates things)
to_put.setPrimaryMSRunPath({file_ref_peps.first + ".mzML"});
setProteinIdentificationSettings_(to_put, mzml_to_sesettings[new_run_id], mzml_to_rescoresettings[new_run_id]);
for (const auto& ref_peps : file_ref_peps.second)
{
PeptideIdentificationList peps = ref_peps.second;
if (peps.empty())
{
continue; // something went wrong; skip this spectrum
}
double mz = peps[0].getMZ();
double rt = peps[0].getRT();
// has to have a ref, save it, since apply might modify everything
String ref = peps[0].getSpectrumReference();
consensus->apply(peps, runid_to_old_se, mzml_to_sesettings[new_run_id].size());
for (auto& p : peps)
{
p.setIdentifier(to_put.getIdentifier());
p.setMZ(mz);
p.setRT(rt);
p.setSpectrumReference(ref);
//TODO copy other meta values from the originals? They need to be collected
// in the algorithm subclasses though first
pep_ids.emplace_back(std::move(p));
}
}
}
}
else // link spectra by RT and mz proximity and do ConsensusID on their PSMs
{
if (in.size() != 1)
{
OPENMS_LOG_FATAL_ERROR << "ConsensusID on idXML without the -per_spectrum flag expects a single idXML file. "
"Please merge the files with IDMerger using its default settings." << std::endl;
return ILLEGAL_PARAMETERS;
}
// note: this requires a single merged idXML file.
FileHandler().loadIdentifications(in[0], prot_ids, pep_ids, {FileTypes::IDXML});
if (prot_ids.size() == 1)
{
OPENMS_LOG_FATAL_ERROR << "ConsensusID on idXML without the -per_spectrum flag expects a merged idXML file "
"with multiple runs. Only one run was found in the input file." << std::endl;
return INCOMPATIBLE_INPUT_DATA;
}
// merge peptide IDs by precursor position - this is equivalent to a
// feature linking problem (peptide IDs from different ID runs <->
// features from different maps), so we bring the data into a format
// suitable for a feature grouping algorithm:
vector<FeatureMap> maps(prot_ids.size());
map<String, String> runid_to_se;
map<String, Size> id_mapping; // mapping: run ID -> index (of feature map)
for (Size i = 0; i < prot_ids.size(); ++i)
{
id_mapping[prot_ids[i].getIdentifier()] = i;
if (keep_old_scores_)
{
runid_to_se[prot_ids[i].getIdentifier()] = prot_ids[i].getOriginalSearchEngineName();
}
}
for (PeptideIdentification& pep : pep_ids)
{
String run_id = pep.getIdentifier();
if (!pep.hasRT() || !pep.hasMZ())
{
OPENMS_LOG_FATAL_ERROR << "Peptide ID without RT and/or m/z information found in identification run '" + run_id + "'.\nMake sure that this information is included for all IDs when generating/converting search results. Aborting!" << endl;
return INCOMPATIBLE_INPUT_DATA;
}
Feature feature;
feature.setRT(pep.getRT());
feature.setMZ(pep.getMZ());
feature.getPeptideIdentifications().push_back(pep);
maps[id_mapping[run_id]].push_back(feature);
}
// precondition for "FeatureGroupingAlgorithmQT::group":
for (FeatureMap& map : maps)
{
map.updateRanges();
}
FeatureGroupingAlgorithmQT linker;
Param linker_params = linker.getDefaults();
linker_params.setValue("use_identifications", "false");
linker_params.setValue("ignore_charge", "true");
linker_params.setValue("distance_RT:max_difference", rt_delta);
linker_params.setValue("distance_MZ:max_difference", mz_delta);
linker_params.setValue("distance_MZ:unit", "Da");
linker.setParameters(linker_params);
ConsensusMap grouping;
linker.group(maps, grouping);
Size old_size = prot_ids.size();
// create new identification run
setProteinIdentifications_(prot_ids);
// compute consensus
pep_ids.clear();
for (auto& cfeature : grouping)
{
auto& ids = cfeature.getPeptideIdentifications();
consensus->apply(ids, runid_to_se, old_size);
if (!ids.empty())
{
PeptideIdentification& pep_id = ids[0];
// hits may be empty due to filtering (parameter "min_support");
// in that case skip to avoid a warning from "IDXMLFile::store":
if (!pep_id.getHits().empty())
{
pep_id.setIdentifier(prot_ids[0].getIdentifier());
pep_id.setRT(cfeature.getRT());
pep_id.setMZ(cfeature.getMZ());
pep_ids.push_back(pep_id);
}
}
}
}
// store consensus
FileHandler().storeIdentifications(out, prot_ids, pep_ids, {FileTypes::IDXML});
}
//----------------------------------------------------------------
// featureXML
//----------------------------------------------------------------
if (in_type == FileTypes::FEATUREXML)
{
FeatureMap map;
FileHandler().loadFeatures(in[0], map, {FileTypes::FEATUREXML});
processFeatureOrConsensusMap_(map, consensus);
FileHandler().storeFeatures(out, map, {FileTypes::FEATUREXML});
}
//----------------------------------------------------------------
// consensusXML
//----------------------------------------------------------------
if (in_type == FileTypes::CONSENSUSXML)
{
ConsensusMap map;
FileHandler().loadConsensusFeatures(in[0], map, {FileTypes::CONSENSUSXML});
processFeatureOrConsensusMap_(map, consensus);
FileHandler().storeConsensusFeatures(out, map, {FileTypes::CONSENSUSXML});
}
delete consensus;
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPConsensusID tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/SequenceCoverageCalculator.cpp | .cpp | 10,364 | 271 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Nico Pfeifer, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <unordered_map>
#include <numeric>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SequenceCoverageCalculator SequenceCoverageCalculator
@brief Annotates protein sequence coverage information to idXML files and prints summary statistics.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SequenceCoverageCalculator.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SequenceCoverageCalculator.html
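As a sketch of the underlying idea, sequence coverage can be computed by marking every
residue hit by at least one peptide match and taking the covered fraction. The helper
below is illustrative only and is not part of this tool's API; intervals are assumed
to be end-exclusive (start, end) index pairs into the protein sequence.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Fraction of a protein sequence covered by a set of (start, end) peptide
// match intervals (end-exclusive). Mirrors the per-residue flag array idea:
// mark covered positions, then count the marked fraction.
double coverageFraction(const std::string& protein,
                        const std::vector<std::pair<size_t, size_t>>& hits)
{
  std::vector<char> covered(protein.size(), 0);
  for (const auto& h : hits)
  {
    for (size_t i = h.first; i < h.second && i < covered.size(); ++i)
    {
      covered[i] = 1; // overlapping peptides are counted only once
    }
  }
  size_t n = 0;
  for (char c : covered) n += c;
  return protein.empty() ? 0.0 : double(n) / protein.size();
}
```

Two overlapping peptides covering residues 0-2 and 2-4 of a 10-residue protein
yield a coverage of 0.5, since each residue is counted once.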
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSequenceCoverageCalculator :
public TOPPBase
{
public:
TOPPSequenceCoverageCalculator() :
TOPPBase("SequenceCoverageCalculator", "Annotates coverage information to idXML files.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in_database", "<file>", "", "input file containing the database in FASTA format");
setValidFormats_("in_database", ListUtils::create<String>("fasta"));
registerInputFile_("in_peptides", "<file>", "", "input file containing the identified peptides");
setValidFormats_("in_peptides", ListUtils::create<String>("idXML"), true);
registerOutputFile_("out", "<file>", "", "Optional idXML output file with coverage annotations added to the protein hits. Summary statistics are written to the log.", false);
setValidFormats_("out", ListUtils::create<String>("idXML"), true);
}
void getStartAndEndIndex(const String& sequence, const String& substring, pair<Size, Size>& indices)
{
indices.first = 0;
indices.second = 0;
if (sequence.hasSubstring(substring))
{
for (Size i = 0; i <= sequence.size() - substring.size(); ++i)
{
Size temp_index = i;
Size temp_count = 0;
while (temp_index < sequence.size()
&& temp_count < substring.size()
&& sequence.at(temp_index) == substring.at(temp_index - i))
{
++temp_index;
++temp_count;
}
if (temp_count == substring.size())
{
indices.first = i;
indices.second = temp_index;
break; // first occurrence found
}
}
}
}
struct CoverageInfo
{
double coverage{}; // fraction of sequence covered by peptides
Size count{}; // number of unique peptides
Size mod_count{}; // number of unique modified peptides
};
ExitCodes outputTo_(ostream& os, String out)
{
vector<ProteinIdentification> protein_identifications;
PeptideIdentificationList identifications;
vector<FASTAFile::FASTAEntry> proteins;
vector<double> statistics;
vector<Size> counts;
vector<Size> mod_counts;
vector<PeptideHit> temp_hits;
vector<Size> coverage;
Size spectrum_count = 0;
unordered_map<String, Size> unique_peptides;
unordered_map<String, Size> temp_unique_peptides;
unordered_map<String, Size> temp_modified_unique_peptides;
protein_identifications.push_back(ProteinIdentification());
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String inputfile_name = getStringOption_("in_peptides");
String database_name = getStringOption_("in_database");
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
String document_id;
FileHandler().loadIdentifications(inputfile_name, protein_identifications, identifications, {FileTypes::IDXML});
FASTAFile().load(database_name, proteins);
statistics.resize(proteins.size(), 0.);
counts.resize(proteins.size(), 0);
mod_counts.resize(proteins.size(), 0);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
unordered_map<String, CoverageInfo> prot2cov;
os << "proteinID\tcoverage (%)\tunique hits\n";
for (Size j = 0; j < proteins.size(); ++j)
{
coverage.clear();
coverage.resize(proteins[j].sequence.size(), 0);
temp_unique_peptides.clear();
temp_modified_unique_peptides.clear();
for (Size i = 0; i < identifications.size(); ++i)
{
if (!identifications[i].empty())
{
if (identifications[i].getHits().size() > 1)
{
OPENMS_LOG_ERROR << "Spectrum with more than one identification found, which is not allowed.\n"
<< "Use the IDFilter with the -best_hits option to filter for best hits." << endl;
return ILLEGAL_PARAMETERS;
}
set<String> accession;
accession.insert(proteins[j].identifier);
temp_hits = PeptideIdentification::getReferencingHits(identifications[i].getHits(), accession);
if (temp_hits.size() == 1)
{
pair<Size, Size> indices;
getStartAndEndIndex(proteins[j].sequence, temp_hits[0].getSequence().toUnmodifiedString(), indices);
for (Size k = indices.first; k < indices.second; ++k)
{
coverage[k] = 1;
}
if (indices.first != indices.second)
{
// os << temp_hits[0].getSequence().toUnmodifiedString() << endl;
}
++spectrum_count;
if (unique_peptides.find(temp_hits[0].getSequence().toString()) == unique_peptides.end())
{
unique_peptides.insert(make_pair(temp_hits[0].getSequence().toString(), 0));
}
if (temp_unique_peptides.find(temp_hits[0].getSequence().toUnmodifiedString()) == temp_unique_peptides.end())
{
temp_unique_peptides.insert(make_pair(temp_hits[0].getSequence().toUnmodifiedString(), 0));
}
if (temp_modified_unique_peptides.find(temp_hits[0].getSequence().toString()) == temp_modified_unique_peptides.end())
{
temp_modified_unique_peptides.insert(make_pair(temp_hits[0].getSequence().toString(), 0));
}
}
}
}
/* << proteins[j].sequence << endl;
for (Size k = 0; k < coverage.size(); ++k)
{
os << coverage[k];
}
os << endl;
*/
// statistics[j] = make_pair(,
// accumulate(coverage.begin(), coverage.end(), 0) / proteins[j].sequence.size());
statistics[j] = ((double) accumulate(coverage.begin(), coverage.end(), Size(0))) / proteins[j].sequence.size();
counts[j] = temp_unique_peptides.size();
mod_counts[j] = temp_modified_unique_peptides.size();
// details for this protein
if (counts[j] > 0)
{
prot2cov[proteins[j].identifier] = { statistics[j], counts[j], mod_counts[j] };
os << proteins[j].identifier << "\t" << statistics[j] * 100 << "\t" << counts[j] << "\n";
}
// os << statistics[j] << endl;
}
// os << "Sum of coverage is " << accumulate(statistics.begin(), statistics.end(), 0.) << endl;
// update meta values in protein_identifications
for (auto& prot_id : protein_identifications)
{
for (auto& prot_hit : prot_id.getHits())
{
if (prot2cov.find(prot_hit.getAccession()) != prot2cov.end())
{
prot_hit.setMetaValue("coverage", prot2cov[prot_hit.getAccession()].coverage);
prot_hit.setMetaValue("unique_peptides", prot2cov[prot_hit.getAccession()].count);
prot_hit.setMetaValue("unique_modified_peptides", prot2cov[prot_hit.getAccession()].mod_count);
}
}
}
FileHandler().storeIdentifications(out, protein_identifications, identifications, {FileTypes::IDXML});
os << "Average coverage per protein is " << (accumulate(statistics.begin(), statistics.end(), 0.) / statistics.size()) << endl;
os << "Average number of peptides per protein is " << (((double) accumulate(counts.begin(), counts.end(), 0.)) / counts.size()) << endl;
os << "Average number of un/modified peptides per protein is " << (((double) accumulate(mod_counts.begin(), mod_counts.end(), 0.)) / mod_counts.size()) << endl;
os << "Number of identified spectra: " << spectrum_count << endl;
os << "Number of unique identified peptides: " << unique_peptides.size() << endl;
vector<double>::iterator it = statistics.begin();
vector<Size>::iterator it2 = counts.begin();
vector<Size>::iterator it3 = mod_counts.begin();
while (it != statistics.end())
{
if (*it == 0.)
{
it = statistics.erase(it);
it2 = counts.erase(it2);
it3 = mod_counts.erase(it3);
}
else
{
++it;
++it2;
++it3;
}
}
os << "Average coverage per found protein (" << statistics.size() << ") is " << (accumulate(statistics.begin(), statistics.end(), 0.) / statistics.size()) << endl;
os << "Average number of peptides per found protein is " << (((double) accumulate(counts.begin(), counts.end(), 0.)) / counts.size()) << endl;
os << "Average number of un/modified peptides per protein is " << (((double) accumulate(mod_counts.begin(), mod_counts.end(), 0.)) / mod_counts.size()) << endl;
return EXECUTION_OK;
}
ExitCodes main_(int, const char**) override
{
String out = getStringOption_("out");
return outputTo_(getGlobalLogInfo(), out);
}
};
int main(int argc, const char** argv)
{
TOPPSequenceCoverageCalculator tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/FeatureLinkerUnlabeledKD.cpp | .cpp | 4,482 | 124 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Johannes Veit $
// $Authors: Johannes Veit $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/FeatureGroupingAlgorithmKD.h>
#include "../topp/FeatureLinkerBase.cpp"
#include <fstream>
#include <iostream>
using namespace OpenMS;
using namespace std;
//
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FeatureLinkerUnlabeledKD FeatureLinkerUnlabeledKD
@brief Groups corresponding features across label-free experiments.
This tool produces results similar to those of FeatureLinkerUnlabeledQT, since it
optimizes a similar objective. However, this algorithm is more efficient
than FLQT: it uses a kd-tree for fast 2D region queries in m/z - RT space
and a sorted binary search tree to retrieve the current best cluster among the
remaining ones in O(1). Insertion and search in both trees run in O(log n).
The overall complexity of the algorithm is O(n log n) time and O(n) space.
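The kd-tree region query mentioned above can be sketched as follows. This is a
minimal, self-contained illustration of the data structure (implicit median-split
kd-tree over (RT, m/z) points), not the OpenMS implementation; all names are
hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal 2D kd-tree supporting rectangular region queries in (rt, mz) space.
struct Point { double rt, mz; };

struct KDTree {
  std::vector<Point> pts; // stored in kd order (median split per level)

  explicit KDTree(std::vector<Point> p) : pts(std::move(p)) { build(0, pts.size(), 0); }

  void build(size_t lo, size_t hi, int axis)
  {
    if (hi - lo <= 1) return;
    size_t mid = (lo + hi) / 2;
    // partition so the median on the current axis sits at 'mid'
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                     [axis](const Point& a, const Point& b) {
                       return axis == 0 ? a.rt < b.rt : a.mz < b.mz;
                     });
    build(lo, mid, 1 - axis);
    build(mid + 1, hi, 1 - axis);
  }

  // Collect all points with rt in [rt_lo, rt_hi] and mz in [mz_lo, mz_hi].
  void query(size_t lo, size_t hi, int axis, double rt_lo, double rt_hi,
             double mz_lo, double mz_hi, std::vector<Point>& out) const
  {
    if (lo >= hi) return;
    size_t mid = (lo + hi) / 2;
    const Point& p = pts[mid];
    if (p.rt >= rt_lo && p.rt <= rt_hi && p.mz >= mz_lo && p.mz <= mz_hi) out.push_back(p);
    double v = (axis == 0) ? p.rt : p.mz;
    double q_lo = (axis == 0) ? rt_lo : mz_lo;
    double q_hi = (axis == 0) ? rt_hi : mz_hi;
    // prune subtrees that cannot intersect the query rectangle
    if (q_lo <= v) query(lo, mid, 1 - axis, rt_lo, rt_hi, mz_lo, mz_hi, out);
    if (q_hi >= v) query(mid + 1, hi, 1 - axis, rt_lo, rt_hi, mz_lo, mz_hi, out);
  }
};

// Convenience wrapper: number of points inside the query rectangle.
size_t countInRegion(std::vector<Point> pts, double rt_lo, double rt_hi,
                     double mz_lo, double mz_hi)
{
  KDTree t(std::move(pts));
  std::vector<Point> out;
  t.query(0, t.pts.size(), 0, rt_lo, rt_hi, mz_lo, mz_hi, out);
  return out.size();
}
```

Each level alternates the split axis, and whole subtrees are skipped when the query
rectangle lies entirely on one side of the splitting value, which is what makes the
2D queries fast.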
In practice, the runtime of FeatureLinkerUnlabeledQT is often not
significantly worse than that of FeatureLinkerUnlabeledKD if the datasets
are relatively small and/or the value of the -nr_partitions parameter is
chosen large enough. If, however, the datasets are very large, and especially
if they are so dense that a partitioning based on the specified m/z
tolerance is no longer possible, then this algorithm becomes orders of
magnitude faster than FLQT.
Notably, this algorithm can be used to align featureXML files containing
unassembled mass traces (as produced by MassTraceExtractor), which is often
impossible for reasonably large datasets using other aligners, as these
datasets tend to be too dense and hence cannot be partitioned.
Prior to feature linking, this tool performs an (optional) retention time
transformation on the features using LOWESS regression in order to minimize
retention time differences between corresponding features across different
maps. These transformed RTs are used only internally. In the results, original
RTs will be reported.
The linking behavior can be influenced by specifying separately how to use
the available charge and adduct information. Options allow restricting
linking to features with the same adduct/charge (or lack thereof, i.e.
features with charge zero or no adduct annotation), additionally allowing
the linking of charged/adduct-annotated features with those having no
charge/adduct information, or allowing all features to be linked irrespective
of charge state/adduct information.
Note that the more relaxed the allowed grouping criteria, the larger the
internally used connected components become in terms of memory. More stringent
m/z or retention time tolerances may then be required.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FeatureLinkerUnlabeledKD.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FeatureLinkerUnlabeledKD.html
*/
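The O(1) best-cluster selection described above can be sketched with a sorted standard container. This is an illustrative stand-in, not the actual FeatureGroupingAlgorithmKD code; the `Cluster` type and helper names are invented for the example:

```cpp
// Illustrative sketch only (not OpenMS code): clusters are kept in a
// std::multiset (a sorted binary search tree) ordered by quality, so the
// current best cluster is read off the end in O(1), while insertion and
// removal cost O(log n).
#include <iterator>
#include <set>

struct Cluster { double quality; int id; };
struct ByQuality {
  bool operator()(const Cluster& a, const Cluster& b) const {
    return a.quality < b.quality;
  }
};
using ClusterTree = std::multiset<Cluster, ByQuality>;

// Return the id of the best (highest-quality) remaining cluster in O(1).
inline int bestClusterId(const ClusterTree& t) { return t.rbegin()->id; }

// Remove the best cluster in O(log n).
inline void popBestCluster(ClusterTree& t) { t.erase(std::prev(t.end())); }
```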
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFeatureLinkerUnlabeledKD :
public TOPPFeatureLinkerBase
{
public:
TOPPFeatureLinkerUnlabeledKD() :
TOPPFeatureLinkerBase("FeatureLinkerUnlabeledKD", "Groups corresponding features from multiple maps.")
{
setLogType(CMD);
}
protected:
void registerOptionsAndFlags_() override
{
TOPPFeatureLinkerBase::registerOptionsAndFlags_();
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
FeatureGroupingAlgorithmKD algo;
Param p = algo.getParameters();
return p;
}
ExitCodes main_(int, const char **) override
{
FeatureGroupingAlgorithmKD algo;
return TOPPFeatureLinkerBase::common_main_(&algo);
}
};
int main(int argc, const char ** argv)
{
TOPPFeatureLinkerUnlabeledKD tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/FileConverter.cpp | .cpp | 30,337 | 728 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Marc Sturm, Andreas Bertsch, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataCachedConsumer.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
// TODO add handler support for other access
#include <OpenMS/FORMAT/DTA2DFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/IBSpectraFile.h>
// TODO add handler support for other access
#include <OpenMS/FORMAT/MascotGenericFile.h>
// TODO: remove MZML header after we get cached and Transform working
#include <OpenMS/FORMAT/MzMLFile.h>
// TODO: remove MZXML header after we get cached and Transform working
#include <OpenMS/FORMAT/MzXMLFile.h>
#include <OpenMS/METADATA/ID/IdentificationDataConverter.h>
#include <OpenMS/KERNEL/ChromatogramTools.h>
#include <OpenMS/KERNEL/ConversionHelper.h>
#include <QStringList>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FileConverter FileConverter
@brief Converts between different MS file formats.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → FileConverter →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_GenericWrapper (e.g. for calling external converters) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> any tool operating on the output format</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any vendor software exporting supported formats (e.g. mzML) </td>
</tr>
</table>
</CENTER>
The main use of this tool is to convert data from external sources to the formats used by OpenMS/TOPP.
Maybe most importantly, data from MS experiments in a number of different formats can be converted to mzML,
the canonical file format used by OpenMS/TOPP for experimental data. (mzML is the PSI approved format and
supports traceability of analysis steps.)
Thermo raw files can be converted to mzML using the ThermoRawFileParser provided in the THIRDPARTY folder.
On Windows, a recent .NET framework needs to be installed. On Linux and macOS, the mono runtime needs to be
present and accessible via the -NET_executable parameter. The path to the ThermoRawFileParser can be set
via the -ThermoRaw_executable option.
For MaxQuant-flavoured mzXML the use of the advanced option '-force_MaxQuant_compatibility' is recommended.
Many different format conversions are supported, and some may be more useful than others. Depending on the
file formats involved, information can be lost during conversion, e.g. when converting featureXML to mzData.
In such cases a warning is shown.
The input and output file types are determined from the file extensions or from the first few lines of the
files. If file type determination is not possible, the input or output file type has to be given explicitly.
Conversion with the same output as input format is supported. In some cases, this can be helpful to remove
errors from files (e.g. the index), to update file formats to new versions, or to check whether information is lost upon
reading or writing.
Some information about the supported input types:
@ref OpenMS::MzMLFile "mzML"
@ref OpenMS::MzXMLFile "mzXML"
@ref OpenMS::MzDataFile "mzData"
@ref OpenMS::MascotGenericFile "mgf"
@ref OpenMS::DTA2DFile "dta2d"
@ref OpenMS::DTAFile "dta"
@ref OpenMS::FeatureXMLFile "featureXML"
@ref OpenMS::ConsensusXMLFile "consensusXML"
@ref OpenMS::MS2File "ms2"
@ref OpenMS::XMassFile "fid/XMASS"
@ref OpenMS::MsInspectFile "tsv"
@ref OpenMS::SpecArrayFile "peplist"
@ref OpenMS::KroenikFile "kroenik"
@ref OpenMS::EDTAFile "edta"
@ref OpenMS::SqMassFile "sqmass"
@ref OpenMS::OMSFile "oms"
@note See @ref TOPP_IDFileConverter for similar functionality for protein/peptide identification file formats.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FileConverter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FileConverter.html
*/
String extractCachedMetaFilename(const String& in)
{
// Special handling of cached mzML as input types:
// we expect two paired input files which we should read into exp
std::vector<String> split_out;
in.split(".cachedMzML", split_out);
if (split_out.size() != 2)
{
OPENMS_LOG_ERROR << "Cannot deduce base path from input '" << in
<< "' (note that '.cachedMzML' should only occur once as the final ending)" << std::endl;
return "";
}
String in_meta = split_out[0] + ".mzML";
return in_meta;
}
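The pairing convention handled above ('data.cachedMzML' alongside 'data.mzML') can be illustrated with plain `std::string`. This is a hypothetical helper for illustration, not part of OpenMS, and it checks only for a trailing ".cachedMzML" suffix rather than reproducing the exact split logic:

```cpp
// Illustrative sketch only (plain std::string, not OpenMS::String): derive
// the paired meta-mzML filename from a cached mzML path, mirroring the
// naming convention used by extractCachedMetaFilename above. Returns an
// empty string if the ".cachedMzML" suffix is missing.
#include <string>

inline std::string pairedMetaFilename(const std::string& in)
{
  const std::string suffix = ".cachedMzML";
  if (in.size() < suffix.size() ||
      in.compare(in.size() - suffix.size(), suffix.size(), suffix) != 0)
  {
    return "";  // not a cached mzML path
  }
  return in.substr(0, in.size() - suffix.size()) + ".mzML";
}
```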
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFileConverter :
public TOPPBase
{
public:
TOPPFileConverter() :
TOPPBase("FileConverter", "Converts between different MS file formats.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file to convert.");
registerStringOption_("in_type", "<type>", "", "Input file type -- default: determined from file extension or content\n", false, false); // optional and not advanced (for workflow engines to show this param)
vector<String> input_formats = {"mzML", "mzXML", "mgf", "raw", "cachedMzML", "mzData", "dta", "dta2d", "featureXML", "consensusXML", "ms2", "fid", "tsv", "peplist", "kroenik", "edta", "oms"};
setValidFormats_("in", input_formats);
setValidStrings_("in_type", input_formats);
registerStringOption_("UID_postprocessing", "<method>", "ensure", "unique ID post-processing for output data.\n'none' keeps current IDs even if invalid.\n'ensure' keeps current IDs but reassigns invalid ones.\n'reassign' assigns new unique IDs.", false, true);
String method("none,ensure,reassign");
setValidStrings_("UID_postprocessing", ListUtils::create<String>(method));
vector<String> output_formats = {"mzML", "mzXML", "cachedMzML", "mgf", "featureXML", "consensusXML", "edta", "mzData", "dta2d", "csv", "sqmass", "oms"};
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", output_formats);
registerStringOption_("out_type", "<type>", "", "Output file type -- default: determined from file extension or content\nNote: that not all conversion paths work or make sense.", false, false); // optional and not advanced (for workflow engines to show this param)
setValidStrings_("out_type", output_formats);
registerFlag_("TIC_DTA2D", "Export the TIC instead of the entire experiment in mzML/mzData/mzXML -> DTA2D conversions.", true);
registerFlag_("MGF_compact", "Use a more compact format when writing MGF (no zero-intensity peaks, limited number of decimal places)", true);
registerFlag_("force_MaxQuant_compatibility", "[mzXML output only] Make sure that MaxQuant can read the mzXML and set the msManufacturer to 'Thermo Scientific'.", true);
registerFlag_("force_TPP_compatibility", "[mzML output only] Make sure that TPP parsers can read the mzML and the precursor ion m/z in the file (otherwise it will be set to zero by the TPP).", true);
registerFlag_("convert_to_chromatograms", "[mzML output only] Assumes that the provided spectra represent data in SRM mode or targeted MS1 mode and converts them to chromatogram data.", true);
registerStringOption_("write_scan_index", "<toggle>", "true", "Append an index when writing mzML or mzXML files. Some external tools might rely on it.", false, true);
setValidStrings_("write_scan_index", ListUtils::create<String>("true,false"));
registerFlag_("lossy_compression", "Use numpress compression to achieve optimally small file size using linear compression for m/z domain and slof for intensity and float data arrays (attention: may cause small loss of precision; only for mzML data).", true);
registerDoubleOption_("lossy_mass_accuracy", "<error>", -1.0, "Desired (absolute) m/z accuracy for lossy compression (e.g. use 0.0001 for a mass accuracy of 0.2 ppm at 500 m/z, default uses -1.0 for maximal accuracy).", false, true);
registerFlag_("process_lowmemory", "Whether to process the file on the fly without loading the whole file into memory first (only for conversions of mzXML/mzML to mzML).\nNote: this flag will prevent conversion from spectra to chromatograms.", true);
registerTOPPSubsection_("RawToMzML", "Options for converting raw files to mzML (uses ThermoRawFileParser)");
registerInputFile_("RawToMzML:NET_executable", "<executable>", "", "The .NET framework executable. Only required on linux and mac.", false, true, {"is_executable"});
registerInputFile_("RawToMzML:ThermoRaw_executable", "<file>", "ThermoRawFileParser.exe", "The ThermoRawFileParser executable.", false, true, {"is_executable"});
setValidFormats_("RawToMzML:ThermoRaw_executable", {"exe"});
registerFlag_("RawToMzML:no_peak_picking", "Disables vendor peak picking for raw files.", true);
registerFlag_("RawToMzML:no_zlib_compression", "Disables zlib compression for raw file conversion. Enables compatibility with some tools that do not support compressed input files, e.g. X!Tandem.", true);
registerFlag_("RawToMzML:include_noise", "Include noise data in mzML output.", true);
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
//input file names
String in = getStringOption_("in");
bool write_scan_index = (getStringOption_("write_scan_index") == "true");
bool force_MaxQuant_compatibility = getFlag_("force_MaxQuant_compatibility");
bool force_TPP_compatibility = getFlag_("force_TPP_compatibility");
bool convert_to_chromatograms = getFlag_("convert_to_chromatograms");
bool lossy_compression = getFlag_("lossy_compression");
double mass_acc = getDoubleOption_("lossy_mass_accuracy");
// prepare data structures for lossy compression (note that we compress any float data arrays the same as intensity arrays)
MSNumpressCoder::NumpressConfig npconfig_mz, npconfig_int, npconfig_fda;
npconfig_mz.estimate_fixed_point = true; // critical
npconfig_int.estimate_fixed_point = true; // critical
npconfig_fda.estimate_fixed_point = true; // critical
npconfig_mz.numpressErrorTolerance = -1.0; // skip check, faster
npconfig_int.numpressErrorTolerance = -1.0; // skip check, faster
npconfig_fda.numpressErrorTolerance = -1.0; // skip check, faster
npconfig_mz.setCompression("linear");
npconfig_int.setCompression("slof");
npconfig_fda.setCompression("slof");
npconfig_mz.linear_fp_mass_acc = mass_acc; // set the desired mass accuracy
// input file type
FileHandler fh;
FileTypes::Type in_type = FileTypes::nameToType(getStringOption_("in_type"));
if (in_type == FileTypes::UNKNOWN)
{
in_type = fh.getType(in);
writeDebug_(String("Input file type: ") + FileTypes::typeToName(in_type), 2);
if (in_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine input file type!");
return PARSE_ERROR;
}
}
// output file names and types
String out = getStringOption_("out");
FileTypes::Type out_type = FileHandler::getConsistentOutputfileType(out, getStringOption_("out_type"));
if (out_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine output file type! Please adjust the 'out_type' parameter of this tool.");
return PARSE_ERROR;
}
bool TIC_DTA2D = getFlag_("TIC_DTA2D");
bool process_lowmemory = getFlag_("process_lowmemory");
writeDebug_(String("Output file type: ") + FileTypes::typeToName(out_type), 1);
String uid_postprocessing = getStringOption_("UID_postprocessing");
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
MSExperiment exp;
assert(exp.empty());
const MSExperiment empty_exp; ///< to determine if 'exp' was modified (loading and storing an MSExp with metadata but empty spectra/chroms should be valid), i.e. checking exp.empty() is not sufficient
FeatureMap fm;
ConsensusMap cm;
writeDebug_(String("Loading input file"), 1);
if (in_type == FileTypes::CONSENSUSXML)
{
FileHandler().loadConsensusFeatures(in, cm, {FileTypes::CONSENSUSXML}, log_type_);
cm.sortByPosition();
if ((out_type != FileTypes::FEATUREXML) &&
(out_type != FileTypes::CONSENSUSXML) &&
(out_type != FileTypes::OMS)
)
{
// You will lose information and waste memory. Enough reasons to issue a warning!
writeLogWarn_("Warning: Converting consensus features to peaks. You will lose information!");
exp.set2DData(cm);
}
}
else if (in_type == FileTypes::RAW)
{
if (out_type != FileTypes::MZML)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Only conversion to mzML supported at this point.");
}
bool no_peak_picking = getFlag_("RawToMzML:no_peak_picking");
bool no_zlib_compression = getFlag_("RawToMzML:no_zlib_compression");
bool include_noise = getFlag_("RawToMzML:include_noise");
writeLogInfo_("RawFileReader reading tool. Copyright 2016 by Thermo Fisher Scientific, Inc. All rights reserved");
String net_executable = getStringOption_("RawToMzML:NET_executable");
QStringList arguments;
#ifdef OPENMS_WINDOWSPLATFORM
if (net_executable.empty())
{ // default on Windows: if NO mono executable is set use the "native" .NET one
net_executable = getStringOption_("RawToMzML:ThermoRaw_executable");
}
else
{ // use e.g., mono
arguments << getStringOption_("RawToMzML:ThermoRaw_executable").toQString();
}
#else
// default on Mac, Linux: use mono
net_executable = net_executable.empty() ? "mono" : net_executable;
arguments << getStringOption_("RawToMzML:ThermoRaw_executable").toQString();
#endif
arguments << ("--input=" + in).c_str()
<< ("--output=" + out).c_str()
<< "-f=2" // indexedMzML
<< "-e"; // ignore instrument errors
if (no_peak_picking)
{
arguments << "--noPeakPicking";
}
if (no_zlib_compression)
{
arguments << "--noZlibCompression";
}
if (include_noise)
{
arguments << "--noiseData";
}
return runExternalProcess_(net_executable.toQString(), arguments);
}
else if (in_type == FileTypes::EDTA)
{
FileHandler().loadConsensusFeatures(in, cm, {FileTypes::EDTA}, log_type_);
cm.sortByPosition();
if ((out_type != FileTypes::FEATUREXML) &&
(out_type != FileTypes::CONSENSUSXML))
{
// You will lose information and waste memory. Enough reasons to issue a warning!
writeLogWarn_("Warning: Converting consensus features to peaks. You will lose information!");
exp.set2DData(cm);
}
}
else if (in_type == FileTypes::FEATUREXML ||
in_type == FileTypes::TSV ||
in_type == FileTypes::PEPLIST ||
in_type == FileTypes::KROENIK)
{
fh.loadFeatures(in, fm, {in_type});
fm.sortByPosition();
if ((out_type != FileTypes::FEATUREXML) &&
(out_type != FileTypes::CONSENSUSXML) &&
(out_type != FileTypes::OMS))
{
// You will lose information and waste memory. Enough reasons to issue a warning!
writeLogWarn_("Warning: Converting features to peaks. You will lose information! Mass traces are added, if present as 'num_of_masstraces' and 'masstrace_intensity' (X>=0) meta values.");
exp.set2DData<true>(fm);
}
}
else if (in_type == FileTypes::CACHEDMZML)
{
// Determine location of meta information (empty mzML)
String in_meta = extractCachedMetaFilename(in);
if (in_meta.empty())
{
return ILLEGAL_PARAMETERS;
}
Internal::CachedMzMLHandler cacher;
cacher.setLogType(log_type_);
PeakMap tmp_exp;
FileHandler().loadExperiment(in_meta, exp, {FileTypes::MZML}, log_type_);
cacher.readMemdump(tmp_exp, in);
// Sanity check
if (exp.size() != tmp_exp.size())
{
OPENMS_LOG_ERROR << "Paired input files do not match, cannot convert: " << in_meta << " and " << in << std::endl;
return ILLEGAL_PARAMETERS;
}
// Populate meta data with actual data points
for (Size i=0; i < tmp_exp.size(); ++i)
{
for (Size j = 0; j < tmp_exp[i].size(); j++)
{
exp[i].push_back(tmp_exp[i][j]);
}
}
std::vector<MSChromatogram > old_chromatograms = exp.getChromatograms();
for (Size i=0; i < tmp_exp.getChromatograms().size(); ++i)
{
for (Size j = 0; j < tmp_exp.getChromatograms()[i].size(); j++)
{
old_chromatograms[i].push_back(tmp_exp.getChromatograms()[i][j]);
}
}
exp.setChromatograms(old_chromatograms);
}
else if (process_lowmemory)
{
// Special switch for the low memory options:
// We can transform the complete experiment directly without first
// loading the complete data into memory. PlainMSDataWritingConsumer will
// write out mzML to disk as they are read from the input.
if ((in_type == FileTypes::MZXML || in_type == FileTypes::MZML) && out_type == FileTypes::MZML)
{
// Prepare the consumer
PlainMSDataWritingConsumer consumer(out);
consumer.getOptions().setWriteIndex(write_scan_index);
bool skip_full_count = false;
// numpress compression
if (lossy_compression)
{
consumer.getOptions().setNumpressConfigurationMassTime(npconfig_mz);
consumer.getOptions().setNumpressConfigurationIntensity(npconfig_int);
consumer.getOptions().setNumpressConfigurationFloatDataArray(npconfig_fda);
consumer.getOptions().setCompression(true);
}
consumer.addDataProcessing(getProcessingInfo_(DataProcessing::CONVERSION_MZML));
// for different input file type
if (in_type == FileTypes::MZML)
{
MzMLFile mzmlfile;
mzmlfile.setLogType(log_type_);
mzmlfile.transform(in, &consumer, skip_full_count);
return EXECUTION_OK;
}
else if (in_type == FileTypes::MZXML)
{
MzXMLFile mzxmlfile;
mzxmlfile.setLogType(log_type_);
mzxmlfile.transform(in, &consumer, skip_full_count);
return EXECUTION_OK;
}
}
else if (in_type == FileTypes::MZML && out_type == FileTypes::CACHEDMZML)
{
// Determine output path for meta information (empty mzML)
String out_meta = extractCachedMetaFilename(out);
if (out_meta.empty())
{
return ILLEGAL_PARAMETERS;
}
Internal::CachedMzMLHandler cacher;
cacher.setLogType(log_type_);
PeakMap exp_meta;
MSDataCachedConsumer consumer(out);
MzMLFile().transform(in, &consumer, exp_meta);
cacher.writeMetadata(exp_meta, out_meta);
return EXECUTION_OK;
}
else
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Process_lowmemory option can only be used with mzML / mzXML input and mzML output data types.");
}
}
else
{
fh.loadExperiment(in, exp, {in_type}, log_type_, true, true);
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
writeDebug_(String("Writing output file"), 1);
if (out_type == FileTypes::MZML)
{
if (exp == empty_exp)
{
OPENMS_LOG_ERROR << "No input data: no MS1/MS2 data present! Cannot write mzML. Please use another input/output format combination.";
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
//add data processing entry
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::
CONVERSION_MZML));
FileHandler mzmlFile;
mzmlFile.getOptions().setWriteIndex(write_scan_index);
mzmlFile.getOptions().setForceTPPCompatability(force_TPP_compatibility);
// numpress compression
if (lossy_compression)
{
mzmlFile.getOptions().setNumpressConfigurationMassTime(npconfig_mz);
mzmlFile.getOptions().setNumpressConfigurationIntensity(npconfig_int);
mzmlFile.getOptions().setNumpressConfigurationFloatDataArray(npconfig_fda);
mzmlFile.getOptions().setCompression(true);
}
if (convert_to_chromatograms)
{
for (auto & s : exp)
{
s.getInstrumentSettings().setScanMode(InstrumentSettings::ScanMode::SRM);
}
}
ChromatogramTools().convertSpectraToChromatograms(exp, true, convert_to_chromatograms);
mzmlFile.storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
}
else if (out_type == FileTypes::MZDATA)
{
if (exp == empty_exp)
{
OPENMS_LOG_ERROR << "No input data: no MS1/MS2 data present! Cannot write mzData. Please use another input/output format combination.";
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::
CONVERSION_MZDATA));
ChromatogramTools().convertChromatogramsToSpectra<MSExperiment>(exp);
FileHandler().storeExperiment(out, exp, {FileTypes::MZDATA}, log_type_);
}
else if (out_type == FileTypes::MZXML)
{
if (exp == empty_exp)
{
OPENMS_LOG_ERROR << "No input data: no MS1/MS2 data present! Cannot write mzXML. Please use another input/output format combination.";
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::
CONVERSION_MZXML));
FileHandler f;
f.getOptions().setForceMQCompatability(force_MaxQuant_compatibility);
f.getOptions().setWriteIndex(write_scan_index);
f.storeExperiment(out, exp, {FileTypes::MZXML}, log_type_);
}
else if (out_type == FileTypes::DTA2D)
{
if (exp == empty_exp)
{
OPENMS_LOG_ERROR << "No input data: no MS1/MS2 data present! Cannot write DTA2D. Please use another input/output format combination.";
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
//add data processing entry
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::
FORMAT_CONVERSION));
DTA2DFile f;
f.setLogType(log_type_);
ChromatogramTools().convertChromatogramsToSpectra<MSExperiment>(exp);
if (TIC_DTA2D)
{
// store the total ion chromatogram (TIC)
f.storeTIC(out, exp);
}
else
{
// store entire experiment
f.store(out, exp);
}
}
else if (out_type == FileTypes::MGF)
{
//add data processing entry
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::
FORMAT_CONVERSION));
MascotGenericFile f;
f.setLogType(log_type_);
f.store(out, exp, getFlag_("MGF_compact"));
}
else if (out_type == FileTypes::FEATUREXML)
{
if ((in_type == FileTypes::FEATUREXML) || (in_type == FileTypes::TSV) ||
(in_type == FileTypes::PEPLIST) || (in_type == FileTypes::KROENIK))
{
if (uid_postprocessing == "ensure")
{
fm.applyMemberFunction(&UniqueIdInterface::ensureUniqueId);
}
else if (uid_postprocessing == "reassign")
{
fm.applyMemberFunction(&UniqueIdInterface::setUniqueId);
}
}
else if (in_type == FileTypes::CONSENSUSXML || in_type == FileTypes::EDTA)
{
MapConversion::convert(cm, true, fm);
}
else if (in_type == FileTypes::OMS)
{
FileHandler().loadFeatures(in, fm, {FileTypes::OMS}, log_type_);
IdentificationDataConverter::exportFeatureIDs(fm);
}
else // not loaded as feature map or consensus map
{
// The feature specific information is only defaulted. Enough reasons to issue a warning!
writeLogWarn_("Warning: Converting peaks to features will lead to incomplete features!");
fm.clear();
fm.reserve(exp.getSize());
Feature feature;
feature.setQuality(0, 1); // override default
feature.setQuality(1, 1); // override default
feature.setOverallQuality(1); // override default
for (const MSSpectrum& spec : exp)
{
feature.setRT(spec.getRT());
for (const Peak1D& peak : spec)
{
feature.setMZ(peak.getMZ());
feature.setIntensity(peak.getIntensity());
feature.setUniqueId();
fm.push_back(feature);
}
}
fm.updateRanges();
}
addDataProcessing_(fm, getProcessingInfo_(DataProcessing::
FORMAT_CONVERSION));
FileHandler().storeFeatures(out, fm, {FileTypes::FEATUREXML}, log_type_);
}
else if (out_type == FileTypes::CONSENSUSXML)
{
if ((in_type == FileTypes::FEATUREXML) || (in_type == FileTypes::TSV) ||
(in_type == FileTypes::PEPLIST) || (in_type == FileTypes::KROENIK))
{
if (uid_postprocessing == "ensure")
{
fm.applyMemberFunction(&UniqueIdInterface::ensureUniqueId);
}
else if (uid_postprocessing == "reassign")
{
fm.applyMemberFunction(&UniqueIdInterface::setUniqueId);
}
MapConversion::convert(0, fm, cm);
}
// nothing to do for consensus input
else if (in_type == FileTypes::CONSENSUSXML || in_type == FileTypes::EDTA)
{
}
else // experimental data
{
MapConversion::convert(0, exp, cm, exp.size());
}
for (auto& pepID : cm.getUnassignedPeptideIdentifications())
{
pepID.setMetaValue("map_index", 0);
}
addDataProcessing_(cm, getProcessingInfo_(DataProcessing::
FORMAT_CONVERSION));
FileHandler().storeConsensusFeatures(out, cm, {FileTypes::CONSENSUSXML}, log_type_);
}
else if (out_type == FileTypes::EDTA)
{
if (!fm.empty() && !cm.empty())
{
OPENMS_LOG_ERROR << "Internal error: cannot decide on container (Consensus or Feature)! This is a bug. Please report it!";
return INTERNAL_ERROR;
}
if (fm.empty() && cm.empty())
{
OPENMS_LOG_ERROR << "No input data: neither Consensus nor Feature data present! Cannot write EDTA. Please use another input/output format combination.";
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
if (!fm.empty())
{
FileHandler().storeFeatures(out, fm, {FileTypes::EDTA}, log_type_);
}
else if (!cm.empty())
{
FileHandler().storeConsensusFeatures(out, cm, {FileTypes::EDTA}, log_type_);
}
}
else if (out_type == FileTypes::CACHEDMZML)
{
// Determine output path for meta information (empty mzML)
String out_meta = extractCachedMetaFilename(out);
if (out_meta.empty())
{
return ILLEGAL_PARAMETERS;
}
Internal::CachedMzMLHandler().writeMetadata(exp, out_meta);
Internal::CachedMzMLHandler().writeMemdump(exp, out);
}
else if (out_type == FileTypes::CSV)
{
// as ibspectra is currently the only csv/text based format we assume
// that out_type == FileTypes::CSV means ibspectra, if more formats
// are added we need a more intelligent strategy to decide which
// conversion is requested
// IBSpectra selected as output type
if (in_type != FileTypes::CONSENSUSXML)
{
OPENMS_LOG_ERROR << "Incompatible input data: FileConverter can only convert consensusXML files to ibspectra format.";
return INCOMPATIBLE_INPUT_DATA;
}
IBSpectraFile ibfile;
ibfile.store(out, cm);
}
else if (out_type == FileTypes::SQMASS)
{
FileHandler().storeExperiment(out, exp, {FileTypes::SQMASS}, log_type_);
}
else if (out_type == FileTypes::OMS)
{
if (in_type == FileTypes::FEATUREXML)
{
IdentificationDataConverter::importFeatureIDs(fm);
FileHandler().storeFeatures(out, fm, {FileTypes::OMS}, log_type_);
}
else if (in_type == FileTypes::CONSENSUSXML)
{
IdentificationDataConverter::importConsensusIDs(cm);
FileHandler().storeConsensusFeatures(out, cm, {FileTypes::OMS}, log_type_);
}
else
{
OPENMS_LOG_ERROR << "Incompatible input data: FileConverter can only convert featureXML and consensusXML files to oms format.";
return INCOMPATIBLE_INPUT_DATA;
}
}
else
{
writeLogError_("Error: Unknown output file type given. Aborting!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
// last check if output file was written:
if (!File::exists(out))
{
OPENMS_LOG_ERROR << "Internal error: Conversion did not create an output file! This is a bug. Please report it!";
return INTERNAL_ERROR;
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPFileConverter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/Resampler.cpp | .cpp | 4,744 | 159 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/VISUAL/MultiGradient.h>
#include <OpenMS/PROCESSING/RESAMPLING/LinearResamplerAlign.h>
#include <OpenMS/PROCESSING/FILTERING/ThresholdMower.h>
#include <QtGui/QImage>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_Resampler Resampler
@brief Resampler can be used to transform an LC/MS map into a resampled map.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → Resampler →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> - </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_NoiseFilterSGolay </td>
</tr>
</table>
</CENTER>
When writing a peak file, all spectra are resampled with a new sampling
rate. The number of spectra does not change.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_Resampler.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_Resampler.html
*/
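The 'ppm' flag above means the raster spacing grows proportionally with m/z instead of being a fixed width in Th. A minimal sketch of generating such a raster, as an illustration of the idea only (this is not the LinearResamplerAlign implementation, and `ppmRaster` is an invented name; the assumption is that one step equals `rate_ppm` parts per million of the current m/z):

```cpp
// Illustrative sketch only (not LinearResamplerAlign): build an m/z raster
// where each step is 'rate_ppm' ppm of the current position, so the spacing
// widens as m/z increases.
#include <vector>

inline std::vector<double> ppmRaster(double mz_start, double mz_end, double rate_ppm)
{
  std::vector<double> raster;
  for (double mz = mz_start; mz <= mz_end; mz += mz * rate_ppm * 1e-6)
  {
    raster.push_back(mz);
  }
  return raster;
}
```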
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPResampler :
public TOPPBase
{
public:
TOPPResampler() :
TOPPBase("Resampler",
"Transforms an LC/MS map into a resampled map or a PNG image.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file ");
setValidFormats_("in", {"mzML"});
registerOutputFile_("out", "<file>", "", "output file in mzML format");
setValidFormats_("out", {"mzML"});
registerDoubleOption_("sampling_rate", "<rate>", 0.1,
"New sampling rate in m/z dimension (in Th unless ppm flag is set)", false);
setMinFloat_("sampling_rate", 0.0);
registerFlag_("ppm", "sampling_rate is given in ppm");
registerFlag_("align_sampling", "Ensures that sampling is performed equally across the map (same raster is used for all spectra)");
registerDoubleOption_("min_int_cutoff", "<min intensity>", -1.0,
"Intensity cutoff for peaks to be stored in output spectrum (only peaks above this cutoff will be stored, -1 means store all data)", false);
}
ExitCodes main_(int, const char **) override
{
//----------------------------------------------------------------
// load data
//----------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
double sampling_rate = getDoubleOption_("sampling_rate");
double min_int_cutoff = getDoubleOption_("min_int_cutoff");
bool align_sampling = getFlag_("align_sampling");
bool ppm = getFlag_("ppm");
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
exp.updateRanges(); // ranges must be computed after loading, before align_sampling uses them
Param resampler_param;
resampler_param.setValue("spacing", sampling_rate);
resampler_param.setValue("ppm", ppm ? "true" : "false");
LinearResamplerAlign lin_resampler; // LinearResampler does not know about ppm!
lin_resampler.setParameters(resampler_param);
if (!align_sampling)
{
// resample every scan
for (Size i = 0; i < exp.size(); ++i)
{
lin_resampler.raster(exp[i]);
}
}
else if (!exp.spectrumRanges().RangeMZ::isEmpty())
{
// align the raster across all spectra: anchor it at an integer m/z position
auto start_pos = floor(exp.spectrumRanges().getMinMZ());
// resample every scan
for (Size i = 0; i < exp.size(); ++i)
{
lin_resampler.raster_align(exp[i], start_pos, exp.spectrumRanges().getMaxMZ());
}
}
if (min_int_cutoff >= 0.0)
{
ThresholdMower mow;
Param p;
p.setValue("threshold", min_int_cutoff);
mow.setParameters(p);
mow.filterPeakMap(exp);
}
//annotate output with data processing info
addDataProcessing_(exp,
getProcessingInfo_(DataProcessing::DATA_PROCESSING));
//store output
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPResampler tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/IsobaricWorkflow.cpp | .cpp | 41,526 | 914 |
// Copyright (c) 2002-2023, The OpenMS Team -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Julianus Pfeuffer $
// $Authors: Julianus Pfeuffer $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
// the available quantitation methods
#include <OpenMS/ANALYSIS/QUANTITATION/IsobaricQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/ItraqFourPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/ItraqEightPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/TMTSixPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/TMTTenPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/TMTElevenPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/TMTSixteenPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/TMTEighteenPlexQuantitationMethod.h>
#include <OpenMS/ANALYSIS/QUANTITATION/IsobaricChannelExtractor.h>
#include <OpenMS/ANALYSIS/QUANTITATION/IsobaricIsotopeCorrector.h>
#include <OpenMS/ANALYSIS/QUANTITATION/IsobaricQuantifier.h>
#include <OpenMS/ML/NNLS/NonNegativeLeastSquaresSolver.h>
#include <OpenMS/ANALYSIS/ID/BasicProteinInferenceAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/BayesianProteinInferenceAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/FalseDiscoveryRate.h>
#include <OpenMS/ANALYSIS/ID/IDMergerAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/PrecursorPurity.h>
#include <OpenMS/ANALYSIS/QUANTITATION/PeptideAndProteinQuant.h>
#include <OpenMS/FORMAT/ConsensusXMLFile.h>
#include <OpenMS/FORMAT/ExperimentalDesignFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <string>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif
#include <memory> // for std::unique_ptr
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_IsobaricWorkflow IsobaricWorkflow
@brief Extracts and normalizes isobaric labeling information from an LC-MS/MS experiment.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → IsobaricWorkflow →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_IDMapper</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileFilter </td>
</tr>
</table>
</CENTER>
The input MSn spectra have to be in centroid mode for the tool to work properly. Use e.g. @ref TOPP_PeakPickerHiRes to perform centroiding of profile data, if necessary.
This tool currently supports the iTRAQ 4-plex and 8-plex, and TMT 6-plex, 10-plex, 11-plex, 16-plex, and 18-plex labeling methods.
It extracts the isobaric reporter ion intensities from centroided MS2 or MS3 data (MSn), then performs isotope correction and stores the resulting quantitation in a consensus map,
in which each consensus feature represents one relevant MSn scan (e.g. HCD; see parameters @p select_activation and @p min_precursor_intensity).
The MS level for quantification is chosen automatically, i.e. if MS3 is present, MS2 will be ignored.
For intensity, the closest non-zero m/z signal to the theoretical position is taken as reporter ion abundance.
The position (RT, m/z) of the consensus centroid is the precursor position in MS1 (from the MS2 spectrum);
the consensus sub-elements correspond to the theoretical channel m/z (with m/z values of 113-121 Th for iTRAQ and 126-131 Th for TMT, respectively).
For all labeling techniques, the search radius (@p reporter_mass_shift) should be set as small as possible, to avoid picking up false-positive ions as reporters.
Usually, Orbitraps deliver precision of about 0.0001 Th at this low mass range. Low intensity reporters might have a slightly higher deviation.
By default, the mass range is set to ~0.002 Th, which should be sufficient for all instruments (~15 ppm).
The tool will throw an Exception if you set it below 0.0001 Th (~0.7 ppm).
The tool will also throw an Exception if you set @p reporter_mass_shift > 0.003 Th for TMT-10plex and TMT-11plex, since this could
lead to ambiguities with neighbouring channels (which are ~0.006 Th apart in most cases).
For quality control purposes, the tool reports the median distance between the theoretical and observed reporter ion peaks in each channel.
For this QC metric the search radius is fixed to 0.5 Th (regardless of the user-defined search radius), which allows tracking of calibration issues.
For TMT-10plex, these results are automatically omitted if they could be confused with a neighbouring channel, i.e.
exceed the tolerance to a neighbouring channel with the same nominal mass (C/N channels).
If the distance is too large, you might have a m/z calibration problem (see @ref TOPP_InternalCalibration).
@note If none of the reporter ions can be detected in an MSn scan, a consensus feature will still be generated,
but the intensities of the overall feature and of all its sub-elements will be zero.
(If desired, such features can be removed by applying an intensity filter in @ref TOPP_FileFilter.)
However, if the spectrum is completely empty (no ions whatsoever), no consensus feature will be generated.
Isotope correction is done using non-negative least squares (NNLS), i.e.:@n
Minimize ||Ax - b||, subject to x >= 0, where b is the vector of observed reporter intensities (with "contaminating" isotope species),
A is a correction matrix (as supplied by the manufacturer of the labeling kit) and x is the desired vector of corrected (real) reporter intensities.
Other software tools solve this problem by using an inverse matrix multiplication, but this can yield entries in x which are negative.
In a real sample, this solution cannot possibly be true, so usually negative values (= negative reporter intensities) are set to zero.
However, a negative result usually means that noise was not properly accounted for in the calculation.
We thus use NNLS to get a non-negative solution, without the need to truncate negative values.
In the (usual) case that inverse matrix multiplication yields only positive values, our NNLS will give the exact same optimal solution.
The correction matrices can be found (and changed) in the INI file (parameter @p correction_matrix of the corresponding labeling method).
However, these matrices for both 4-plex and 8-plex iTRAQ are now stable, and every kit delivered should have the same isotope correction values.
Thus, there should be no need to change them, but feel free to compare the values in the INI file with your kit's certificate.
For TMT (6-plex and 10-plex) the values have to be adapted for each kit: Modify the correction matrix according to the data in the product data sheet of your production lot:
<pre>
Data sheet:
Mass Tag Reporter Ion -2 -1 Monoisotopic +1 +2
126 126.12776 0.0% 0.0% 100% 5.0% 0.0%
127N 127.124761 0.0% 0.2% 100% 4.6% 0.0%
...
</pre>
Corresponding correction matrix:
<pre>
[0.0/0.0/5.0/0.0,
0.0/0.2/4.6/0.0,
...
</pre>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_IsobaricWorkflow.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_IsobaricWorkflow.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPIsobaricWorkflow :
public TOPPBase
{
private:
std::string ID_RUN_NAME_ = "IsobaricWorkflow_";
std::map<String, std::unique_ptr<IsobaricQuantitationMethod>> quant_methods_;
std::map<String, String> quant_method_names_;
void addMethod_(std::unique_ptr<IsobaricQuantitationMethod> ptr, std::string name)
{
std::string internal_name = ptr->getMethodName();
quant_methods_[internal_name] = std::move(ptr);
quant_method_names_[internal_name] = name;
}
public:
TOPPIsobaricWorkflow() :
TOPPBase("IsobaricWorkflow", "Calculates isobaric quantitative values for peptides")
{
addMethod_(make_unique<ItraqFourPlexQuantitationMethod>(), "iTRAQ 4-plex");
addMethod_(make_unique<ItraqEightPlexQuantitationMethod>(), "iTRAQ 8-plex");
addMethod_(make_unique<TMTSixPlexQuantitationMethod>(), "TMT 6-plex");
addMethod_(make_unique<TMTTenPlexQuantitationMethod>(), "TMT 10-plex");
addMethod_(make_unique<TMTElevenPlexQuantitationMethod>(), "TMT 11-plex");
addMethod_(make_unique<TMTSixteenPlexQuantitationMethod>(), "TMT 16-plex");
addMethod_(make_unique<TMTEighteenPlexQuantitationMethod>(), "TMT 18-plex");
}
protected:
void registerOptionsAndFlags_() override
{
// initialize with the first available type
registerStringOption_("type", "<mode>", quant_methods_.begin()->first, "Isobaric Quantitation method used in the experiment.", false);
StringList valid_types;
for (const auto& qm : quant_methods_)
{
valid_types.push_back(qm.first);
}
setValidStrings_("type", valid_types);
registerInputFileList_("in", "<file>", {}, "input centroided spectrum files");
setValidFormats_("in", {"mzML"});
registerInputFileList_("in_id", "<file>", {}, "corresponding input PSMs");
setValidFormats_("in_id", {"idXML"});
registerInputFile_("exp_design", "<file>", "", "experimental design file (optional). If not given, the design is assumed to be unfractionated.", false);
setValidFormats_("exp_design", {"tsv"});
registerOutputFile_("out", "<file>", "", "output consensusXML file");
setValidFormats_("out", {"consensusXML"});
registerOutputFile_("out_mzTab", "<file>", "", "output mzTab file with quantitative information");
setValidFormats_("out_mzTab", {"mzTab"});
registerFlag_("calculate_id_purity", "Calculate the purity of the precursor ion based on the MS1 spectrum. Only used for MS3, otherwise it is the same as the quant. precursor purity.");
//registerIntOption_("max_parallel_files", "<num>", 1, "Maximum number of files to load in parallel.", false);
registerDoubleOption_("psm_score", "<score>", NAN, "The score which should be reached by a peptide hit to be kept. (use 'NAN' to disable this filter)", false);
registerDoubleOption_("protein_score", "<score>", NAN, "The score which should be reached by a protein hit to be kept. All proteins are filtered based on their singleton scores irrespective of grouping. Use in combination with 'delete_unreferenced_peptide_hits' to remove affected peptides. (use 'NAN' to disable this filter)", false);
registerFlag_("delete_unreferenced_peptide_hits", "Peptides not referenced by any protein are deleted in the IDs.");
// registerFlag_("remove_decoys", "Remove decoys according to the information in the user parameters.");
registerStringOption_("inference_method", "<option>", "aggregation", "Methods used for protein inference", false);
setValidStrings_("inference_method", ListUtils::create<String>("aggregation,bayesian"));
registerStringOption_("picked_fdr", "<option>", "false", "Use a picked protein FDR", false, true);
setValidStrings_("picked_fdr", {"true", "false"});
registerStringOption_("picked_decoy_string", "<decoy_string>", "", "If using picked protein FDRs, which decoy string was used? Leave blank for auto-detection.", false, true);
registerStringOption_("picked_decoy_prefix", "<option>", "prefix", "If using picked protein FDRs, was the decoy string a prefix or suffix? Ignored during auto-detection.", false, true);
setValidStrings_("picked_decoy_prefix", {"prefix", "suffix"});
registerStringOption_("FDR_type", "<option>", "PSM", "Sub-protein FDR level. PSM, PSM+peptide (best PSM q-value).", false, true);
setValidStrings_("FDR_type", {"PSM", "PSM+peptide"});
registerDoubleOption_("proteinFDR", "<threshold>", 1.0, "Protein FDR threshold (0.05=5%).", false);
setMinFloat_("proteinFDR", 0.0);
setMaxFloat_("proteinFDR", 1.0);
registerDoubleOption_("psmFDR", "<threshold>", 1.0, "FDR threshold for sub-protein level (e.g. 0.05=5%)." , false);
setMinFloat_("psmFDR", 0.0);
setMaxFloat_("psmFDR", 1.0);
registerStringOption_("protein_quantification", "<option>", "unique_peptides",
"Quantify proteins based on:\n"
"unique_peptides = use peptides mapping to single proteins or a group of indistinguishable proteins\n"
"(according to the set of experimentally identified peptides).\n"
"strictly_unique_peptides = use peptides mapping to a unique single protein only.\n"
"shared_peptides = use shared peptides only for its best group (by inference score)",
false, true);
setValidStrings_("protein_quantification", ListUtils::create<String>("unique_peptides,strictly_unique_peptides,shared_peptides"));
registerSubsection_("extraction", "Parameters for the channel extraction.");
registerSubsection_("quantification", "Parameters for the peptide quantification.");
for (const auto& qm : quant_methods_)
{
registerSubsection_(qm.second->getMethodName(), String("Algorithm parameters for ") + quant_method_names_[qm.second->getMethodName()]);
}
Param pq_defaults = PeptideAndProteinQuant().getDefaults();
pq_defaults.setValue("top:include_all", "true");
pq_defaults.addTag("top:include_all", "advanced");
Param bpi_defaults = BasicProteinInferenceAlgorithm().getDefaults();
// set based on protein_quantification default
bpi_defaults.remove("annotate_indistinguishable_groups");
bpi_defaults.remove("greedy_group_resolution");
Param bayes_defaults = BayesianProteinInferenceAlgorithm().getDefaults();
vector<string> bpi_remove = {"use_ids_outside_features", "annotate_group_probabilities", "user_defined_priors", "update_PSM_probabilities"};
for (const auto& r : bpi_remove)
{
bayes_defaults.remove(r);
}
// Param only exposes const iterators, so entries are tagged as 'advanced' by name below
vector<string> bpi_show = {"psm_probability_cutoff", "top_PSMs"};
for (auto it = bayes_defaults.begin(); it != bayes_defaults.end(); ++it)
{
if (!ListUtils::contains(bpi_show, it.getName()))
{
bayes_defaults.addTag(it.getName(), "advanced");
}
}
Param combined;
combined.insert("ProteinQuantification:", pq_defaults);
combined.insert("BasicProteinInference:", bpi_defaults);
combined.insert("BayesianProteinInference:", bayes_defaults);
registerFullParam_(combined);
}
Param getSubsectionDefaults_(const String& section) const override
{
ItraqFourPlexQuantitationMethod temp_quant;
if (section == "extraction")
{
return IsobaricChannelExtractor(&temp_quant).getParameters();
}
else if (section == "quantification")
{
return IsobaricQuantifier(&temp_quant).getParameters();
}
else
{
const auto it = quant_methods_.find(section);
if (it == quant_methods_.end())
{ // should not happen
throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Invalid subsection " + section);
}
return it->second->getParameters();
}
}
inline std::tuple<int, int, int> getSpecIdxs_(int pep_idx, const MSExperiment& exp, bool has_ms3)
{
int quant_spec_idx = -1;
int id_spec_idx = pep_idx;
int ms1_spec_idx = exp.getPrecursorSpectrum(pep_idx);
if (has_ms3)
{
quant_spec_idx = exp.getFirstProductSpectrum(pep_idx);
}
else
{
quant_spec_idx = pep_idx;
}
return std::make_tuple(quant_spec_idx, id_spec_idx, ms1_spec_idx);
}
inline std::pair<double, double> getPurities_(
int quant_spec_idx,
int id_spec_idx,
int ms1_spec_idx,
const MSExperiment& exp,
bool has_ms3,
double max_precursor_isotope_deviation,
bool calc_id_purity,
bool interpolate_precursor_purity)
{
double quant_purity = -1.0;
double id_purity = -1.0;
if (has_ms3)
{
// TODO double-check that this is the correct way to compute the purity for MS3. Currently purities are very low.
std::vector<double> quant_purities = PrecursorPurity::computeSingleScanPrecursorPurities(quant_spec_idx, id_spec_idx, exp, max_precursor_isotope_deviation);
// average over all precursors
quant_purity = std::accumulate(quant_purities.begin(), quant_purities.end(), 0.0) / quant_purities.size();
}
if (calc_id_purity || !has_ms3)
{
double ms1_purity = -1.;
if (!interpolate_precursor_purity)
{
ms1_purity = PrecursorPurity::computeSingleScanPrecursorPurities(id_spec_idx, ms1_spec_idx, exp, max_precursor_isotope_deviation)[0];
}
else
{
Size next_ms1_spec = quant_spec_idx;
do
{
next_ms1_spec++;
if (next_ms1_spec >= exp.size())
{
// No more MS1 spectra found, use the original MS1 spectrum
next_ms1_spec = ms1_spec_idx;
break;
}
else if (exp[next_ms1_spec].getMSLevel() == 1)
{
break;
}
} while (next_ms1_spec < exp.size());
ms1_purity = PrecursorPurity::computeInterpolatedPrecursorPurity(id_spec_idx, ms1_spec_idx, next_ms1_spec, exp, max_precursor_isotope_deviation)[0];
}
if (has_ms3)
{
id_purity = ms1_purity;
}
else
{
quant_purity = ms1_purity;
id_purity = ms1_purity;
}
}
return std::make_pair(quant_purity, id_purity);
}
/**
* @brief Fills an existing ConsensusFeature with all kinds of information of an identified and isobarically quantified peptide.
*
* @param[out] cf the ConsensusFeature to fill
* @param[in] pep information about the PSM (object will be moved)
* @param[in] exp the MSExperiment to extract information about spectra
* @param[in] id_spec_idx index of the identifying spectrum
* @param[in] quant_spec_idx index of the quantifying spectrum
* @param[in] itys the extracted intensities from the quant. spec.
* @param[in] quant_method the quantification method used (for channel information), e.g. TMT10plex
* @param[in] quant_purity purity of the quant. precursor(s) (if available, else -1.)
* @param[in] id_purity purity of the id precursor
* @param[in] min_reporter_intensity minimum intensity of a reporter ion to be considered
* @param[in] file_idx index of the file in the input list
* @param[in] spec_idx index of the spectrum over all files
*/
void inline fillConsensusFeature_(ConsensusFeature & cf, PeptideIdentification& pep,
const MSExperiment& exp, Size id_spec_idx, Size quant_spec_idx, const std::vector<double>& itys,
const std::unique_ptr<IsobaricQuantitationMethod>& quant_method, double quant_purity, double id_purity,
double min_reporter_intensity, Size file_idx)
{
const auto& quant_spec = exp[quant_spec_idx];
const auto& id_spec = exp[id_spec_idx];
//cf.setUniqueId(spec_idx);
cf.setRT(id_spec.getRT());
cf.setMZ(id_spec.getPrecursors()[0].getMZ());
Peak2D channel_value;
channel_value.setRT(quant_spec.getRT());
// for each channel of current file
UInt64 map_index = 0;
Peak2D::IntensityType overall_intensity = 0.;
Size col_offset = file_idx * quant_method->getChannelInformation().size();
for (IsobaricQuantitationMethod::IsobaricChannelList::const_iterator cl_it = quant_method->getChannelInformation().begin();
cl_it != quant_method->getChannelInformation().end();
++cl_it)
{
// set mz-position of channel
channel_value.setMZ(cl_it->center);
// discard contribution of this channel as it is below the required intensity threshold
if (itys[map_index] < min_reporter_intensity)
{
channel_value.setIntensity(0);
} else {
channel_value.setIntensity(itys[map_index]);
}
overall_intensity += channel_value.getIntensity();
// add channel to ConsensusFeature
// TODO for the last param, element_index we probably need a global count of PSMs+channel that we are handling
// I hate this useless UniqueIDGeneration
cf.insert(col_offset + map_index, channel_value, map_index);
++map_index;
} // ! channel_iterator
// add purity information if we could compute it
// TODO we should reuse the faster and more efficient quality field of ConsensusFeature
if (id_purity > 0.0)
{
cf.setMetaValue("precursor_purity", id_purity);
}
if (quant_purity > 0.0)
{
cf.setMetaValue("quant_precursor_purity", quant_purity);
}
// embed the id of the scan from which the quantitative information was extracted
cf.setMetaValue("scan_id", quant_spec.getNativeID());
// ...as well as additional meta information
cf.setMetaValue("precursor_intensity", id_spec.getPrecursors()[0].getIntensity());
cf.setCharge(id_spec.getPrecursors()[0].getCharge());
cf.setIntensity(overall_intensity);
pep.setIdentifier(ID_RUN_NAME_);
// Set id_merge_index to track which input file this peptide identification originated from.
// This is required for proper mzTab export when multiple files are merged into a single ID run.
pep.setMetaValue(Constants::UserParam::ID_MERGE_INDEX, file_idx);
cf.setPeptideIdentifications({std::move(pep)});
}
std::string addTimeStamp_(std::string& s) const
{
std::array<char, 64> buffer;
buffer.fill(0);
time_t rawtime;
time(&rawtime);
const auto timeinfo = localtime(&rawtime);
strftime(buffer.data(), sizeof(buffer), "%d-%m-%Y %H-%M-%S", timeinfo);
return s + String(buffer.data());
}
ExitCodes main_(int, const char**) override
{
ID_RUN_NAME_ = addTimeStamp_(ID_RUN_NAME_);
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String out = getStringOption_("out");
String exp_design = getStringOption_("exp_design");
bool bayesian = getStringOption_("inference_method") == "bayesian";
Param pq_param = getParam_().copy("ProteinQuantification:", true);
writeDebug_("Parameters passed to PeptideAndProteinQuant algorithm", pq_param, 3);
//-------------------------------------------------------------
// init quant method and extractor
//-------------------------------------------------------------
const auto& quant_method = quant_methods_[getStringOption_("type")];
// set the parameters for this method
quant_method->setParameters(getParam_().copy(quant_method->getMethodName() + ":", true));
bool calc_id_purity = getParam_().getValue("calculate_id_purity").toBool();
Param extract_param(getParam_().copy("extraction:", true));
IsobaricChannelExtractor channel_extractor(quant_method.get());
channel_extractor.setParameters(extract_param);
double min_reporter_intensity = channel_extractor.getParameters().getValue("min_reporter_intensity");
// TODO since I am mostly using the internal classes IsobaricChannelCorrector and IsobaricNormalizer (if at all),
// I should only expose their parameters and only init their objects here.
IsobaricQuantifier quantifier(quant_method.get());
Param quant_param(getParam_().copy("quantification:", true));
quantifier.setParameters(quant_param);
Matrix<double> correction_matrix = quant_method->getIsotopeCorrectionMatrix();
// TODO this should go to the purity class or just be a parameter
bool interpolate_precursor_purity = channel_extractor.getParameters().getValue("purity_interpolation").toBool();
double max_precursor_isotope_deviation = channel_extractor.getParameters().getValue("precursor_isotope_deviation");
//const String& exp_design = getStringOption_("exp_design");
IDMergerAlgorithm merger(ID_RUN_NAME_, false);
ConsensusMap cmap;
MzMLFile mzml_file;
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
// iterate over pair of mzML and idXML
const auto in_mz = getStringList_("in");
const auto in_id = getStringList_("in_id");
OPENMS_PRECONDITION(in_mz.size() == in_id.size(), "Number of mzML and idXML files must be equal.");
/* I tried this but it is the same speed as focussing on the parallelization of the inner loop.
int max_parallel_files = std::max((int)getIntOption_("max_parallel_files"), (int)in_mz.size());
int inner_threads = std::max(1, omp_get_max_threads() / max_parallel_files);
vector<ConsensusMap> all_cmaps{in_mz.size()};
#ifdef _OPENMP
omp_set_max_active_levels(2);
#pragma omp parallel for num_threads(max_parallel_files)
#endif
*/
for (Size i = 0; i < in_mz.size(); ++i)
{
//ConsensusMap& cur_cmap = all_cmaps[i];
ConsensusMap cur_cmap;
const String& mz_file = in_mz[i];
const String& id_file = in_id[i];
// load mzML
PeakMap exp;
mzml_file.load(mz_file, exp);
std::unordered_map<String, Size> ms2scan_to_index;
bool has_ms3 = false;
for (Size s = 0; s < exp.size(); ++s)
{
if (exp[s].getMSLevel() == 2)
{
ms2scan_to_index[exp[s].getNativeID()] = s;
} else if (exp[s].getMSLevel() == 3)
{
has_ms3 = true;
}
}
if(has_ms3)
{
OPENMS_LOG_INFO << "Found MS3 spectra. Assuming TMT SPS-MS3 workflow." << std::endl;
}
// load idXML
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
FileHandler().loadIdentifications(id_file, prot_ids, pep_ids);
// TODO filter by qvalue here?
double pro_score = getDoubleOption_("protein_score");
double psm_score = getDoubleOption_("psm_score");
if (!std::isnan(pro_score))
{
OPENMS_LOG_INFO << "Filtering by protein score (better than " << pro_score << ")..." << endl;
IDFilter::filterHitsByScore(prot_ids, pro_score);
}
if (!std::isnan(psm_score))
{
OPENMS_LOG_INFO << "Filtering by PSM score (better than " << psm_score << ")..." << endl;
IDFilter::filterHitsByScore(pep_ids, psm_score);
}
merger.insertRuns(std::move(prot_ids), {}); // pep IDs will be stored in the consensus features
std::vector<ChannelQC> qc;
qc.resize(quant_method->getNumberOfChannels());
for (Size i = 0; i < qc.size(); ++i)
{
qc[i].mz_deltas = std::vector<double>(pep_ids.size());
}
cur_cmap.resize(pep_ids.size());
channel_extractor.registerChannelsInOutputMap(cmap, mz_file);
// Collect peptide IDs without corresponding MS3 spectra (thread-safe collection)
// Note: unassigned_pep_ids is shared across threads; push_back is protected by critical section
PeptideIdentificationList unassigned_pep_ids;
#pragma omp parallel for /*num_threads(inner_threads)*/
for (int64_t pep_idx = 0; pep_idx < static_cast<int64_t>(pep_ids.size()); ++pep_idx)
{
auto& pep = pep_ids[pep_idx];
const auto& spec_ref = pep.getSpectrumReference();
if (!spec_ref.empty())
{
auto ms2spec_it = ms2scan_to_index.find(spec_ref);
if (ms2spec_it != ms2scan_to_index.end())
{
std::vector<std::pair<double,unsigned>> channel_qc(quant_method->getNumberOfChannels(), std::make_pair(std::numeric_limits<double>::quiet_NaN(), 0));
auto [quant_spec_idx, id_spec_idx, ms1_spec_idx] = getSpecIdxs_(ms2spec_it->second, exp, has_ms3);
// Check if MS3 spectrum is missing for MS2 spectrum
if (has_ms3 && quant_spec_idx == -1)
{
// Store peptide ID with file association for unassigned IDs
PeptideIdentification unassigned_pep = pep;
unassigned_pep.setIdentifier(ID_RUN_NAME_);
unassigned_pep.setMetaValue(Constants::UserParam::ID_MERGE_INDEX, i);
#pragma omp critical(unassigned_pep_ids_collection)
{
OPENMS_LOG_WARN << "MS2 spectrum " << spec_ref << " at index " << ms2spec_it->second
<< " does not have a corresponding MS3 spectrum. Skipping quantification and adding to unassigned.\n";
unassigned_pep_ids.push_back(std::move(unassigned_pep));
}
continue;
}
auto [quant_purity, id_purity] = getPurities_(quant_spec_idx, id_spec_idx, ms1_spec_idx, exp, has_ms3, max_precursor_isotope_deviation, calc_id_purity, interpolate_precursor_purity);
if (has_ms3 && exp[quant_spec_idx].getMSLevel() != 3)
{
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "MS3 spectrum expected but not found.", String(exp[quant_spec_idx].getMSLevel()));
}
std::vector<double> itys = channel_extractor.extractSingleSpec(quant_spec_idx, exp, channel_qc);
// TODO if itys are all zero we can actually skip correction and quantification
// NNLS modifies input, so we need a copy of the correction matrix
Matrix<double> m = correction_matrix;
std::vector<double> corrected(itys.size(), 0.);
NonNegativeLeastSquaresSolver::solve(m, itys, corrected);
fillConsensusFeature_(cur_cmap[pep_idx], pep, exp, id_spec_idx, quant_spec_idx, corrected, quant_method, quant_purity, id_purity, min_reporter_intensity, i);
for (Size c = 0; c < channel_qc.size(); ++c) // 'c' avoids shadowing the file index 'i'
{
// TODO ProteomeDiscoverer also outputs:
// - Reporter S/N but with S/N they mean the intensity corrected for the noise value from the raw thermo Orbitrap files
// (and I don't think we read them, see https://github.com/compomics/ThermoRawFileParser/blob/c293d4aa1b04bfd62124ff42c512572427a4316a/Writer/MzMlSpectrumWriter.cs#L1664)
// - For SPS-MS3 the number of precursor windows that actually surround a fragment from the identified peptide! Useful, but currently not implemented here.
qc[c].mz_deltas[pep_idx] = channel_qc[c].first;
if (channel_qc[c].second > 1)
{
#pragma omp atomic
qc[c].signal_not_unique++;
}
}
}
else
{
// will leave the cMap with a default initialized consensus feature. Remember to remove it later.
// should also never happen
OPENMS_LOG_WARN << "Identified spectrum " << spec_ref << " not found in mzML file. Skipping." << std::endl;
}
}
}
channel_extractor.printStatsWithMissing(qc);
// Add peptide IDs without MS3 spectra to unassigned IDs
if (!unassigned_pep_ids.empty())
{
OPENMS_LOG_INFO << "Adding " << unassigned_pep_ids.size()
<< " peptide identifications without MS3 spectra to unassigned IDs." << std::endl;
auto& cmap_unassigned = cmap.getUnassignedPeptideIdentifications();
cmap_unassigned.insert(cmap_unassigned.end(),
std::make_move_iterator(unassigned_pep_ids.begin()),
std::make_move_iterator(unassigned_pep_ids.end()));
}
// TODO if we want to support normalization, we either need to replace the quantifier with corrector and normalizer separately
// or init the normalizer from the quantifier settings.
// But honestly, most downstream software can do it better, so I would not bother. Just export to mzTab and do it in R/python.
//IsobaricNormalizer::normalize(cur_cmap);
// TODO cleanup, reset?
// Always use insert (not move assignment) to preserve the column headers registered
// via registerChannelsInOutputMap() above; move assignment would lose them if early
// files have no peptide IDs left after filtering.
cmap.reserve(cmap.size() + cur_cmap.size());
cmap.insert(cmap.end(), std::make_move_iterator(cur_cmap.begin()), std::make_move_iterator(cur_cmap.end()));
// TODO If we do the parquet export, we can export the feature file here already. Then, if prot. inference and quant are disabled,
// the tool could be run on a single file and distributed over multiple nodes. We could use a parquet partitioned over raw_files
// If we then implement an inference and quant tool that can read parquet, we can do inference and quant after that parallelized step.
// Potentially even with pyopenms??
}
/*cmap.reserve(std::accumulate(all_cmaps.begin(), all_cmaps.end(), 0, [](Size s, const ConsensusMap& c){return s + c.size();}));
for (auto& cur_cmap : all_cmaps)
{
cmap.insert(cmap.end(), std::make_move_iterator(cur_cmap.begin()), std::make_move_iterator(cur_cmap.end()));
}*/
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// Annotate output with data processing info
// Create a custom DataProcessing with filtered parameters
// (exclude unused quantification method sections to reduce output size)
DataProcessing dp = getProcessingInfo_(DataProcessing::QUANTITATION);
// Remove parameters for unused quantification methods
String selected_method = quant_method->getMethodName();
vector<String> keys_to_remove;
for (const auto& qm : quant_methods_)
{
if (qm.first != selected_method)
{
// Collect all parameter keys that start with this unused method name
vector<String> all_keys;
dp.getKeys(all_keys);
for (const String& key : all_keys)
{
if (key.hasPrefix("parameter: " + qm.first + ":"))
{
keys_to_remove.push_back(key);
}
}
}
}
for (const String& key : keys_to_remove)
{
dp.removeMetaValue(key);
}
addDataProcessing_(cmap, dp);
// remove empty features (TODO based on some other filtering settings?)
const auto empty_feat = [](const ConsensusFeature& c){return c.getIntensity() <= 0.;};
cmap.erase(remove_if(cmap.begin(), cmap.end(), empty_feat), cmap.end());
cmap.ensureUniqueId();
std::vector<ProteinIdentification> merged_prot_ids;
merged_prot_ids.resize(1);
PeptideIdentificationList _;
merger.returnResultsAndClear(merged_prot_ids[0], _);
OPENMS_LOG_INFO << "Merged " << merged_prot_ids[0].getHits().size() << " proteins." << std::endl;
cmap.setProteinIdentifications(merged_prot_ids);
ExperimentalDesign design = ExperimentalDesign::fromConsensusMap(cmap);
if (exp_design != "")
{
design = ExperimentalDesignFile::load(exp_design, true);
}
bool groups = getStringOption_("protein_quantification") != "strictly_unique_peptides";
bool greedy_group_resolution = getStringOption_("protein_quantification") == "shared_peptides";
if (!bayesian)
{
BasicProteinInferenceAlgorithm prot_inference;
Param bpi_param = getParam_().copy("BasicProteinInference:", true);
bpi_param.setValue("annotate_indistinguishable_groups", groups ? "true" : "false");
bpi_param.setValue("greedy_group_resolution", greedy_group_resolution ? "true" : "false");
writeDebug_("Parameters passed to BasicProteinInference algorithm", bpi_param, 3);
prot_inference.setParameters(bpi_param);
prot_inference.run(cmap, cmap.getProteinIdentifications()[0], false);
}
else
{
BayesianProteinInferenceAlgorithm bayes;
Param bayes_param = getParam_().copy("BayesianProteinInference:", true);
// Hardcoded for this usecase
bayes_param.setValue("update_PSM_probabilities", "false");
bayes_param.setValue("annotate_group_probabilities", "true");
bayes_param.setValue("user_defined_priors", "false");
bayes_param.setValue("use_ids_outside_features", "false");
writeDebug_("Parameters passed to BayesianProteinInference algorithm", bayes_param, 3);
bayes.setParameters(bayes_param);
bayes.inferPosteriorProbabilities(cmap, greedy_group_resolution);
if (!groups)
{
cmap.getProteinIdentifications()[0].getIndistinguishableProteins().clear();
}
}
FalseDiscoveryRate fdr;
auto& proteins = cmap.getProteinIdentifications()[0];
if (getStringOption_("picked_fdr") == "true")
{
fdr.applyPickedProteinFDR(proteins, getStringOption_("picked_decoy_string"), getStringOption_("picked_decoy_prefix") == "prefix");
}
else
{
fdr.applyBasic(proteins);
}
if (getStringOption_("FDR_type") == "PSM+peptide")
{
fdr.applyBasicPeptideLevel(cmap, false);
}
else
{
fdr.applyBasic(cmap, false);
}
bool rm_pep = getFlag_("delete_unreferenced_peptide_hits");
if (rm_pep)
{
OPENMS_LOG_INFO << "Removing peptide hits without protein references..." << endl;
}
IDFilter::removeDanglingProteinReferences(cmap, rm_pep);
IDFilter::removeUnreferencedProteins(cmap, true);
IDFilter::updateProteinGroups(proteins.getIndistinguishableProteins(), proteins.getHits());
IDFilter::updateProteinGroups(proteins.getProteinGroups(), proteins.getHits());
const double max_pro_fdr = getDoubleOption_("proteinFDR");
const double max_psm_fdr = getDoubleOption_("psmFDR");
// FDR filtering
if (max_psm_fdr < 1.0)
{
for (auto& f : cmap)
{
IDFilter::filterHitsByScore(f.getPeptideIdentifications(), max_psm_fdr);
}
IDFilter::filterHitsByScore(cmap.getUnassignedPeptideIdentifications(), max_psm_fdr);
}
if (max_pro_fdr < 1.0)
{
IDFilter::filterHitsByScore(proteins, max_pro_fdr);
IDFilter::removeDanglingProteinReferences(cmap, rm_pep);
}
if (max_psm_fdr < 1.0)
{
IDFilter::removeUnreferencedProteins(cmap, true);
}
if (max_pro_fdr < 1.0 || max_psm_fdr < 1.0)
{
IDFilter::updateProteinGroups(proteins.getIndistinguishableProteins(), proteins.getHits());
IDFilter::updateProteinGroups(proteins.getProteinGroups(), proteins.getHits());
}
if (proteins.getHits().empty())
{
throw Exception::MissingInformation(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"No proteins left after FDR filtering. Please check the log and adjust your settings.");
}
if (!greedy_group_resolution && !groups)
{
for (auto& f : cmap)
{
IDFilter::keepUniquePeptidesPerProtein(f.getPeptideIdentifications());
}
IDFilter::keepUniquePeptidesPerProtein(cmap.getUnassignedPeptideIdentifications());
}
PeptideAndProteinQuant prot_quantifier;
prot_quantifier.setParameters(pq_param);
prot_quantifier.readQuantData(cmap, design);
prot_quantifier.quantifyPeptides();
ProteinIdentification& inferred_proteins = cmap.getProteinIdentifications()[0];
if (inferred_proteins.getIndistinguishableProteins().empty())
{
throw Exception::MissingInformation(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"No information on indistinguishable protein groups found.");
}
prot_quantifier.quantifyProteins(inferred_proteins);
const auto& protein_quants = prot_quantifier.getProteinResults();
if (protein_quants.empty())
{
OPENMS_LOG_WARN << "Warning: No proteins were quantified." << endl;
}
// Annotate quants to protein(groups) for easier export in mzTab
// Note: we keep protein groups that have not been quantified
prot_quantifier.annotateQuantificationsToProteins(
protein_quants, inferred_proteins, true);
// TODO also allow storing mzTab and even better, parquet
FileHandler().storeConsensusFeatures(out, cmap);
String out_mzTab = getStringOption_("out_mzTab");
if (!out_mzTab.empty())
{
const bool report_unidentified_features(false);
const bool report_unmapped(true);
const bool report_subfeatures(false);
const bool report_unidentified_spectra(false);
const bool report_not_only_best_psm_per_spectrum(false);
MzTabFile().store(out_mzTab,
cmap,
false,
report_unidentified_features,
report_unmapped,
report_subfeatures,
report_unidentified_spectra,
report_not_only_best_psm_per_spectrum);
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPIsobaricWorkflow tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// src/topp/MascotAdapterOnline.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Andreas Bertsch, Daniel Jameson, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/FORMAT/MascotXMLFile.h>
#include <OpenMS/FORMAT/MascotRemoteQuery.h>
#include <OpenMS/FORMAT/MascotGenericFile.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/APPLICATIONS/SearchEngineBase.h>
#include <sstream>
#include <QtCore/QFile>
#include <QtCore/QCoreApplication>
#include <QtCore/QTimer>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MascotAdapterOnline MascotAdapterOnline
@brief Identifies peptides in MS/MS spectra via Mascot.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → MascotAdapterOnline →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any signal-/preprocessing tool @n (that writes mzML format)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
This wrapper application generates peptide identifications for MS/MS
spectra using the search engine Mascot. It communicates with the Mascot
server over the network (i.e. it does not have to run on the server
itself).
The adapter supports Mascot security features as well as proxy connections.
Mascot versions 2.2.x up to 2.4.1 are supported and have been successfully
tested (to varying degrees).
@bug Running the adapter on Mascot 2.4 (possibly also other versions) produces the following error messages, which should be ignored:\n
MascotRemoteQuery: An error occurred (requestId=11): Request aborted (QT Error Code: 7)\n
MascotRemoteQuery: An error occurred (requestId=12): Request aborted (QT Error Code: 7)
@note Some Mascot server instances seem to fail without reporting back an
error message. In such cases, try to run the search on another Mascot
server or change/validate the search parameters (e.g. using modifications
that are known to OpenMS and can thus be set in the INI file, but which are
unknown to Mascot, might pose a problem).
@note Mascot returns incomplete/incorrect protein assignments for most
identified peptides (due to protein-level grouping/filtering). Thus,
the protein associations are not included in the output of this
adapter, only the peptide sequences. @ref TOPP_PeptideIndexer should be run
after this tool to get correct assignments.
@note Currently mzIdentML (mzid) is not directly supported as an
input/output format of this tool. Convert mzid files to/from idXML using
@ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MascotAdapterOnline.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MascotAdapterOnline.html
For the parameters of the algorithm section see the algorithms documentation: @n
@ref OpenMS::MascotRemoteQuery "Mascot_server" @n
@ref OpenMS::MascotGenericFile "Mascot_parameters" @n
*/
// We do not want this class to show up in the doc:
/// @cond TOPPCLASSES
class TOPPMascotAdapterOnline :
public SearchEngineBase
{
public:
TOPPMascotAdapterOnline() :
SearchEngineBase("MascotAdapterOnline", "Annotates MS/MS spectra using Mascot.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file in mzML format.\n");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output file in idXML format.\n");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerSubsection_("Mascot_server", "Mascot server details");
registerSubsection_("Mascot_parameters", "Mascot parameters used for searching");
}
Param getSubsectionDefaults_(const String& section) const override
{
if (section == "Mascot_server")
{
MascotRemoteQuery mascot_query;
return mascot_query.getParameters();
}
if (section == "Mascot_parameters")
{
MascotGenericFile mgf_file;
Param p = mgf_file.getParameters();
p.remove("internal:");
return p;
}
return Param();
}
void parseMascotResponse_(const PeakMap& exp, bool decoy, MascotRemoteQuery* mascot_query, ProteinIdentification& prot_id, PeptideIdentificationList& pep_ids)
{
String mascot_tmp_file_name = decoy ? (File::getTempDirectory() + "/" + File::getUniqueName() + "_Mascot_decoy_response") : (File::getTempDirectory() + "/" + File::getUniqueName() + "_Mascot_response");
QFile mascot_tmp_file(mascot_tmp_file_name.c_str());
mascot_tmp_file.open(QIODevice::WriteOnly);
if (decoy)
{
mascot_tmp_file.write(mascot_query->getMascotXMLDecoyResponse());
}
else
{
mascot_tmp_file.write(mascot_query->getMascotXMLResponse());
}
mascot_tmp_file.close();
// set up helper object for looking up spectrum meta data:
SpectrumMetaDataLookup lookup;
MascotXMLFile::initializeLookup(lookup, exp);
// read the response
MascotXMLFile().load(mascot_tmp_file_name, prot_id, pep_ids, lookup);
writeDebug_("Read " + String(pep_ids.size()) + " peptide ids and " + String(prot_id.getHits().size()) + " protein identifications from Mascot", 5);
// for debugging errors relating to unexpected response files
if (this->debug_level_ >= 100)
{
writeDebug_(String("\nMascot Server Response file saved to: '") + mascot_tmp_file_name + "'. If an error occurs, send this file to the OpenMS team.\n", 100);
}
else
{
mascot_tmp_file.remove(); // delete file
}
}
// merge b into a
void mergeIDs_(ProteinIdentification& p_a, const ProteinIdentification& p_b, PeptideIdentificationList& pep_a, const PeptideIdentificationList& pep_b)
{
// if p_a is empty use all meta values and hits from p_b to initialize p_a
if (p_a.getHits().empty())
{
p_a = p_b;
}
else
{
// p_a already initialized? just add proteins of b to a
for (const ProteinHit& p : p_b.getHits())
{
p_a.insertHit(p);
}
}
map<String, size_t> native_id2id_index;
size_t index{};
String run_identifier;
for (const PeptideIdentification& pep : pep_a)
{
const String& native_id = pep.getSpectrumReference();
native_id2id_index[native_id] = index;
++index;
if (run_identifier.empty()) run_identifier = pep.getIdentifier();
}
for (auto pep : pep_b) //OMS_CODING_TEST_EXCLUDE
{
auto it = native_id2id_index.find(pep.getSpectrumReference());
if (it == native_id2id_index.end()) // spectrum not yet identified? add decoy id
{
pep.setIdentifier(run_identifier);
pep_a.push_back(pep);
}
else
{
const vector<PeptideHit>& hits = pep.getHits();
if (hits.empty())
{
continue;
}
for (const PeptideHit& h : hits)
{
pep_a[it->second].insertHit(h);
}
pep_a[it->second].sort();
}
}
}
ExitCodes main_(int argc, const char** argv) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
// input/output files
String in = getRawfileName();
String out(getStringOption_("out"));
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeakMap exp;
// keep only MS2 spectra
FileHandler fh;
fh.getOptions().setMSLevels({2});
fh.loadExperiment(in, exp, {FileTypes::Type::MZML}, log_type_, false, false);
writeLogInfo_("Number of spectra loaded: " + String(exp.size()));
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
Param mascot_param = getParam_().copy("Mascot_parameters:", true);
// overwrite default search title with filename
if (mascot_param.getValue("search_title") == "OpenMS_search")
{
mascot_param.setValue("search_title", FileHandler::stripExtension(File::basename(in)));
}
mascot_param.setValue("internal:HTTP_format", "true");
SpectrumLookup lookup;
lookup.readSpectra(exp.getSpectra());
Param mascot_query_param = getParam_().copy("Mascot_server:", true);
size_t batch_size = (size_t)mascot_query_param.getValue("batch_size");
size_t chunks = (exp.size() - 1) / batch_size + 1; // Note: safe as we have at least one spectrum
vector<ProteinIdentification> all_prot_ids;
ProteinIdentification all_prot_id;
PeptideIdentificationList all_pep_ids;
MSExperiment current_batch;
for (size_t k = 0; k < chunks; ++k)
{
// get range for next set of n elements
auto start_itr = std::next(exp.begin(), k*batch_size);
auto end_itr = std::next(exp.begin(), k*batch_size + batch_size);
// allocate memory for the current chunk
current_batch.resize(batch_size);
// code to handle the last sub-vector as it might
// contain less elements
if (k*batch_size + batch_size > exp.size())
{
end_itr = exp.end();
current_batch.resize(exp.size() - k*batch_size);
}
// copy elements from the input range to the sub-vector
std::copy(start_itr, end_itr, current_batch.begin());
// write mgf and run search
MascotGenericFile mgf_file;
mgf_file.setParameters(mascot_param);
// get the spectra into string stream
writeDebug_("Writing MGF file to stream", 1);
stringstream ss;
mgf_file.store(ss, in, current_batch, true); // write in compact format
// Usage of a QCoreApplication is overkill here (and ugly too), but we just use the
// QEventLoop to process the signals and slots and grab the results afterwards from
// the MascotRemoteQuery instance
char** argv2 = const_cast<char**>(argv);
QCoreApplication event_loop(argc, argv2);
MascotRemoteQuery* mascot_query = new MascotRemoteQuery(&event_loop);
writeDebug_("Setting parameters for Mascot query", 1);
mascot_query->setParameters(mascot_query_param);
bool internal_decoys = mascot_param.getValue("decoy") == "true";
// We used internal decoy search. Set that we want to retrieve decoy search results during export.
if (internal_decoys)
{
mascot_query->setExportDecoys(true);
}
writeDebug_("Setting spectra for Mascot query", 1);
mascot_query->setQuerySpectra(ss.str());
// free the MGF data held in the stream buffer; clear() alone would only reset stream state
ss.str("");
ss.clear();
QObject::connect(mascot_query, SIGNAL(done()), &event_loop, SLOT(quit()));
QTimer::singleShot(1000, mascot_query, SLOT(run()));
writeLogInfo_("Submitting Mascot query (now: " + DateTime::now().get() + ")...");
event_loop.exec();
writeLogInfo_("Mascot query finished");
if (mascot_query->hasError())
{
writeLogError_("An error occurred during the query: " + mascot_query->getErrorMessage());
delete mascot_query;
return EXTERNAL_PROGRAM_ERROR;
}
PeptideIdentificationList pep_ids;
ProteinIdentification prot_id;
if (!mascot_query_param.exists("skip_export") ||
!mascot_query_param.getValue("skip_export").toBool())
{
// write Mascot response to file
parseMascotResponse_(current_batch, false, mascot_query, prot_id, pep_ids); // targets
// reannotate proper spectrum native id if missing
for (auto& pep : pep_ids)
{
// no need to reannotate
if (pep.metaValueExists("spectrum_reference")
&& !(static_cast<String>(pep.getMetaValue("spectrum_reference")).empty()))
{
continue;
}
try
{
Size index = lookup.findByRT(pep.getRT());
pep.setSpectrumReference( exp[index].getNativeID());
}
catch (Exception::ElementNotFound&)
{
OPENMS_LOG_ERROR << "Error: Failed to look up spectrum native ID for peptide identification with retention time '" + String(pep.getRT()) + "'." << endl;
}
}
if (internal_decoys)
{
PeptideIdentificationList decoy_pep_ids;
ProteinIdentification decoy_prot_id;
parseMascotResponse_(current_batch, true, mascot_query, decoy_prot_id, decoy_pep_ids); // decoys
// reannotate proper spectrum native id if missing
for (auto& pep : decoy_pep_ids)
{
// no need to reannotate
if (pep.metaValueExists("spectrum_reference")
&& !(static_cast<String>(pep.getMetaValue("spectrum_reference")).empty()))
{
continue;
}
try
{
Size index = lookup.findByRT(pep.getRT());
pep.setSpectrumReference( exp[index].getNativeID());
}
catch (Exception::ElementNotFound&)
{
OPENMS_LOG_ERROR << "Error: Failed to look up spectrum native ID for peptide identification with retention time '" + String(pep.getRT()) + "'." << endl;
}
}
mergeIDs_(prot_id, decoy_prot_id, pep_ids, decoy_pep_ids);
}
}
String search_number = mascot_query->getSearchIdentifier();
if (search_number.empty())
{
writeLogError_("Error: Failed to extract the Mascot search identifier (search number).");
if (mascot_query_param.exists("skip_export") &&
mascot_query_param.getValue("skip_export").toBool())
{
return PARSE_ERROR;
}
}
else
{
prot_id.setMetaValue("SearchNumber", search_number);
}
// clean up
delete mascot_query;
current_batch.clear(true); // clear meta data
mergeIDs_(all_prot_id, prot_id, all_pep_ids, pep_ids);
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
all_prot_id.setPrimaryMSRunPath({ in }, exp);
DateTime now = DateTime::now();
String date_string = now.get();
String run_identifier("Mascot_" + date_string);
// remove proteins, as protein links seem to be broken and reindexing is needed
all_prot_id.getHits().clear();
all_prot_id.setIdentifier(run_identifier);
all_prot_ids.push_back(all_prot_id);
// remove protein links from peptides, as protein links seem to be broken and reindexing is needed
for (auto& pep : all_pep_ids)
{
pep.setIdentifier(run_identifier);
for (auto& hit : pep.getHits())
{
hit.setPeptideEvidences({});
}
}
// write all (!) parameters as metavalues to the search parameters
DefaultParamHandler::writeParametersToMetaValues(this->getParam_(), all_prot_ids[0].getSearchParameters(), this->getToolPrefix());
// get feature set used in percolator
StringList feature_set;
PercolatorFeatureSetHelper::addMASCOTFeatures(all_pep_ids, feature_set);
all_prot_ids.front().getSearchParameters().setMetaValue("extra_features", ListUtils::concatenate(feature_set, ","));
FileHandler().storeIdentifications(out, all_prot_ids, all_pep_ids, {FileTypes::IDXML});
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPMascotAdapterOnline tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// src/topp/PSMFeatureExtractor.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer $
// $Authors: Andreas Simon, Mathias Walzer, Matthew The $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <QtCore/QFile>
#include <QtCore/QDir>
#include <QtCore/QProcess>
#include <iostream>
#include <cmath>
#include <string>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_PSMFeatureExtractor PSMFeatureExtractor
@brief PSMFeatureExtractor computes extra features for each input PSM
@experimental Parts of this tool are still work in progress and usage and input requirements or output might change. (multiple_search_engine, Mascot support)
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → PSMFeatureExtractor →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeptideIndexer</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PercolatorAdapter </td>
</tr>
</table>
</center>
<p>
PSMFeatureExtractor is search engine sensitive, i.e. its extra features
vary depending on the search engine. Thus, please make sure the input is
compliant with TOPP SearchengineAdapter output. Also, PeptideIndexer-compliant
target/decoy annotation is mandatory.
Currently supported search engines are Comet, X!Tandem, and MS-GF+.
Mascot support is available but in beta development.
</p>
@note If you have extra features you want to pass to Percolator, use the extra
flag and list the MetaData entries containing the extra features.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_PSMFeatureExtractor.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_PSMFeatureExtractor.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class PSMFeatureExtractor :
public TOPPBase
{
public:
PSMFeatureExtractor() :
TOPPBase("PSMFeatureExtractor", "Computes extra features for each input PSM.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "Input file(s)", true);
setValidFormats_("in", ListUtils::create<String>("idXML,mzid"));
registerOutputFile_("out", "<file>", "", "Output file in mzid or idXML format", true);
setValidFormats_("out", ListUtils::create<String>("idXML,mzid"));
registerStringOption_("out_type", "<type>", "", "Output file type -- default: determined from file extension or content.", false);
setValidStrings_("out_type", ListUtils::create<String>("idXML,mzid"));
registerStringList_("extra", "<MetaData parameter>", vector<String>(), "List of the MetaData parameters to be included in a feature set for Percolator.", false, false);
registerFlag_("multiple_search_engines", "Combine PSMs from different search engines by merging on scan level.");
registerFlag_("skip_db_check", "Manual override to skip the check if same settings for multiple search engines were applied. Only valid together with -multiple_search_engines flag.", true);
registerFlag_("concat", "Naive merging of PSMs from different search engines: concatenate multiple search results instead of merging on scan level. Only valid together with -multiple_search_engines flag.", true);
registerFlag_("impute", "Instead of discarding PSMs not unanimously detected by all search engines, impute missing values by the respective score's observed min/max. Only valid together with -multiple_search_engines flag.", true);
registerFlag_("limit_imputation", "Impute missing scores with the worst numerical limit of the respective score (instead of the observed min/max). Only valid together with -multiple_search_engines flag.", true);
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// general variables and data to perform PSMFeatureExtractor
//-------------------------------------------------------------
PeptideIdentificationList all_peptide_ids;
vector<ProteinIdentification> all_protein_ids;
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
const StringList in_list = getStringList_("in");
bool multiple_search_engines = getFlag_("multiple_search_engines");
OPENMS_LOG_DEBUG << "Input file (of target?): " << ListUtils::concatenate(in_list, ",") << endl;
if (in_list.size() > 1 && !multiple_search_engines)
{
writeLogError_("Error: multiple input files given for -in, but -multiple_search_engines flag not specified. If the same search engine was used, feed the input files into PSMFeatureExtractor one by one.");
printUsage_();
return ILLEGAL_PARAMETERS;
}
const String out(getStringOption_("out"));
//-------------------------------------------------------------
// read input
//-------------------------------------------------------------
bool skip_db_check = getFlag_("skip_db_check");
bool concatenate = getFlag_("concat");
StringList search_engines_used;
for (const String& in : in_list)
{
PeptideIdentificationList peptide_ids;
vector<ProteinIdentification> protein_ids;
FileHandler fh;
FileTypes::Type in_type = fh.getType(in);
OPENMS_LOG_INFO << "Loading input file: " << in << endl;
if (in_type == FileTypes::IDXML || in_type == FileTypes::MZIDENTML)
{
FileHandler().loadIdentifications(in, protein_ids, peptide_ids, {FileTypes::IDXML, FileTypes::MZIDENTML});
}
if (in_type == FileTypes::MZIDENTML)
{
OPENMS_LOG_WARN << "Converting from mzid: possible loss of information depending on target format." << endl;
}
//else caught by TOPPBase:registerInput being mandatory mzid or idxml
//check and warn if merged from multiple runs
if (protein_ids.size() > 1)
{
throw Exception::InvalidValue(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"File '" + in + "' has more than one ProteinIDRun. This is currently not correctly handled. "
"Please use the merge_proteins_add_psms option if you used IDMerger. Alternatively, pass"
" all original single-run idXML inputs as list to this tool.",
"# runs: " + String(protein_ids.size()));
}
// check that all ProteinIdentifications were searched against the same database (skipped for the first file, while all_protein_ids is still empty)
if (multiple_search_engines && !skip_db_check && !all_protein_ids.empty())
{
ProteinIdentification::SearchParameters all_search_parameters = all_protein_ids.front().getSearchParameters();
ProteinIdentification::SearchParameters search_parameters = protein_ids.front().getSearchParameters();
if (search_parameters.db != all_search_parameters.db)
{
writeLogError_("Error: Input files are not searched with the same protein database, " + search_parameters.db + " vs. " + all_search_parameters.db + ". Set -skip_db_check flag to ignore this. Aborting!");
return INCOMPATIBLE_INPUT_DATA;
}
}
if (!multiple_search_engines)
{
all_peptide_ids.insert(all_peptide_ids.end(), peptide_ids.begin(), peptide_ids.end());
}
else
{
String search_engine = protein_ids.front().getSearchEngine();
if (!ListUtils::contains(search_engines_used, search_engine))
{
search_engines_used.push_back(search_engine);
}
if (concatenate)
{
// will concatenate the list
PercolatorFeatureSetHelper::concatMULTISEPeptideIds(all_peptide_ids, peptide_ids, search_engine);
}
else
{
// will collapse the list (based on spectrum_reference)
PercolatorFeatureSetHelper::mergeMULTISEPeptideIds(all_peptide_ids, peptide_ids, search_engine);
}
}
PercolatorFeatureSetHelper::mergeMULTISEProteinIds(all_protein_ids, protein_ids);
}
if (all_protein_ids.empty())
{
writeLogError_("Error: No protein hits found in input file. Aborting!");
printUsage_();
return INPUT_FILE_EMPTY;
}
//-------------------------------------------------------------
// extract search engine and prepare pin
//-------------------------------------------------------------
String search_engine = all_protein_ids.front().getSearchEngine();
if (multiple_search_engines) search_engine = "multiple";
OPENMS_LOG_DEBUG << "Registered search engine: " << search_engine << endl;
StringList extra_features = getStringList_("extra");
StringList feature_set;
if (search_engine == "multiple")
{
if (getFlag_("concat"))
{
PercolatorFeatureSetHelper::addCONCATSEFeatures(all_peptide_ids, search_engines_used, feature_set);
}
else
{
bool impute = getFlag_("impute");
bool limits = getFlag_("limit_imputation");
PercolatorFeatureSetHelper::addMULTISEFeatures(all_peptide_ids, search_engines_used, feature_set, !impute, limits);
}
}
else if (search_engine == "MS-GF+")
{
PercolatorFeatureSetHelper::addMSGFFeatures(all_peptide_ids, feature_set);
}
else if (search_engine == "Mascot")
{
PercolatorFeatureSetHelper::addMASCOTFeatures(all_peptide_ids, feature_set);
}
else if (search_engine == "XTandem")
{
PercolatorFeatureSetHelper::addXTANDEMFeatures(all_peptide_ids, feature_set);
}
else if (search_engine == "Comet")
{
PercolatorFeatureSetHelper::addCOMETFeatures(all_peptide_ids, feature_set);
}
else if (search_engine == "MSFragger")
{
PercolatorFeatureSetHelper::addMSFRAGGERFeatures(feature_set);
}
else
{
OPENMS_LOG_ERROR << "No known input to create PSM features from. Aborting" << std::endl;
return INCOMPATIBLE_INPUT_DATA;
}
String run_identifier = all_protein_ids.front().getIdentifier();
for (PeptideIdentification& pep : all_peptide_ids)
{
pep.setIdentifier(run_identifier);
PercolatorFeatureSetHelper::checkExtraFeatures(pep.getHits(), extra_features); // will remove inconsistently available features
}
if (all_protein_ids.size() > 1)
{
OPENMS_LOG_ERROR << "Multiple identifications in one file are not supported. Please resume with separate input files. Quitting." << std::endl;
return INCOMPATIBLE_INPUT_DATA;
}
else
{
ProteinIdentification::SearchParameters search_parameters = all_protein_ids.front().getSearchParameters();
search_parameters.setMetaValue("feature_extractor", "TOPP_PSMFeatureExtractor");
feature_set.insert(feature_set.end(), extra_features.begin(), extra_features.end());
search_parameters.setMetaValue("extra_features", ListUtils::concatenate(feature_set, ","));
all_protein_ids.front().setSearchParameters(search_parameters);
}
// Storing the PeptideHits with calculated q-value, pep and svm score
FileTypes::Type out_type = FileTypes::nameToType(getStringOption_("out_type"));
if (out_type == FileTypes::UNKNOWN)
{
FileHandler fh;
out_type = fh.getTypeByFileName(out);
}
if (out_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine output file type! Set 'out_type' parameter to desired file type.");
return PARSE_ERROR;
}
OPENMS_LOG_INFO << "writing output file: " << out << endl;
FileHandler().storeIdentifications(out, all_protein_ids, all_peptide_ids, {FileTypes::MZIDENTML, FileTypes::IDXML});
writeLogInfo_("PSMFeatureExtractor finished successfully!");
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
PSMFeatureExtractor tool;
return tool.main(argc, argv);
}
/// @endcond
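The checkExtraFeatures call above removes features that are not present on every PSM. A sketch of that intersection semantics in plain C++ (illustrative only, not the OpenMS implementation):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Keep only feature names present in every hit's feature set;
// inconsistently available features are dropped.
std::set<std::string> consistentFeatures(const std::vector<std::set<std::string>>& per_hit)
{
  if (per_hit.empty()) return std::set<std::string>();
  std::set<std::string> kept = per_hit.front();
  for (const std::set<std::string>& feats : per_hit)
  {
    std::set<std::string> tmp;
    for (const std::string& f : kept)
    {
      if (feats.count(f)) tmp.insert(f); // keep only features seen in this hit too
    }
    kept.swap(tmp);
  }
  return kept;
}
```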
| C++ |
3D | OpenMS/OpenMS | src/topp/QCImporter.cpp | .cpp | 9,380 | 247 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer $
// $Author: Mathias Walzer $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/FORMAT/QcMLFile.h>
#include <OpenMS/FORMAT/ControlledVocabulary.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/CONCEPT/UniqueIdGenerator.h>
#include <QByteArray>
#include <QFile>
#include <QString>
#include <QFileInfo>
#include <iostream>
#include <fstream>
#include <vector>
#include <map>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_QCImporter QCImporter
@brief Imports several quality parameters from a tabular (text) format into a qcML file - the counterpart to QCExporter.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → QCImporter →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_QCExporter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_QCMerger </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_CometAdapter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_QCShrinker </td>
</tr>
</table>
</CENTER>
If external tools provide additional quality parameters (qp) in tabular format - for existing runs or sets, or even for new runs - these can be imported into the qcML file. For an example, see the files in the share directory.
- @p table The table containing the additional qp values in its columns. The first row is the header. The target run or set names/ids are given in the column "raw data file"; each following row contains the qp values for that run.
- @p mapping The mapping of the table headers to the corresponding qp CV terms, also in CSV format. The first row repeats the headers of the table; the second row holds the corresponding qp CV accessions.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_QCImporter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_QCImporter.html
*/
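The table/mapping contract described above boils down to a header-to-accession lookup built from the first two rows of the mapping CSV. A minimal sketch in plain C++, independent of OpenMS (the PSI-MS accessions in the test are illustrative, not taken from this tool):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Build a lookup from table headers (mapping row 1) to qp CV accessions
// (mapping row 2). Returns an empty map if the rows differ in length,
// mirroring the length check performed by QCImporter.
std::map<std::string, std::string> buildMapping(const std::vector<std::string>& header,
                                                const std::vector<std::string>& accessions)
{
  std::map<std::string, std::string> mapping;
  if (header.size() != accessions.size()) return mapping; // rows must agree in length
  for (std::size_t i = 0; i < header.size(); ++i)
  {
    mapping[header[i]] = accessions[i];
  }
  return mapping;
}
```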
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPQCImporter :
public TOPPBase
{
public:
TOPPQCImporter() :
TOPPBase("QCImporter", "Imports tables with quality control parameters into qcml files.", true, {{ "Walzer M, Pernas LE, Nasso S, Bittremieux W, Nahnsen S, Kelchtermans P, Martens, L", "qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments", "Molecular & Cellular Proteomics 2014; 13(8)" , "10.1074/mcp.M113.035907"}})
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input qcml file", false);
setValidFormats_("in", ListUtils::create<String>("qcML"));
registerInputFile_("table", "<file>", "", R"(The table containing the additional qp values in the columns. First row is considered containing the header. The target run or set names/ids are indicated by column "raw data file", so each row after the header will contain the values of qps for that run. (csv without "!))", true);
setValidFormats_("table", ListUtils::create<String>("csv"));
registerInputFile_("mapping", "<file>", "", "The mapping of the table header to the according qp cvs, also in csv format. The first row is considered containing the headers as in the table. The second row is considered the according qp cv accessions. (csv without \"!)", true);
setValidFormats_("mapping", ListUtils::create<String>("csv"));
registerOutputFile_("out", "<file>", "", "Output extended qcML file", true);
setValidFormats_("out", ListUtils::create<String>("qcML"));
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
String mappi = getStringOption_("mapping");
String tab = getStringOption_("table");
ControlledVocabulary cv;
cv.loadFromOBO("PSI-MS", File::find("/CV/psi-ms.obo"));
cv.loadFromOBO("QC", File::find("/CV/qc-cv.obo"));
cv.loadFromOBO("QC", File::find("/CV/qc-cv-legacy.obo"));
//-------------------------------------------------------------
// reading input
//------------------------------------------------------------
QcMLFile qcmlfile;
if (!in.empty())
{
qcmlfile.load(in);
}
if (!mappi.empty() && !tab.empty())
{
CsvFile csv_file(tab);
CsvFile map_file(mappi);
if (map_file.rowCount() < 2) //first row: table headers, second row: corresponding qp CV accessions
{
cerr << "Error: You have to provide a mapping for your table (first row: table headers, second row: corresponding qp CV accessions). Aborting!" << endl;
return ILLEGAL_PARAMETERS;
}
StringList header, according;
map_file.getRow(0, header);
map_file.getRow(1, according);
if (header.size() != according.size())
{
cerr << "Error: The header row and the accession row of the mapping differ in length. Aborting!" << endl;
return ILLEGAL_PARAMETERS;
}
int runset_col = -1;
for (Size i = 0; i < according.size(); ++i)
{
if (!cv.exists(according[i]))
{
try
{
const ControlledVocabulary::CVTerm& term = cv.getTermByName(according[i]);
header[i] = term.name;
according[i] = term.id;
}
catch (...)
{
cerr << "Error: Column " << String(i) << " of the mapping does not contain a valid CV accession or name. Aborting!" << endl;
return ILLEGAL_PARAMETERS;
}
}
else
{
const ControlledVocabulary::CVTerm& term = cv.getTerm(according[i]);
header[i] = term.name;
}
if (header[i] == "raw data file") //TODO add set name as possibility!
{
runset_col = i;
}
}
if (runset_col < 0)
{
cerr << "Error: The mapping must contain a \"raw data file\" column to map table rows to runs/sets. Aborting!" << endl;
return ILLEGAL_PARAMETERS;
}
if (csv_file.rowCount() > 1)
{
for (Size i = 1; i < csv_file.rowCount(); ++i)
{
StringList li;
csv_file.getRow(i, li);
if (li.size() < according.size())
{
cerr << "Error: You have to give a correct mapping of your table - row " << String(i + 1) << " is too short. Aborting!" << endl;
return ILLEGAL_PARAMETERS;
}
std::vector<QcMLFile::QualityParameter> qps;
String id;
bool set = false;
for (Size j = 0; j < li.size(); ++j)
{
if (j == static_cast<Size>(runset_col))
{
if (qcmlfile.existsRun(li[j])) //TODO this only works for real run IDs
{
id = li[j];
}
else if (qcmlfile.existsSet(li[j])) //TODO this only works for real set IDs
{
id = li[j];
set = true;
}
else
{
id = li[j];
qcmlfile.registerRun(id, id);
//TODO warn that if this was supposed to be a set - now it is not!
}
}
QcMLFile::QualityParameter def;
def.name = header[j]; ///< Name
def.id = String(UniqueIdGenerator::getUniqueId());
def.cvRef = "QC"; ///< cv reference ('full name')
def.cvAcc = according[j];
def.value = li[j];
qps.push_back(def);
}
if (!id.empty())
{
for (const QcMLFile::QualityParameter& qp : qps)
{
if (!set)
{
qcmlfile.addRunQualityParameter(id, qp);
}
else
{
qcmlfile.addSetQualityParameter(id, qp);
}
}
}
}
}
}
qcmlfile.store(out);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPQCImporter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/MetaboliteSpectralMatcher.cpp | .cpp | 6,035 | 178 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Erhan Kenar $
// --------------------------------------------------------------------------
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/FORMAT/MascotGenericFile.h>
#include <OpenMS/FORMAT/MSPGenericFile.h>
#include <OpenMS/FORMAT/MzTab.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/ANALYSIS/ID/MetaboliteSpectralMatching.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MetaboliteSpectralMatcher MetaboliteSpectralMatcher
@brief MetaboliteSpectralMatcher identifies small molecules from tandem MS spectra using a spectral library.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → MetaboliteSpectralMatcher →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> processing in R </td>
</tr>
</table>
</CENTER>
By default, MS2 spectra with similar precursor masses are merged before comparison with database spectra - useful, for example, when the same mass is selected as precursor twice, once at the start of a chromatographic peak and once at its apex.
Merging can also have disadvantages, e.g. for isobaric or isomeric compounds, which have similar or identical masses but can differ in retention time and MS2 spectra.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MetaboliteSpectralMatcher.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MetaboliteSpectralMatcher.html
*/
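The precursor-based merging described above can be illustrated with a small grouping routine: sort the precursor m/z values and start a new group whenever the gap to the previous value exceeds a tolerance. This is a sketch of the idea only, not the OpenMS implementation (there, merging is controlled via the algorithm parameters):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Group precursor m/z values that lie within 'tol' of their neighbors;
// spectra in one group would be merged before library comparison.
std::vector<std::vector<double>> groupByPrecursor(std::vector<double> mzs, double tol)
{
  std::sort(mzs.begin(), mzs.end());
  std::vector<std::vector<double>> groups;
  for (double mz : mzs)
  {
    if (groups.empty() || mz - groups.back().back() > tol)
    {
      groups.push_back(std::vector<double>(1, mz)); // gap too large: new group
    }
    else
    {
      groups.back().push_back(mz); // within tolerance: merge into current group
    }
  }
  return groups;
}
```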
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMetaboliteSpectralMatcher :
public TOPPBase
{
public:
TOPPMetaboliteSpectralMatcher() :
TOPPBase("MetaboliteSpectralMatcher", "Perform a spectral library search.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input spectra.");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("database", "<file>", "", "Default spectral database.", true);
setValidFormats_("database", {"mzML", "msp", "mgf"});
registerOutputFile_("out", "<file>", "", "mzTab file");
setValidFormats_("out", ListUtils::create<String>("mzTab"));
registerOutputFile_("out_spectra", "<file>", "", "Output spectra as mzML file. Can be useful to inspect the peak map after spectra merging.", false);
setValidFormats_("out_spectra", ListUtils::create<String>("mzML"));
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String& /*section*/) const override
{
return MetaboliteSpectralMatching().getDefaults();
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String in = getStringOption_("in");
String database = getStringOption_("database");
String spec_db_filename(database);
// default path? retrieve file path in share folder
if (database == "CHEMISTRY/MetaboliteSpectralDB.mzML")
{
// throws Exception::FileNotFound if file does not exist
spec_db_filename = File::find("CHEMISTRY/MetaboliteSpectralDB.mzML");
}
String out = getStringOption_("out");
String out_spectra = getStringOption_("out_spectra");
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
FileHandler mz_file;
std::vector<Int> ms_level = {2};
mz_file.getOptions().setMSLevels(ms_level);
PeakMap ms_peakmap;
mz_file.loadExperiment(in, ms_peakmap, {FileTypes::MZML}, log_type_);
if (ms_peakmap.empty())
{
OPENMS_LOG_WARN << "The input file does not contain any MS2/fragment spectra." << endl;
return INCOMPATIBLE_INPUT_DATA;
}
MzTab mztab_output;
MzTabFile mztab_outfile;
//-------------------------------------------------------------
// get parameters
//-------------------------------------------------------------
Param msm_param = getParam_().copy("algorithm:", true);
writeDebug_("Parameters passed to MetaboliteSpectralMatcher", msm_param, 3);
//-------------------------------------------------------------
// load database
//-------------------------------------------------------------
PeakMap spec_db;
FileHandler().loadExperiment(spec_db_filename, spec_db, {FileTypes::MSP, FileTypes::MZML, FileTypes::MGF}, log_type_);
if (spec_db.empty())
{
OPENMS_LOG_WARN << "The spectral library does not contain any spectra." << endl;
return INCOMPATIBLE_INPUT_DATA;
}
//-------------------------------------------------------------
// run spectral library search
//-------------------------------------------------------------
MetaboliteSpectralMatching msm;
msm.setParameters(msm_param);
msm.run(ms_peakmap, spec_db, mztab_output, out_spectra);
//-------------------------------------------------------------
// store results
//-------------------------------------------------------------
mztab_outfile.store(out, mztab_output);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPMetaboliteSpectralMatcher tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/SpectraSTSearchAdapter.cpp | .cpp | 14,957 | 332 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Lukas Zimmermann $
// $Authors: Lukas Zimmermann $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/SYSTEM/File.h>
#include <sstream>
#include <QDir>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SpectraSTSearchAdapter SpectraSTSearchAdapter
@brief This utility provides an interface to the 'SEARCH' mode of the SpectraST program.
All non-advanced parameters of the SpectraST executable are exposed as
parameters of this utility.
Supported SpectraST version: 5
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SpectraSTSearchAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SpectraSTSearchAdapter.html
*/
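SpectraST fuses each option value directly onto its flag (the adapter below builds e.g. `-sM3` via QString::prepend). A minimal sketch of that convention, using a hypothetical helper in plain C++:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Format a SpectraST-style option: flag and value are concatenated
// without a separator, e.g. ("-sM", 3.5) -> "-sM3.5".
std::string makeOption(const std::string& flag, double value)
{
  std::ostringstream oss;
  oss << flag << value;
  return oss.str();
}
```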
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSpectraSTSearchAdapter :
public TOPPBase
{
public:
// Define parameter name
static const String param_executable;
static const String param_spectra_files;
static const String param_library_file;
static const String param_sequence_database_file;
static const String param_sequence_database_type;
static const String param_search_file;
static const String param_params_file;
static const String param_precursor_mz_tolerance;
static const String param_use_isotopically_averaged_mass;
static const String param_use_all_charge_states;
static const String param_output_files;
static const vector<String> param_output_file_formats;
static const vector<String> param_input_file_formats;
static const String param_user_mod_file;
TOPPSpectraSTSearchAdapter() :
TOPPBase("SpectraSTSearchAdapter", "Interface to the SEARCH Mode of the SpectraST executable")
{
}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
StringList empty;
// Handle executable
registerInputFile_(TOPPSpectraSTSearchAdapter::param_executable, "<path>", "spectrast", "Path to the SpectraST executable to use; may be empty if the executable is globally available.", true, false, ListUtils::create<String>("skipexists"));
// register spectra input files
registerInputFileList_(TOPPSpectraSTSearchAdapter::param_spectra_files, "<SearchFileName1> [ <SearchFileName2> ... <SearchFileNameN> ]", empty, "File names(s) of spectra to be searched.", true, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_spectra_files, ListUtils::create<String>(TOPPSpectraSTSearchAdapter::param_input_file_formats), false);
// register Output files
registerOutputFileList_(TOPPSpectraSTSearchAdapter::param_output_files, "<OutputFile1> [ <OutputFileName2> ... <OutputFileNameN> ]", empty, "Output files. Make sure to specify one output file for each input file", true, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_output_files, TOPPSpectraSTSearchAdapter::param_output_file_formats, false);
// Require library file to be searched
registerInputFile_(TOPPSpectraSTSearchAdapter::param_library_file, "<lib_file>.splib", "", "Specify library file.", true, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_library_file, ListUtils::create<String>("splib"), false);
// Sequence database file
registerInputFile_(TOPPSpectraSTSearchAdapter::param_sequence_database_file, "<sequencedb_file>.fasta", "", "The sequence database.", false, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_sequence_database_file, ListUtils::create<String>("fasta"), false);
registerStringOption_(TOPPSpectraSTSearchAdapter::param_sequence_database_type, "<sequencedb_type>", "AA", "Specify type of sequence database", false, false);
setValidStrings_(TOPPSpectraSTSearchAdapter::param_sequence_database_type, ListUtils::create<String>("DNA,AA"));
// Search file
registerInputFile_(TOPPSpectraSTSearchAdapter::param_search_file, "<search_file>", "", "Only search a subset of the query spectra in the search file", false, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_search_file, ListUtils::create<String>("txt,dat"), false);
// params file
registerInputFile_(TOPPSpectraSTSearchAdapter::param_params_file, "<params_file>", "", "Read search options from file. All options set in the file will be overridden by command-line options, if specified.", false, false);
setValidFormats_(TOPPSpectraSTSearchAdapter::param_params_file, ListUtils::create<String>("params"), false);
// Precursor m/z tolerance
registerDoubleOption_(TOPPSpectraSTSearchAdapter::param_precursor_mz_tolerance, "<precursor_mz_tolerance>", 3, "m/z (in Th) tolerance within which candidate entries are compared to the query. Monoisotopic mass is assumed.", false, false);
setMinFloat_(TOPPSpectraSTSearchAdapter::param_precursor_mz_tolerance, 0);
//Whether to use isotope average instead of monoisotopic mass
registerFlag_(TOPPSpectraSTSearchAdapter::param_use_isotopically_averaged_mass, "Use isotopically averaged mass instead of monoisotopic mass", true);
// Whether to use all charge states
registerFlag_(TOPPSpectraSTSearchAdapter::param_use_all_charge_states, "Search library spectra of all charge states, i.e., ignore specified charge state (if any) of the query spectrum", true);
// User defined modifications file
registerInputFile_(TOPPSpectraSTSearchAdapter::param_user_mod_file, "<user_mod_file>", "", "Specify name of user-defined modifications file. Default is \"spectrast.usermods\".", false, true);
}
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char **) override
{
// Assemble command line for SpectraST
QStringList arguments;
// Executable
String executable = getStringOption_(TOPPSpectraSTSearchAdapter::param_executable);
if (executable.empty())
{
executable = "spectrast";
}
// Make Search Mode explicit
arguments << "-s";
double precursor_mz_tolerance = getDoubleOption_(TOPPSpectraSTSearchAdapter::param_precursor_mz_tolerance);
arguments << QString::number(precursor_mz_tolerance).prepend("-sM");
// Set the parameter file if present
String params_file = getStringOption_(TOPPSpectraSTSearchAdapter::param_params_file);
if (! params_file.empty())
{
arguments << params_file.toQString().prepend("-sF");
}
// Add library file argument, terminate if the corresponding spidx is not present
String library_file = getStringOption_(TOPPSpectraSTSearchAdapter::param_library_file);
String index_file = FileHandler::stripExtension(library_file).append(".spidx");
if (! File::exists(index_file))
{
OPENMS_LOG_ERROR << "ERROR: Index file required by spectrast not found:\n" << index_file << endl;
return INPUT_FILE_NOT_FOUND;
}
arguments << library_file.toQString().prepend("-sL");
// Add Sequence Database file if exists
String sequence_database_file = getStringOption_(TOPPSpectraSTSearchAdapter::param_sequence_database_file);
if (! sequence_database_file.empty())
{
String sequence_database_type = getStringOption_(TOPPSpectraSTSearchAdapter::param_sequence_database_type);
// Check empty or invalid sequence database type
if (sequence_database_type.empty())
{
OPENMS_LOG_ERROR << "ERROR: Sequence database type invalid or not provided" << endl;
return MISSING_PARAMETERS;
}
arguments << sequence_database_type.toQString().prepend("-sT");
arguments << sequence_database_file.toQString().prepend("-sD");
}
// Set the number of threads in SpectraST
Int threads = getIntOption_("threads");
arguments << (threads > 1 ? QString::number(threads).prepend("-sP") : "-sP!");
// Set the search file
String search_file = getStringOption_(TOPPSpectraSTSearchAdapter::param_search_file);
if (! search_file.empty())
{
arguments << search_file.toQString().prepend("-sS");
}
// Flags
arguments << (getFlag_(TOPPSpectraSTSearchAdapter::param_use_isotopically_averaged_mass) ? "-sA" : "-sA!");
arguments << (getFlag_(TOPPSpectraSTSearchAdapter::param_use_all_charge_states) ? "-sz" : "-sz!");
// User mod file
String user_mod_file = getStringOption_(TOPPSpectraSTSearchAdapter::param_user_mod_file);
if (! user_mod_file.empty())
{
arguments << user_mod_file.toQString().prepend("-M");
}
// Input and output files, errors if lists are not equally long
StringList spectra_files = getStringList_(TOPPSpectraSTSearchAdapter::param_spectra_files);
StringList output_files = getStringList_(TOPPSpectraSTSearchAdapter::param_output_files);
if (spectra_files.size() != output_files.size())
{
OPENMS_LOG_ERROR << "ERROR: Number of output files does not match number of input files." << endl;
return ILLEGAL_PARAMETERS;
}
if (spectra_files.empty())
{
OPENMS_LOG_ERROR << "ERROR: At least one file containing spectra to be searched must be provided." << endl;
return ILLEGAL_PARAMETERS;
}
String first_output_file = output_files[0];
String outputFormat;
// Determine the output format from the first file
for (const String& format : TOPPSpectraSTSearchAdapter::param_output_file_formats)
{
if (first_output_file.hasSuffix(format))
{
outputFormat = format;
}
}
if (outputFormat.empty())
{
OPENMS_LOG_ERROR << "ERROR: Unrecognized output format from file: " << first_output_file << endl;
return ILLEGAL_PARAMETERS;
}
// Output files must agree on format
for (const String& output_file : output_files)
{
if (! output_file.hasSuffix(outputFormat))
{
OPENMS_LOG_ERROR << "ERROR: Output filename does not agree in format: "
<< output_file << " is not " << outputFormat << endl;
return ILLEGAL_PARAMETERS;
}
}
String temp_dir = File::getTempDirectory();
arguments << outputFormat.toQString().prepend("-sE");
arguments << temp_dir.toQString().prepend("-sO");
// Check whether input files agree in format
String first_input_file = spectra_files[0];
String inputFormat;
// Determine the input format from the first file
for (const String& format : TOPPSpectraSTSearchAdapter::param_input_file_formats)
{
if (first_input_file.hasSuffix(format))
{
inputFormat = format;
}
}
// Exit if the input file format is invalid
if (inputFormat.empty())
{
OPENMS_LOG_ERROR << "ERROR: Unrecognized input format from file: " << first_input_file << endl;
return ILLEGAL_PARAMETERS;
}
for (const String& input_file : spectra_files)
{
if (! input_file.hasSuffix(inputFormat))
{
OPENMS_LOG_ERROR << "ERROR: Input filename does not agree in format: "
<< input_file << " is not " << inputFormat << endl;
return ILLEGAL_PARAMETERS;
}
arguments << input_file.toQString();
}
// Writing the final SpectraST command to the DEBUG LOG
std::stringstream ss;
ss << "COMMAND: " << executable;
for (const QString& arg : arguments)
{
ss << " " << arg.toStdString();
}
OPENMS_LOG_DEBUG << ss.str() << endl;
// Run SpectraST
TOPPBase::ExitCodes exit_code = runExternalProcess_(executable.toQString(), arguments);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
// Copy the output files to the specified location
QDir temp_dir_qt = QDir(temp_dir.toQString());
for (size_t i = 0; i < spectra_files.size(); i++)
{
String spectra_file = spectra_files[i];
QString actual_path = temp_dir_qt.filePath(FileHandler::stripExtension(File::basename(spectra_file)).toQString().append(".").append(outputFormat.toQString()));
std::ifstream ifs(actual_path.toStdString().c_str(), std::ios::in | std::ios::binary);
if (!ifs)
{
OPENMS_LOG_ERROR << "ERROR: Expected SpectraST output file not found: " << actual_path.toStdString() << endl;
return EXTERNAL_PROGRAM_ERROR;
}
std::ofstream ofs(output_files[i].c_str(), std::ios::out | std::ios::binary);
ofs << ifs.rdbuf();
}
// Exit the tool
return EXECUTION_OK;
}
};
// End of Tool definition
// Definition of static members
const String TOPPSpectraSTSearchAdapter::param_executable = "executable";
const String TOPPSpectraSTSearchAdapter::param_spectra_files = "spectra_files";
const String TOPPSpectraSTSearchAdapter::param_library_file = "library_file";
const String TOPPSpectraSTSearchAdapter::param_sequence_database_file = "sequence_database_file";
const String TOPPSpectraSTSearchAdapter::param_sequence_database_type = "sequence_database_type";
const String TOPPSpectraSTSearchAdapter::param_search_file = "search_file";
const String TOPPSpectraSTSearchAdapter::param_params_file = "params_file";
const String TOPPSpectraSTSearchAdapter::param_precursor_mz_tolerance = "precursor_mz_tolerance";
const String TOPPSpectraSTSearchAdapter::param_use_isotopically_averaged_mass = "use_isotopically_averaged_mass";
const String TOPPSpectraSTSearchAdapter::param_use_all_charge_states = "use_all_charge_states";
const String TOPPSpectraSTSearchAdapter::param_output_files = "output_files";
const String TOPPSpectraSTSearchAdapter::param_user_mod_file = "user_mod_file";
const StringList TOPPSpectraSTSearchAdapter::param_output_file_formats = ListUtils::create<String>("txt,tsv,xml,pepXML,html");
const StringList TOPPSpectraSTSearchAdapter::param_input_file_formats = ListUtils::create<String>("mzML,mzXML,mzData,mgf,dta,msp");
// the actual main function needed to create an executable
int main(int argc, const char ** argv)
{
TOPPSpectraSTSearchAdapter tool;
return tool.main(argc, argv);
}
///@endcond
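The input/output format checks above rely on suffix matching against the known-format lists. A sketch of that detection in plain C++ (OpenMS's String::hasSuffix replaced by a hand-rolled check; unlike the adapter's loop, which keeps the last match, this returns the first):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Return the first known format that 'filename' ends with, or "" if none.
std::string detectFormat(const std::string& filename,
                         const std::vector<std::string>& formats)
{
  for (const std::string& fmt : formats)
  {
    if (filename.size() >= fmt.size() &&
        filename.compare(filename.size() - fmt.size(), fmt.size(), fmt) == 0)
    {
      return fmt; // suffix match (case-sensitive)
    }
  }
  return std::string();
}
```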
| C++ |
3D | OpenMS/OpenMS | src/topp/OpenSwathDIAPreScoring.cpp | .cpp | 5,947 | 162 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Witold Wolski $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/OPENSWATH/DIAPrescoring.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/DataAccessHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathHelper.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/OPENSWATHALGO/DATAACCESS/DataFrameWriter.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/SYSTEM/File.h>
#include <iostream>
/**
@page TOPP_OpenSwathDIAPreScoring OpenSwathDIAPreScoring
@brief Scoring of spectra using DIA (SWATH) scores.
SWATH specific parameters only apply if you have full MS2 spectra maps.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathDIAPreScoring.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathDIAPreScoring.html
*/
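The min_upper_edge_dist parameter registered below excludes precursors too close to the upper edge of a SWATH isolation window. A sketch of the selection criterion as I read it (plain C++, illustrative only; the actual check lives in OpenSwathHelper::selectSwathTransitions):

```cpp
#include <cassert>

// A transition's precursor is accepted if it falls inside the window
// [window_mz - lower_offset, window_mz + upper_offset] and is at least
// 'min_upper_edge_dist' Thomson away from the upper edge.
bool inSwathWindow(double transition_prec_mz, double window_mz,
                   double lower_offset, double upper_offset,
                   double min_upper_edge_dist)
{
  const double lower = window_mz - lower_offset;
  const double upper = window_mz + upper_offset;
  return transition_prec_mz >= lower &&
         transition_prec_mz <= upper - min_upper_edge_dist;
}
```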
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
using namespace OpenMS;
class DIAPreScoring :
public TOPPBase
{
public:
DIAPreScoring() :
TOPPBase("OpenSwathDIAPreScoring", "Scoring spectra using the DIA scores.")
{
}
protected:
typedef PeakMap MapType;
typedef std::shared_ptr<PeakMap> MapTypePtr;
void registerOptionsAndFlags_() override
{
registerInputFile_("tr", "<file>", "", "transition file");
setValidFormats_("tr", ListUtils::create<String>("traML"));
registerInputFileList_("swath_files", "<files>", StringList(),
"Swath files that were used to extract the transitions. If present, SWATH specific scoring will be applied.",
true);
setValidFormats_("swath_files", ListUtils::create<String>("mzML"));
registerOutputFileList_("output_files", "<files>", StringList(),
"Output files. One per Swath input file.",
true);
setValidFormats_("output_files", ListUtils::create<String>("tsv"));
registerDoubleOption_("min_upper_edge_dist", "<double>", 0.0,
"Minimal distance to the edge to still consider a precursor, in Thomson (only in SWATH)",
false);
}
Param getSubsectionDefaults_(const String&) const override
{
return OpenMS::DiaPrescore().getDefaults();
}
ExitCodes main_(int, const char**) override
{
OpenMS::StringList file_list = getStringList_("swath_files");
OpenMS::StringList outfile_list = getStringList_("output_files");
std::string tr_file = getStringOption_("tr");
std::cout << tr_file << std::endl;
double min_upper_edge_dist = getDoubleOption_("min_upper_edge_dist");
// If we have a transformation file, trafo will transform the RT in the
// scoring according to the model. If we don't have one, it will apply the
// null transformation.
Param feature_finder_param = getParam_().copy("algorithm:", true);
// Create the output map, load the input TraML file and the chromatograms
MapType exp;
OpenSwath::LightTargetedExperiment transition_exp;
std::cout << "Loading TraML file" << std::endl;
{
OpenMS::TargetedExperiment transition_exp_;
FileHandler().loadTransitions(tr_file, transition_exp_, {FileTypes::TRAML});
OpenSwathDataAccessHelper::convertTargetedExp(transition_exp_, transition_exp);
int ltrans = transition_exp.transitions.size();
std::cout << ltrans << std::endl;
}
// Here we deal with SWATH files (can be multiple files)
for (Size i = 0; i < file_list.size(); ++i)
{
MapTypePtr swath_map (new MapType);
FeatureMap featureFile;
std::cout << "Loading file " << file_list[i] << std::endl;
String fname = outfile_list[i];
FileHandler().loadExperiment(file_list[i], *swath_map, {FileTypes::MZML}, log_type_);
if (swath_map->empty() || (*swath_map)[0].getPrecursors().empty())
{
std::cerr << "WARNING: File " << swath_map->getLoadedFilePath()
          << " does not contain any spectra or precursors. Is it a SWATH map?"
          << std::endl;
continue;
}
// Find the transitions to extract and extract them
OpenSwath::LightTargetedExperiment transition_exp_used;
double upper, lower;
const std::vector<Precursor> prec = (*swath_map)[0].getPrecursors();
lower = prec[0].getMZ() - prec[0].getIsolationWindowLowerOffset();
upper = prec[0].getMZ() + prec[0].getIsolationWindowUpperOffset();
OpenSwathHelper::selectSwathTransitions(transition_exp, transition_exp_used,
min_upper_edge_dist, lower, upper);
if (transition_exp_used.getTransitions().empty())
{
std::cerr << "WARNING: For file " << swath_map->getLoadedFilePath()
<< " there are no transitions to extract." << std::endl;
continue;
}
std::cout << "Using Spectrum Interface!" << std::endl;
OpenSwath::SpectrumAccessPtr spectrumAccess = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(
swath_map);
OpenSwath::IDataFrameWriter* dfw = new OpenSwath::CSVWriter(fname);
OpenMS::DiaPrescore dp;
OpenMS::RangeMobility im_range; // create empty IM range object
dp.operator()(spectrumAccess, transition_exp_used, im_range, dfw); //note IM not supported here yet
delete dfw;
} //end of for loop
return EXECUTION_OK;
} //end of _main
};
int main(int argc, const char** argv)
{
DIAPreScoring tool;
int code = tool.main(argc, argv);
return code;
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/ClusterMassTraces.cpp | .cpp | 4,938 | 156 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/OPENSWATH/MasstraceCorrelator.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_ClusterMassTraces ClusterMassTraces
@brief Cluster mass traces occurring in the same map together
Clusters mass traces found together in a mass spectrometric map (MS1 or MS2).
The input is a consensus map containing individual mass traces; the output
consists of pseudo spectra containing all clustered features.
Mass traces are clustered independently of precursor traces in another map
(this is the simpler approach) and pseudo spectra are created without
any precursors assigned. This is useful for
- clustering of features in an MS1 map (isotope traces, charge states etc)
- clustering of features in an SWATH map (fragment ions from the same precursor, isotope traces, charge states etc)
On the clustered fragments in an MS2 map, one can then (optionally) do
- de novo searches
- calculate the most likely precursor(s) and DB-search
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_ClusterMassTraces.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_ClusterMassTraces.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
#include <OpenMS/APPLICATIONS/TOPPBase.h>
class TOPPClusterMassTraces
: public TOPPBase,
public ProgressLogger
{
// Docu
//
public:
TOPPClusterMassTraces()
: TOPPBase("ClusterMassTraces","Creates pseudo spectra.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in","<file>","","Mass traces");
setValidFormats_("in",ListUtils::create<String>("consensusXML"));
registerOutputFile_("out","<file>","","output file");
setValidFormats_("out",ListUtils::create<String>("mzML"));
registerDoubleOption_("min_pearson_correlation", "<double>", 0.7, "Minimal pearson correlation score", false);
registerIntOption_("min_peak_nr", "<number>", 1, "Minimal peak nr to output pseudo spectra", false);
registerIntOption_("max_lag", "<number>", 1, "Maximal lag", false);
registerDoubleOption_("max_rt_apex_difference", "<double>", 5.0, "Maximal difference of the apex in retention time", false);
registerDoubleOption_("max_intensity_cutoff", "<double>", 0.0, "Maximal intensity to be added to a spectrum", false);
registerDoubleOption_("add_precursor", "<double>", 0.0, "Add a precursor mass", false);
}
public:
ExitCodes main_(int , const char**) override
{
setLogType(log_type_);
String infile = getStringOption_("in");
String out = getStringOption_("out");
double min_pearson_correlation_ = getDoubleOption_("min_pearson_correlation");
int max_lag_ = getIntOption_("max_lag");
int min_peak_nr = getIntOption_("min_peak_nr");
double max_rt_apex_difference_ = getDoubleOption_("max_rt_apex_difference");
double add_precursor = getDoubleOption_("add_precursor");
// double max_intensity_cutoff_ = getDoubleOption_("max_intensity_cutoff");
ConsensusMap masstrace_map;
FileHandler().loadConsensusFeatures(infile, masstrace_map, {FileTypes::CONSENSUSXML}, log_type_);
MSExperiment pseudo_spectra;
if (masstrace_map.empty())
{
  std::cerr << "Error: input map " << infile << " is empty." << std::endl;
  return INPUT_FILE_EMPTY;
}
std::cout << "Input map " << infile << " has size: " << masstrace_map.size() << std::endl;
masstrace_map.sortByIntensity(true);
OpenMS::MasstraceCorrelator mtcorr;
mtcorr.setLogType(log_type_);
mtcorr.createPseudoSpectra(masstrace_map, pseudo_spectra, min_peak_nr,
min_pearson_correlation_, max_lag_, max_rt_apex_difference_/* , max_intensity_cutoff_ */);
pseudo_spectra.sortSpectra();
// If we want to set a specific precursor, do this now
if (add_precursor > 0 )
{
for (Size i = 0; i < pseudo_spectra.size(); i++)
{
Precursor p;
//p.setIsolationWindowLowerOffset(swath_lower);
//p.setIsolationWindowUpperOffset(swath_upper);
p.setMZ(add_precursor);
std::vector<Precursor> preclist;
preclist.push_back(p);
pseudo_spectra[i].setPrecursors(preclist);
}
}
FileHandler().storeExperiment(out,pseudo_spectra, {FileTypes::MZML});
return EXECUTION_OK;
}
};
int main( int argc, const char** argv )
{
TOPPClusterMassTraces tool;
return tool.main(argc,argv);
}
///@endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/IDPosteriorErrorProbability.cpp | .cpp | 12,119 | 285 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: David Wojnar $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/MATH/STATISTICS/PosteriorErrorProbabilityModel.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace OpenMS;
using namespace Math; //PosteriorErrorProbabilityModel
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_IDPosteriorErrorProbability IDPosteriorErrorProbability
@brief Tool to estimate the probability of peptide hits to be incorrectly assigned.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → IDPosteriorErrorProbability →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_CometAdapter (or other ID engines) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ConsensusID </td>
</tr>
</table>
</CENTER>
@experimental This tool has not been tested thoroughly and might not behave as expected!
By default an estimation is performed using the (inverse) Gumbel distribution for incorrectly assigned sequences
and a Gaussian distribution for correctly assigned sequences. The probabilities are calculated by using Bayes' law, similar to PeptideProphet.
Alternatively, a second Gaussian distribution can be used for incorrectly assigned sequences.
At the moment, IDPosteriorErrorProbability is able to handle X! Tandem, Mascot, MyriMatch and OMSSA scores.
No target/decoy information needs to be provided, since the model fits are done on the mixed distribution.
In order to validate the computed probabilities an optional plot output can be generated.
There are two parameters for the plot:
The scores are plotted in the form of bins. Each bin covers a score range of width '(highest_score - smallest_score) / number_of_bins' (if all scores have positive values).
The midpoint of the bin is the mean of the scores it represents.
The parameter 'out_plot' should be used to give the plot a unique name. Two files are created. One with the binned scores and one with all steps of the estimation.
If parameter @p top_hits_only is set, only the top hits of each peptide identification are used for the estimation process.
Additionally, if 'top_hits_only' is set, target/decoy information is available and a @ref TOPP_FalseDiscoveryRate run was performed previously, an additional plot will be generated with target and decoy bins ('out_plot' must not be empty).
A peptide hit is assumed to be a target if its q-value is smaller than @p fdr_for_targets_smaller.
The plots are saved as a Gnuplot file. An attempt is made to call Gnuplot, which will create a PDF file containing all steps of the estimation. If this fails, the user has to run Gnuplot manually - or adjust the PATH environment such that Gnuplot can be found and retry.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_IDPosteriorErrorProbability.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_IDPosteriorErrorProbability.html
For the parameters of the algorithm section see the algorithms documentation: @n
@ref OpenMS::Math::PosteriorErrorProbabilityModel "fit_algorithm" @n
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPIDPosteriorErrorProbability :
public TOPPBase
{
public:
TOPPIDPosteriorErrorProbability() :
TOPPBase("IDPosteriorErrorProbability", "Estimates probabilities for incorrectly assigned peptide sequences and a set of search engine scores using a mixture model.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file");
setValidFormats_("in", ListUtils::create<String>("idXML"));
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerOutputFile_("out_plot", "<file>", "", "txt file (if gnuplot is available, a corresponding PDF will be created as well.)", false);
setValidFormats_("out_plot", ListUtils::create<String>("txt"));
registerFlag_("split_charge", "The search engine scores are split by charge if this flag is set. Thus, for each charge state a new model will be computed.");
registerFlag_("top_hits_only", "If set only the top hits of every PeptideIdentification will be used");
registerDoubleOption_("fdr_for_targets_smaller", "<value>", 0.05, "Only used when 'top_hits_only' is set. Additionally, target/decoy information should be available. The score_type must be q-value from a previous False Discovery Rate run.", false, true);
registerFlag_("ignore_bad_data", "If set errors will be written but ignored. Useful for pipelines with many datasets where only a few are bad, but the pipeline should run through.");
registerFlag_("prob_correct", "If set scores will be calculated as '1 - ErrorProbabilities' and can be interpreted as probabilities for correct identifications.");
registerSubsection_("fit_algorithm", "Algorithm parameter subsection");
addEmptyLine_();
}
//there is only one parameter at the moment
Param getSubsectionDefaults_(const String& /*section*/) const override
{
Param p = PosteriorErrorProbabilityModel().getParameters();
if (p.exists("out_plot"))
{ // hide from user -- we have a top-level param for that
p.remove("out_plot");
}
else
{
throw Exception::Precondition(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "INTERNAL ERROR: Param 'out_plot' was removed from fit-algorithm. Please update param handling internally!");
}
return p;
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String inputfile_name = getStringOption_("in");
String outputfile_name = getStringOption_("out");
Param fit_algorithm = getParam_().copy("fit_algorithm:", true);
fit_algorithm.setValue("out_plot", getStringOption_("out_plot")); // re-assemble full param (was moved to top-level)
bool split_charge = getFlag_("split_charge");
bool top_hits_only = getFlag_("top_hits_only");
double fdr_for_targets_smaller = getDoubleOption_("fdr_for_targets_smaller");
bool ignore_bad_data = getFlag_("ignore_bad_data");
bool prob_correct = getFlag_("prob_correct");
String outlier_handling = fit_algorithm.getValue("outlier_handling").toString();
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
FileHandler file;
vector<ProteinIdentification> protein_ids;
PeptideIdentificationList peptide_ids;
file.loadIdentifications(inputfile_name, protein_ids, peptide_ids, {FileTypes::IDXML});
PosteriorErrorProbabilityModel PEP_model;
PEP_model.setParameters(fit_algorithm);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
// check if there is a q-value score and target_decoy information available
bool target_decoy_available(false);
for (PeptideIdentification const & pep_id : peptide_ids)
{
const vector<PeptideHit>& hits = pep_id.getHits();
if (!hits.empty())
{
target_decoy_available = (pep_id.getScoreType() == "q-value"
&& hits[0].metaValueExists("target_decoy"));
break;
}
}
// map identifier "engine,charge" (if split_charge==true) or "engine"
// to three extracted score vectors. The main score vector contains the PSM scores.
// Second and third are optional and contain target and decoy scores.
map<String, vector<vector<double> > > all_scores = PosteriorErrorProbabilityModel::extractAndTransformScores(
protein_ids,
peptide_ids,
split_charge,
top_hits_only,
target_decoy_available,
fdr_for_targets_smaller);
if (all_scores.empty())
{
writeLogWarn_("No data collected. Check whether search engine is supported.");
if (!ignore_bad_data)
{
return INPUT_FILE_EMPTY;
}
}
String out_plot = String(fit_algorithm.getValue("out_plot").toString()).trim();
for (auto & score : all_scores)
{
vector<String> engine_info;
score.first.split(',', engine_info);
String engine = engine_info[0];
Int charge = (engine_info.size() == 2) ? engine_info[1].toInt() : -1;
if (split_charge)
{
// only adapt plot output if plot is requested (this badly violates the output rules and needs to change!)
// one way to fix this: plot charges into a single file (no renaming of output file needed) - but this requires major code restructuring
if (!out_plot.empty()) fit_algorithm.setValue("out_plot", out_plot + "_charge_" + String(charge));
PEP_model.setParameters(fit_algorithm);
}
// fit to score vector
//TODO choose outlier handling based on search engine? If not set by user?
//XTandem is prone to accumulation at min values/censoring
//OMSSA is prone to outliers
bool return_value = PEP_model.fit(score.second[0], outlier_handling);
if (!return_value)
{
writeLogWarn_("Unable to fit data. Algorithm did not run through for the following search engine: " + engine);
if (!ignore_bad_data)
{
return UNEXPECTED_RESULT;
}
}
if (return_value)
{
// plot target_decoy
if (!out_plot.empty()
&& top_hits_only
&& target_decoy_available
&& (!score.second[0].empty()))
{
PEP_model.plotTargetDecoyEstimation(score.second[1], score.second[2]); //target, decoy
}
bool unable_to_fit_data(true), data_might_not_be_well_fit(true);
PosteriorErrorProbabilityModel::updateScores(
PEP_model,
engine,
charge,
prob_correct,
split_charge,
protein_ids,
peptide_ids,
unable_to_fit_data,
data_might_not_be_well_fit);
if (unable_to_fit_data)
{
writeLogWarn_(String("Unable to fit data for search engine: ") + engine);
if (!ignore_bad_data)
{
return UNEXPECTED_RESULT;
}
}
else if (data_might_not_be_well_fit)
{
writeLogWarn_(String("Data might not be well fitted for search engine: ") + engine);
}
}
}
// Unfortunately this cannot go into the algorithm since
// you would overwrite some score types before they are extracted when you
// do split_charge
for (auto& pep : peptide_ids)
{
if (prob_correct)
{
pep.setScoreType("Posterior Probability");
pep.setHigherScoreBetter(true);
}
else
{
pep.setScoreType("Posterior Error Probability");
pep.setHigherScoreBetter(false);
}
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
file.storeIdentifications(outputfile_name, protein_ids, peptide_ids, {FileTypes::IDXML});
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPIDPosteriorErrorProbability tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/FeatureLinkerUnlabeled.cpp | .cpp | 12,905 | 338 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm, Clemens Groepl, Steffen Sass $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/MAPMATCHING/FeatureGroupingAlgorithmUnlabeled.h>
// TODO move LoadSize into handler
#include <OpenMS/FORMAT/FeatureXMLFile.h>
#include "FeatureLinkerBase.cpp"
#include <iomanip> // setw
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FeatureLinkerUnlabeled FeatureLinkerUnlabeled
@brief Groups corresponding features from multiple maps.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=4> → FeatureLinkerUnlabeled →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderCentroided @n (or another feature detection algorithm) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ProteinQuantifier </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_MapAlignerPoseClustering @n (or another map alignment algorithm) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_TextExporter </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_SeedListGenerator </td>
</tr>
</table>
</CENTER>
This tool provides an algorithm for grouping corresponding features in multiple runs of label-free experiments. For more details and algorithm-specific parameters (set in the INI file) see "Detailed Description" in the @ref OpenMS::FeatureGroupingAlgorithmUnlabeled "algorithm documentation"
or the INI file table below.
FeatureLinkerUnlabeled takes several feature maps (featureXML files) and stores the corresponding features in a consensus map (consensusXML file). Feature maps can be created from MS experiments (peak data) using one of the FeatureFinder TOPP tools.
Advanced users can convert the consensusXML generated by this tool to EDTA using @ref TOPP_FileConverter and plot the distribution of distances in RT (or m/z) between different input files (can be done in Excel).
The distribution should be Gaussian-like with very few points beyond the tails. Points far away from the Gaussian indicate a too wide tolerance. A Gaussian with its left/right tail trimmed indicates a too narrow tolerance.
@see @ref TOPP_FeatureLinkerUnlabeledQT @ref TOPP_FeatureLinkerLabeled
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FeatureLinkerUnlabeled.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FeatureLinkerUnlabeled.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFeatureLinkerUnlabeled :
public TOPPFeatureLinkerBase
{
public:
TOPPFeatureLinkerUnlabeled() :
TOPPFeatureLinkerBase("FeatureLinkerUnlabeled", "Groups corresponding features from multiple maps.")
{
}
protected:
void registerOptionsAndFlags_() override
{
TOPPFeatureLinkerBase::registerOptionsAndFlags_();
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
FeatureGroupingAlgorithmUnlabeled algo;
Param p = algo.getParameters();
return p;
}
ExitCodes main_(int, const char **) override
{
FeatureGroupingAlgorithmUnlabeled * algorithm = new FeatureGroupingAlgorithmUnlabeled();
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
StringList ins;
ins = getStringList_("in");
String out = getStringOption_("out");
//-------------------------------------------------------------
// check for valid input
//-------------------------------------------------------------
// check if all input files have the correct type
FileTypes::Type file_type = FileHandler::getType(ins[0]);
for (Size i = 0; i < ins.size(); ++i)
{
if (FileHandler::getType(ins[i]) != file_type)
{
writeLogError_("Error: All input files must be of the same type!");
delete algorithm;
return ILLEGAL_PARAMETERS;
}
}
//-------------------------------------------------------------
// set up algorithm
//-------------------------------------------------------------
Param algorithm_param = getParam_().copy("algorithm:", true);
writeDebug_("Used algorithm parameters", algorithm_param, 3);
algorithm->setParameters(algorithm_param);
Size reference_index(0);
//-------------------------------------------------------------
// perform grouping
//-------------------------------------------------------------
// load input
ConsensusMap out_map;
StringList ms_run_locations;
if (file_type == FileTypes::FEATUREXML)
{
// use map with highest number of features as reference:
Size max_count(0);
FeatureXMLFile f;
for (Size i = 0; i < ins.size(); ++i)
{
Size s = f.loadSize(ins[i]);
if (s > max_count)
{
max_count = s;
reference_index = i;
}
}
// Load reference map and input it to the algorithm
UInt64 ref_id;
Size ref_size;
PeptideIdentificationList ref_pepids;
std::vector<ProteinIdentification> ref_protids;
{
FeatureMap map_ref;
FileHandler f_fxml_tmp;
f_fxml_tmp.getFeatOptions().setLoadConvexHull(false);
f_fxml_tmp.getFeatOptions().setLoadSubordinates(false);
f_fxml_tmp.loadFeatures(ins[reference_index], map_ref, {FileTypes::FEATUREXML});
algorithm->setReference(reference_index, map_ref);
ref_id = map_ref.getUniqueId();
ref_size = map_ref.size();
ref_pepids = map_ref.getUnassignedPeptideIdentifications();
ref_protids = map_ref.getProteinIdentifications();
}
ConsensusMap dummy;
// go through all input files and add them to the result one by one
for (Size i = 0; i < ins.size(); ++i)
{
FileHandler f_fxml_tmp;
FeatureMap tmp_map;
f_fxml_tmp.getFeatOptions().setLoadConvexHull(false);
f_fxml_tmp.getFeatOptions().setLoadSubordinates(false);
f_fxml_tmp.loadFeatures(ins[i], tmp_map, {FileTypes::FEATUREXML});
// copy over information on the primary MS run
StringList ms_runs;
tmp_map.getPrimaryMSRunPath(ms_runs);
ms_run_locations.insert(ms_run_locations.end(), ms_runs.begin(), ms_runs.end());
if (i != reference_index)
{
algorithm->addToGroup(i, tmp_map);
// store some meta-data about the maps in the "dummy" object -> try to
// keep the same order as they were given in the input independent of
// which map is the reference.
dummy.getColumnHeaders()[i].filename = ins[i];
dummy.getColumnHeaders()[i].size = tmp_map.size();
dummy.getColumnHeaders()[i].unique_id = tmp_map.getUniqueId();
// add protein identifications to result map
dummy.getProteinIdentifications().insert(
dummy.getProteinIdentifications().end(),
tmp_map.getProteinIdentifications().begin(),
tmp_map.getProteinIdentifications().end());
// add unassigned peptide identifications to result map
auto& newIDs = dummy.getUnassignedPeptideIdentifications();
for (const PeptideIdentification& pepID : tmp_map.getUnassignedPeptideIdentifications())
{
auto newPepID = pepID;
//TODO during linking of consensusMaps we have the problem that old identifications
// already have a map_index associated. Since we link the consensusFeatures only anyway
// (without keeping the subfeatures) it should be ok for now to "re"-index
newPepID.setMetaValue("map_index", i);
newIDs.push_back(newPepID);
}
}
else
{
// copy the meta-data from the reference map
dummy.getColumnHeaders()[i].filename = ins[i];
dummy.getColumnHeaders()[i].size = ref_size;
dummy.getColumnHeaders()[i].unique_id = ref_id;
// add protein identifications to result map
dummy.getProteinIdentifications().insert(
dummy.getProteinIdentifications().end(),
ref_protids.begin(),
ref_protids.end());
// add unassigned peptide identifications to result map
auto& newIDs = dummy.getUnassignedPeptideIdentifications();
for (const PeptideIdentification& pepID : ref_pepids)
{
auto newPepID = pepID;
//TODO during linking of consensusMaps we have the problem that old identifications
// already have a map_index associated. Since we link the consensusFeatures only anyway
// (without keeping the subfeatures) it should be ok for now to "re"-index
newPepID.setMetaValue("map_index", i);
newIDs.push_back(newPepID);
}
}
}
// get the resulting map
out_map = algorithm->getResultMap();
//
// Copy back meta-data (Protein / Peptide ids / File descriptions)
//
// add protein identifications to result map
out_map.getProteinIdentifications().insert(
out_map.getProteinIdentifications().end(),
dummy.getProteinIdentifications().begin(),
dummy.getProteinIdentifications().end());
// add unassigned peptide identifications to result map
out_map.getUnassignedPeptideIdentifications().insert(
out_map.getUnassignedPeptideIdentifications().end(),
dummy.getUnassignedPeptideIdentifications().begin(),
dummy.getUnassignedPeptideIdentifications().end());
out_map.setColumnHeaders(dummy.getColumnHeaders());
// canonical ordering for checking the results, and the ids have no real meaning anyway
// the way this was done in DelaunayPairFinder and StablePairFinder
// -> the same ordering as FeatureGroupingAlgorithmUnlabeled::group applies!
out_map.sortByMZ();
out_map.updateRanges();
}
else
{
vector<ConsensusMap> maps(ins.size());
FileHandler f;
for (Size i = 0; i < ins.size(); ++i)
{
f.loadConsensusFeatures(ins[i], maps[i], {FileTypes::CONSENSUSXML});
StringList ms_runs;
maps[i].getPrimaryMSRunPath(ms_runs);
ms_run_locations.insert(ms_run_locations.end(), ms_runs.begin(), ms_runs.end());
}
// group
algorithm->FeatureGroupingAlgorithm::group(maps, out_map);
// set file descriptions:
bool keep_subelements = getFlag_("keep_subelements");
if (!keep_subelements)
{
for (Size i = 0; i < ins.size(); ++i)
{
out_map.getColumnHeaders()[i].filename = ins[i];
out_map.getColumnHeaders()[i].size = maps[i].size();
out_map.getColumnHeaders()[i].unique_id = maps[i].getUniqueId();
}
}
else
{
// components of the output map are not the input maps themselves, but
// the components of the input maps:
algorithm->transferSubelements(maps, out_map);
}
}
// assign unique ids
out_map.applyMemberFunction(&UniqueIdInterface::setUniqueId);
// annotate output with data processing info
addDataProcessing_(out_map, getProcessingInfo_(DataProcessing::FEATURE_GROUPING));
out_map.setPrimaryMSRunPath(ms_run_locations);
// write output
FileHandler().storeConsensusFeatures(out, out_map, {FileTypes::CONSENSUSXML});
// some statistics
map<Size, UInt> num_consfeat_of_size;
for (const ConsensusFeature& cf : out_map)
{
++num_consfeat_of_size[cf.size()];
}
OPENMS_LOG_INFO << "Number of consensus features:" << endl;
for (map<Size, UInt>::reverse_iterator i = num_consfeat_of_size.rbegin(); i != num_consfeat_of_size.rend(); ++i)
{
OPENMS_LOG_INFO << " of size " << setw(2) << i->first << ": " << setw(6) << i->second << endl;
}
OPENMS_LOG_INFO << " total: " << setw(6) << out_map.size() << endl;
delete algorithm;
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPFeatureLinkerUnlabeled tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/IDDecoyProbability.cpp | .cpp | 6,159 | 179 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Andreas Bertsch $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/ID/IDDecoyProbability.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_IDDecoyProbability IDDecoyProbability
@brief Util to estimate probability of peptide hits
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → IDDecoyProbability →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_CometAdapter (or other ID engines) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> - </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeptideIndexer </td>
</tr>
</table>
</CENTER>
@experimental This util is deprecated and might not behave as expected!
The false (decoy) score distribution is estimated with a gamma distribution,
and the correct score distribution with a Gaussian distribution.
The probabilities are then calculated using Bayes' law, similar to PeptideProphet,
although this implementation is much simpler than that of PeptideProphet.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_IDDecoyProbability.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_IDDecoyProbability.html
For the parameters of the algorithm section see the algorithms documentation: @n
@ref OpenMS::IDDecoyProbability "decoy_algorithm" @n
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPIDDecoyProbability :
public TOPPBase
{
public:
TOPPIDDecoyProbability() :
TOPPBase("IDDecoyProbability", "Estimates peptide probabilities using a decoy search strategy.\nWARNING: This util is deprecated.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Identification input of combined forward decoy search (reindex with PeptideIndexer first)", false);
setValidFormats_("in", ListUtils::create<String>("idXML"));
registerInputFile_("fwd_in", "<file>", "", "Identification input of forward run", false);
setValidFormats_("fwd_in", ListUtils::create<String>("idXML"));
registerInputFile_("rev_in", "<file>", "", "Identification input of decoy run", false);
setValidFormats_("rev_in", ListUtils::create<String>("idXML"));
registerOutputFile_("out", "<file>", "", "Identification output with forward scores converted to probabilities");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerSubsection_("decoy_algorithm", "Algorithm parameter subsection");
addEmptyLine_();
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
IDDecoyProbability decoy_prob;
return decoy_prob.getParameters();
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
//input/output files
// either 'fwd_in' and 'rev_in' must be given, or just 'in', which contains results of a search against a concatenated target/decoy sequence db
String fwd_in(getStringOption_("fwd_in")), rev_in(getStringOption_("rev_in")), in(getStringOption_("in"));
bool combined(false);
if (!fwd_in.empty() && !rev_in.empty())
{
if (!in.empty())
{
writeLogError_("Error: either 'fwd_in' and 'rev_in' must be given or 'in', but not both");
return ILLEGAL_PARAMETERS;
}
}
else
{
if (!in.empty())
{
combined = true;
}
else
{
writeLogError_("Error: at least 'fwd_in' and 'rev_in' or 'in' must be given");
return ILLEGAL_PARAMETERS;
}
}
String out(getStringOption_("out"));
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
IDDecoyProbability decoy_prob;
Param decoy_param = getParam_().copy("decoy_algorithm:", true);
decoy_prob.setParameters(decoy_param);
if (!combined)
{
PeptideIdentificationList fwd_pep, rev_pep, out_pep;
vector<ProteinIdentification> fwd_prot, rev_prot;
FileHandler().loadIdentifications(fwd_in, fwd_prot, fwd_pep, {FileTypes::IDXML});
FileHandler().loadIdentifications(rev_in, rev_prot, rev_pep, {FileTypes::IDXML});
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
writeDebug_("Starting calculations", 1);
decoy_prob.apply(out_pep, fwd_pep, rev_pep);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
FileHandler().storeIdentifications(out, fwd_prot, out_pep, {FileTypes::IDXML});
}
else
{
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
String document_id;
FileHandler().loadIdentifications(in, prot_ids, pep_ids, {FileTypes::IDXML});
decoy_prob.apply(pep_ids);
FileHandler().storeIdentifications(out, prot_ids, pep_ids, {FileTypes::IDXML});
}
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPIDDecoyProbability tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/RNADigestor.cpp | .cpp | 5,535 | 158 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hendrik Weisser $
// $Authors: Nico Pfeiffer, Chris Bielow, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CHEMISTRY/RNaseDigestion.h>
#include <OpenMS/CHEMISTRY/RNaseDB.h>
#include <OpenMS/FORMAT/FASTAFile.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_RNADigestor RNADigestor
@brief Digests an RNA sequence database in-silico.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → RNADigestor →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> none (FASTA input) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> none (so far)</td>
</tr>
</table>
</CENTER>
This application is used to digest an RNA sequence database to get all fragments given a cleavage enzyme.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_RNADigestor.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_RNADigestor.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPRNADigestor :
public TOPPBase
{
public:
TOPPRNADigestor() :
TOPPBase("RNADigestor", "Digests an RNA sequence database in-silico.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file containing RNA sequences");
setValidFormats_("in", ListUtils::create<String>("fasta"));
registerOutputFile_("out", "<file>", "", "Output file containing sequence fragments");
setValidFormats_("out", ListUtils::create<String>("fasta"));
registerIntOption_("missed_cleavages", "<number>", 1, "The number of allowed missed cleavages", false);
setMinInt_("missed_cleavages", 0);
registerIntOption_("min_length", "<number>", 3, "Minimum length of a fragment", false);
registerIntOption_("max_length", "<number>", 30, "Maximum length of a fragment", false);
vector<String> all_enzymes;
RNaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("enzyme", "<string>", "RNase_T1", "Digestion enzyme (RNase)", false);
setValidStrings_("enzyme", all_enzymes);
registerFlag_("unique", "Report each unique sequence fragment only once");
registerFlag_("cdna", "Input file contains cDNA sequences - replace 'T' with 'U'");
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
Size min_size = getIntOption_("min_length");
Size max_size = getIntOption_("max_length");
Size missed_cleavages = getIntOption_("missed_cleavages");
bool unique = getFlag_("unique");
bool cdna = getFlag_("cdna");
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
vector<FASTAFile::FASTAEntry> seq_data;
FASTAFile().load(in, seq_data);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
String enzyme = getStringOption_("enzyme");
RNaseDigestion digestor;
digestor.setEnzyme(enzyme);
digestor.setMissedCleavages(missed_cleavages);
std::vector<FASTAFile::FASTAEntry> all_fragments;
set<NASequence> unique_fragments;
for (FASTAFile::FASTAEntry& entry : seq_data)
{
vector<NASequence> fragments;
if (cdna) entry.sequence.toUpper().substitute('T', 'U');
NASequence seq = NASequence::fromString(entry.sequence);
digestor.digest(seq, fragments, min_size, max_size);
Size counter = 1;
for (vector<NASequence>::const_iterator frag_it = fragments.begin();
frag_it != fragments.end(); ++frag_it)
{
if (!unique || !unique_fragments.count(*frag_it))
{
String id = entry.identifier + "_" + String(counter);
String desc;
if (!entry.description.empty()) desc = entry.description + " ";
desc += "(fragment " + String(counter) + ")";
FASTAFile::FASTAEntry fragment(id, desc, frag_it->toString());
all_fragments.push_back(fragment);
unique_fragments.insert(*frag_it);
counter++;
}
}
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
FASTAFile().store(out, all_fragments);
OPENMS_LOG_INFO << "Digested " << seq_data.size() << " sequence(s) into "
<< all_fragments.size()
<< " fragments meeting the length restrictions." << endl;
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPRNADigestor tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/MSstatsConverter.cpp | .cpp | 9,059 | 209 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Lukas Zimmermann $
// $Authors: Lukas Zimmermann $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/TextFile.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/FORMAT/MzTab.h>
#include <OpenMS/METADATA/ExperimentalDesign.h>
#include <OpenMS/FORMAT/ExperimentalDesignFile.h>
#include <OpenMS/SYSTEM/File.h>
#include <boost/regex.hpp>
#include <OpenMS/FORMAT/MSstatsFile.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MSstatsConverter MSstatsConverter
@brief Converter to input for MSstats
This util consumes an ID-mapped consensusXML file and an OpenMS experimental design in TSV format to create a CSV file which can subsequently be used as input for the R package MSstats [1].
[1] M. Choi et al. MSstats: an R package for statistical analysis for quantitative mass spectrometry-based proteomic experiments. Bioinformatics (2014), 30 (17): 2524-2526
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MSstatsConverter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MSstatsConverter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMSstatsConverter final :
public TOPPBase
{
public:
TOPPMSstatsConverter() :
TOPPBase("MSstatsConverter", "Converter to input for MSstats")
{
}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
// Input consensusXML
registerInputFile_(param_in, "<in>", "", "Input consensusXML with peptide intensities",
true, false);
setValidFormats_(param_in, ListUtils::create<String>("consensusXML"), true);
registerInputFile_(param_in_design, "<in_design>", "", "Experimental Design file", true,
false);
setValidFormats_(param_in_design, ListUtils::create<String>("tsv"), true);
registerStringOption_(param_method, "<method>",
"LFQ",
"Method used in the experiment (label-free [LFQ], isobaric labeling [ISO])", false,
false);
setValidStrings_(param_method,
ListUtils::create<String>("LFQ,ISO"));
registerStringOption_(param_msstats_bioreplicate, "<msstats_bioreplicate>",
"MSstats_BioReplicate",
"Which column in the condition table should be used for MSstats 'BioReplicate'", false,
false);
registerStringOption_(param_msstats_condition, "<msstats_condition>", "MSstats_Condition",
"Which column in the condition table should be used for MSstats 'Condition'", false, false);
registerStringOption_(param_msstats_mixture, "<msstats_mixture>", "MSstats_Mixture",
"Which column in the condition table should be used for MSstats 'Mixture'", false, false);
// advanced option to overwrite MS file annotations in consensusXML
registerInputFileList_(param_reannotate_filenames, "<file(s)>", StringList(),
"Overwrite MS file names in consensusXML", false, true);
setValidFormats_(param_reannotate_filenames, ListUtils::create<String>("mzML"), true);
// Isotope label type
registerFlag_(param_labeled_reference_peptides,
"If set, IsotopeLabelType is 'H', else 'L'");
// Specifies how peptide ions eluting at different retention times should be resolved
registerStringOption_(param_retention_time_summarization_method,
"<retention_time_summarization_method>", "max",
"How indistinguishable peptidoforms at different retention times should be treated."
" This is usually necessary for LFQ experiments and therefore defaults to 'max'."
" In case of TMT/iTRAQ, MSstatsTMT"
" does the aggregation itself later and the parameter always resets to manual (i.e. is unused).", false,
true);
setValidStrings_(param_retention_time_summarization_method,
ListUtils::create<String>("manual,max,min,mean,sum"));
// Output CSV file
registerOutputFile_(param_out, "<out>", "", "Output CSV file for MSstats.", true, false);
setValidFormats_(param_out, ListUtils::create<String>("csv"));
}
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char **) final
{
try
{
// Input file, must be consensusXML
const String arg_in(getStringOption_(param_in));
const FileTypes::Type in_type(FileHandler::getType(arg_in));
fatalErrorIf_(
in_type != FileTypes::CONSENSUSXML,
"Input type is not consensusXML!",
ILLEGAL_PARAMETERS);
// Tool arguments
const String arg_method = getStringOption_(param_method);
const String arg_out = getStringOption_(param_out);
// Experimental Design file
const String arg_in_design = getStringOption_(param_in_design);
const ExperimentalDesign design = ExperimentalDesignFile::load(arg_in_design, false);
ExperimentalDesign::SampleSection sampleSection = design.getSampleSection();
ConsensusMap consensus_map;
FileHandler().loadConsensusFeatures(arg_in, consensus_map, {FileTypes::CONSENSUSXML});
StringList reannotate_filenames = getStringList_(param_reannotate_filenames);
bool is_isotope_label_type = getFlag_(param_labeled_reference_peptides);
String bioreplicate = getStringOption_(param_msstats_bioreplicate);
String condition = getStringOption_(param_msstats_condition);
String mixture = getStringOption_(param_msstats_mixture);
String retention_time_summarization_method = getStringOption_(param_retention_time_summarization_method);
MSstatsFile msStatsFile;
if (arg_method == "LFQ")
{
msStatsFile.storeLFQ(arg_out, consensus_map, design,
reannotate_filenames, is_isotope_label_type,
bioreplicate, condition, retention_time_summarization_method);
}
else if (arg_method == "ISO")
{
msStatsFile.storeISO(arg_out, consensus_map, design,
reannotate_filenames, bioreplicate, condition,
mixture, retention_time_summarization_method);
}
return EXECUTION_OK;
}
catch (const ExitCodes &exit_code)
{
return exit_code;
}
}
static const String param_in;
static const String param_in_design;
static const String param_method;
static const String param_msstats_bioreplicate;
static const String param_msstats_condition;
static const String param_msstats_mixture;
static const String param_out;
static const String param_labeled_reference_peptides;
static const String param_retention_time_summarization_method;
static const String param_reannotate_filenames;
private:
static void fatalErrorIf_(const bool error_condition, const String &message, const int exit_code)
{
if (error_condition)
{
OPENMS_LOG_FATAL_ERROR << "FATAL: " << message << std::endl;
throw exit_code;
}
}
};
const String TOPPMSstatsConverter::param_in = "in";
const String TOPPMSstatsConverter::param_in_design = "in_design";
const String TOPPMSstatsConverter::param_method = "method";
const String TOPPMSstatsConverter::param_msstats_bioreplicate = "msstats_bioreplicate";
const String TOPPMSstatsConverter::param_msstats_condition = "msstats_condition";
const String TOPPMSstatsConverter::param_msstats_mixture = "msstats_mixture";
const String TOPPMSstatsConverter::param_out = "out";
const String TOPPMSstatsConverter::param_labeled_reference_peptides = "labeled_reference_peptides";
const String TOPPMSstatsConverter::param_retention_time_summarization_method = "retention_time_summarization_method";
const String TOPPMSstatsConverter::param_reannotate_filenames = "reannotate_filenames";
// the actual main function needed to create an executable
int main(int argc, const char **argv)
{
TOPPMSstatsConverter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/NovorAdapter.cpp | .cpp | 16,248 | 371 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Oliver Alka $
// $Authors: Oliver Alka $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
#include <OpenMS/METADATA/PeptideHit.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/METADATA/SpectrumLookup.h>
// TODO Remove once we've moved transform to handler
#include <OpenMS/FORMAT/MzMLFile.h>
// TODO remove once we have store spectrum in handler
#include <OpenMS/FORMAT/MascotGenericFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataTransformingConsumer.h>
#include <OpenMS/SYSTEM/JavaInfo.h>
#include <QFileInfo>
#include <fstream>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_NovorAdapter NovorAdapter
@brief Novor adapter for de novo sequencing from tandem mass spectrometry data.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → NovorAdapter →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> signal-/preprocessing e.g. @ref TOPP_FileFilter @n (in mzML format)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
This tool can be used for de novo sequencing of peptides from MS/MS data.
Novor must be installed before this wrapper can be used. This wrapper was successfully tested with version v1.06.0634 (stable).
Novor settings can be either used via command line or directly using a param file (param.txt).
Parameter names have been changed to match names found in other search engine adapters. For further information check the Novor wiki (http://wiki.rapidnovor.com/wiki/Main_Page) and the official tool website (https://www.rapidnovor.com/).
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_NovorAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_NovorAdapter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPNovorAdapter :
public TOPPBase
{
public:
TOPPNovorAdapter() :
TOPPBase("NovorAdapter", "Performs de novo sequencing of peptides from MS/MS data with Novor.", true,
{
Citation{"Ma Bin",
"Novor: Real-Time Peptide de Novo Sequencing Software",
"Journal of The American Society for Mass Spectrometry; 30 June 2015",
"10.1007/s13361-015-1204-0"}
})
{}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
// thirdparty executable
registerInputFile_("executable", "<jar>", "novor.jar", "novor.jar", false, false, ListUtils::create<String>("skipexists"));
// input, output and parameter file
registerInputFile_("in", "<file>", "", "MzML Input file");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "Novor idXML output");
setValidFormats_("out", ListUtils::create<String>("idXML"));
// enzyme
registerStringOption_("enzyme", "<choice>", "Trypsin", "Digestion enzyme - currently only Trypsin is supported ", false);
setValidStrings_("enzyme", ListUtils::create<String>("Trypsin"));
// instrument
registerStringOption_("fragmentation", "<choice>", "CID", "Fragmentation method", false);
setValidStrings_("fragmentation", ListUtils::create<String>("CID,HCD"));
registerStringOption_("massAnalyzer", "<choice>", "Trap", "Mass analyzer, e.g. Orbitrap (CID-Trap, CID-FT, HCD-FT) or QTof (CID-TOF)", false);
setValidStrings_("massAnalyzer", ListUtils::create<String>("Trap,TOF,FT"));
// mass error tolerance
registerDoubleOption_("fragment_mass_tolerance", "<double>", 0.5, "Fragmentation error tolerance (Da)", false);
registerDoubleOption_("precursor_mass_tolerance", "<double>" , 15.0, "Precursor error tolerance (ppm or Da)", false);
registerStringOption_("precursor_error_units", "<choice>", "ppm", "Unit of precursor mass tolerance", false);
setValidStrings_("precursor_error_units", ListUtils::create<String>("ppm,Da"));
// post-translational-modification
registerStringList_("variable_modifications", "<mods>", vector<String>(), "Variable modifications", false);
setValidStrings_("variable_modifications", ListUtils::create<String>("Acetyl (K),Acetyl (N-term),Amidated (C-term),Ammonia-loss (N-term C),Biotin (K),Biotin (N-term),Carbamidomethyl (C),Carbamyl (K),Carbamyl (N-term),Carboxymethyl (C),Deamidated (NQ),Dehydrated (N-term C),Dioxidation (M),Methyl (C-term),Methyl (DE),Oxidation (M),Oxidation (HW),Phospho (ST),Phospho (Y),Pyro-carbamidomethyl (N-term C),Pyro-Glu (E),Pyro-Glu (Q),Sodium (C-term),Sodium (DE),Sulfo (STY),Trimethyl (RK)"));
registerStringList_("fixed_modifications", "<mods>", vector<String>(), "Fixed modifications", false);
setValidStrings_("fixed_modifications", ListUtils::create<String>("Acetyl (K),Acetyl (N-term),Amidated (C-term),Ammonia-loss (N-term C),Biotin (K),Biotin (N-term),Carbamidomethyl (C),Carbamyl (K),Carbamyl (N-term),Carboxymethyl (C),Deamidated (NQ),Dehydrated (N-term C),Dioxidation (M),Methyl (C-term),Methyl (DE),Oxidation (M),Oxidation (HW),Phospho (ST),Phospho (Y),Pyro-carbamidomethyl (N-term C),Pyro-Glu (E),Pyro-Glu (Q),Sodium (C-term),Sodium (DE),Sulfo (STY),Trimethyl (RK)"));
// forbidden residues
registerStringList_("forbiddenResidues", "<mods>", vector<String>(), "Forbidden residues", false);
setValidStrings_("forbiddenResidues", ListUtils::create<String>("I,U"));
// parameter novorFile will not be wrapped here
registerInputFile_("novorFile", "<file>", "", "File to introduce customized algorithm parameters for advanced users (optional .novor file)", false);
setValidFormats_("novorFile", ListUtils::create<String>("novor"));
registerInputFile_("java_executable", "<file>", "java", "The Java executable. Usually Java is on the system PATH. If Java is not found, use this parameter to specify the full path to Java", false, false, ListUtils::create<String>("skipexists"));
registerIntOption_("java_memory", "<num>", 3500, "Maximum Java heap size (in MB)", false);
}
void createParamFile_(ofstream& os)
{
vector<String> variable_mods = getStringList_("variable_modifications");
vector<String> fixed_mods = getStringList_("fixed_modifications");
vector<String> forbidden_residues = getStringList_("forbiddenResidues");
String variable_mod = ListUtils::concatenate(variable_mods, ',');
String fixed_mod = ListUtils::concatenate(fixed_mods, ',');
String forbidden_res = ListUtils::concatenate(forbidden_residues, ',');
os << "enzyme = " << getStringOption_("enzyme") << "\n"
<< "fragmentation = " << getStringOption_("fragmentation") << "\n"
<< "massAnalyzer = " << getStringOption_("massAnalyzer") << "\n"
<< "fragmentIonErrorTol = " << getDoubleOption_("fragment_mass_tolerance") << "Da" << "\n"
<< "precursorErrorTol = " << getDoubleOption_("precursor_mass_tolerance") << getStringOption_("precursor_error_units") << "\n"
<< "variableModifications = " << variable_mod << "\n"
<< "fixedModifications = " << fixed_mod << "\n"
<< "forbiddenResidues = " << forbidden_res << "\n";
// novorFile for custom algorithm parameters of Novor
String cparamfile = getStringOption_("novorFile");
ifstream cpfile(cparamfile);
if (cpfile)
{
os << "novorFile = " << cparamfile << "\n";
}
os.close();
}
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
//-------------------------------------------------------------
// determine the executable
//-------------------------------------------------------------
const String java_executable = getStringOption_("java_executable");
QString java_memory = "-Xmx" + QString::number(getIntOption_("java_memory")) + "m";
QString executable = getStringOption_("executable").toQString();
if (executable.isEmpty())
{
const char* novor_path_env = getenv("NOVOR_PATH");
if (novor_path_env == nullptr || strlen(novor_path_env) == 0)
{
writeLogError_("FATAL: Executable of Novor could not be found. Please either use NOVOR_PATH env variable or provide via '-executable'!");
return MISSING_PARAMETERS;
}
executable = novor_path_env;
}
// Normalize file path
QFileInfo file_info(executable);
executable = file_info.canonicalFilePath();
writeLogInfo_("Executable is: " + executable);
const QString & path_to_executable = File::path(executable).toQString();
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
// tmp_dir
File::TempDir tmp_dir(debug_level_ >= 2);
// parameter file
String tmp_param = tmp_dir.getPath() + "param.txt";
ofstream os(tmp_param.c_str());
createParamFile_(os);
// convert mzML to mgf format
Size count_written{ 0 };
String tmp_mgf = tmp_dir.getPath() + "tmp_mgf.mgf";
{
std::ofstream ofs(tmp_mgf, std::ofstream::out);
MascotGenericFile mgf;
auto f = [&ofs, &in, &mgf, &count_written](const MSSpectrum& spec)
{
if (spec.getType(true) == MSSpectrum::SpectrumType::CENTROID)
{
mgf.writeSpectrum(ofs, spec, in, "UNKNOWN");
++count_written;
}
};
MSDataTransformingConsumer c;
c.setSpectraProcessingFunc(f);
MzMLFile mzml_file;
mzml_file.getOptions().setMSLevels({2});
mzml_file.transform(in, &c, true);
OPENMS_LOG_INFO << "Info: " << count_written << " MS2 spectra will be passed to NOVOR\n";
if (count_written == 0)
{
if (getFlag_("force"))
{
OPENMS_LOG_WARN << "Warning: no MS2 spectra were passed to NOVOR. The result will be empty! To make this an error, remove the '-force' flag!" << std::endl;
}
else
{
OPENMS_LOG_ERROR << "Error: no MS2 spectra were passed to NOVOR. This is probably not intentional. Please check the input data '" << in << "'. To make this a warning, set the '-force' flag." << std::endl;
return ExitCodes::INPUT_FILE_EMPTY;
}
}
}
//-------------------------------------------------------------
// process
//-------------------------------------------------------------
String tmp_out = tmp_dir.getPath() + "tmp_out_novor.csv";
QStringList process_params;
process_params << java_memory
<< "-jar" << executable
<< "-f"
<< "-o" << tmp_out.toQString()
<< "-p" << tmp_param.toQString()
<< tmp_mgf.toQString();
// print novor command line
TOPPBase::ExitCodes exit_code = runExternalProcess_(java_executable.toQString(), process_params, path_to_executable);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// Build mapping to get correct spectrum reference later
MSExperiment exp;
MzMLFile m;
PeakFileOptions op;
op.setFillData(false); // no actual peak data
op.setMSLevels({ 2 }); //only MS2
m.setOptions(op);
m.load(in, exp);
SpectrumLookup mapping;
mapping.readSpectra(exp);
if (!File::exists(tmp_out))
{
OPENMS_LOG_ERROR << "Error: NOVOR did not write the output file '" << tmp_out << "' as requested!\n";
return ExitCodes::EXTERNAL_PROGRAM_ERROR;
}
CsvFile csv(tmp_out, ',');
PeptideIdentificationList peptide_ids;
for (Size i = 0; i != csv.rowCount(); ++i)
{
StringList sl;
csv.getRow(i, sl);
if (sl.empty() || sl[0][0] == '#') { continue; }
PeptideIdentification pi;
pi.setSpectrumReference( exp[mapping.findByScanNumber(sl[1].toInt())].getNativeID());
pi.setScoreType("novorscore");
pi.setHigherScoreBetter(true);
pi.setRT(sl[2].toDouble());
pi.setMZ(sl[3].toDouble());
PeptideHit ph;
ph.setCharge(sl[4].toInt());
ph.setScore(sl[8].toDouble());
// replace PTM name (see http://wiki.rapidnovor.com/wiki/Built-in_PTMs)
String sequence = sl[9];
sequence.substitute("(Cam)", "(Carbamidomethyl)");
sequence.substitute("(O)","(Oxidation)");
sequence.substitute("(PyroCam)", "(Pyro-carbamidomethyl)");
ph.setSequence(AASequence::fromString(sequence));
ph.setMetaValue("pepMass(denovo)", sl[5].toDouble());
ph.setMetaValue("err(data-denovo)", sl[6].toDouble());
ph.setMetaValue("ppm(1e6*err/(mz*z))", sl[7].toDouble());
ph.setMetaValue("aaScore", sl[10].toQString());
pi.getHits().push_back(std::move(ph));
peptide_ids.push_back(std::move(pi));
}
// extract the Novor version from the CSV comment header, e.g. "# v1.06.0634 (stable)"
vector<ProteinIdentification> protein_ids;
StringList versionrow;
csv.getRow(2, versionrow);
String version = versionrow[0].substr(versionrow[0].find("v."));
protein_ids = vector<ProteinIdentification>(1);
protein_ids[0].setDateTime(DateTime::now());
protein_ids[0].setSearchEngine("Novor");
protein_ids[0].setSearchEngineVersion(version);
ProteinIdentification::SearchParameters search_parameters;
search_parameters.db = "denovo";
search_parameters.mass_type = ProteinIdentification::PeakMassType::MONOISOTOPIC;
// if a parameter file is used the modifications need to be parsed from the novor output csv
search_parameters.fixed_modifications = getStringList_("fixed_modifications");
search_parameters.variable_modifications = getStringList_("variable_modifications");
search_parameters.fragment_mass_tolerance = getDoubleOption_("fragment_mass_tolerance");
search_parameters.precursor_mass_tolerance = getDoubleOption_("precursor_mass_tolerance");
search_parameters.precursor_mass_tolerance_ppm = (getStringOption_("precursor_error_units") == "ppm");
search_parameters.fragment_mass_tolerance_ppm = false;
search_parameters.digestion_enzyme = *ProteaseDB::getInstance()->getEnzyme(getStringOption_("enzyme"));
//StringList inputFile;
//inputFile[0] = in;
//protein_ids[0].setPrimaryMSRunPath(inputFile);
protein_ids[0].setSearchParameters(search_parameters);
if (count_written > 0) // guard against division by zero when '-force' allowed an empty input
{
OPENMS_LOG_INFO << "NOVOR created " << peptide_ids.size() << " PSMs from " << count_written << " MS2 spectra (" << (peptide_ids.size() * 100 / count_written) << "% annotated)\n";
}
FileHandler().storeIdentifications(out, protein_ids, peptide_ids, {FileTypes::IDXML});
return EXECUTION_OK;
}
};
// the actual main function needed to create an executable
int main(int argc, const char ** argv)
{
TOPPNovorAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/OpenSwathChromatogramExtractor.cpp | .cpp | 13,610 | 315 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/OPENSWATH/ChromatogramExtractor.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace std;
#ifdef _OPENMP
#include <omp.h>
#endif
// #ifdef _OPENMP
// #define IF_MASTERTHREAD if (omp_get_thread_num() ==0)
// #else
// #define IF_MASTERTHREAD
// #endif
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathChromatogramExtractor OpenSwathChromatogramExtractor
@brief Extracts chromatograms (XICs) from a file containing spectra.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → OpenSwathChromatogramExtractor →</td>
<th ALIGN = "center"> potential successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileFilter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_OpenSwathAnalyzer </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathRTNormalizer </td>
</tr>
</table>
</CENTER>
This module extracts ion traces (extracted ion chromatograms, or XICs) from a
file containing spectra. The masses at which the chromatograms should be
extracted are stored in a TraML file, and the result is stored in an mzML file
holding chromatograms. This tool is designed to extract chromatograms from either
SWATH (data-independent acquisition) data (see [1]) or MS1 data. For
SWATH data it extracts at the @a m/z given in the product ion section of the
TraML transitions, returning one chromatogram per input transition -- while
for MS1 data it extracts at the precursor ion @a m/z.
The input assay library or transition list is provided via the @p -tr flag
and needs to be in TraML format. More information about the input filetype
can be found in @ref OpenMS::TraMLFile "TraML".
The input MS file (MS1 file or DIA / SWATH file) is provided through the @p
-in flag. If you are extracting MS1 data, use the @p -extract_MS1 flag,
otherwise use the @p -is_swath flag. If you are extracting MS1 XICs only, make
sure your input does not contain any MS2 spectra; filter them out using
@ref TOPP_FileFilter if necessary.
For SWATH data, use the @p -is_swath flag, which will check the precursor
isolation window of the first scan and assume that all scans in that file were
recorded with this precursor window (thus it is necessary to provide one
input file per SWATH window). The module will then only extract transitions
whose precursors fall into the corresponding isolation window.
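The per-window selection can be sketched as a simple interval check (a minimal
illustration with made-up function names; the actual logic lives in
OpenSwathHelper::checkSwathMapAndSelectTransitions and also reads the isolation
window from the spectrum metadata):

```cpp
// Does a transition's precursor m/z fall into the SWATH isolation window
// [lower, upper], staying at least 'min_upper_edge_dist' Th away from the
// upper edge (cf. the -min_upper_edge_dist parameter)?
bool precursorInSwathWindow(double precursor_mz, double lower, double upper,
                            double min_upper_edge_dist)
{
  return precursor_mz >= lower && precursor_mz <= upper - min_upper_edge_dist;
}
```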
By default, the whole RT range is extracted; however, the @p -rt_window
parameter allows extraction of a subset of the RT range. In case the assay
library RT values are not absolute retention times but normalized ones, an
optional transformation function can be provided with the @p -rt_norm parameter,
mapping the normalized RT space to the experimental RT space. See @ref
TOPP_OpenSwathRTNormalizer for further information.
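As a minimal sketch, mapping a normalized library RT back to experimental RT
and expanding it into an extraction window looks like this (assuming a linear
inverse transformation; the tool supports several model types via 'model:type'):

```cpp
#include <utility>

// Map a normalized RT to experimental RT (rt_exp = slope * rt_norm + intercept)
// and expand it into a full extraction window of size 'rt_window' seconds.
std::pair<double, double> rtExtractionWindow(double rt_norm, double slope,
                                             double intercept, double rt_window)
{
  const double rt_exp = slope * rt_norm + intercept;
  return {rt_exp - rt_window / 2.0, rt_exp + rt_window / 2.0};
}
```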
For the extraction method, two convolution functions are available: top-hat
and bartlett. While top-hat simply sums up the signal within a rectangular
window, bartlett weights the signal in the center of the window more strongly
than the signal at the edges.
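A minimal sketch of the two window functions (illustrative only; the actual
implementation is in ChromatogramExtractor):

```cpp
#include <cmath>

// Top-hat: every point inside the window contributes equally.
double tophatWeight(double dist, double half_width)
{
  return std::fabs(dist) <= half_width ? 1.0 : 0.0;
}

// Bartlett (triangular): weight falls off linearly from 1 at the center
// to 0 at the window edges.
double bartlettWeight(double dist, double half_width)
{
  const double d = std::fabs(dist);
  return d <= half_width ? 1.0 - d / half_width : 0.0;
}
```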
[1] Gillet LC, Navarro P, Tate S, Rost H, Selevsek N, Reiter L, Bonner R, Aebersold R. \n
<a href="https://doi.org/10.1074/mcp.O111.016717"> Targeted data extraction of the MS/MS spectra generated by data-independent
acquisition: a new concept for consistent and accurate proteome analysis. </a> \n
Mol Cell Proteomics. 2012 Jun;11(6):O111.016717.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathChromatogramExtractor.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathChromatogramExtractor.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathChromatogramExtractor
: public TOPPBase
{
public:
TOPPOpenSwathChromatogramExtractor()
: TOPPBase("OpenSwathChromatogramExtractor", "Extract chromatograms (XIC) from a MS2 map file.", true)
{
}
protected:
typedef PeakMap MapType;
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "Input files separated by blanks");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("tr", "<file>", "", "transition file ('TraML' or 'csv')");
setValidFormats_("tr", ListUtils::create<String>("csv,traML"));
registerInputFile_("rt_norm", "<file>", "", "RT normalization file (how to map the RTs of this run to the ones stored in the library)", false);
setValidFormats_("rt_norm", ListUtils::create<String>("trafoXML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("mzML"));
registerDoubleOption_("min_upper_edge_dist", "<double>", 0.0, "Minimal distance to the edge to still consider a precursor, in Thomson", false);
registerDoubleOption_("rt_window", "<double>", -1, "Extraction window in RT dimension (-1 means extract over the whole range). This is the full window size, e.g. a value of 1000 seconds would extract 500 seconds on either side.", false);
registerDoubleOption_("ion_mobility_window", "<double>", -1, "Extraction window in ion mobility dimension (in milliseconds). This is the full window size, e.g. a value of 10 milliseconds would extract 5 milliseconds on either side.", false);
registerDoubleOption_("mz_window", "<double>", 0.05, "Extraction window in m/z dimension (in Thomson, to use ppm see -ppm flag). This is the full window size, e.g. 100 ppm would extract 50 ppm on either side.", false);
setMinFloat_("mz_window", 0.0);
registerFlag_("ppm", "m/z extraction_window is in ppm");
registerFlag_("is_swath", "Set this flag if the data is SWATH data");
registerFlag_("extract_MS1", "Extract the MS1 transitions based on the precursor values in the TraML file (useful for extracting MS1 XIC)");
registerStringOption_("extraction_function", "<name>", "tophat", "Function used to extract the signal", false, true); // required, advanced
StringList model_types;
model_types.push_back("tophat");
model_types.push_back("bartlett"); // bartlett if we use zeros at the end
setValidStrings_("extraction_function", model_types);
registerModelOptions_("linear");
}
void registerModelOptions_(const String & default_model)
{
registerTOPPSubsection_("model", "Options to control the modeling of retention time transformations from data");
registerStringOption_("model:type", "<name>", default_model, "Type of model", false, true);
StringList model_types;
TransformationDescription::getModelTypes(model_types);
if (!ListUtils::contains(model_types, default_model))
{
model_types.insert(model_types.begin(), default_model);
}
setValidStrings_("model:type", model_types);
registerFlag_("model:symmetric_regression", "Only for 'linear' model: Perform linear regression on 'y - x' vs. 'y + x', instead of on 'y' vs. 'x'.", true);
}
ExitCodes main_(int, const char **) override
{
StringList file_list = getStringList_("in");
String tr_file_str = getStringOption_("tr");
String out = getStringOption_("out");
bool is_swath = getFlag_("is_swath");
bool ppm = getFlag_("ppm");
bool extract_MS1 = getFlag_("extract_MS1");
double min_upper_edge_dist = getDoubleOption_("min_upper_edge_dist");
double mz_extraction_window = getDoubleOption_("mz_window");
double rt_extraction_window = getDoubleOption_("rt_window");
double im_window = getDoubleOption_("ion_mobility_window");
String extraction_function = getStringOption_("extraction_function");
// If we have a transformation file, trafo will transform the RT in the
// scoring according to the model. If we don't have one, it will apply the
// null transformation.
String trafo_in = getStringOption_("rt_norm");
TransformationDescription trafo;
if (!trafo_in.empty())
{
String model_type = getStringOption_("model:type");
Param model_params = getParam_().copy("model:", true);
FileHandler().loadTransformations(trafo_in, trafo, true, {FileTypes::TRANSFORMATIONXML});
trafo.fitModel(model_type, model_params);
}
TransformationDescription trafo_inverse = trafo;
trafo_inverse.invert();
const char * tr_file = tr_file_str.c_str();
MapType out_exp;
std::vector< OpenMS::MSChromatogram > chromatograms;
OpenMS::TargetedExperiment targeted_exp;
std::cout << "Loading TraML file" << std::endl;
FileHandler().loadTransitions(tr_file, targeted_exp, {FileTypes::TRAML});
std::cout << "Loaded TraML file" << std::endl;
// Do parallelization over the different input files
// Only in OpenMP 3.0 are unsigned loop variables allowed
#pragma omp parallel for
for (SignedSize i = 0; i < boost::numeric_cast<SignedSize>(file_list.size()); ++i)
{
std::shared_ptr<PeakMap > exp(new PeakMap);
// Find the transitions to extract and extract them
MapType tmp_out;
OpenMS::TargetedExperiment transition_exp_used;
FileHandler().loadExperiment(file_list[i], *exp, {FileTypes::MZML}, log_type_);
if (exp->empty())
{
continue; // if empty, go on
}
OpenSwath::SpectrumAccessPtr expptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(exp);
bool do_continue = true;
if (is_swath)
{
do_continue = OpenSwathHelper::checkSwathMapAndSelectTransitions(*exp, targeted_exp, transition_exp_used, min_upper_edge_dist);
}
else
{
transition_exp_used = targeted_exp;
}
// after loading the first file, copy the meta data from that experiment
// this may happen *after* chromatograms were already added to the
// output, thus we do NOT fill the experiment here but rather store all
// the chromatograms in the "chromatograms" array and store them in
// out_exp afterwards.
#pragma omp critical (OpenSwathChromatogramExtractor_metadata)
if (i == 0)
{
out_exp = *exp;
out_exp.clear(false);
}
std::cout << "Extracting " << transition_exp_used.getTransitions().size() << " transitions" << std::endl;
std::vector< OpenSwath::ChromatogramPtr > chromatogram_ptrs;
std::vector< ChromatogramExtractor::ExtractionCoordinates > coordinates;
// continue if the map is not empty
if (do_continue)
{
// Prepare the coordinates (with or without rt extraction) and then extract the chromatograms
ChromatogramExtractor extractor;
if (rt_extraction_window < 0)
{
extractor.prepare_coordinates(chromatogram_ptrs, coordinates, transition_exp_used, rt_extraction_window, extract_MS1);
}
else
{
// Use an rt extraction window of 0.0 which will just write the retention time in start / end positions
extractor.prepare_coordinates(chromatogram_ptrs, coordinates, transition_exp_used, 0.0, extract_MS1);
for (ChromatogramExtractor::ExtractionCoordinates& chrom : coordinates)
{
chrom.rt_start = trafo_inverse.apply(chrom.rt_start) - rt_extraction_window / 2.0;
chrom.rt_end = trafo_inverse.apply(chrom.rt_end) + rt_extraction_window / 2.0;
}
}
extractor.extractChromatograms(expptr, chromatogram_ptrs, coordinates,
mz_extraction_window, ppm, im_window, extraction_function);
#pragma omp critical (OpenSwathChromatogramExtractor_insertMS1)
{
// Remove potential meta value indicating cached data
SpectrumSettings exp_settings = (*exp)[0];
for (Size j = 0; j < exp_settings.getDataProcessing().size(); j++)
{
if (exp_settings.getDataProcessing()[j]->metaValueExists("cached_data"))
{
exp_settings.getDataProcessing()[j]->removeMetaValue("cached_data");
}
}
extractor.return_chromatogram(chromatogram_ptrs, coordinates, transition_exp_used, exp_settings, chromatograms, extract_MS1, im_window);
}
} // end of do_continue
} // end of loop over all files / end of OpenMP
// TODO check that no chromatogram IDs occur multiple times !
// store the output
out_exp.setChromatograms(chromatograms);
addDataProcessing_(out_exp, getProcessingInfo_(DataProcessing::SMOOTHING));
FileHandler().storeExperiment(out, out_exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPOpenSwathChromatogramExtractor tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/InternalCalibration.cpp (OpenMS/OpenMS)
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/CalibrationData.h>
#include <OpenMS/PROCESSING/CALIBRATION/InternalCalibration.h>
#include <OpenMS/PROCESSING/CALIBRATION/MZTrafoModel.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/TextFile.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <vector>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_InternalCalibration InternalCalibration
@brief Performs an internal mass recalibration on an MS experiment.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → InternalCalibration →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderCentroided </td>
</tr>
</table>
</CENTER>
Given reference masses (as either peptide identifications or as list of fixed masses) an MS experiment
can be recalibrated using a linear or quadratic regression fitted to the observed vs. the theoretical masses.
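The core idea can be sketched as an unweighted least-squares fit of the ppm
error against m/z; the predicted error is then removed from each observed m/z
(the tool itself uses MZTrafoModel, which also supports weighted and quadratic
fits):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct LinearModel { double a, b; }; // error_ppm(mz) = a + b * mz

// Ordinary least-squares fit of the calibrant ppm errors vs. their m/z.
LinearModel fitLinear(const std::vector<double>& mz,
                      const std::vector<double>& err_ppm)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  const std::size_t n = mz.size();
  for (std::size_t i = 0; i < n; ++i)
  {
    sx += mz[i]; sy += err_ppm[i];
    sxx += mz[i] * mz[i]; sxy += mz[i] * err_ppm[i];
  }
  const double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  return {(sy - b * sx) / n, b};
}

// Remove the predicted mass error from an observed m/z.
double calibrate(double mz, const LinearModel& m)
{
  const double err_ppm = m.a + m.b * mz;
  return mz / (1.0 + err_ppm * 1e-6);
}
```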
Choose one of two optional input files:
1) peptide identifications (from featureXML or idXML) using 'id_in'
2) lock masses using 'lock_in'
The user can choose whether the calibration function shall be
calculated for each spectrum separately or once for the whole map.
If this is done scan-wise, a user-defined range of neighboring spectra
is searched for lock masses/peptide IDs. They are used to build a model, which is applied to
the spectrum at hand.
The RT range ('RT_chunking') should be small enough to resolve time-dependent changes in decalibration, but wide enough
to have enough calibrant masses for a stable model. A linear model requires at least two calibrants, a quadratic one at least three.
Usually, the RT range should provide about 3x more calibrants than required, i.e. 6 (=3x2) for linear and 9 (=3x3) for quadratic models.
If the calibrant data is too sparse for a certain scan, the closest neighboring model will be used automatically.
If no model can be calculated anywhere, the tool will fail.
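The fallback can be sketched as a nearest-neighbor lookup over the RTs of the
successfully fitted models (illustrative; names are not from the tool):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Return the index of the valid model whose center RT is closest to
// 'scan_rt', or -1 if no model could be built at all.
int nearestModel(const std::vector<double>& model_rts, double scan_rt)
{
  int best = -1;
  double best_dist = 0;
  for (std::size_t i = 0; i < model_rts.size(); ++i)
  {
    const double d = std::fabs(model_rts[i] - scan_rt);
    if (best < 0 || d < best_dist)
    {
      best = static_cast<int>(i);
      best_dist = d;
    }
  }
  return best;
}
```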
Optional quality control output files allow judging the success of the calibration. It is strongly advised to inspect them.
If PNG images are requested, 'R' (statistical programming language) needs to be installed and available on the system path!
Outlier detection is supported using the RANSAC algorithm. However, usually it's better to provide high-confidence calibrants instead of
relying on automatic removal of outliers.
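The principle can be sketched with a deterministic toy RANSAC for a
one-parameter model y = b * x (a simplification: the real implementation
samples randomly and fits linear or quadratic models):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Propose a slope from each single point in turn, count how many points lie
// within 'threshold' of the proposed line, and keep the proposal with the
// most inliers. Gross outliers thus cannot dominate the fit.
double ransacSlope(const std::vector<double>& x, const std::vector<double>& y,
                   double threshold)
{
  double best_b = 0;
  std::size_t best_inliers = 0;
  for (std::size_t i = 0; i < x.size(); ++i)
  {
    if (x[i] == 0) continue;
    const double b = y[i] / x[i];
    std::size_t inliers = 0;
    for (std::size_t j = 0; j < x.size(); ++j)
    {
      if (std::fabs(y[j] - b * x[j]) < threshold) { ++inliers; }
    }
    if (inliers > best_inliers)
    {
      best_inliers = inliers;
      best_b = b;
    }
  }
  return best_b;
}
```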
Post-calibration statistics (median ppm error and median absolute deviation) are automatically computed.
The calibration is deemed successful if the statistics are within certain bounds ('goodness:XXX').
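A minimal sketch of this acceptance check (helper names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

double median(std::vector<double> v)
{
  std::sort(v.begin(), v.end());
  const std::size_t n = v.size();
  return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Calibration passes if both the median ppm error and the median absolute
// deviation (MAD) around it stay below their thresholds.
bool calibrationPassed(const std::vector<double>& ppm_errors,
                       double max_median, double max_mad)
{
  const double med = median(ppm_errors);
  std::vector<double> dev;
  for (double e : ppm_errors) { dev.push_back(std::fabs(e - med)); }
  return std::fabs(med) < max_median && median(dev) < max_mad;
}
```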
Detailed description for each calibration method:
1) [id_in] The peptide identifications should be derived from the very same mzML file using a wide precursor window (e.g. 25 ppm), which captures
the possible decalibration. Subsequently, the IDs should be filtered for high confidence (e.g. low FDR, ideally FDR=0.0) and given as input to this tool.
Remaining outliers can be removed by using RANSAC.
The data might benefit from a precursor mass correction (e.g. using @ref TOPP_HighResPrecursorMassCorrector), before an MS/MS search is done.
The list of calibrants is derived solely from the idXML/featureXML and only the resulting model is applied to the mzML.
2) [lock_in] Calibration can be performed using specific lock masses which occur in most spectra. The structure of the cal:lock_in CSV file is as follows:
Each line represents one lock mass in the format: \<m/z\>, \<ms-level\>, \<charge\>
Lines starting with # are treated as comments and ignored. The ms-level is usually '1', but you can also use '2' if there are commonly occurring fragment ions.
Example:
@code
# lock mass at 574 m/z at MS1 with charge 2
574.345, 1, 2
@endcode
Additional filters ('cal:lock_require_mono', 'cal:lock_require_iso') allow excluding spurious false-positive calibrant peaks.
These filters require knowledge of the charge state, thus charge needs to be specified in the input CSV.
Detailed information on which lock masses passed these filters is available when -debug is used (any level).
The calibration function will use all lock masses (i.e. from all ms-levels) within the defined RT range to calibrate a spectrum. Thus, care should be taken that
spectra from the ms-levels specified here are recorded using the same mass analyzer (MA). This is no issue for a Q-Exactive (which has only one MA),
but it depends on the acquisition scheme for instruments with two or three MAs (e.g. on an Orbitrap Velos, MS/MS spectra are commonly acquired in the ion trap and should not be used when calibrating MS1).
General remarks:
The user can select which MS levels are subjected to calibration. Calibration must be done once for each mass analyzer.
Usually, peptide IDs provide calibration points for MS1 precursors, i.e. they are suitable for MS1. They are applicable to MS2 only if
the same mass analyzer was used (e.g. Q-Exactive). In other words, MS/MS spectra acquired using the ion trap analyzer of a Velos cannot be calibrated using
peptide IDs.
Precursor m/z associated to higher-level MS spectra are corrected if their precursor spectra are subject to calibration,
e.g. precursor information within MS2 spectra is calibrated if target ms-level is set to 1.
Lock masses ('cal:lock_in') can be specified freely for MS1 and/or MS2.
@note The tool assumes the input data is already picked/centroided.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_InternalCalibration.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_InternalCalibration.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPInternalCalibration :
public TOPPBase
{
public:
TOPPInternalCalibration() :
TOPPBase("InternalCalibration", "Applies an internal mass recalibration.")
{
}
protected:
void registerOptionsAndFlags_() override
{
// data
registerInputFile_("in", "<file>", "", "Input peak file");
setValidFormats_("in", {"mzML"});
registerOutputFile_("out", "<file>", "", "Output file ");
setValidFormats_("out", {"mzML"});
registerInputFile_("rscript_executable", "<file>", "Rscript", "Path to the Rscript executable (default: 'Rscript').", false, false, {"is_executable"});
addEmptyLine_();
registerDoubleOption_("ppm_match_tolerance", "<delta m/z in [ppm]>", 25, "Finding calibrants in raw data uses this tolerance (for lock masses and ID's).", false);
// transformation
registerTOPPSubsection_("cal", "Choose one of two optional input files ('id_in' or 'lock_in') to define the calibration masses/function");
registerInputFile_("cal:id_in", "<file>", "", "Identifications or features whose peptide ID's serve as calibration masses.", false);
setValidFormats_("cal:id_in", {"idXML", "featureXML"});
registerInputFile_("cal:lock_in", "<file>", "", "Input file containing reference m/z values (text file with each line as: m/z ms-level charge) which occur in all scans.", false);
setValidFormats_("cal:lock_in", {"csv"});
registerOutputFile_("cal:lock_out", "<file>", "", "Optional output file containing peaks from 'in' which were matched to reference m/z values. Useful to see which peaks were used for calibration.", false);
setValidFormats_("cal:lock_out", {"mzML"});
registerOutputFile_("cal:lock_fail_out", "<file>", "", "Optional output file containing lock masses which were NOT found or accepted(!) in data from 'in'. Useful to see which peaks were used for calibration.", false);
setValidFormats_("cal:lock_fail_out", {"mzML"});
registerFlag_("cal:lock_require_mono", "Require all lock masses to be monoisotopic, i.e. not the iso1, iso2 etc ('charge' column is used to determine the spacing). Peaks which are not mono-isotopic are not used.");
registerFlag_("cal:lock_require_iso", "Require all lock masses to have at least the +1 isotope. Peaks without isotope pattern are not used.");
registerStringOption_("cal:model_type",
"<model>",
MZTrafoModel::enumToName(MZTrafoModel::MODELTYPE::LINEAR_WEIGHTED),
"Type of function to be fitted to the calibration points.",
false);
setValidStrings_("cal:model_type", MZTrafoModel::names_of_modeltype, static_cast<int>(MZTrafoModel::MODELTYPE::SIZE_OF_MODELTYPE));
addEmptyLine_();
registerIntList_("ms_level", "i j ...", {1, 2, 3}, "Target MS levels to apply the transformation onto. Does not affect calibrant collection.", false);
registerDoubleOption_("RT_chunking", "<RT window in [sec]>", 300, "RT window (one-sided, i.e. left->center, or center->right) around an MS scan in which calibrants are collected to build a model. Set to -1 to use ALL calibrants for all scans, i.e. a global model.", false);
registerTOPPSubsection_("RANSAC", "Robust outlier removal using RANSAC");
registerFlag_("RANSAC:enabled", "Apply RANSAC to calibration points to remove outliers before fitting a model.");
// RANSAC:n is automatically taken from the input model (i.e. n=2 for linear, n=3 for quadratic)
//registerIntOption_("RANSAC:pc_n", "<# points>", 20, "Percentage (1-99) of initial model points from available data.", false);
//setMinInt_("RANSAC:pc_n", 1);
//setMaxInt_("RANSAC:pc_n", 99);
registerDoubleOption_("RANSAC:threshold", "<threshold>", 10.0, "Threshold for accepting inliers (instrument precision (not accuracy!) as ppm^2 distance)", false);
registerIntOption_("RANSAC:pc_inliers", "<# inliers>", 30, "Minimum percentage (of available data) of inliers (<threshold away from model) to accept the model.", false);
setMinInt_("RANSAC:pc_inliers", 1);
setMaxInt_("RANSAC:pc_inliers", 99);
registerIntOption_("RANSAC:iter", "<# iterations>", 70, "Maximal # iterations.", false);
registerTOPPSubsection_("goodness", "Thresholds for accepting calibration success");
registerDoubleOption_("goodness:median", "<threshold>", 4.0, "The median ppm error of calibrated masses must be smaller than this threshold.", false);
registerDoubleOption_("goodness:MAD", "<threshold>", 2.0, "The median absolute deviation of the ppm error of calibrated masses must be smaller than this threshold.", false);
registerTOPPSubsection_("quality_control", "Tables and plots to verify calibration performance");
registerOutputFile_("quality_control:models", "<table>", "", "Table of model parameters for each spectrum.", false);
setValidFormats_("quality_control:models", {"csv"});
registerOutputFile_("quality_control:models_plot", "<image>", "", "Plot image of model parameters for each spectrum.", false);
setValidFormats_("quality_control:models_plot", {"png"});
registerOutputFile_("quality_control:residuals", "<table>", "", "Table of pre- and post calibration errors.", false);
setValidFormats_("quality_control:residuals", {"csv"});
registerOutputFile_("quality_control:residuals_plot", "<image>", "", "Plot image of pre- and post calibration errors.", false);
setValidFormats_("quality_control:residuals_plot", {"png"});
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
String cal_id = getStringOption_("cal:id_in");
String cal_lock = getStringOption_("cal:lock_in");
String file_cal_lock_out = getStringOption_("cal:lock_out");
String file_cal_lock_fail_out = getStringOption_("cal:lock_fail_out");
double rt_chunk = getDoubleOption_("RT_chunking");
IntList ms_level = getIntList_("ms_level");
if (((int)!cal_lock.empty() + (int)!cal_id.empty()) != 1)
{
OPENMS_LOG_ERROR << "Conflicting input given. Please provide only ONE of either 'cal:id_in' or 'cal:lock_in'!" << std::endl;
return ILLEGAL_PARAMETERS;
}
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
// Raw data
PeakMap exp;
FileHandler mz_file;
mz_file.loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
InternalCalibration ic;
ic.setLogType(log_type_);
double tol_ppm = getDoubleOption_("ppm_match_tolerance");
// featureXML/idXML input
if (!cal_id.empty())
{
FileTypes::Type ftype = FileHandler().getTypeByContent(cal_id);
if (ftype == FileTypes::FEATUREXML)
{
FeatureMap feature_map;
FileHandler().loadFeatures(cal_id, feature_map, {FileTypes::FEATUREXML});
ic.fillCalibrants(feature_map, tol_ppm);
}
else if (ftype == FileTypes::IDXML)
{
std::vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
FileHandler().loadIdentifications(cal_id, prot_ids, pep_ids, {FileTypes::IDXML});
ic.fillCalibrants(pep_ids, tol_ppm);
}
}
else if (!cal_lock.empty())
{ // CSV file calibrant masses
// load CSV
TextFile ref_file;
ref_file.load(cal_lock, true, -1, true, "#");
vector<InternalCalibration::LockMass> ref_masses;
for (TextFile::ConstIterator iter = ref_file.begin(); iter != ref_file.end(); ++iter)
{
std::vector<String> vec;
iter->split(",", vec);
if (vec.size() != 3)
{
throw Exception::MissingInformation(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, String("Input file ") + cal_lock + " does not have three comma-separated entries per row!");
}
ref_masses.push_back(InternalCalibration::LockMass(vec[0].toDouble(), vec[1].toInt(), vec[2].toInt()));
}
bool lock_require_mono = getFlag_("cal:lock_require_mono");
bool lock_require_iso = getFlag_("cal:lock_require_iso");
// match calibrants to data
CalibrationData failed_points;
ic.fillCalibrants(exp, ref_masses, tol_ppm, lock_require_mono, lock_require_iso, failed_points, debug_level_ > 0);
// write matched lock mass peaks
if (!file_cal_lock_out.empty())
{
OPENMS_LOG_INFO << "\nWriting matched lock masses to mzML file '" << file_cal_lock_out << "'." << std::endl;
PeakMap exp_out;
exp_out.set2DData(ic.getCalibrationPoints(), CalibrationData::getMetaValues());
mz_file.storeExperiment(file_cal_lock_out, exp_out, {FileTypes::MZML}, log_type_);
}
if (!file_cal_lock_fail_out.empty())
{
OPENMS_LOG_INFO << "\nWriting unmatched lock masses to mzML file '" << file_cal_lock_fail_out << "'." << std::endl;
PeakMap exp_out;
exp_out.set2DData(failed_points, CalibrationData::getMetaValues());
mz_file.storeExperiment(file_cal_lock_fail_out, exp_out, {FileTypes::MZML}, log_type_);
}
}
bool use_RANSAC = getFlag_("RANSAC:enabled");
if (ic.getCalibrationPoints().empty())
{
OPENMS_LOG_ERROR << "No calibration points found! Check your Raw data and calibration masses." << std::endl;
if (!getFlag_("force"))
{
OPENMS_LOG_ERROR << "Set the 'force' flag to true if you want to continue with uncalibrated data." << std::endl;
return UNEXPECTED_RESULT;
}
OPENMS_LOG_ERROR << "The 'force' flag was set to true. Storing uncalibrated data to '-out'." << std::endl;
// do not calibrate
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::CALIBRATION));
mz_file.storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
//
// create models and calibrate
//
String model_type = getStringOption_("cal:model_type");
MZTrafoModel::MODELTYPE md = MZTrafoModel::nameToEnum(model_type);
Size RANSAC_initial_points = model_type.hasSubstring("linear") ? 2 : 3;
Math::RANSACParam p(RANSAC_initial_points, getIntOption_("RANSAC:iter"), getDoubleOption_("RANSAC:threshold"), getIntOption_("RANSAC:pc_inliers"), true);
MZTrafoModel::setRANSACParams(p);
if (getFlag_("test"))
{
MZTrafoModel::setRANSACSeed(0);
}
// these limits are a little loose, but should prevent grossly wrong models without burdening the user with yet another parameter.
MZTrafoModel::setCoefficientLimits(tol_ppm, tol_ppm, 0.5);
String file_models_plot = getStringOption_("quality_control:models_plot");
String file_residuals_plot = getStringOption_("quality_control:residuals_plot");
String rscript_executable;
if (!file_models_plot.empty() || !file_residuals_plot.empty())
{ // only check for existence of Rscript if output files are requested...
rscript_executable = getStringOption_("rscript_executable");
}
if (!ic.calibrate(exp, ms_level, md, rt_chunk, use_RANSAC,
getDoubleOption_("goodness:median"),
getDoubleOption_("goodness:MAD"),
getStringOption_("quality_control:models"),
file_models_plot,
getStringOption_("quality_control:residuals"),
file_residuals_plot,
rscript_executable))
{
OPENMS_LOG_ERROR << "\nCalibration failed. See error message above!" << std::endl;
return UNEXPECTED_RESULT;
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::CALIBRATION));
mz_file.storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPInternalCalibration tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/OpenPepXL.cpp (OpenMS/OpenMS)
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Eugen Netz $
// $Authors: Timo Sachsenberg, Eugen Netz $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/XLMS/OpenPepXLAlgorithm.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/FORMAT/XQuestResultXMLFile.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace std;
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenPepXL OpenPepXL
@brief Search for peptide pairs linked with a labeled cross-linker
This tool performs a search for cross-links in the given mass spectra.
It pairs up MS2 spectra using linked MS1 features and uses these pairs to distinguish the fragment peaks that contain the linker from those that do not.
It executes the following steps in order:
<ul>
<li>Reading of MS2 spectra from the given mzML file</li>
<li>Processing of spectra: deisotoping and filtering</li>
<li>Digesting and preprocessing the protein database, building a peptide pair index dependent on the precursor masses of the MS2 spectra</li>
<li>Generating theoretical spectra of cross-linked peptides and aligning the experimental spectra against those</li>
<li>Scoring of cross-link spectrum matches</li>
<li>Using PeptideIndexer to map the peptides to all possible source proteins</li>
<li>Writing out the results in idXML, mzid according to mzIdentML 1.2 specifications and/or in the xQuest output format</li>
</ul>
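The pairing step can be sketched as a simple precursor-mass comparison
(illustrative; in practice the pairing comes from the linked MS1 features in
the consensusXML input described below):

```cpp
#include <cmath>

// Two MS2 spectra form a light/heavy pair if their (uncharged) precursor
// masses differ by the isotopic shift of the labeled linker, within a
// ppm tolerance.
bool isLightHeavyPair(double mass_light, double mass_heavy,
                      double iso_shift, double tol_ppm)
{
  const double expected = mass_light + iso_shift;
  return std::fabs(mass_heavy - expected) / expected * 1e6 <= tol_ppm;
}
```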
See below or have a look at the INI file (via "OpenPepXL -write_ini myini.ini") for available parameters and more functionality.
<h3>Input: MS2 spectra, linked features from FeatureFinderMultiplex and fasta database of proteins expected to be cross-linked in the sample</h3>
The spectra should be provided as one mzML file. If you have multiple files, e.g. for multiple fractions, you should run this tool on each
file separately.
The database can either be provided as one merged file containing targets and decoys or as two separate files.
A consensusXML file, that links the MS1 feature pairs from heavy and light cross-linkers is also required.
This file can be generated by the tool FeatureFinderMultiplex.
Setting up FeatureFinderMultiplex:
In the FeatureFinderMultiplex parameters you have to change the mass of one of the labels to the difference between the light and heavy
(e.g. change the mass of Arg6 to 12.075321 for labeled DSS) in the advanced options.
The parameter -labels should have one empty label ( [] ) and the label you adapted (e.g. [][Arg6]).
For the other settings refer to the documentation of FeatureFinderMultiplex.
<h3>Parameters</h3>
The parameters for fixed and variable modifications refer to additional modifications beside the cross-linker.
The linker used in the experiment has to be described using the cross-linker specific parameters.
Only one mass is allowed for a cross-linker that links two peptides (-cross_linker:mass_light), while multiple masses are possible for mono-links of the same cross-linking reagent.
Mono-links are cross-linkers that are attached to one peptide by only one of their two reactive groups.
The masses refer to the light version of the linker. The parameter -cross_linker:mass_iso_shift defines the difference
between the light and heavy versions of the cross-linker and the mono-links.
The parameters -cross_linker:residue1 and -cross_linker:residue2 enumerate the amino acids
that each end of the linker can react with. This way any heterobifunctional cross-linker can be defined.
To define a homobifunctional cross-linker, these two parameters should have the same value.
The parameter -cross_linker:name is used to solve ambiguities arising from different cross-linkers having the same mass
after the linking reaction (see section on output for clarification).
<h3>Output: XL-MS Identifications with scores and linked positions in the proteins</h3>
Three output file formats are supported. idXML is the internal format of OpenMS and is recommended for post-processing using other TOPP tools like XFDR or TOPPView.
The second format, xquest.xml, is the output format of xQuest, a popular XL-MS ID tool. This format is compatible with a number of post-processing and visualization tools,
like xProphet for FDR estimation (Leitner, A. et al., 2014, Nature Protocols)
and, through the xQuest Results Viewer, also XlinkAnalyzer for visualization and analysis using protein structures (Kosinski, J. et al., 2015, Journal of Structural Biology).
The third format is mzIdentML according to the specifications for XL-MS ID data in version 1.2 (Vizcaíno, J. A. et al., 2017, Mol Cell Proteomics).
This is a standardized long-term storage format, compatible with complete submissions to the PRIDE database, which is part of the ProteomeXchange consortium.
The specification includes the XLMOD database of cross-linking reagents, and if the provided cross-link mass matches one from the
database, its accession and name are used. If the name is provided with the -cross_linker:name parameter, it is used
to solve ambiguities arising from different cross-linkers having the same mass after the linking reaction (e.g. DSS and BS3).
It is also used as the name of the linker, if no matching masses are found in the database.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → OpenPepXL →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> - </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> - </td>
</tr>
</table>
</CENTER>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenPepXL.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenPepXL.html
*/
/// @cond TOPPCLASSES
class TOPPOpenPepXL :
public TOPPBase
{
public:
TOPPOpenPepXL() :
TOPPBase("OpenPepXL", "Protein-protein cross-linking identification using labeled linkers.", true)
{
}
protected:
void registerOptionsAndFlags_() override
{
// input files
registerInputFile_("in", "<file>", "", "Input file containing the spectra.", true, false);
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("consensus", "<file>", "", "Input file containing the linked mass peaks.", true, false);
setValidFormats_("consensus", ListUtils::create<String>("consensusXML"));
registerInputFile_("database", "<file>", "", "Input file containing the protein database.", true, false);
setValidFormats_("database", ListUtils::create<String>("fasta"));
registerInputFile_("decoy_database", "<file>", "", "Input file containing the decoy protein database. Decoys can also be included in the normal database file instead (or additionally).", false, true);
setValidFormats_("decoy_database", ListUtils::create<String>("fasta"));
registerFullParam_(OpenPepXLAlgorithm().getDefaults());
// output file
registerOutputFile_("out_idXML", "<idXML_file>", "", "Results in idXML format (at least one of these output parameters should be set, otherwise you will not have any results)", false, false);
setValidFormats_("out_idXML", ListUtils::create<String>("idXML"));
registerOutputFile_("out_mzIdentML", "<mzIdentML_file>","", "Results in mzIdentML (.mzid) format (at least one of these output parameters should be set, otherwise you will not have any results)", false, false);
setValidFormats_("out_mzIdentML", ListUtils::create<String>("mzid"));
registerOutputFile_("out_xquestxml", "<xQuestXML_file>", "", "Results in the xquest.xml format (at least one of these output parameters should be set, otherwise you will not have any results).", false, false);
setValidFormats_("out_xquestxml", ListUtils::create<String>("xquest.xml"));
registerOutputFile_("out_xquest_specxml", "<xQuestSpecXML_file>", "", "Matched spectra in the xQuest .spec.xml format for spectra visualization in the xQuest results manager.", false, false);
setValidFormats_("out_xquest_specxml", ListUtils::create<String>("spec.xml"));
}
ExitCodes main_(int, const char**) override
{
ProgressLogger progresslogger;
progresslogger.setLogType(log_type_);
const string in_mzml(getStringOption_("in"));
const string in_fasta(getStringOption_("database"));
const string in_decoy_fasta(getStringOption_("decoy_database"));
const string in_consensus(getStringOption_("consensus"));
const string out_idXML(getStringOption_("out_idXML"));
const string out_xquest = getStringOption_("out_xquestxml");
const string out_xquest_specxml = getStringOption_("out_xquest_specxml");
const string out_mzIdentML = getStringOption_("out_mzIdentML");
OPENMS_LOG_INFO << "Analyzing file: " << endl;
OPENMS_LOG_INFO << in_mzml << endl;
// load MS2 map
PeakMap unprocessed_spectra;
FileHandler f;
PeakFileOptions options;
options.clearMSLevels();
options.addMSLevel(1);
options.addMSLevel(2);
f.getOptions() = options;
f.loadExperiment(in_mzml, unprocessed_spectra, {FileTypes::MZML}, log_type_);
// load linked features
ConsensusMap cfeatures;
FileHandler cf;
cf.loadConsensusFeatures(in_consensus, cfeatures, {FileTypes::CONSENSUSXML});
// load fasta database
progresslogger.startProgress(0, 1, "Load database from FASTA file...");
FASTAFile fastaFile;
vector<FASTAFile::FASTAEntry> fasta_db;
fastaFile.load(in_fasta, fasta_db);
if (!in_decoy_fasta.empty())
{
vector<FASTAFile::FASTAEntry> fasta_decoys;
fastaFile.load(in_decoy_fasta, fasta_decoys);
fasta_db.reserve(fasta_db.size() + fasta_decoys.size());
fasta_db.insert(fasta_db.end(), fasta_decoys.begin(), fasta_decoys.end());
}
progresslogger.endProgress();
// initialize solution vectors
vector<ProteinIdentification> protein_ids(1);
PeptideIdentificationList peptide_ids;
// these are mainly necessary for writing out xQuest type spectrum files
OPXLDataStructs::PreprocessedPairSpectra preprocessed_pair_spectra(0);
vector< pair<Size, Size> > spectrum_pairs;
vector< vector< OPXLDataStructs::CrossLinkSpectrumMatch > > all_top_csms;
PeakMap spectra;
OpenPepXLAlgorithm search_algorithm;
Param this_param = getParam_();
Param algo_param = search_algorithm.getParameters();
algo_param.update(this_param, false, false, false, false, getGlobalLogDebug()); // suppress param. update message
search_algorithm.setParameters(algo_param);
search_algorithm.setLogType(this->log_type_);
ProteinIdentification::SearchParameters search_params;
search_params.db = in_fasta;
search_params.setMetaValue("input_consensusXML", in_consensus);
search_params.setMetaValue("input_mzML", in_mzml);
search_params.setMetaValue("input_decoys", in_decoy_fasta);
search_params.setMetaValue("out_xquest_specxml", out_xquest_specxml);
protein_ids[0].setSearchParameters(search_params);
protein_ids[0].setDateTime(DateTime::now());
protein_ids[0].setSearchEngine("OpenPepXL");
protein_ids[0].setSearchEngineVersion(VersionInfo::getVersion());
protein_ids[0].setMetaValue("SpectrumIdentificationProtocol", DataValue("MS:1002494")); // crosslinking search = MS:1002494
// run algorithm
OpenPepXLAlgorithm::ExitCodes exit_code = search_algorithm.run(unprocessed_spectra, cfeatures, fasta_db, protein_ids, peptide_ids, preprocessed_pair_spectra, spectrum_pairs, all_top_csms, spectra);
// only ILLEGAL_PARAMETERS aborts; other non-OK exit codes fall through and
// the (possibly empty) results are still written out
if (exit_code == OpenPepXLAlgorithm::ExitCodes::ILLEGAL_PARAMETERS)
{
return ILLEGAL_PARAMETERS;
}
// write output
progresslogger.startProgress(0, 1, "Writing output...");
if (!out_idXML.empty())
{
FileHandler().storeIdentifications(out_idXML, protein_ids, peptide_ids, {FileTypes::IDXML});
}
if (!out_mzIdentML.empty())
{
FileHandler().storeIdentifications(out_mzIdentML, protein_ids, peptide_ids, {FileTypes::MZIDENTML});
}
if (!out_xquest.empty() || !out_xquest_specxml.empty())
{
// derive the base name of the input file for the .spec.xml output
// (note: splits on '/', i.e. assumes POSIX-style path separators)
vector<String> input_split_dir;
vector<String> input_split;
getStringOption_("in").split(String("/"), input_split_dir);
input_split_dir[input_split_dir.size()-1].split(String("."), input_split);
String base_name = input_split[0];
if (!out_xquest.empty())
{
FileHandler().storeIdentifications(out_xquest, protein_ids, peptide_ids, {FileTypes::XQUESTXML});
}
if (!out_xquest_specxml.empty())
{
XQuestResultXMLFile::writeXQuestXMLSpec(out_xquest_specxml, base_name, preprocessed_pair_spectra, spectrum_pairs, all_top_csms, spectra, test_mode_);
}
}
progresslogger.endProgress();
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPOpenPepXL tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// src/topp/PeakPickerIM.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Author: Mohammed Alhigaylan $
// $Maintainer: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
#include <OpenMS/INTERFACES/IMSDataConsumer.h>
#include <OpenMS/PROCESSING/CENTROIDING/PeakPickerIM.h>
#include <OpenMS/IONMOBILITY/IMTypes.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_PeakPickerIM PeakPickerIM
@brief A tool for peak detection in the ion mobility dimension for mzML files.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → PeakPickerIM →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileConverter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
</table>
</center>
This tool applies peak picking in the ion mobility dimension to raw LC-IMS-MS data.
The input mzML file should contain ion mobility data in concatenated format
(where each spectrum contains an ion mobility float data array).
Three peak picking methods are available:
- @b mobilogram: Picks peaks along the ion mobility dimension using a peak picker.
- @b cluster: Clusters peaks in the ion mobility dimension.
- @b traces: Picks peaks using ion mobility elution profiles.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_PeakPickerIM.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_PeakPickerIM.html
For the parameters of the algorithm section see the algorithm documentation: @ref OpenMS::PeakPickerIM "PeakPickerIM"
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPPeakPickerIM : public TOPPBase
{
public:
TOPPPeakPickerIM() :
TOPPBase("PeakPickerIM", "Applies PeakPickerIM to an mzML file", false)
{}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input mzML file");
setValidFormats_("in", { "mzML" });
registerOutputFile_("out", "<file>", "", "Output mzML file");
setValidFormats_("out", { "mzML" });
registerStringOption_("processOption", "<name>", "inmemory",
"Whether to load all data and process them in-memory or process on-the-fly (lowmemory) without loading the whole file into memory first",
false, true);
setValidStrings_("processOption", { "inmemory", "lowmemory" } );
registerStringOption_("method", "<name>", "mobilogram",
"Method to pick peaks in IM dimension", false, true);
setValidStrings_("method", { "mobilogram", "cluster", "traces" } );
addEmptyLine_();
registerSubsection_("algorithm", "Algorithm parameters for PeakPickerIM (organized into pickIMTraces, pickIMCluster, pickIMElutionProfiles).");
}
Param getSubsectionDefaults_(const String& section) const override
{
if (section == "algorithm")
{
OpenMS::PeakPickerIM picker_defaults;
Param p = picker_defaults.getDefaults();
Param combined;
combined.insert("pickIMTraces:", p.copy("pickIMTraces:", true));
combined.insert("pickIMCluster:", p.copy("pickIMCluster:", true));
combined.insert("pickIMElutionProfiles:",p.copy("pickIMElutionProfiles:", true));
return combined;
}
return Param();
}
// -------------------- Low-memory consumer --------------------
class Consumer : public MSDataWritingConsumer
{
public:
Consumer(String filename, const String& method, const PeakPickerIM& pp) :
MSDataWritingConsumer(std::move(filename)), pp_(pp), method_(method) {}
void processSpectrum_(MapType::SpectrumType& spectrum) override
{
if (method_ == "mobilogram")
{
pp_.pickIMTraces(spectrum);
}
else if (method_ == "cluster")
{
pp_.pickIMCluster(spectrum);
}
else if (method_ == "traces")
{
pp_.pickIMElutionProfiles(spectrum);
}
}
void processChromatogram_(MapType::ChromatogramType&) override {}
private:
PeakPickerIM pp_;
String method_;
};
// -------------------- Format detection consumer (reads first spectrum only) --------------------
class FormatDetector : public Interfaces::IMSDataConsumer
{
public:
IMFormat detected_format = IMFormat::NONE;
// Exception to abort after first spectrum (efficient early exit)
struct FirstSpectrumRead : std::exception {};
void consumeSpectrum(SpectrumType& s) override
{
detected_format = IMTypes::determineIMFormat(s);
throw FirstSpectrumRead(); // Abort after reading first spectrum
}
void consumeChromatogram(ChromatogramType&) override {}
void setExperimentalSettings(const ExperimentalSettings&) override {}
void setExpectedSize(size_t, size_t) override {}
};
// -------------------- Passthrough consumer (copies without processing) --------------------
class PassthroughConsumer : public MSDataWritingConsumer
{
public:
PassthroughConsumer(const String& filename) : MSDataWritingConsumer(filename) {}
void processSpectrum_(MapType::SpectrumType&) override {} // No processing
void processChromatogram_(MapType::ChromatogramType&) override {}
};
// -------------------- Helper for low-memory path --------------------
ExitCodes doLowMemAlgorithm(const String& method, const PeakPickerIM& pp,
const String& input_file, const String& output_file)
{
MzMLFile mzml;
mzml.setLogType(log_type_);
// Step 1: Detect IMFormat by reading only the first spectrum (minimal I/O)
IMFormat im_format = IMFormat::NONE;
{
FormatDetector detector;
try
{
mzml.transform(input_file, &detector);
// If we reach here, file has no spectra - format stays NONE
}
catch (const FormatDetector::FirstSpectrumRead&)
{
im_format = detector.detected_format;
}
}
// Step 2: Validate format
if (im_format == IMFormat::CENTROIDED)
{
OPENMS_LOG_ERROR << "Error: Input file contains ion mobility data that is already centroided. "
<< "PeakPickerIM expects raw (concatenated) IM data. "
<< "Re-picking already centroided data is not supported." << std::endl;
return ILLEGAL_PARAMETERS;
}
if (im_format == IMFormat::MULTIPLE_SPECTRA)
{
OPENMS_LOG_ERROR << "Error: Input file contains ion mobility data in MULTIPLE_SPECTRA format "
<< "(one spectrum per IM frame). PeakPickerIM expects raw (concatenated) IM data "
<< "where each spectrum contains an ion mobility float data array. "
<< "This format is not supported." << std::endl;
return ILLEGAL_PARAMETERS;
}
if (im_format == IMFormat::MIXED)
{
OPENMS_LOG_ERROR << "Error: Input file contains mixed ion mobility formats "
<< "(both CONCATENATED and MULTIPLE_SPECTRA). PeakPickerIM expects raw (concatenated) IM data "
<< "where each spectrum contains an ion mobility float data array. "
<< "Mixed formats are not supported." << std::endl;
return ILLEGAL_PARAMETERS;
}
if (im_format == IMFormat::NONE)
{
OPENMS_LOG_WARN << "Warning: Input file does not contain ion mobility data. "
<< "No peak picking will be performed." << std::endl;
// Pass through unchanged
PassthroughConsumer passthrough(output_file);
mzml.transform(input_file, &passthrough);
return EXECUTION_OK;
}
// Step 3: Proceed with streaming processing
Consumer pp_consumer(output_file, method, pp);
pp_consumer.addDataProcessing(getProcessingInfo_(DataProcessing::PEAK_PICKING));
mzml.transform(input_file, &pp_consumer);
return EXECUTION_OK;
}
ExitCodes main_(int, const char**) override
{
const String input_file = getStringOption_("in");
const String output_file = getStringOption_("out");
const String process_opt = getStringOption_("processOption");
const String method = getStringOption_("method");
// Collect algorithm parameters from the 'algorithm:' section; the prefix is
// stripped and the remaining keys are passed directly to PeakPickerIM.
Param algo = getParam_().copy("algorithm:", true);
PeakPickerIM picker;
picker.setParameters(algo);
if (process_opt == "lowmemory")
{
return doLowMemAlgorithm(method, picker, input_file, output_file);
}
else
{
PeakMap exp;
MzMLFile mzml;
mzml.load(input_file, exp);
// Check if input contains centroided IM data (error) or no IM data (warning)
IMFormat im_format = IMTypes::determineIMFormat(exp);
if (im_format == IMFormat::CENTROIDED)
{
OPENMS_LOG_ERROR << "Error: Input file contains ion mobility data that is already centroided. "
<< "PeakPickerIM expects raw (concatenated) IM data. "
<< "Re-picking already centroided data is not supported." << std::endl;
return ILLEGAL_PARAMETERS;
}
if (im_format == IMFormat::NONE)
{
OPENMS_LOG_WARN << "Warning: Input file does not contain ion mobility data. "
<< "No peak picking will be performed." << std::endl;
mzml.store(output_file, exp);
return EXECUTION_OK;
}
#pragma omp parallel for
for (SignedSize i = 0; i < static_cast<SignedSize>(exp.size()); ++i)
{
MSSpectrum& spectrum = exp[static_cast<Size>(i)];
if (method == "mobilogram")
{
picker.pickIMTraces(spectrum);
}
else if (method == "cluster")
{
picker.pickIMCluster(spectrum);
}
else if (method == "traces")
{
picker.pickIMElutionProfiles(spectrum);
}
}
// Annotate processing info (same as low-memory path)
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::PEAK_PICKING));
mzml.store(output_file, exp);
return EXECUTION_OK;
}
}
};
int main(int argc, const char** argv)
{
TOPPPeakPickerIM tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// src/topp/DecoyDatabase.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Sven Nahnsen $
// $Authors: Sven Nahnsen, Andreas Bertsch, Chris Bielow, Philipp Wang $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/ID/NeighborSeq.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMDecoy.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDigestion.h>
#include <OpenMS/CHEMISTRY/ResidueDB.h>
#include <OpenMS/DATASTRUCTURES/FASTAContainer.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/MATH/MathFunctions.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <regex>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_DecoyDatabase DecoyDatabase
@brief Create a decoy peptide database from standard FASTA databases.
Decoy databases are useful to control false discovery rates and thus estimate score cutoffs for identified spectra.
The decoy can either be generated by reversing or shuffling each of the peptides of a sequence (as defined by a given enzyme).
For the reversing method, the N- and C-termini of the peptides are kept in position by default.
To get a 'contaminants' database have a look at http://www.thegpm.org/crap/index.html or find/create your own contaminant database.
Multiple databases can be provided as input, which will internally be concatenated before being used for decoy generation.
This allows you to specify your target database plus a contaminant file and obtain a concatenated
target-decoy database using a single call, e.g., DecoyDatabase -in human.fasta crap.fasta -out human_TD.fasta
By default, a combined database is created where target and decoy sequences are written interleaved
(i.e., target1, decoy1, target2, decoy2,...).
If you need all targets before the decoys for some reason, use @p only_decoy and concatenate the files
externally.
The tool will keep track of all protein identifiers and report duplicates.
The tool also automatically checks the input files for pre-existing decoys (based on the most common pre-/suffixes)
and terminates if any are found.
Extra functionality:
The Neighbor Peptide functionality (see subsection 'NeighborSearch') is designed to find peptides (neighbors) in a given set of sequences (FASTA file) that are
similar to a target peptide (aka relevant peptide) based on mass and spectral characteristics. This provides more power
when searching complex samples where only a subset of the peptides/proteins is of interest.
See www.ncbi.nlm.nih.gov/pmc/articles/PMC8489664/ and NeighborSeq for details.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_DecoyDatabase.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_DecoyDatabase.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPDecoyDatabase :
public TOPPBase
{
public:
TOPPDecoyDatabase() :
TOPPBase("DecoyDatabase", "Creates combined target+decoy sequence database from forward sequence database.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<file(s)>", ListUtils::create<String>(""), "Input FASTA file(s), each containing a database. It is recommended to include a contaminant database as well.");
setValidFormats_("in", {"fasta"});
registerOutputFile_("out", "<file>", "", "Output FASTA file where the decoy database (target + decoy or only decoy, see 'only_decoy') will be written to.");
setValidFormats_("out", {"fasta"});
registerStringOption_("decoy_string", "<string>", "DECOY_", "String that is combined with the accession of the protein identifier to indicate a decoy protein.", false);
registerStringOption_("decoy_string_position", "<choice>", "prefix", "Should the 'decoy_string' be prepended (prefix) or appended (suffix) to the protein accession?", false);
setValidStrings_("decoy_string_position", {"prefix", "suffix"});
registerFlag_("only_decoy", "Write only decoy proteins to the output database instead of a combined database.", false);
registerStringOption_("type", "<choice>", "protein", "Type of sequence. RNA sequences may contain modification codes, which will be handled correctly if this is set to 'RNA'.", false);
setValidStrings_("type", {"protein", "RNA"});
registerStringOption_("method", "<choice>", "reverse", "Method by which decoy sequences are generated from target sequences. Note that all sequences are shuffled using the same random seed, ensuring that identical sequences produce the same shuffled decoy sequences. Shuffled sequences that produce highly similar output sequences are shuffled again (see shuffle_sequence_identity_threshold).", false);
setValidStrings_("method", {"reverse", "shuffle"});
registerIntOption_("shuffle_max_attempts", "<int>", 30, "shuffle: maximum attempts to lower the amino acid sequence identity between target and decoy for the shuffle algorithm", false, true);
registerDoubleOption_("shuffle_sequence_identity_threshold", "<double>", 0.5, "shuffle: target-decoy amino acid sequence identity threshold for the shuffle algorithm. If the sequence identity is above this threshold, shuffling is repeated. In case of repeated failure, individual amino acids are 'mutated' to produce a different amino acid sequence.", false, true);
registerIntOption_("shuffle_decoy_ratio", "<int>", 1, "shuffle: target-decoy database size ratio. The resulting size between target and decoy databases is 1 to this integer value.", false, true);
setMinInt_("shuffle_decoy_ratio", 1);
registerStringOption_("seed", "<int/'time'>", "1", "Random number seed (use 'time' for system time)", false, true);
StringList all_enzymes;
ProteaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("enzyme", "<enzyme>", "Trypsin", "Enzyme used for the digestion of the sample. Only applicable if parameter 'type' is 'protein'.",false);
setValidStrings_("enzyme", all_enzymes);
registerSubsection_("Decoy", "Decoy parameters section");
// New options for neighbor peptide search
registerTOPPSubsection_("NeighborSearch", "Parameters for neighbor peptide search ('in' holds the neighbor candidates)");
registerInputFile_("NeighborSearch:in_relevant_proteins", "<file>","", "The relevant proteins for which neighbors are sought", false);
setValidFormats_("NeighborSearch:in_relevant_proteins", {"fasta"});
registerOutputFile_("NeighborSearch:out_neighbor", "<file>", "", "Output FASTA file with neighbors of relevant peptides (given in 'in_relevant_proteins').",false);
registerOutputFile_("NeighborSearch:out_relevant", "<file>", "",
"Output FASTA file with target+decoy of relevant peptides (given in 'in_relevant_proteins'). Required for downstream filtering of search results via IDFilter and subsequent FDR.", false);
registerIntOption_("NeighborSearch:missed_cleavages", "<int>", 0, "Number of missed cleavages for relevant and neighbor peptides.", false);
registerDoubleOption_("NeighborSearch:mz_bin_size", "<num>", 0.05,"Bin size for spectra m/z comparison (the original study suggests 0.05 Th for high-res and 1.0005079 Th for low-res spectra).", false);
registerDoubleOption_("NeighborSearch:pc_mass_tolerance", "<double>", 0.01, "Maximal precursor mass difference (in Da or ppm; see 'pc_mass_tolerance_unit') between neighbor and relevant peptide.", false);
registerStringOption_("NeighborSearch:pc_mass_tolerance_unit", "<choice>", "Da", "Is 'pc_mass_tolerance' in Da or ppm?", false);
setValidStrings_("NeighborSearch:pc_mass_tolerance_unit", {"Da", "ppm"});
registerIntOption_("NeighborSearch:min_peptide_length", "<int>", 5, "Minimum peptide length (relevant and neighbor peptides)", false);
registerDoubleOption_("NeighborSearch:min_shared_ion_fraction", "<double>", 0.25,
"Minimal required overlap 't_i' of b/y ions shared between neighbor candidate and a relevant peptide (t_i <= 2*B12/(B1+B2)). Higher values result in fewer neighbors.", false);
}
Param getSubsectionDefaults_(const String& /* name */) const override
{
Param p = MRMDecoy().getDefaults();
// change the default to also work with other proteases
p.setValue("non_shuffle_pattern", "", "Residues to not shuffle (keep at a constant position when shuffling). Separate by comma, e.g. use 'K,P,R' here.");
return p;
}
String getDecoyIdentifier_(const String& identifier, const String& decoy_string, const String& suffix_string, const bool as_prefix)
{
if (as_prefix)
{
return decoy_string + identifier + suffix_string;
}
else
{
return identifier + decoy_string + suffix_string;
}
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
enum class SeqType {protein, RNA};
StringList in = getStringList_("in");
String out = getStringOption_("out");
bool append = !getFlag_("only_decoy");
bool shuffle = (getStringOption_("method") == "shuffle");
int shuffle_ratio = shuffle ? getIntOption_("shuffle_decoy_ratio") : 1;
// Validate that shuffle_decoy_ratio is only used with shuffle method
if (!shuffle && getIntOption_("shuffle_decoy_ratio") != 1)
{
throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Parameter 'shuffle_decoy_ratio' can only be used with method 'shuffle', not with method 'reverse'.");
}
String decoy_string = getStringOption_("decoy_string");
bool decoy_string_position_prefix =
(getStringOption_("decoy_string_position") == "prefix");
// check if decoy_string is common decoy string (e.g. decoy, rev, ...)
String decoy_string_lower = decoy_string;
decoy_string_lower.toLower();
bool is_common = false;
for (const auto& a : DecoyHelper::affixes)
{
if ((decoy_string_lower.hasPrefix(a) && decoy_string_position_prefix) || (decoy_string_lower.hasSuffix(a) && ! decoy_string_position_prefix))
{
is_common = true;
}
}
// terminate if decoy_string is not one of the allowed decoy strings (exit code 11)
if (! is_common)
{
if (getFlag_("force"))
{
OPENMS_LOG_WARN << "Force Flag is enabled, decoys with custom decoy string (not in DecoyHelper::affixes) will not be detected.\n";
}
else
{
OPENMS_LOG_FATAL_ERROR << "Given decoy string is not allowed. Please use one of the strings in DecoyHelper::affixes as either prefix or "
"suffix (case insensitive): \n";
return INCOMPATIBLE_INPUT_DATA;
}
}
const SeqType input_type = getStringOption_("type") == "RNA" ? SeqType::RNA : SeqType::protein;
Param decoy_param = getParam_().copy("Decoy:", true);
bool keepN = decoy_param.getValue("keepPeptideNTerm").toBool();
bool keepC = decoy_param.getValue("keepPeptideCTerm").toBool();
String keep_const_pattern = decoy_param.getValue("non_shuffle_pattern").toString();
Int max_attempts = getIntOption_("shuffle_max_attempts");
double identity_threshold = getDoubleOption_("shuffle_sequence_identity_threshold");
// Set the seed for shuffling always to the same number, this
// ensures that identical peptides get shuffled the same way
// every time (without keeping track of them explicitly). This
// will ensure that the total number of unique tryptic peptides
// is identical in both databases.
String seed_option(getStringOption_("seed"));
const int seed = (seed_option == "time") ? time(nullptr) : seed_option.toInt();
// Configure Enzymatic digestion
// TODO: allow user-specified regex
ProteaseDigestion digestion;
String enzyme = getStringOption_("enzyme").trim();
if ((input_type == SeqType::protein) && ! enzyme.empty()) { digestion.setEnzyme(enzyme); }
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
if (in.size() == 1)
{
OPENMS_LOG_WARN << "Warning: Only one FASTA input file was provided, which might not contain contaminants. "
<< "You probably want to have them! Just add the contaminant file to the input file list 'in'." << endl;
}
// do this first, before potentially entering neighbor mode (which modifies the 'in' list)
for (const auto& file_fasta : in)
{
// check input files for decoys
FASTAContainer<TFI_File> in_entries {file_fasta};
auto r = DecoyHelper::countDecoys(in_entries);
// if decoys found, terminates with exit code INCOMPATIBLE_INPUT_DATA
if (static_cast<double>(r.all_prefix_occur + r.all_suffix_occur) >= 0.4 * static_cast<double>(r.all_proteins_count))
{
OPENMS_LOG_FATAL_ERROR << "Invalid input in " + file_fasta + ": Input file already contains decoys." << '\n';
return INCOMPATIBLE_INPUT_DATA;
}
}
// create neighbor peptides for the relevant peptides?
String in_relevant_proteins = getStringOption_("NeighborSearch:in_relevant_proteins");
String out_relevant = getStringOption_("NeighborSearch:out_relevant");
String out_neighbor = getStringOption_("NeighborSearch:out_neighbor");
if (in_relevant_proteins.empty() ^ out_relevant.empty())
{
OPENMS_LOG_ERROR << "Parameter settings are invalid. Both 'in_relevant_proteins' and 'out_relevant' must be set or unset.\n";
return ILLEGAL_PARAMETERS;
}
const bool neighbor_mode = ! in_relevant_proteins.empty();
if (!neighbor_mode && !out_neighbor.empty())
{
OPENMS_LOG_ERROR << "Parameter settings are invalid. You requested neighbor peptides via 'NeighborSearch:out_neighbor', but failed to specify the required input ('NeighborSearch:in_relevant_proteins').\n";
return ILLEGAL_PARAMETERS;
}
if (neighbor_mode)
{
if (input_type != SeqType::protein)
{
OPENMS_LOG_ERROR << "Parameter settings are invalid. When requesting neighbor peptides, the input type must be 'protein', not 'RNA'.\n";
return INCOMPATIBLE_INPUT_DATA;
}
if (out_neighbor.empty())
{ // make it a temp file, since we need to append its content to the final 'out' DB
out_neighbor = File::getTemporaryFile(out_neighbor);
}
//-------------------------------------------------------------
// parsing neighbor parameters
//-------------------------------------------------------------
FASTAFile fasta_neighbor_out;
fasta_neighbor_out.writeStart(out_neighbor);
double mz_bin_size = getDoubleOption_("NeighborSearch:mz_bin_size");
double min_shared_ion_fraction = getDoubleOption_("NeighborSearch:min_shared_ion_fraction");
double mass_tolerance = getDoubleOption_("NeighborSearch:pc_mass_tolerance");
bool mass_tolerance_unit_ppm = getStringOption_("NeighborSearch:pc_mass_tolerance_unit") == "ppm";
int missed_cleavages = getIntOption_("NeighborSearch:missed_cleavages");
int min_peptide_length = getIntOption_("NeighborSearch:min_peptide_length");
// Create a ProteaseDigestion object for neighbor peptide digestion
// (it's not identical to the one used for creating decoys, because we need to consider missed cleavages)
ProteaseDigestion digestion_neighbor;
digestion_neighbor.setMissedCleavages(missed_cleavages);
if (! enzyme.empty()) { digestion_neighbor.setEnzyme(enzyme); }
// Load the relevant proteins from 'NeighborSearch:in_relevant_proteins'
vector<FASTAFile::FASTAEntry> relevant_proteins;
FASTAFile().load(in_relevant_proteins, relevant_proteins);
vector<AASequence> digested_relevant_peptides;
vector<AASequence> temp_peptides;
for (const auto& entry : relevant_proteins)
{
digestion_neighbor.digest(AASequence::fromString(entry.sequence), temp_peptides, min_peptide_length);
digested_relevant_peptides.insert(digested_relevant_peptides.end(), make_move_iterator(temp_peptides.begin()), make_move_iterator(temp_peptides.end()));
}
NeighborSeq ns(std::move(digested_relevant_peptides));
// find neighbor peptides in 'in' for each relevant peptide in 'NeighborSearch:in_relevant_proteins'
for (Size i = 0; i < in.size(); ++i)
{
const auto x_residue = *ResidueDB::getInstance()->getResidue('X');
FASTAFile fasta_in;
fasta_in.setLogType(log_type_);
fasta_in.readStartWithProgress(in[i], "Finding Neighbors in '" + in[i] + "'");
FASTAFile::FASTAEntry entry;
vector<AASequence> digested_candidate_peptides;
while (fasta_in.readNextWithProgress(entry))
{
digestion_neighbor.digest(AASequence::fromString(entry.sequence), digested_candidate_peptides, min_peptide_length);
entry.sequence.clear(); // reset sequence; later append valid candidates (if any)
entry.identifier = "neighbor_" + entry.identifier;
for (auto& peptide : digested_candidate_peptides)
{
if (peptide.has(x_residue))
{ // 'X' in peptide prevents us from computing a PC mass and a spectrum
continue;
}
// Find relevant peptides for the current neighbor peptide candidate
bool is_neighbor_peptide = ns.isNeighborPeptide(peptide, mass_tolerance, mass_tolerance_unit_ppm, min_shared_ion_fraction, mz_bin_size);
if (!is_neighbor_peptide) continue;
entry.sequence += peptide.toString();
} // next candidate peptide
if (!entry.sequence.empty())
{
fasta_neighbor_out.writeNext(entry);
}
} // next candidate protein
} // next input file
// we only need relevant and neighbor peptides in our final DB:
in.clear();
// add relevant proteins FASTA file to the input list (to also create decoys for them)
in.push_back(in_relevant_proteins);
// add neighbor peptides FASTA file to the input list (to also create decoys for them)
in.push_back(out_neighbor);
const auto stats = ns.getNeighborStats();
OPENMS_LOG_INFO << "Neighbor peptide statistics for " << stats.total() << " reference peptides:\n"
<< " - " << stats.unfindable() << " peptides contained an 'X' (unknown amino acid) and thus could not be searched for neighbors\n"
<< " - " << stats.noNB() << " peptides had 0 neighbors\n"
<< " - " << stats.oneNB() << " peptides had 1 neighbor\n"
<< " - " << stats.multiNB() << " peptides had >= 2 neighbors." << endl;
}
set<String> identifiers; // spot duplicate identifiers (std::unordered_set<std::string> would use slightly more RAM, but slightly less CPU)
FASTAFile f;
f.writeStart(out);
FASTAFile fasta_out_relevant; /// in neighbor-peptide mode: write relevant peptides to the output file
if (neighbor_mode)
{
fasta_out_relevant.writeStart(out_relevant);
}
MRMDecoy m;
m.setParameters(decoy_param);
Math::RandomShuffler shuffler(seed);
for (const auto& file_fasta : in)
{
/// in neighbor-peptide mode: write relevant peptides to the output file
const bool write_relevant = neighbor_mode && file_fasta == in_relevant_proteins;
f.readStart(file_fasta);
FASTAFile::FASTAEntry entry;
OpenMS::TargetedExperiment::Peptide p;
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
while (f.readNext(entry))
{
if (identifiers.find(entry.identifier) != identifiers.end())
{
OPENMS_LOG_WARN << "DecoyDatabase: Warning, identifier '" << entry.identifier << "' occurs more than once!" << endl;
}
identifiers.insert(entry.identifier);
if (append)
{
f.writeNext(entry);
if (write_relevant)
{
fasta_out_relevant.writeNext(entry);
}
}
// new decoy identifier
for (int num_repeat = 0; num_repeat < shuffle_ratio; num_repeat++)
{
FASTAFile::FASTAEntry decoy_entry = entry;
// Derive a per-repeat seed so each repeat yields a distinct decoy sequence
auto repeat_seed = seed + num_repeat;
// add iteration suffix if shuffle_ratio > 1
String suffix_string = shuffle_ratio == 1 ? "" : std::string("_") + std::to_string(num_repeat + 1);
decoy_entry.identifier = getDecoyIdentifier_(decoy_entry.identifier, decoy_string, suffix_string, decoy_string_position_prefix);
// new decoy sequence
if (input_type == SeqType::RNA)
{
string quick_seq = decoy_entry.sequence;
bool five_p = (decoy_entry.sequence.front() == 'p');
bool three_p = (decoy_entry.sequence.back() == 'p');
if (five_p) // we don't want to reverse terminal phosphates
{
quick_seq.erase(0, 1);
}
if (three_p) { quick_seq.pop_back(); }
vector<String> tokenized;
std::smatch m;
std::string pattern = R"([^\[]|(\[[^\[\]]*\]))"; // one token = a single character, or a whole bracketed modification "[...]"
std::regex re(pattern);
while (std::regex_search(quick_seq, m, re))
{
tokenized.emplace_back(m.str(0));
quick_seq = m.suffix();
}
if (shuffle)
{
shuffler.seed(repeat_seed);
shuffler.portable_random_shuffle(tokenized.begin(), tokenized.end());
}
else // reverse
{
reverse(tokenized.begin(), tokenized.end()); // reverse the tokens
}
if (five_p) // add back 5'
{
tokenized.insert(tokenized.begin(), String("p"));
}
if (three_p) // add back 3'
{
tokenized.emplace_back("p");
}
decoy_entry.sequence = ListUtils::concatenate(tokenized, "");
}
else // protein input
{
// if (terminal_aminos != "none")
if (enzyme != "no cleavage" && (keepN || keepC))
{
std::vector<AASequence> peptides;
digestion.digest(AASequence::fromString(decoy_entry.sequence), peptides);
String new_sequence = "";
for (auto const& peptide : peptides)
{
p.sequence = peptide.toString();
// TODO: why are these functions in TargetedExperiment and MRMDecoy, and not somewhere more general?
// No one would look for them there.
auto decoy_p = shuffle ? m.shufflePeptide(p, identity_threshold, repeat_seed, max_attempts)
: MRMDecoy::reversePeptide(p, keepN, keepC, keep_const_pattern);
new_sequence += decoy_p.sequence;
}
decoy_entry.sequence = new_sequence;
}
else // no cleavage
{
// sequence
if (shuffle)
{
shuffler.seed(repeat_seed); // use per-repeat seed for distinct decoys
shuffler.portable_random_shuffle(decoy_entry.sequence.begin(), decoy_entry.sequence.end());
}
else // reverse
{
decoy_entry.sequence.reverse();
}
}
} // protein entry
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
f.writeNext(decoy_entry);
// optional: if in neighbor mode: T+D of relevant peptides (if requested)
if (write_relevant)
{
fasta_out_relevant.writeNext(decoy_entry);
}
}
} // next protein
} // input files
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPDecoyDatabase tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/OpenMSInfo.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Marc Sturm, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Colorizer.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/FORMAT/IndentedStream.h>
#include <OpenMS/SYSTEM/BuildInfo.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/config.h>
#include <OpenMS/openms_data_path.h>
#ifdef _OPENMP
#include "omp.h"
#endif
#include <iostream>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_OpenMSInfo OpenMSInfo
@brief Prints configuration details of %OpenMS (version, Git hash, SIMD extensions, multithreading), along with the directories where auxiliary data
like modifications (UniMod), enzymes etc. are taken from.
Some paths can be manipulated by the user by setting environment variables. If not set, the values are taken from the system defaults.
<ul>
<li> <b>Data path:</b> controlled by the environment variable 'OPENMS_DATA_PATH'; the value should point to an %OpenMS share directory, e.g.
'c:/program files/OpenMS3.1/share/OpenMS'
<li> <b>Temp path:</b> controlled by the environment variable 'OPENMS_TMPDIR'; the value should point to
where you want %OpenMS to store temporary data.
<li> <b>Userdata path:</b> controlled by the environment variable 'OPENMS_HOME_PATH'; the value should
point to where you want %OpenMS to store user-related data, e.g. the .OpenMS.ini.
</ul>
<B>This tool does not need/use any command line parameters.</B>
Example output:
@code
Full documentation: http://www.openms.de/doxygen/nightly/html/TOPP_OpenMSInfo.html
To cite OpenMS:
+ Pfeuffer, J., Bielow, C., Wein, S. et al.. OpenMS 3 enables reproducible analysis of large-scale mass spectrometry
data. Nat Methods (2024). doi:10.1038/s41592-024-02197-7.
<< OpenMS Version >>
Version : 3.2.0
Build time : Sep 18 2024, 14:14:53
Git sha1 : disabled
Git branch : disabled
<< Installation information >>
Data path : C:/dev/openms2/share/OpenMS
Temp path : C:/Users/bielow/AppData/Local/Temp
Userdata path: C:/Users/bielow/
<< Build information >>
Source path : C:/dev/openms/src/openms
Binary path : C:/dev/openms_build/src/openms
Binary arch : 64 bit
Build type : Debug
LP-Solver : COIN-OR
OpenMP : enabled (maxThreads = 32)
SIMD extensions : SSE, SSE2, SSE3, SSE4.1, SSE4.2, AVX
Extra CXX flags : <none>
<< OS Information >>
Name: Windows
Version: 10
Architecture: 64 bit
@endcode
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
/// this needs to be done before TOPPBase is initialized, since it will set OMP's max_threads to 1
const auto max_threads = Internal::OpenMSBuildInfo::getOpenMPMaxNumThreads();
class TOPPOpenMSInfo : public TOPPBase
{
public:
TOPPOpenMSInfo(): TOPPBase("OpenMSInfo", "Prints configuration details of OpenMS.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerFlag_("p", "Print information (flag can also be omitted)");
registerInputFile_("dummy", "<ignored>", "", "A fake input file, which is needed for some workflow systems to call this tool", false, true);
}
// Param getSubsectionDefaults_(const String& /*section*/) const override
//{
// return SpectraMerger().getParameters();
// }
ExitCodes main_(int, const char**) override
{
IndentedStream is(cout, 0, 10);
is << '\n'
<< bright("Full documentation: ") // the space is needed, otherwise the remaining line will be underlined on Windows..
<< underline(TOPPBase::getDocumentationURL()) << " " // the space is needed ...
<< '\n'
<< bright("To cite OpenMS:\n") << " + "
<< is.indent(3) << cite_openms.toString() << is.indent(0);
is << "\n\n"
<< green("<< OpenMS Version >>\n")
<< "Version : " << VersionInfo::getVersion() << '\n'
<< "Build time : " << VersionInfo::getTime() << '\n'
<< "Git sha1 : " << VersionInfo::getRevision() << '\n'
<< "Git branch : " << VersionInfo::getBranch() << '\n'
<< '\n'
<< green("<< Installation information >>\n")
<< "Data path : " << File::getOpenMSDataPath() << '\n'
<< "Temp path : " << File::getTempDirectory() << '\n'
<< "Userdata path: " << File::getUserDirectory() << '\n'
<< '\n'
<< green("<< Build information >>\n")
<< "Source path : " << OPENMS_SOURCE_PATH << '\n'
<< "Binary path : " << OPENMS_BINARY_PATH << '\n'
<< "Binary arch : " << Internal::OpenMSOSInfo::getBinaryArchitecture() << '\n'
<< "Build type : " << Internal::OpenMSBuildInfo::getBuildType() << '\n';
#ifdef OPENMS_HAS_COINOR
is << "LP-Solver : COIN-OR\n";
#else
is << "LP-Solver : GLPK\n";
#endif
#ifdef _OPENMP
is << "OpenMP : "
<< "enabled (maxThreads = " << max_threads << ")"
<< '\n';
#else
is << "OpenMP : "
<< "disabled"
<< '\n';
#endif
is << "SIMD extensions : " << Internal::OpenMSOSInfo::getActiveSIMDExtensions() << '\n'
<< "Extra CXX flags : " << OPENMS_MORECXX_FLAGS << '\n'
<< '\n';
Internal::OpenMSOSInfo info = Internal::OpenMSOSInfo::getOSInfo();
is << green("<< OS Information >>\n")
<< "Name: " << info.getOSAsString() << '\n'
<< "Version: " << info.getOSVersionAsString() << '\n'
<< "Architecture: " << info.getArchAsString() << '\n'
<< '\n';
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPOpenMSInfo tool;
// TOPPBase automatically shows the Help page if a TOPP tool is called without any parameters.
// This tool is special: we want to print stuff in this case. So we pass a "-p" flag.
const char* override_params[] = {argv[0], "-p"};
if (argc == 1) // no args given
{
return tool.main(2, override_params);
}
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/DigestorMotif.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Sven Nahnsen $
// $Author: Sven Nahnsen $
// --------------------------------------------------------------------------
#include <OpenMS/FORMAT/IdXMLFile.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CHEMISTRY/ProteaseDigestion.h>
#include <OpenMS/CHEMISTRY/ElementDB.h>
#include <OpenMS/CHEMISTRY/Element.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <map>
using namespace OpenMS;
using namespace std;
#define SEP "\t"
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_DigestorMotif DigestorMotif
@brief This application is used to digest a protein database in silico, yielding all peptides for a given cleavage enzyme. It also produces peptide statistics given the mass
accuracy of the instrument. You can extract peptides with specific motifs, e.g. only cysteine-containing peptides for ICAT experiments. The cleavage enzyme can be chosen via the 'enzyme' parameter (default: Trypsin).
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_DigestorMotif.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_DigestorMotif.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPDigestorMotif :
public TOPPBase
{
public:
TOPPDigestorMotif() :
TOPPBase("DigestorMotif", "Digests a protein database in-silico")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "FASTA input file");
setValidFormats_("in", ListUtils::create<String>("fasta"));
registerOutputFile_("out", "<file>", "", "output file (peptides)\n");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerIntOption_("missed_cleavages", "<number>", 1, "the number of allowed missed cleavages", false);
registerIntOption_("mass_accuracy", "<number>", 1000, "give your mass accuracy in ppb", false);
registerIntOption_("min_length", "<number>", 6, "minimum length of peptide", false);
registerIntOption_("out_option", "<number>", 1, "indicate 1 (peptide table only), 2 (statistics only) or 3 (both peptide table + statistics)", false);
vector<String> all_enzymes;
ProteaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("enzyme", "<cleavage site>", "Trypsin", "The enzyme used for peptide digestion.", false);
setValidStrings_("enzyme", all_enzymes);
registerStringOption_("motif", "<string>", "M", "the motif for the restricted peptidome", false);
setMinInt_("missed_cleavages", 0);
}
ExitCodes main_(int, const char**) override
{
vector<ProteinIdentification> protein_identifications;
PeptideIdentificationList identifications;
std::vector<FASTAFile::FASTAEntry> protein_data;
FASTAFile file;
ProteaseDigestion digestor;
vector<AASequence> temp_peptides;
PeptideIdentification peptide_identification;
ProteinIdentification protein_identification;
PeptideHit temp_peptide_hit;
ProteinHit temp_protein_hit;
vector<String> protein_accessions;
String inputfile_name;
String outputfile_name;
UInt min_size, counter = 0;
UInt missed_cleavages;
double accurate_mass, min_mass, max_mass;
UInt mass_acc, out_opt;
EmpiricalFormula EF;
UInt zero_count = 0;
ProteinIdentification::SearchParameters search_parameters;
protein_identifications.push_back(ProteinIdentification());
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
inputfile_name = getStringOption_("in");
outputfile_name = getStringOption_("out");
min_size = getIntOption_("min_length");
mass_acc = getIntOption_("mass_accuracy");
out_opt = getIntOption_("out_option");
missed_cleavages = getIntOption_("missed_cleavages");
AASequence M = AASequence::fromString(getStringOption_("motif"));
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
file.load(inputfile_name, protein_data);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
// This should be updated if more cleavage enzymes are available
String enzyme_name = getStringOption_("enzyme");
digestor.setEnzyme(enzyme_name);
search_parameters.digestion_enzyme = *(ProteaseDB::getInstance()->getEnzyme(enzyme_name));
digestor.setMissedCleavages(missed_cleavages);
for (UInt i = 0; i < protein_data.size(); ++i)
{
PeptideEvidence pe;
temp_protein_hit.setSequence(protein_data[i].sequence);
temp_protein_hit.setAccession(protein_data[i].identifier);
pe.setProteinAccession(protein_data[i].identifier);
digestor.digest(AASequence::fromString(protein_data[i].sequence), temp_peptides);
temp_peptide_hit.setPeptideEvidences(vector<PeptideEvidence>(1, pe));
for (UInt j = 0; j < temp_peptides.size(); ++j)
{
if (temp_peptides[j].size() >= min_size)
{
if (temp_peptides[j].hasSubsequence(M))
{
temp_peptide_hit.setSequence(temp_peptides[j]);
peptide_identification.insertHit(temp_peptide_hit);
}
}
}
protein_identifications[0].insertHit(temp_protein_hit);
}
DateTime date_time;
String date_time_string;
date_time.now();
date_time_string = date_time.get();
protein_identifications[0].setSearchParameters(search_parameters);
protein_identifications[0].setDateTime(date_time);
protein_identifications[0].setSearchEngine("In-silico digestion");
protein_identifications[0].setIdentifier("In-silico_digestion" + date_time_string);
peptide_identification.setIdentifier("In-silico_digestion" + date_time_string);
identifications.push_back(peptide_identification);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
ofstream fp_out(outputfile_name.c_str());
if (out_opt == 2)
{
fp_out << "mass_error" << SEP << "#proteins in database" << SEP << "# tryptic peptides" << SEP << "# unique peptide weights" << SEP << "# identifiable proteins" << SEP << "average window_size" << "\n";
}
UInt mass_iter = mass_acc;
while (mass_iter > 0)
{
vector<double> MIN, MAX;
vector<String> protein_names, PROTEINS;
vector<vector<double> > Y;
vector<UInt> OVER;
UInt total = 0;
if (out_opt == 1 || out_opt == 3)
{
fp_out << "counter" << SEP << "ProteinID" << SEP << "PeptideLocation" << SEP << "PeptideSequence" << SEP << "C" << SEP << "H" << SEP << "N" << SEP << "O" << SEP << "S" << SEP << "length" << SEP << "weight" << SEP << "min_weight" << SEP << "max_weight" << SEP << "Formula" << SEP << "D" << SEP << "E" << SEP << "K" << SEP << "R" << SEP << "H" << SEP << "Y" << SEP << "W" << SEP << "F" << SEP << "C" << SEP << "M" << SEP << "S" << SEP << "T" << SEP << "N" << SEP << "Q" << SEP << "G" << SEP << "A" << SEP << "V" << SEP << "L" << SEP << "I" << SEP << "P" << SEP << "hydrophobicity" << "\n";
}
for (UInt i = 0; i < protein_data.size(); ++i)
{
PeptideEvidence pe;
pe.setProteinAccession(protein_data[i].identifier);
temp_protein_hit.setAccession(protein_data[i].identifier);
digestor.digest(AASequence::fromString(protein_data[i].sequence), temp_peptides);
temp_peptide_hit.setPeptideEvidences(vector<PeptideEvidence>(1, pe));
for (UInt j = 0; j < temp_peptides.size(); ++j)
{
//vector<double> B_peptide, Y_peptide;
vector<double> peptide_ions;
accurate_mass = temp_peptides[j].getMonoWeight();
min_mass = accurate_mass - mass_iter * accurate_mass / 1000000000;
max_mass = accurate_mass + mass_iter * accurate_mass / 1000000000;
EF = temp_peptides[j].getFormula();
for (UInt r = 1; r <= temp_peptides[j].size(); ++r)
{
//B_peptide.push_back(temp_peptides[j].getPrefix(r).getMonoWeight());
peptide_ions.push_back(temp_peptides[j].getPrefix(r).getMonoWeight());
peptide_ions.push_back(temp_peptides[j].getSuffix(r).getMonoWeight());
//Y_peptide.push_back(temp_peptides[j].getSuffix(r).getMonoWeight());
}
if (temp_peptides[j].size() >= min_size)
{
if (temp_peptides[j].hasSubsequence(M))
{
OVER.push_back(UInt(-1)); // start at -1 (unsigned wrap-around): the peptide always overlaps itself, so the guaranteed first increment brings the count back to 0
//IonCounter.push_back(0);
MIN.push_back(min_mass);
MAX.push_back(max_mass);
Y.push_back(peptide_ions);
//B.push_back(B_peptide);
protein_names.push_back(protein_data[i].identifier);
temp_peptide_hit.setSequence(temp_peptides[j]);
peptide_identification.insertHit(temp_peptide_hit);
if (out_opt == 1 || out_opt == 3)
{
const String unmodified_peptide = temp_peptides[j].toUnmodifiedString();
const Size nK = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'K');
const Size nD = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'D');
const Size nR = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'R');
const Size nW = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'W');
const Size nM = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'M');
const Size nN = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'N');
const Size nA = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'A');
const Size nI = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'I');
const Size nE = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'E');
const Size nH = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'H');
const Size nF = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'F');
const Size nS = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'S');
const Size nQ = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'Q');
const Size nV = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'V');
const Size nP = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'P');
const Size nY = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'Y');
const Size nC = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'C');
const Size nT = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'T');
const Size nG = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'G');
const Size nL = std::count(unmodified_peptide.begin(), unmodified_peptide.end(), 'L');
const ElementDB* db = ElementDB::getInstance();
const Element* C = db->getElement("C");
const Element* H = db->getElement("H");
const Element* N = db->getElement("N");
const Element* O = db->getElement("O");
const Element* S = db->getElement("S");
fp_out << counter << SEP << ">" << protein_data[i].identifier << SEP << j << SEP << temp_peptides[j] << SEP
<< EF.getNumberOf(C) << SEP << EF.getNumberOf(H) << SEP << EF.getNumberOf(N) << SEP << EF.getNumberOf(O) << SEP
<< EF.getNumberOf(S) << SEP << temp_peptides[j].size() << SEP << precisionWrapper(temp_peptides[j].getMonoWeight()) << SEP
<< precisionWrapper(min_mass) << SEP << precisionWrapper(max_mass) << SEP << temp_peptides[j].getFormula() << SEP
<< nD << SEP << nE << SEP << nK << SEP << nR << SEP << nH << SEP << nY << SEP << nW << SEP << nF << SEP << nC << SEP
<< nM << SEP << nS << SEP << nT << SEP << nN << SEP << nQ << SEP << nG << SEP << nA << SEP << nV << SEP << nL << SEP
<< nI << SEP << nP << SEP << nD * (-3.5) + nE * (-3.5) + nK * (-3.9) + nR * (-4.5) + nH * (-3.2) + nY * (-1.3) + nW * (-0.9) + nF * (2.8) + nC * (2.5) + nM * (1.9) + nS * (-0.8) + nT * (-0.7) + nN * (-3.5) + nQ * (-3.5) + nG * (-0.4) + nA * (1.8) + nV * (4.2) + nL * (4.5) + nI * (4.5) + nP * (-1.6) << "\n";
}
counter++;
}
}
}
protein_identifications[0].insertHit(temp_protein_hit);
}
if (out_opt != 2)
{
fp_out << "MW_count" << SEP;
}
for (UInt r = 1; r <= 100; ++r)
{
fp_out << "y" << r << SEP << "b" << r << SEP;
}
fp_out << "\n";
fp_out << "MW_count" << SEP << "Overlapping ions in search space" << "\n";
for (UInt x = 0; x < MAX.size(); ++x)
{
/*for(UInt it = 0; it < ions.size(); ++it)
{
ions[it] = -1; //all ions from the same peptide all be counted
}
*/
cout << "2nd loop" << SEP << MAX.size() - x << endl;
vector<UInt> IonCounter;
for (UInt y = 0; y < MAX.size(); ++y)
{
if ((MIN[y] < MIN[x] && MAX[y] > MIN[x]) || (MAX[y] > MAX[x] && MIN[y] < MAX[x]) || (MIN[x] == MIN[y]))
{
OVER[x] = OVER[x] + 1;
//find overlapping tandem ions
vector<double> X_temp, Y_temp;
X_temp = Y[x];
Y_temp = Y[y];
UInt ions = 0;
for (UInt xx = 0; xx < X_temp.size(); ++xx)
{
for (UInt yy = 0; yy < Y_temp.size(); ++yy)
{
if (fabs(X_temp[xx] - Y_temp[yy]) <= 1)
{
ions += 1;
}
}
}
IonCounter.push_back(ions);
}
}
if (out_opt == 3)
{
fp_out << OVER[x] << SEP;
if (MAX[x] < 3500)
{
for (UInt it = 0; it < IonCounter.size(); ++it)
{
fp_out << IonCounter[it] << SEP;
}
}
fp_out << "\n";
cout << OVER[x];
}
total = total + OVER[x];
if (OVER[x] == 0)
{
++zero_count;
PROTEINS.push_back(protein_names[x]);
}
}
UInt pro_count = 0;
for (UInt a = 0; a + 1 < PROTEINS.size(); ++a)
{
if (PROTEINS[a] == PROTEINS[a + 1])
{
++pro_count;
}
cout << PROTEINS.size() << endl << pro_count << endl;
}
if (out_opt != 2)
{
mass_iter = 0;
}
else
{
mass_iter = mass_iter - 1;
}
if (out_opt > 1)
{
fp_out << mass_iter << SEP << protein_data.size() << SEP << MAX.size() << SEP << zero_count << SEP << PROTEINS.size() - pro_count << SEP << total << endl;
}
pro_count = 0;
zero_count = 0;
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPDigestorMotif tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/FeatureFinderMultiplex.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Lars Nilse $
// $Authors: Lars Nilse, Timo Sachsenberg, Samuel Wein, Mathias Walzer $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/DATASTRUCTURES/DBoundingBox.h>
#include <OpenMS/DATASTRUCTURES/ListUtils.h>
#include <OpenMS/IONMOBILITY/FAIMSHelper.h>
#include <OpenMS/IONMOBILITY/IMDataConverter.h>
#include <OpenMS/IONMOBILITY/IMTypes.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/KERNEL/ConsensusMap.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/ML/REGRESSION/LinearRegression.h>
#include <OpenMS/KERNEL/RangeUtils.h>
#include <OpenMS/KERNEL/ChromatogramTools.h>
#include <OpenMS/PROCESSING/CENTROIDING/PeakPickerHiRes.h>
#include <OpenMS/FEATUREFINDER/MultiplexDeltaMasses.h>
#include <OpenMS/FEATUREFINDER/MultiplexDeltaMassesGenerator.h>
#include <OpenMS/FEATUREFINDER/MultiplexIsotopicPeakPattern.h>
#include <OpenMS/FEATUREFINDER/MultiplexFilteringCentroided.h>
#include <OpenMS/FEATUREFINDER/MultiplexFilteringProfile.h>
#include <OpenMS/FEATUREFINDER/MultiplexClustering.h>
#include <OpenMS/FEATUREFINDER/MultiplexSatelliteCentroided.h>
#include <OpenMS/FEATUREFINDER/FeatureFinderMultiplexAlgorithm.h>
#include <OpenMS/ML/CLUSTERING/GridBasedCluster.h>
#include <OpenMS/PROCESSING/FEATURE/FeatureOverlapFilter.h>
#include <OpenMS/DATASTRUCTURES/DPosition.h>
//Contrib includes
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <boost/algorithm/string/classification.hpp>
#include <QDir>
//std includes
#include <cmath>
#include <vector>
#include <algorithm>
#include <fstream>
#include <limits>
#include <locale>
#include <iomanip>
#include <set>
using namespace std;
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FeatureFinderMultiplex FeatureFinderMultiplex
@brief Detects peptide pairs in LC-MS data and determines their relative abundance.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → FeatureFinderMultiplex →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileConverter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_IDMapper</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileFilter </td>
</tr>
</table>
</CENTER>
FeatureFinderMultiplex is a tool for the fully automated analysis of quantitative proteomics data. It detects pairs of isotopic envelopes with fixed m/z separation.
It requires no prior sequence identification of the peptides and works on both profile and centroided spectra. In what follows, we outline the algorithm.
<b>Algorithm</b>
The algorithm is divided into three parts: filtering, clustering and linear fitting, see Fig. (d), (e) and (f).
In the following discussion let us consider a particular mass spectrum at retention time 1350 s, see Fig. (a).
It contains a peptide of mass 1492 Da and its 6 Da heavier labelled counterpart. Both are doubly charged in this instance.
Their isotopic envelopes therefore appear at 746 and 749 in the spectrum. The isotopic peaks within each envelope are separated by 0.5.
The spectrum was recorded at discrete m/z intervals. In order to read accurate intensities at arbitrary m/z, we fit a spline over the data, see Fig. (b).
We would like to search for such peptide pairs in our LC-MS data set. As a warm-up let us consider a standard intensity cut-off filter, see Fig. (c).
Scanning through the entire m/z range (red dot), only data points with intensities above a certain threshold pass the filter.
Unlike such a local filter, the filter used in our algorithm takes intensities at a range of m/z positions into account, see Fig. (d). A data point (red dot) passes if
- all six intensities at m/z, m/z+0.5, m/z+1, m/z+3, m/z+3.5 and m/z+4 lie above a certain threshold,
- the intensity profiles in neighbourhoods around all six m/z positions show a good correlation and
- the relative intensity ratios within a peptide agree, up to a constant factor, with the ratios of a theoretical averagine model.
Let us now filter not only a single spectrum but all spectra in our data set. Data points that pass the filter form clusters in the t-m/z plane, see Fig. (e).
Each cluster corresponds to the mono-isotopic mass trace of the lightest peptide of a SILAC pattern. We now use hierarchical clustering methods to assign each data point to a specific cluster.
The optimum number of clusters is determined by maximizing the silhouette width of the partitioning.
Each data point in a cluster corresponds to three pairs of intensities (at [m/z, m/z+3], [m/z+0.5, m/z+3.5] and [m/z+1, m/z+4]).
A plot of all intensity pairs in a cluster shows a clear linear correlation, see Fig. (f).
Using linear regression we can determine the relative amounts of labelled and unlabelled peptides in the sample.
@image html SILACAnalyzer_algorithm.png
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FeatureFinderMultiplex.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FeatureFinderMultiplex.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFeatureFinderMultiplex :
public TOPPBase
{
private:
// input and output files
String in_;
String out_;
String out_multiplets_;
String out_blacklist_;
public:
TOPPFeatureFinderMultiplex() :
TOPPBase("FeatureFinderMultiplex", "Determination of peak ratios in LC-MS data")
{
}
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "LC-MS dataset in either centroid or profile mode");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "Output file containing the individual peptide features.", false);
setValidFormats_("out", ListUtils::create<String>("featureXML"));
registerOutputFile_("out_multiplets", "<file>", "", "Optional output file containing all detected peptide groups (i.e. peptide pairs or triplets or singlets or ..). The m/z-RT positions correspond to the lightest peptide in each group.", false, true);
setValidFormats_("out_multiplets", ListUtils::create<String>("consensusXML"));
registerOutputFile_("out_blacklist", "<file>", "", "Optional output file containing all peaks which have been associated with a peptide feature (and subsequently blacklisted).", false, true);
setValidFormats_("out_blacklist", ListUtils::create<String>("mzML"));
addEmptyLine_();
registerStringOption_("faims_merge_features", "<true/false>", "true",
"For FAIMS data with multiple compensation voltages: Merge features representing the same analyte "
"detected at different CV values into a single feature. Only features with DIFFERENT FAIMS CV values "
"are merged (same CV = different analytes). Has no effect on non-FAIMS data.", false);
setValidStrings_("faims_merge_features", {"true", "false"});
registerFullParam_(FeatureFinderMultiplexAlgorithm().getDefaults());
}
/**
* @brief process parameters of 'input/output' section
*/
void getParameters_in_out_()
{
in_ = getStringOption_("in");
out_ = getStringOption_("out");
out_multiplets_ = getStringOption_("out_multiplets");
out_blacklist_ = getStringOption_("out_blacklist");
}
/**
* @brief Write feature map to featureXML file.
*
* @param[in] filename name of featureXML file
* @param[out] map feature map for output
*/
void writeFeatureMap_(const String& filename, FeatureMap& map) const
{
FileHandler().storeFeatures(filename, map, {FileTypes::FEATUREXML});
}
/**
* @brief Write consensus map to consensusXML file.
*
* @param[in] filename name of consensusXML file
* @param[out] map consensus map for output
*/
void writeConsensusMap_(const String& filename, ConsensusMap& map) const
{
for (auto & ch : map.getColumnHeaders())
{
ch.second.filename = getStringOption_("in");
}
FileHandler().storeConsensusFeatures(filename, map, {FileTypes::CONSENSUSXML});
}
/**
* @brief Write blacklist to mzML file.
*
* @param[in] filename name of mzML file
* @param[out] blacklist blacklist for output
*/
void writeBlacklist_(const String& filename, const MSExperiment& blacklist) const
{
FileHandler().storeExperiment(filename, blacklist, {FileTypes::MZML});
}
/**
* @brief determine the number of samples
* for example n=2 for SILAC, or n=1 for simple feature detection
*
* @param[in] labels string describing the labels
*/
static size_t numberOfSamples(String labels)
{
// samples can be delimited by any kind of brackets
labels.substitute("(", "[");
labels.substitute("{", "[");
size_t n = std::count(labels.begin(), labels.end(), '[');
// if no labels are specified, we have n=1 for simple feature detection
if (n == 0)
{
n = 1;
}
return n;
}
ExitCodes main_(int, const char**) override
{
/**
* handle parameters
*/
getParameters_in_out_();
if ((out_.empty()) && (out_multiplets_.empty()))
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Strings for all output files are empty. Please specify at least one output file.");
}
/**
* load input
*/
FileHandler file;
MSExperiment exp;
// only read MS1 spectra
std::vector<int> levels;
levels.push_back(1);
file.getOptions().setMSLevels(levels);
OPENMS_LOG_DEBUG << "Loading input..." << endl;
file.loadExperiment(in_, exp, {FileTypes::MZML}, log_type_);
// Prepare algorithm parameters
Param params = getParam_();
params.remove("in");
params.remove("out");
params.remove("out_multiplets");
params.remove("out_blacklist");
params.remove("log");
params.remove("debug");
params.remove("threads");
params.remove("no_progress");
params.remove("force");
params.remove("test");
// Split by FAIMS CV (returns single NaN-keyed element for non-FAIMS data)
auto faims_groups = IMDataConverter::splitByFAIMSCV(std::move(exp));
const bool has_faims = faims_groups.size() > 1 || !std::isnan(faims_groups[0].first);
if (has_faims)
{
OPENMS_LOG_INFO << "FAIMS data detected with " << faims_groups.size() << " compensation voltage(s)." << endl;
}
FeatureMap combined_feature_map;
ConsensusMap combined_consensus_map;
MSExperiment combined_blacklist;
bool first_group = true;
// Process each FAIMS CV group (or single group for non-FAIMS data)
for (auto& [group_cv, faims_group] : faims_groups)
{
if (has_faims)
{
OPENMS_LOG_INFO << "Processing FAIMS CV group: " << group_cv << " V (" << faims_group.size() << " spectra)" << endl;
}
// Create algorithm instance for this group
FeatureFinderMultiplexAlgorithm algorithm_cv;
algorithm_cv.setParameters(params);
algorithm_cv.setLogType(this->log_type_);
algorithm_cv.run(faims_group, true);
// Annotate features with FAIMS CV (if FAIMS data) and add to combined results
FeatureMap& feature_map_cv = algorithm_cv.getFeatureMap();
for (auto& feat : feature_map_cv)
{
if (has_faims)
{
feat.setMetaValue(Constants::UserParam::FAIMS_CV, group_cv);
}
combined_feature_map.push_back(feat);
}
// Copy ColumnHeaders from first group (they're the same across all groups)
ConsensusMap& consensus_map_cv = algorithm_cv.getConsensusMap();
if (first_group)
{
combined_consensus_map.setColumnHeaders(consensus_map_cv.getColumnHeaders());
combined_consensus_map.setExperimentType(consensus_map_cv.getExperimentType());
first_group = false;
}
// Annotate consensus features with FAIMS CV
for (auto& cf : consensus_map_cv)
{
if (has_faims)
{
cf.setMetaValue(Constants::UserParam::FAIMS_CV, group_cv);
}
combined_consensus_map.push_back(cf);
}
// Combine blacklist
MSExperiment& blacklist_cv = algorithm_cv.getBlacklist();
for (auto& spec : blacklist_cv)
{
if (has_faims)
{
spec.setMetaValue(Constants::UserParam::FAIMS_CV, group_cv);
}
combined_blacklist.addSpectrum(std::move(spec));
}
}
if (has_faims)
{
OPENMS_LOG_INFO << "Combined " << combined_feature_map.size() << " features from all FAIMS CV groups." << endl;
// Optionally merge features representing the same analyte at different CV values
if (getStringOption_("faims_merge_features") == "true")
{
Size before_merge = combined_feature_map.size();
FeatureOverlapFilter::mergeFAIMSFeatures(combined_feature_map, 5.0, 0.05);
OPENMS_LOG_INFO << "FAIMS feature merge: " << before_merge << " -> " << combined_feature_map.size()
<< " features (merged " << (before_merge - combined_feature_map.size()) << ")" << endl;
}
}
// ensure unique IDs for the combined maps
combined_feature_map.ensureUniqueId();
combined_consensus_map.ensureUniqueId();
// write feature map, consensus maps and blacklist
// Note: use simple setPrimaryMSRunPath since exp was moved
if (!(out_.empty()))
{
combined_feature_map.setPrimaryMSRunPath({in_});
writeFeatureMap_(out_, combined_feature_map);
}
if (!(out_multiplets_.empty()))
{
StringList ms_run_paths(numberOfSamples(params.getValue("algorithm:labels").toString()), in_);
combined_consensus_map.setPrimaryMSRunPath(ms_run_paths);
writeConsensusMap_(out_multiplets_, combined_consensus_map);
}
if (!(out_blacklist_.empty()))
{
writeBlacklist_(out_blacklist_, combined_blacklist);
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPFeatureFinderMultiplex tool;
return tool.main(argc, argv);
}
///@endcond
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer $
// $Authors: Andreas Simon, Mathias Walzer, Matthew The $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/CONCEPT/EnumHelpers.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/PercolatorInfile.h>
#include <OpenMS/FORMAT/OSWFile.h>
#include <OpenMS/SYSTEM/File.h>
#include <QtCore/qfile.h>
#include <iostream>
#include <cmath>
#include <string>
#include <set>
//#include <typeinfo>
#include <boost/algorithm/clamp.hpp>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_PercolatorAdapter PercolatorAdapter
@brief PercolatorAdapter facilitates the input to, the call of, and the output integration of Percolator.
Percolator (http://percolator.ms/) is a tool to apply semi-supervised learning for peptide
identification from shotgun proteomics datasets.
@experimental This tool is work in progress and usage and input requirements might change.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → PercolatorAdapter →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PSMFeatureExtractor </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter </td>
</tr>
</table>
</center>
<p>Percolator is search-engine sensitive, i.e. its input features vary
depending on the search engine, and they must be prepared beforehand. If you do not want
to use the search-engine-specific features, use the generic_feature_set flag. The adapter incorporates
the score attribute of each PSM, so make sure the score you want is set as the main
score with @ref TOPP_IDScoreSwitcher. Be aware that you may
experience a performance loss compared to using the search-engine-specific features.
You can also perform protein inference with Percolator by activating the protein-level FDR parameter;
in that case you additionally need to set the enzyme setting.
We only read the q-value for protein groups, since Percolator has a more elaborate FDR estimation.
For proteins, we add the q-value as the main score and the PEP as a meta value.
For PSMs, you can choose the main score. Peptide-level FDRs cannot be parsed and used yet.</p>
Multithreading: The threads parameter is passed on to Percolator.
Note: By default, a minimum of 3 threads is used (Percolator's default) even if the number of threads
is set to e.g. 1, for backwards-compatibility reasons. You can still force the use of fewer than 3 threads
by setting the force flag.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_PercolatorAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_PercolatorAdapter.html
Percolator is written by Lukas Käll (http://per-colator.com/ Copyright Lukas Käll <lukas.kall@scilifelab.se>)
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class PercolatorAdapter :
public TOPPBase
{
public:
PercolatorAdapter() :
TOPPBase("PercolatorAdapter", "Facilitate input to Percolator and reintegrate.", true)
{
}
protected:
struct PercolatorResult
{
String PSMId;
double score;
double qvalue;
double posterior_error_prob;
String peptide;
char preAA;
char postAA;
StringList proteinIds;
PercolatorResult(const String& pid, const double s, const double q, const String& p, const char pre, const char pos, const StringList& pl):
PSMId (pid),
score (s),
qvalue (q),
posterior_error_prob (0.0),
peptide (p),
preAA (pre),
postAA (pos),
proteinIds (pl)
{
}
explicit PercolatorResult(StringList& row):
proteinIds()
{
// peptide sequence
StringList pep;
std::size_t left_dot = row[4].find_first_of('.');
std::size_t right_dot = row[4].find_last_of('.');
OPENMS_PRECONDITION(left_dot < right_dot, "Peptide sequence encoding must have dot notation (e.g., A.PEPTIDER.C).")
// retrieve pre and post AA, e.g., A and C in "A.PEPTIDE.C" or ".PEPTIDE."
preAA = (left_dot == 0 || row[4][left_dot - 1] == '-') ? '[' : row[4][left_dot - 1]; // const char PeptideEvidence::N_TERMINAL_AA = '[';
postAA = (right_dot + 1 == row[4].size() || row[4][right_dot + 1] == '-') ? ']' : row[4][right_dot + 1]; // const char PeptideEvidence::C_TERMINAL_AA = ']';
// retrieve sequence between dots, e.g., PEPTIDE
peptide = row[4].substr(left_dot + 1, (right_dot - 1) - (left_dot + 1) + 1);
// SVM-score
score = row[1].toDouble();
// q-Value
qvalue = row[2].toDouble();
// PEP
posterior_error_prob = row[3].toDouble();
// scannr. as written in preparePIN
PSMId = row[0];
proteinIds = vector<String>(row.begin()+5,row.end());
}
bool operator!=(const PercolatorResult& rhs) const
{
if (PSMId != rhs.PSMId || score != rhs.score || qvalue != rhs.qvalue ||
posterior_error_prob != rhs.posterior_error_prob || peptide != rhs.peptide ||
proteinIds != rhs.proteinIds)
{
return true;
}
return false;
}
bool operator==(const PercolatorResult& rhs) const
{
return !(operator !=(rhs));
}
};
struct PercolatorProteinResult
{
String protein_accession;
double qvalue;
double posterior_error_prob;
PercolatorProteinResult(const String& pid, const double q, const double pep):
protein_accession (pid),
qvalue (q),
posterior_error_prob (pep)
{
}
bool operator!=(const PercolatorProteinResult& rhs) const
{
if (protein_accession != rhs.protein_accession || qvalue != rhs.qvalue ||
posterior_error_prob != rhs.posterior_error_prob)
{
return true;
}
return false;
}
bool operator==(const PercolatorProteinResult& rhs) const
{
return !(operator !=(rhs));
}
};
void registerOptionsAndFlags_() override
{
static const bool is_required(true);
static const bool is_advanced_option(true);
static const bool force_openms_format(true);
registerInputFileList_("in", "<files>", StringList(), "Input file(s)", !is_required);
setValidFormats_("in", ListUtils::create<String>("mzid,idXML"));
registerInputFileList_("in_decoy", "<files>", StringList(), "Input decoy file(s) in case of separate searches", !is_required);
setValidFormats_("in_decoy", ListUtils::create<String>("mzid,idXML"));
registerInputFile_("in_osw", "<file>", "", "Input file in OSW format", !is_required);
setValidFormats_("in_osw", ListUtils::create<String>("OSW"));
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("idXML,mzid,osw"));
registerOutputFile_("out_pin", "<file>", "", "Write pin file (e.g., for debugging)", !is_required, is_advanced_option);
setValidFormats_("out_pin", ListUtils::create<String>("tsv"), !force_openms_format);
registerOutputFile_("out_pout_target", "<file>", "", "Write pout file (e.g., for debugging)", !is_required, is_advanced_option);
setValidFormats_("out_pout_target", ListUtils::create<String>("tab"), !force_openms_format);
registerOutputFile_("out_pout_decoy", "<file>", "", "Write pout file (e.g., for debugging)", !is_required, is_advanced_option);
setValidFormats_("out_pout_decoy", ListUtils::create<String>("tab"), !force_openms_format);
registerOutputFile_("out_pout_target_proteins", "<file>", "", "Write pout file (e.g., for debugging)", !is_required, is_advanced_option);
setValidFormats_("out_pout_target_proteins", ListUtils::create<String>("tab"), !force_openms_format);
registerOutputFile_("out_pout_decoy_proteins", "<file>", "", "Write pout file (e.g., for debugging)", !is_required, is_advanced_option);
setValidFormats_("out_pout_decoy_proteins", ListUtils::create<String>("tab"), !force_openms_format);
registerStringOption_("out_type", "<type>", "", "Output file type -- default: determined from file extension or content.", false);
setValidStrings_("out_type", ListUtils::create<String>("mzid,idXML,osw"));
String enzs = "no_enzyme,elastase,pepsin,proteinasek,thermolysin,chymotrypsin,lys-n,lys-c,arg-c,asp-n,glu-c,trypsin,trypsinp";
registerStringOption_("enzyme", "<enzyme>", "trypsin", "Type of enzyme: "+enzs , !is_required);
setValidStrings_("enzyme", ListUtils::create<String>(enzs));
registerInputFile_("percolator_executable", "<executable>",
// choose the default value according to the platform where it will be executed
#ifdef OPENMS_WINDOWSPLATFORM
"percolator.exe",
#else
"percolator",
#endif
"The Percolator executable. Provide a full or relative path, or make sure it can be found in your PATH environment.", is_required, !is_advanced_option, {"is_executable"}
);
registerFlag_("peptide_level_fdrs", "Calculate peptide-level FDRs instead of PSM-level FDRs.");
registerFlag_("protein_level_fdrs", "Use the picked protein-level FDR to infer protein probabilities. Use the -fasta option and -decoy_pattern to set the Fasta file and decoy pattern.");
registerStringOption_("osw_level", "<osw_level>", "ms2", "OSW: the data level selected for scoring.", !is_required);
setValidStrings_("osw_level", StringList(OSWFile::names_of_oswlevel.begin(), OSWFile::names_of_oswlevel.end()));
registerStringOption_("score_type", "<type>", "q-value", "Type of the peptide main score", false);
setValidStrings_("score_type", ListUtils::create<String>("q-value,pep,svm"));
//Advanced parameters
registerFlag_("generic_feature_set", "Use only generic (i.e. not search engine specific) features. Generating search engine specific features for common search engines by PSMFeatureExtractor will typically boost the identification rate significantly.", is_advanced_option);
registerIntOption_("subset_max_train", "<number>", 0, "Only train an SVM on a subset of <x> PSMs, and use the resulting score vector to evaluate the other PSMs. Recommended when analyzing huge numbers (>1 million) of PSMs. When set to 0, all PSMs are used for training as normal.", !is_required, is_advanced_option);
registerDoubleOption_("cpos", "<value>", 0.0, "Cpos, penalty for mistakes made on positive examples. Set by cross validation if not specified.", !is_required, is_advanced_option);
registerDoubleOption_("cneg", "<value>", 0.0, "Cneg, penalty for mistakes made on negative examples. Set by cross validation if not specified.", !is_required, is_advanced_option);
registerDoubleOption_("testFDR", "<value>", 0.01, "False discovery rate threshold for evaluating best cross validation result and the reported end result.", !is_required, is_advanced_option);
registerDoubleOption_("trainFDR", "<value>", 0.01, "False discovery rate threshold to define positive examples in training. Set to testFDR if 0.", !is_required, is_advanced_option);
registerIntOption_("maxiter", "<number>", 10, "Maximal number of iterations", !is_required, is_advanced_option);
registerIntOption_("nested_xval_bins", "<number>", 1, "Number of nested cross-validation bins in the 3 splits.", !is_required, is_advanced_option);
registerFlag_("quick_validation", "Quicker execution by reduced internal cross-validation.", is_advanced_option);
registerOutputFile_("weights", "<file>", "", "Output final weights to the given file", !is_required, is_advanced_option);
setValidFormats_("weights", ListUtils::create<String>("tsv"), !force_openms_format);
registerInputFile_("init_weights", "<file>", "", "Read initial weights to the given file", !is_required, is_advanced_option);
setValidFormats_("init_weights", ListUtils::create<String>("tsv"), !force_openms_format);
registerFlag_("static", "Use static model (requires init-weights parameter to be set)", is_advanced_option);
registerStringOption_("default_direction", "<featurename>", "", "The most informative feature given as the feature name, can be negated to indicate that a lower value is better.", !is_required, is_advanced_option);
registerIntOption_("verbose", "<level>", 2, "Set verbosity of output: 0=no processing info, 5=all.", !is_required, is_advanced_option);
registerFlag_("unitnorm", "Use unit normalization [0-1] instead of standard deviation normalization", is_advanced_option);
registerFlag_("test_each_iteration", "Measure performance on test set each iteration", is_advanced_option);
registerFlag_("override", "Override error check and do not fall back on default score vector in case of suspect score vector", is_advanced_option);
registerIntOption_("seed", "<value>", 1, "Setting seed of the random number generator.", !is_required, is_advanced_option);
registerIntOption_("doc", "<value>", 0, "Include description of correct features", !is_required, is_advanced_option);
registerFlag_("klammer", "Retention time features calculated as in Klammer et al. Only available if -doc is set", is_advanced_option);
registerInputFile_("fasta", "<file>", "", "Provide the fasta file as the argument to this flag, which will be used for protein grouping based on an in-silico digest (only valid if option -protein_level_fdrs is active).", !is_required, is_advanced_option);
setValidFormats_("fasta", ListUtils::create<String>("FASTA"));
registerStringOption_("decoy_pattern", "<value>", "random", "Define the text pattern to identify the decoy proteins and/or PSMs, set this up if the label that identifies the decoys in the database is not the default (Only valid if option -protein_level_fdrs is active).", !is_required, is_advanced_option);
registerFlag_("post_processing_tdc", "Use target-decoy competition to assign q-values and PEPs.", is_advanced_option);
registerFlag_("train_best_positive", "Enforce that, for each spectrum, at most one PSM is included in the positive set during each training iteration. If the user only provides one PSM per spectrum, this filter will have no effect.", is_advanced_option);
//OSW/IPF parameters
registerDoubleOption_("ipf_max_peakgroup_pep", "<value>", 0.7, "OSW/IPF: Assess transitions only for candidate peak groups until maximum posterior error probability.", !is_required, is_advanced_option);
registerDoubleOption_("ipf_max_transition_isotope_overlap", "<value>", 0.5, "OSW/IPF: Maximum isotope overlap to consider transitions in IPF.", !is_required, is_advanced_option);
registerDoubleOption_("ipf_min_transition_sn", "<value>", 0, "OSW/IPF: Minimum log signal-to-noise level to consider transitions in IPF. Set -1 to disable this filter.", !is_required, is_advanced_option);
}
// Function adapted from Enzyme.h in Percolator converter
// TODO: adapt to OpenMS enzymes. Use existing functionality in EnzymaticDigestion.
bool isEnz_(const char& n, const char& c, string& enz)
{
if (enz == "trypsin")
{
return ((n == 'K' || n == 'R') && c != 'P') || n == '-' || c == '-';
}
else if (enz == "trypsinp")
{
return (n == 'K' || n == 'R') || n == '-' || c == '-';
}
else if (enz == "chymotrypsin")
{
return ((n == 'F' || n == 'W' || n == 'Y' || n == 'L') && c != 'P') || n == '-' || c == '-';
}
else if (enz == "thermolysin")
{
return ((c == 'A' || c == 'F' || c == 'I' || c == 'L' || c == 'M'
|| c == 'V' || (n == 'R' && c == 'G')) && n != 'D' && n != 'E') || n == '-' || c == '-';
}
else if (enz == "proteinasek")
{
return (n == 'A' || n == 'E' || n == 'F' || n == 'I' || n == 'L'
|| n == 'T' || n == 'V' || n == 'W' || n == 'Y') || n == '-' || c == '-';
}
else if (enz == "pepsin")
{
return ((c == 'F' || c == 'L' || c == 'W' || c == 'Y' || n == 'F'
|| n == 'L' || n == 'W' || n == 'Y') && n != 'R') || n == '-' || c == '-';
}
else if (enz == "elastase")
{
return ((n == 'L' || n == 'V' || n == 'A' || n == 'G') && c != 'P')
|| n == '-' || c == '-';
}
else if (enz == "lys-n")
{
return (c == 'K')
|| n == '-' || c == '-';
}
else if (enz == "lys-c")
{
return ((n == 'K') && c != 'P')
|| n == '-' || c == '-';
}
else if (enz == "arg-c")
{
return ((n == 'R') && c != 'P')
|| n == '-' || c == '-';
}
else if (enz == "asp-n")
{
return (c == 'D')
|| n == '-' || c == '-';
}
else if (enz == "glu-c")
{
return ((n == 'E') && (c != 'P'))
|| n == '-' || c == '-';
}
else
{
return true;
}
}
void readPoutAsMap_(const String& pout_file, std::map<String, PercolatorResult>& pep_map)
{
CsvFile csv_file(pout_file, '\t');
StringList row;
for (Size i = 1; i < csv_file.rowCount(); ++i)
{
csv_file.getRow(i, row);
PercolatorResult res(row);
// note: Since we create our pin file such that the SpecID (=PSMId) is composed of filename + spectrum native id,
// it is passed through Percolator unchanged and we can use it to match the results back afterwards.
String spec_ref = res.PSMId + res.peptide;
writeDebug_("PSM identifier in pout file: " + spec_ref, 10);
// retain only the best result in the unlikely case that a PSMId+peptide combination occurs multiple times
if (pep_map.find(spec_ref) == pep_map.end())
{
pep_map.insert( map<String, PercolatorResult>::value_type ( spec_ref, res ) );
}
}
}
/// We only read the q-value for protein groups since Percolator has a more elaborate FDR estimation.
/// For proteins we add q-value as main score and PEP as metavalue.
void readProteinPoutAsMapAndAddGroups_(const String& pout_protein_file, std::map<String, PercolatorProteinResult>& protein_map, ProteinIdentification& protID_to_add_grps)
{
CsvFile csv_file(pout_protein_file, '\t');
StringList row;
std::vector<ProteinIdentification::ProteinGroup>& grps = protID_to_add_grps.getIndistinguishableProteins();
for (Size i = 1; i < csv_file.rowCount(); ++i)
{
csv_file.getRow(i, row);
StringList protein_accessions;
row[0].split(",", protein_accessions);
double qvalue = row[2].toDouble();
double posterior_error_prob = row[3].toDouble();
for (const String& str : protein_accessions)
{
protein_map.insert( map<String, PercolatorProteinResult>::value_type (str, PercolatorProteinResult(str, qvalue, posterior_error_prob ) ) );
}
ProteinIdentification::ProteinGroup grp;
grp.probability = qvalue;
grp.accessions = protein_accessions;
grps.push_back(grp);
}
}
ExitCodes readInputFiles_(const StringList& in_list, PeptideIdentificationList& all_peptide_ids, vector<ProteinIdentification>& all_protein_ids, bool isDecoy, bool& found_decoys, int& min_charge, int& max_charge)
{
for (StringList::const_iterator fit = in_list.begin(); fit != in_list.end(); ++fit)
{
String file_idx(distance(in_list.begin(), fit));
PeptideIdentificationList peptide_ids;
vector<ProteinIdentification> protein_ids;
String in = *fit;
FileTypes::Type in_type = FileHandler::getType(in);
OPENMS_LOG_INFO << "Loading input file: " << in << endl;
if (in_type == FileTypes::IDXML)
{
FileHandler().loadIdentifications(in, protein_ids, peptide_ids, {FileTypes::IDXML});
}
else if (in_type == FileTypes::MZIDENTML)
{
OPENMS_LOG_WARN << "Converting from mzid: possible loss of information depending on target format." << endl;
FileHandler().loadIdentifications(in, protein_ids, peptide_ids, {FileTypes::IDXML});
}
//else caught by TOPPBase:registerInput being mandatory mzid or idXML
if (protein_ids.empty())
{
throw Exception::ElementNotFound(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"File '" + in + "' has not ProteinIDRuns.");
}
else if (protein_ids.size() > 1)
{
throw Exception::InvalidValue(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"File '" + in + "' has more than one ProteinIDRun. This is currently not correctly handled."
"Please use the merge_proteins_add_psms option if you used IDMerger. Alternatively, pass"
" all original single-run idXML inputs as list to this tool.",
"# runs: " + String(protein_ids.size()));
}
//being paranoid about the presence of target decoy denominations, which are crucial to the percolator process
size_t index = 0;
for (PeptideIdentification& pep_id : peptide_ids)
{
index++;
if (in_list.size() > 1)
{
String scan_identifier = PercolatorInfile::getScanIdentifier(pep_id, index);
scan_identifier = "file=" + file_idx + "," + scan_identifier;
pep_id.setSpectrumReference( scan_identifier);
}
for (PeptideHit& hit : pep_id.getHits())
{
if (!hit.metaValueExists("target_decoy"))
{
if (isDecoy)
{
hit.setMetaValue("target_decoy", "decoy");
found_decoys = true;
}
else
{
hit.setMetaValue("target_decoy", "target");
}
}
else if (hit.isDecoy())
{
found_decoys = true;
}
if (hit.getCharge() > max_charge)
{
max_charge = hit.getCharge();
}
if (hit.getCharge() < min_charge)
{
min_charge = hit.getCharge();
}
// TODO: set min/max scores?
}
}
//paranoia check if this comes from the same search engine! (only in the first proteinidentification of the first proteinidentifications vector vector)
if (!all_protein_ids.empty())
{
if (protein_ids.front().getSearchEngine() != all_protein_ids.front().getSearchEngine())
{
writeLogError_("Input files are not all from the same search engine: " + protein_ids.front().getSearchEngine() + " and " + all_protein_ids.front().getSearchEngine() +
". Use TOPP_PSMFeatureExtractor to merge results from different search engines if desired. Aborting!");
return INCOMPATIBLE_INPUT_DATA;
}
bool identical_extra_features = true;
ProteinIdentification::SearchParameters all_search_parameters = all_protein_ids.front().getSearchParameters();
ProteinIdentification::SearchParameters search_parameters = protein_ids.front().getSearchParameters();
if (all_search_parameters.metaValueExists("extra_features"))
{
StringList all_search_feature_list = ListUtils::create<String>(all_search_parameters.getMetaValue("extra_features").toString());
set<String> all_search_feature_set(all_search_feature_list.begin(),all_search_feature_list.end());
if (search_parameters.metaValueExists("extra_features"))
{
StringList search_feature_list = ListUtils::create<String>(search_parameters.getMetaValue("extra_features").toString());
set<String> search_feature_set(search_feature_list.begin(), search_feature_list.end());
identical_extra_features = (search_feature_set == all_search_feature_set);
}
else
{
identical_extra_features = false;
}
}
if (!identical_extra_features)
{
writeLogError_("Input files do not have the same set of extra features from TOPP_PSMFeatureExtractor. Aborting!");
return INCOMPATIBLE_INPUT_DATA;
}
if (protein_ids.front().getScoreType() != all_protein_ids.front().getScoreType())
{
OPENMS_LOG_WARN << "Warning: differing ScoreType between input files" << endl;
}
if (search_parameters.digestion_enzyme != all_search_parameters.digestion_enzyme)
{
OPENMS_LOG_WARN << "Warning: differing DigestionEnzyme between input files" << endl;
}
if (search_parameters.variable_modifications != all_search_parameters.variable_modifications)
{
OPENMS_LOG_WARN << "Warning: differing VarMods between input files" << endl;
}
if (search_parameters.fixed_modifications != all_search_parameters.fixed_modifications)
{
OPENMS_LOG_WARN << "Warning: differing FixMods between input files" << endl;
}
if (search_parameters.charges != all_search_parameters.charges)
{
OPENMS_LOG_WARN << "Warning: differing SearchCharges between input files" << endl;
}
if (search_parameters.fragment_mass_tolerance != all_search_parameters.fragment_mass_tolerance)
{
OPENMS_LOG_WARN << "Warning: differing FragTol between input files" << endl;
}
if (search_parameters.precursor_mass_tolerance != all_search_parameters.precursor_mass_tolerance)
{
OPENMS_LOG_WARN << "Warning: differing PrecTol between input files" << endl;
}
}
OPENMS_LOG_INFO << "Merging peptide ids." << endl;
all_peptide_ids.insert(all_peptide_ids.end(), peptide_ids.begin(), peptide_ids.end());
OPENMS_LOG_INFO << "Merging protein ids." << endl;
PercolatorFeatureSetHelper::mergeMULTISEProteinIds(all_protein_ids, protein_ids);
}
return EXECUTION_OK;
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// general variables and data structures used by the PercolatorAdapter
//-------------------------------------------------------------
PeptideIdentificationList all_peptide_ids;
vector<ProteinIdentification> all_protein_ids;
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
const StringList in_list = getStringList_("in");
const StringList in_decoy = getStringList_("in_decoy");
OPENMS_LOG_DEBUG << "Input files (target): " << ListUtils::concatenate(in_list, ",") << " & (decoy): " << ListUtils::concatenate(in_decoy, ",") << endl;
const String in_osw = getStringOption_("in_osw");
const OSWFile::OSWLevel osw_level = (OSWFile::OSWLevel)Helpers::indexOf(OSWFile::names_of_oswlevel, getStringOption_("osw_level"));
//output file names and types
String out = getStringOption_("out");
FileTypes::Type out_type = FileTypes::nameToType(getStringOption_("out_type"));
if (out_type == FileTypes::UNKNOWN)
{
out_type = FileHandler::getTypeByFileName(out);
}
if (out_type == FileTypes::UNKNOWN)
{
writeLogError_("Fatal error: Could not determine output file type!");
return PARSE_ERROR;
}
const String percolator_executable(getStringOption_("percolator_executable"));
if (in_list.empty() && in_osw.empty())
{
writeLogError_("Fatal error: no input file given (parameter 'in' or 'in_osw')");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (!in_list.empty() && !in_osw.empty())
{
writeLogError_("Fatal error: Provide either mzid/idXML or osw input files (parameter 'in' or 'in_osw')");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (out.empty())
{
writeLogError_("Fatal error: no output file given (parameter 'out')");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (!in_osw.empty() && out_type != FileTypes::OSW)
{
writeLogError_("Fatal error: OSW input requires OSW output.");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (!in_list.empty() && out_type == FileTypes::OSW)
{
writeLogError_("Fatal error: idXML/mzid input requires idXML/mzid output.");
printUsage_();
return ILLEGAL_PARAMETERS;
}
bool peptide_level_fdrs = getFlag_("peptide_level_fdrs");
bool protein_level_fdrs = getFlag_("protein_level_fdrs");
Int description_of_correct = getIntOption_("doc");
double ipf_max_peakgroup_pep = getDoubleOption_("ipf_max_peakgroup_pep");
double ipf_max_transition_isotope_overlap = getDoubleOption_("ipf_max_transition_isotope_overlap");
double ipf_min_transition_sn = getDoubleOption_("ipf_min_transition_sn");
//-------------------------------------------------------------
// read input
//-------------------------------------------------------------
string enz_str = getStringOption_("enzyme");
// create temp directory to store percolator in file pin.tab temporarily
File::TempDir tmp_dir(debug_level_ >= 2);
String txt_designator = File::getUniqueName();
String pin_file;
if (getStringOption_("out_pin").empty())
{
pin_file = tmp_dir.getPath() + txt_designator + "_pin.tab";
}
else
{
pin_file = getStringOption_("out_pin");
}
String pout_target_file(tmp_dir.getPath() + txt_designator + "_target_pout_psms.tab");
String pout_decoy_file(tmp_dir.getPath() + txt_designator + "_decoy_pout_psms.tab");
String pout_target_file_peptides(tmp_dir.getPath() + txt_designator + "_target_pout_peptides.tab");
String pout_decoy_file_peptides(tmp_dir.getPath() + txt_designator + "_decoy_pout_peptides.tab");
String pout_target_file_proteins(tmp_dir.getPath() + txt_designator + "_target_pout_proteins.tab");
String pout_decoy_file_proteins(tmp_dir.getPath() + txt_designator + "_decoy_pout_proteins.tab");
// prepare OSW I/O
if (out_type == FileTypes::OSW && in_osw != out)
{
// Copy input OSW to output OSW, because we want to retain all information
remove(out.c_str());
if (!out.empty())
{
std::ifstream src(in_osw.c_str(), std::ios::binary);
std::ofstream dst(out.c_str(), std::ios::binary);
dst << src.rdbuf();
}
}
// idXML or mzid input
if (out_type != FileTypes::OSW)
{
//TODO introduce min/max charge to parameters. For now take available range
int max_charge = 0;
int min_charge = 10;
bool is_decoy = false;
bool found_decoys = false;
ExitCodes read_exit = readInputFiles_(in_list, all_peptide_ids, all_protein_ids, is_decoy, found_decoys, min_charge, max_charge);
if (read_exit != EXECUTION_OK)
{
return read_exit;
}
if (!in_decoy.empty())
{
is_decoy = true;
read_exit = readInputFiles_(in_decoy, all_peptide_ids, all_protein_ids, is_decoy, found_decoys, min_charge, max_charge);
if (read_exit != EXECUTION_OK)
{
return read_exit;
}
}
OPENMS_LOG_DEBUG << "Using min/max charges of " << min_charge << "/" << max_charge << endl;
if (!found_decoys)
{
writeLogError_("No decoys found, search results discrimination impossible. Aborting!");
printUsage_();
return INCOMPATIBLE_INPUT_DATA;
}
if (all_peptide_ids.empty())
{
writeLogError_("No peptide hits found in input file. Aborting!");
printUsage_();
return INPUT_FILE_EMPTY;
}
if (all_protein_ids.empty())
{
writeLogError_("No protein hits found in input file. Aborting!");
printUsage_();
return INPUT_FILE_EMPTY;
}
//-------------------------------------------------------------
// prepare pin
//-------------------------------------------------------------
StringList feature_set;
feature_set.push_back("SpecId");
feature_set.push_back("Label");
feature_set.push_back("ScanNr");
if (description_of_correct != 0)
{
feature_set.push_back("retentiontime");
feature_set.push_back("deltamass");
}
feature_set.push_back("ExpMass");
feature_set.push_back("CalcMass");
feature_set.push_back("mass");
feature_set.push_back("peplen");
for (int i = min_charge; i <= max_charge; ++i)
{
feature_set.push_back("charge" + String(i));
}
feature_set.push_back("enzN");
feature_set.push_back("enzC");
feature_set.push_back("enzInt");
feature_set.push_back("dm");
feature_set.push_back("absdm");
ProteinIdentification::SearchParameters search_parameters = all_protein_ids.front().getSearchParameters();
if (search_parameters.metaValueExists("extra_features"))
{
StringList extra_feature_set = ListUtils::create<String>(search_parameters.getMetaValue("extra_features").toString());
feature_set.insert(feature_set.end(), extra_feature_set.begin(), extra_feature_set.end());
}
else if (getFlag_("generic_feature_set"))
{
feature_set.push_back("score");
}
else
{
writeLogError_("No search engine specific features found. Generate search engine specific features using PSMFeatureExtractor or set the -generic_feature_set flag to override. Aborting!");
printUsage_();
return INCOMPATIBLE_INPUT_DATA;
}
feature_set.push_back("Peptide");
feature_set.push_back("Proteins");
OPENMS_LOG_DEBUG << "Writing percolator input file." << endl;
PercolatorInfile::store(pin_file, all_peptide_ids, feature_set, enz_str, min_charge, max_charge);
}
// OSW input
else
{
OPENMS_LOG_DEBUG << "Writing percolator input file." << endl;
std::ofstream pin_output(pin_file);
OSWFile::readToPIN(in_osw, osw_level, pin_output, ipf_max_peakgroup_pep, ipf_max_transition_isotope_overlap, ipf_min_transition_sn);
}
QStringList arguments;
// Check all set parameters and get them into arguments StringList
{
if (peptide_level_fdrs)
{
arguments << "-r" << pout_target_file_peptides.toQString();
arguments << "-B" << pout_decoy_file_peptides.toQString();
}
else
{
arguments << "-U";
}
arguments << "-m" << pout_target_file.toQString();
arguments << "-M" << pout_decoy_file.toQString();
if (protein_level_fdrs)
{
arguments << "-l" << pout_target_file_proteins.toQString();
arguments << "-L" << pout_decoy_file_proteins.toQString();
String fasta_file = getStringOption_("fasta");
if (fasta_file.empty())
{
fasta_file = "auto";
}
arguments << "-f" << fasta_file.toQString();
arguments << "-z" << String(enz_str).toQString();
String decoy_pattern = getStringOption_("decoy_pattern");
if (decoy_pattern != "random") arguments << "-P" << decoy_pattern.toQString();
}
int cv_threads = getIntOption_("threads"); // pass-through of OpenMS thread parameter
if (cv_threads != 3) // default in percolator is 3
{
// If a lower than default value (3) is chosen the user needs to enforce it.
// This ensures that existing workflows (which implicitly used 3 threads) don't slow down
// if e.g. the OpenMS version and this adapter is updated.
if (cv_threads > 3 || getFlag_("force"))
{
arguments << "--num-threads" << String(cv_threads).toQString();
}
}
double cpos = getDoubleOption_("cpos");
double cneg = getDoubleOption_("cneg");
if (cpos != 0.0)
{
arguments << "-p" << String(cpos).toQString();
}
if (cneg != 0.0)
{
arguments << "-n" << String(cneg).toQString();
}
double train_FDR = getDoubleOption_("trainFDR");
double test_FDR = getDoubleOption_("testFDR");
if (train_FDR != 0.01)
{
arguments << "-F" << String(train_FDR).toQString();
}
if (test_FDR != 0.01)
{
arguments << "-t" << String(test_FDR).toQString();
}
Int max_iter = getIntOption_("maxiter");
if (max_iter != 10)
{
arguments << "-i" << String(max_iter).toQString();
}
Int subset_max_train = getIntOption_("subset_max_train");
if (subset_max_train > 0)
{
arguments << "-N" << String(subset_max_train).toQString();
}
if (getFlag_("quick_validation"))
{
arguments << "-x";
}
if (getFlag_("post_processing_tdc"))
{
arguments << "-Y";
}
if (getFlag_("train_best_positive"))
{
arguments << "--train-best-positive";
}
if (getFlag_("static"))
{
arguments << "--static";
}
Int nested_xval_bins = getIntOption_("nested_xval_bins");
if (nested_xval_bins > 1)
{
arguments << "--nested-xval-bins" << String(nested_xval_bins).toQString();
}
String weights_file = getStringOption_("weights");
String init_weights_file = getStringOption_("init_weights");
String default_search_direction = getStringOption_("default_direction");
if (!weights_file.empty())
{
arguments << "-w" << weights_file.toQString();
}
if (!init_weights_file.empty())
{
arguments << "-W" << init_weights_file.toQString();
}
if (!default_search_direction.empty())
{
arguments << "-V" << default_search_direction.toQString();
}
Int verbose_level = getIntOption_("verbose");
if (verbose_level != 2)
{
arguments << "-v" << String(verbose_level).toQString();
}
if (getFlag_("unitnorm"))
{
arguments << "-u";
}
if (getFlag_("test_each_iteration"))
{
arguments << "-R";
}
if (getFlag_("override"))
{
arguments << "-O";
}
Int seed = getIntOption_("seed");
if (seed != 1)
{
arguments << "-S" << String(seed).toQString();
}
if (getFlag_("klammer"))
{
arguments << "-K";
}
if (description_of_correct != 0)
{
arguments << "-D" << String(description_of_correct).toQString();
}
arguments << pin_file.toQString();
}
writeLogInfo_("Prepared percolator input.");
//-------------------------------------------------------------
// run percolator
//-------------------------------------------------------------
// Percolator execution with the executable and the arguments StringList
TOPPBase::ExitCodes exit_code = runExternalProcess_(percolator_executable.toQString(), arguments);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
//-------------------------------------------------------------
// reintegrate pout results
//-------------------------------------------------------------
// When Percolator has finished, it stores its results in the files given via the -r option (with or without -U) or -m (which does not seem to work reliably).
// WARNING: The -r option cannot be used in conjunction with -U: no peptide level statistics are calculated, redirecting PSM level statistics to the provided file instead.
map<String, PercolatorResult> pep_map;
String pout_target = getStringOption_("out_pout_target");
String pout_decoy = getStringOption_("out_pout_decoy");
String pout_target_proteins = getStringOption_("out_pout_target_proteins");
String pout_decoy_proteins = getStringOption_("out_pout_decoy_proteins");
if (peptide_level_fdrs)
{
readPoutAsMap_(pout_target_file_peptides, pep_map);
readPoutAsMap_(pout_decoy_file_peptides, pep_map);
// copy file in tmp folder to output
if (!pout_target.empty())
{
QFile::copy(pout_target_file_peptides.toQString(), pout_target.toQString());
}
if (!pout_decoy.empty())
{
QFile::copy(pout_decoy_file_peptides.toQString(), pout_decoy.toQString());
}
}
else
{
readPoutAsMap_(pout_target_file, pep_map);
readPoutAsMap_(pout_decoy_file, pep_map);
// copy file in tmp folder to output
if (!pout_target.empty())
{
QFile::copy(pout_target_file.toQString(), pout_target.toQString());
}
if (!pout_decoy.empty())
{
QFile::copy(pout_decoy_file.toQString(), pout_decoy.toQString());
}
}
map<String, PercolatorProteinResult> protein_map;
if (protein_level_fdrs)
{
readProteinPoutAsMapAndAddGroups_(pout_target_file_proteins, protein_map, all_protein_ids[0]);
readProteinPoutAsMapAndAddGroups_(pout_decoy_file_proteins, protein_map, all_protein_ids[0] );
// copy file in tmp folder to output filename
if (!pout_target_proteins.empty())
{
QFile::copy(pout_target_file_proteins.toQString(), pout_target_proteins.toQString());
}
if (!pout_decoy_proteins.empty())
{
QFile::copy(pout_decoy_file_proteins.toQString(), pout_decoy_proteins.toQString());
}
}
// idXML or mzid input
if (in_osw.empty())
{
// Add the percolator results to the peptide vector of the original input file
//size_t c_debug = 0;
size_t cnt = 0;
String run_identifier = all_protein_ids.front().getIdentifier();
const String scoreType = getStringOption_("score_type");
size_t index = 0;
for (PeptideIdentification& pep_id : all_peptide_ids)
{
String old_score_type{pep_id.getScoreType()}; // copy because we modify the score type below
index++;
pep_id.setIdentifier(run_identifier);
if (scoreType == "pep")
{
pep_id.setScoreType("Posterior Error Probability");
}
else
{
//TODO we should make a difference between peptide-level q-values and psm-level q-values!
// I am just not changing it right now, because a lot of tools currently depend on
// the score being exactly "q-value"
pep_id.setScoreType(scoreType);
}
pep_id.setHigherScoreBetter(scoreType == "svm");
String scan_identifier = PercolatorInfile::getScanIdentifier(pep_id, index);
String file_identifier = pep_id.getMetaValue("file_origin", String());
file_identifier += (String)pep_id.getMetaValue("id_merge_index", String());
//check each PeptideHit for compliance with one of the PercolatorResults (by sequence)
for (PeptideHit& hit : pep_id.getHits())
{
String peptide_sequence = hit.getSequence().toBracketString(false, true);
String psm_identifier = file_identifier + scan_identifier + peptide_sequence;
//Only for super debug
writeDebug_("PSM identifier in PeptideHit: " + psm_identifier, 10);
map<String, PercolatorResult>::iterator pr = pep_map.find(psm_identifier);
if (pr != pep_map.end())
{
hit.setMetaValue(old_score_type, hit.getScore()); // old search engine "main" score as metavalue
hit.setMetaValue("MS:1001492", pr->second.score); // svm score
hit.setMetaValue("MS:1001491", pr->second.qvalue); // percolator q value
hit.setMetaValue("MS:1001493", pr->second.posterior_error_prob); // percolator pep
if (scoreType == "q-value")
{
hit.setScore(pr->second.qvalue);
}
else if (scoreType == "pep")
{
hit.setScore(pr->second.posterior_error_prob);
}
else if (scoreType == "svm")
{
hit.setScore(pr->second.score);
}
++cnt;
}
else
{
// If the input contains multiple PSMs per spectrum, Percolator only reports the top scoring PSM.
// The remaining PSMs should be reported as not identified
writeDebug_("PSM identifier " + psm_identifier + " not found in peptide map", 10);
// Percolator's svm score is scaled such that 0.0 is the score at the chosen FDR threshold,
// with positive scores representing PSMs under the FDR threshold (i.e. identified)
// and negative scores PSMs above the FDR threshold (i.e. not identified);
// -100.0 is typically more than low enough to represent a confidently non-identified PSM.
hit.setMetaValue("MS:1001492", -100.0); // svm score
hit.setMetaValue("MS:1001491", 1.0); // percolator q value
hit.setMetaValue("MS:1001493", 1.0); // percolator pep
if (scoreType == "q-value" || scoreType == "pep")
{
hit.setScore(1.0); // set q-value or PEP to 1.0 if hit not found in results
}
else if (scoreType == "svm")
{
hit.setScore(-100.0); // set SVM score to -100.0 if hit not found in results
}
}
}
}
if (!peptide_level_fdrs)
{
OPENMS_LOG_INFO << "PSM-level FDR: All PSMs are returned by percolator. Reannotating all PSMs in input data with percolator output." << endl;
}
else
{
OPENMS_LOG_INFO << "Peptide-level FDR: Only the best PSM per Peptide is returned by percolator. Reannotating the best PSM in input data with percolator output." << endl;
}
OPENMS_LOG_INFO << "Scores of all other PSMs will be set to 1.0." << endl;
OPENMS_LOG_INFO << cnt << " suitable PeptideHits of " << all_peptide_ids.size() << " PSMs were reannotated." << endl;
// TODO: There should only be 1 ProteinIdentification element in this vector, no need for a for loop
for (ProteinIdentification& prot_id_run : all_protein_ids)
{
// it is not a real search engine but we set it so that we know that
// scores were postprocessed
prot_id_run.setSearchEngine("Percolator");
prot_id_run.setSearchEngineVersion("3.07"); // TODO: read from percolator
if (protein_level_fdrs)
{
//check each ProteinHit for compliance with one of the PercolatorProteinResults (by accession)
for (ProteinHit& protein : prot_id_run.getHits())
{
String protein_accession = protein.getAccession();
map<String, PercolatorProteinResult>::iterator pr = protein_map.find(protein_accession);
if (pr != protein_map.end())
{
protein.setMetaValue("MS:1001493", pr->second.posterior_error_prob); // percolator pep
protein.setScore(pr->second.qvalue);
//remove to mark the protein as mapped. We can safely assume that every protein
// only occurs once in Percolator output.
protein_map.erase(pr);
}
else
{
protein.setScore(1.0); // set q-value to 1.0 if hit not found in results
protein.setMetaValue("MS:1001493", 1.0); // same for percolator pep
}
}
// no need to re-check protein_level_fdrs: we are already inside that branch
prot_id_run.setInferenceEngine("Percolator");
prot_id_run.setInferenceEngineVersion("3.07");
prot_id_run.setScoreType("q-value");
prot_id_run.setHigherScoreBetter(false);
prot_id_run.sort();
}
if (!protein_map.empty()) //there remain unmapped proteins from Percolator
{
for (const auto& prot : protein_map)
{
if (prot.second.posterior_error_prob < 1.0) //actually present according to Percolator
{
OPENMS_LOG_WARN << "Warning: Protein " << prot.first << " reported by Percolator with non-zero probability was "
"not present in the input idXML. Ignoring it to keep consistency with the PeptideIndexer settings." << endl;
}
}
// filter groups that might contain these unmapped proteins so we do not get errors while writing our output.
IDFilter::updateProteinGroups(all_protein_ids[0].getIndistinguishableProteins(), all_protein_ids[0].getHits());
}
//TODO add software percolator and PercolatorAdapter
prot_id_run.setMetaValue("percolator", "PercolatorAdapter");
ProteinIdentification::SearchParameters search_parameters = prot_id_run.getSearchParameters();
search_parameters.setMetaValue("Percolator:peptide_level_fdrs", peptide_level_fdrs);
search_parameters.setMetaValue("Percolator:protein_level_fdrs", protein_level_fdrs);
search_parameters.setMetaValue("Percolator:generic_feature_set", getFlag_("generic_feature_set"));
search_parameters.setMetaValue("Percolator:testFDR", getDoubleOption_("testFDR"));
search_parameters.setMetaValue("Percolator:trainFDR", getDoubleOption_("trainFDR"));
search_parameters.setMetaValue("Percolator:maxiter", getIntOption_("maxiter"));
search_parameters.setMetaValue("Percolator:subset_max_train", getIntOption_("subset_max_train"));
search_parameters.setMetaValue("Percolator:quick_validation", getFlag_("quick_validation"));
search_parameters.setMetaValue("Percolator:static", getFlag_("static"));
search_parameters.setMetaValue("Percolator:weights", getStringOption_("weights"));
search_parameters.setMetaValue("Percolator:init_weights", getStringOption_("init_weights"));
search_parameters.setMetaValue("Percolator:default_direction", getStringOption_("default_direction"));
search_parameters.setMetaValue("Percolator:cpos", getDoubleOption_("cpos"));
search_parameters.setMetaValue("Percolator:cneg", getDoubleOption_("cneg"));
search_parameters.setMetaValue("Percolator:unitnorm", getFlag_("unitnorm"));
search_parameters.setMetaValue("Percolator:override", getFlag_("override"));
search_parameters.setMetaValue("Percolator:seed", getIntOption_("seed"));
search_parameters.setMetaValue("Percolator:doc", getIntOption_("doc"));
search_parameters.setMetaValue("Percolator:klammer", getFlag_("klammer"));
search_parameters.setMetaValue("Percolator:fasta", getStringOption_("fasta"));
search_parameters.setMetaValue("Percolator:decoy_pattern", getStringOption_("decoy_pattern"));
search_parameters.setMetaValue("Percolator:post_processing_tdc", getFlag_("post_processing_tdc"));
search_parameters.setMetaValue("Percolator:train_best_positive", getFlag_("train_best_positive"));
prot_id_run.setSearchParameters(search_parameters);
}
// Storing the PeptideHits with calculated q-value, pep and svm score
FileHandler().storeIdentifications(out, all_protein_ids, all_peptide_ids, {FileTypes::IDXML, FileTypes::MZIDENTML});
}
else
{
std::map< std::string, OSWFile::PercolatorFeature > features;
for (auto const &feat : pep_map)
{
features.emplace(std::piecewise_construct,
std::forward_as_tuple(feat.second.PSMId),
std::forward_as_tuple(feat.second.score, feat.second.qvalue, feat.second.posterior_error_prob));
}
OSWFile::writeFromPercolator(out, osw_level, features);
}
writeLogInfo_("PercolatorAdapter finished successfully!");
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
PercolatorAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// src/topp/LuciphorAdapter.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Petra Gutenbrunner, Oliver Alka $
// $Authors: Petra Gutenbrunner $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CHEMISTRY/ModificationDefinitionsSet.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/PepXMLFile.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/SYSTEM/JavaInfo.h>
#include <OpenMS/ANALYSIS/ID/IDScoreSwitcherAlgorithm.h>
#include <QProcessEnvironment>
#include <cstddef>
#include <fstream>
#include <map>
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_LuciphorAdapter LuciphorAdapter
@brief Adapter for LuciPHOr2: a tool for site localisation of generic post-translational modifications from tandem mass spectrometry data.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=2> → LuciphorAdapter →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFileConverter</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
LuciPHOr2 must be installed before this wrapper can be used. Please make sure that Java and LuciPHOr2 are working.@n
The following LuciPHOr2 version is required: luciphor2 (JAVA-based version of Luciphor) (1.2014Oct10). At the time of writing, it could be downloaded from http://luciphor2.sourceforge.net.
Input identification data for LuciPHOr2 has to be in pepXML format; this adapter converts idXML input accordingly. The input mzML file must be the same as the one used to create the pepXML input file.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_LuciphorAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_LuciphorAdapter.html
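A minimal example invocation (the file names below are placeholders; the parameter names are the ones registered by this adapter):
@code
LuciphorAdapter -in spectra.mzML -id identifications.idXML -out localized.idXML -executable luciphor2.jar
@endcode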
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
using namespace OpenMS;
using namespace std;
class LuciphorAdapter :
public TOPPBase
{
public:
LuciphorAdapter() :
TOPPBase("LuciphorAdapter", "Modification site localisation using LuciPHOr2."),
// parameter choices (the order of the values must be the same as in the LuciPHOr2 parameters!):
fragment_methods_(ListUtils::create<String>("CID,HCD")),
fragment_error_units_(ListUtils::create<String>("Da,ppm")),
score_selection_method_(ListUtils::create<String>("Peptide Prophet probability,Mascot Ion Score,-log(E-value),X!Tandem Hyperscore,Sequest Xcorr"))
{
}
protected:
struct LuciphorPSM
{
String spec_id;
int scan_nr;
int scan_idx;
int charge;
String predicted_pep;
double delta_score;
double predicted_pep_score;
double global_flr;
double local_flr;
LuciphorPSM() : scan_nr(-1), scan_idx(-1), charge(-1), delta_score(-1), predicted_pep_score(-1), global_flr(-1), local_flr(-1) {}
};
// lists of allowed parameter values:
vector<String> fragment_methods_, fragment_error_units_, score_selection_method_;
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input spectrum file");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("id", "<file>", "", "Protein/peptide identifications file");
setValidFormats_("id", ListUtils::create<String>("idXML"));
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerInputFile_("executable", "<file>", "luciphor2.jar", "LuciPHOr2 .jar file. Provide a full or relative path, or make sure it can be found in your PATH environment.", true, false, {"is_executable"});
registerStringOption_("fragment_method", "<choice>", fragment_methods_[0], "Fragmentation method", false);
setValidStrings_("fragment_method", fragment_methods_);
registerDoubleOption_("fragment_mass_tolerance", "<value>", 0.5, "Tolerance of the peaks in the fragment spectrum", false);
registerStringOption_("fragment_error_units", "<choice>", fragment_error_units_[0], "Unit of fragment mass tolerance", false);
setValidStrings_("fragment_error_units", fragment_error_units_);
registerDoubleOption_("min_mz", "<value>", 150.0, "Do not consider peaks below this value for matching fragment ions", false);
vector<String> all_mods;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
registerStringList_("target_modifications", "<mods>", ListUtils::create<String>("Phospho (S),Phospho (T),Phospho (Y)"), "List the amino acids to be searched for and their mass modifications, specified using UniMod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)'", false);
setValidStrings_("target_modifications", all_mods);
registerStringList_("neutral_losses", "<value>", ListUtils::create<String>("sty -H3PO4 -97.97690"), "List the types of neutral losses that you want to consider. The residue field is case sensitive. For example: lower case 'sty' implies that the neutral loss can only occur if the specified modification is present. Syntax: NL = <RESIDUES> -<NEUTRAL_LOSS_MOLECULAR_FORMULA> <MASS_LOST>", false);
registerDoubleOption_("decoy_mass", "<value>", 79.966331, "How much to add to an amino acid to make it a decoy", false);
setMinFloat_("decoy_mass", 1.0);
registerStringList_("decoy_neutral_losses", "<value>", ListUtils::create<String>("X -H3PO4 -97.97690"), "For handling the neutral loss from a decoy sequence. The syntax for this is identical to that of the normal neutral losses given above except that the residue is always 'X'. Syntax: DECOY_NL = X -<NEUTRAL_LOSS_MOLECULAR_FORMULA> <MASS_LOST>", false);
registerIntOption_("max_charge_state", "<num>", 5, "Do not consider PSMs with a charge state above this value", false);
setMinInt_("max_charge_state", 1);
registerIntOption_("max_peptide_length", "<num>", 40, "Restrict scoring to peptides with a length shorter than this value", false);
setMinInt_("max_peptide_length", 1);
registerIntOption_("max_num_perm", "<num>", 16384, "Maximum number of permutations a sequence can have", false);
setMinInt_("max_num_perm", 1);
registerDoubleOption_("modeling_score_threshold", "<value>", 0.95, "Minimum score a PSM needs to be considered for modeling", false);
setMinFloat_("modeling_score_threshold", 0.0);
registerDoubleOption_("scoring_threshold", "<value>", 0.0, "PSMs below this value will be discarded", false);
setMinFloat_("scoring_threshold", 0.0);
registerIntOption_("min_num_psms_model", "<num>", 50, "The minimum number of PSMs you need for any charge state in order to build a model for it", false);
setMinInt_("min_num_psms_model", 1);
registerIntOption_("num_threads", "<num>", 6, "For multi-threading, 0 = use all CPU found by JAVA", false);
setMinInt_("num_threads", 0);
registerStringOption_("run_mode", "<choice>", "0", "Determines how Luciphor will run: 0 = calculate FLR then rerun scoring without decoys (two iterations), 1 = Report Decoys: calculate FLR but don't rescore PSMs, all decoy hits will be reported", false);
setValidStrings_("run_mode", ListUtils::create<String>("0,1"));
registerDoubleOption_("rt_tolerance", "<num>", 0.01, "Set the retention time tolerance (for the mapping of identifications to spectra in case multiple search engines were used)", false);
setMinFloat_("rt_tolerance", 0.0);
registerInputFile_("java_executable", "<file>", "java", "The Java executable. Usually Java is on the system PATH. If Java is not found, use this parameter to specify the full path to Java", false, false, {"is_executable"});
registerIntOption_("java_memory", "<num>", 3500, "Maximum Java heap size (in MB)", false);
registerIntOption_("java_permgen", "<num>", 0, "Maximum Java permanent generation space (in MB); only for Java 7 and below", false, true);
}
String makeModString_(const String& mod_name)
{
const ResidueModification* mod = ModificationsDB::getInstance()->getModification(mod_name);
const String& residue = mod->getOrigin();
return String(residue + " " + mod->getDiffMonoMass());
}
ExitCodes parseParameters_(map<String, vector<String> >& config_map, const String& id, const String& in,
const String& out, const vector<String>& target_mods, String selection_method)
{
FileHandler fh;
config_map["SPECTRUM_PATH"].push_back(File::path(File::absolutePath(in)));
config_map["SPECTRUM_SUFFIX"].push_back(FileTypes::typeToName(fh.getTypeByFileName(in)));
config_map["INPUT_DATA"].push_back(id);
// LuciPHOr2 input type: 0 = pepXML (the identification format this adapter passes on)
config_map["INPUT_TYPE"].push_back("0");
config_map["ALGORITHM"].push_back(ListUtils::getIndex<String>(fragment_methods_, getStringOption_("fragment_method")));
config_map["MS2_TOL"].push_back(getDoubleOption_("fragment_mass_tolerance"));
config_map["MS2_TOL_UNITS"].push_back(ListUtils::getIndex<String>(fragment_error_units_, getStringOption_("fragment_error_units")));
config_map["MIN_MZ"].push_back(getDoubleOption_("min_mz"));
config_map["OUTPUT_FILE"].push_back(out);
config_map["DECOY_MASS"].push_back(getDoubleOption_("decoy_mass"));
config_map["MAX_CHARGE_STATE"].push_back(getIntOption_("max_charge_state"));
config_map["MAX_PEP_LEN"].push_back(getIntOption_("max_peptide_length"));
config_map["MAX_NUM_PERM"].push_back(getIntOption_("max_num_perm"));
config_map["SELECTION_METHOD"].push_back(ListUtils::getIndex<String>(score_selection_method_, selection_method));
config_map["MODELING_SCORE_THRESHOLD"].push_back(getDoubleOption_("modeling_score_threshold"));
config_map["SCORING_THRESHOLD"].push_back(getDoubleOption_("scoring_threshold"));
config_map["MIN_NUM_PSMS_MODEL"].push_back(getIntOption_("min_num_psms_model"));
config_map["NUM_THREADS"].push_back(getIntOption_("num_threads"));
config_map["RUN_MODE"].push_back(getStringOption_("run_mode"));
for (vector<String>::const_iterator it = target_mods.begin(); it != target_mods.end(); ++it)
{
config_map["TARGET_MOD"].push_back(makeModString_(*it));
}
vector<String> neutral_losses = getStringList_("neutral_losses");
for (vector<String>::const_iterator it = neutral_losses.begin(); it != neutral_losses.end(); ++it)
{
config_map["NL"].push_back(*it);
}
vector<String> dcy_neutral_losses = getStringList_("decoy_neutral_losses");
for (vector<String>::const_iterator it = dcy_neutral_losses.begin(); it != dcy_neutral_losses.end(); ++it)
{
config_map["DECOY_NL"].push_back(*it);
}
return EXECUTION_OK;
}
void writeConfigurationFile_(const String& out_path, map<String, vector<String> >& config_map)
{
ofstream output(out_path.c_str());
output << "## Input file for Luciphor2 (aka: LucXor). (part of OpenMS)\n\n";
for (std::map<String, vector<String> >::iterator it = config_map.begin(); it != config_map.end(); ++it)
{
String key = it->first;
if (!key.empty())
{
for (vector<String>::iterator it_val = it->second.begin(); it_val != it->second.end(); ++it_val)
{
output << key << " = " << *it_val << "\n";
}
}
}
//------------------------------------------------------------------
// static parameter definition
//------------------------------------------------------------------
// Generate a tab-delimited file of all the matched peaks
output << "WRITE_MATCHED_PEAKS_FILE = 0\n";
output << "MOD_PEP_REP = 0 ## 0 = show single character modifications, 1 = show TPP-formatted modifications\n";
output << "## This option can be used to help diagnose problems with Luciphor. Multi-threading is disabled in debug mode.\n";
output << "DEBUG_MODE = 0 ## 0 = default: turn off debugging\n";
output << " ## 1 = write peaks selected for modeling to disk\n";
output << " ## 2 = write the scores of all permutations for each PSM to disk\n";
output << " ## 3 = write the matched peaks for the top scoring permutation to disk\n";
output << " ## 4 = write HCD non-parametric models to disk (HCD-mode only option)\n";
}
LuciphorPSM splitSpecId_(const String& spec_id)
{
LuciphorPSM l_psm;
l_psm.spec_id = spec_id;
// pepXML spectrum IDs have the form <basename>.<start_scan>.<end_scan>.<charge>
vector<String> parts;
spec_id.split(".", parts);
l_psm.scan_nr = parts[1].toInt();
l_psm.charge = parts[3].toInt();
return l_psm;
}
ExitCodes convertTargetModification_(const vector<String>& target_mods, map<String, String>& modifications)
{
modifications.clear();
for (vector<String>::const_iterator it = target_mods.begin(); it !=target_mods.end(); ++it)
{
String mod_param_value = *it;
String mod;
vector<String> parts;
mod_param_value.split(' ', parts);
if (parts.size() != 2)
{
writeLogError_("Error: cannot parse modification '" + mod_param_value + "'");
return PARSE_ERROR;
}
else
{
mod = parts[0];
String AAs = parts[1];
// LuciPHOr2 discards C-term and N-term modifications in the sequence. The modifications must be added based on the original sequence.
if (!AAs.hasPrefix("(C-term") && !AAs.hasPrefix("(N-term"))
{
AAs.remove(')');
AAs.remove('(');
// because origin can be e.g. (STY)
for (String::iterator aa = AAs.begin(); aa != AAs.end(); ++aa)
{
modifications[*aa] = mod;
}
}
}
}
return EXECUTION_OK;
}
String parseLuciphorOutput_(const String& l_out, map<int, LuciphorPSM>& l_psms, const SpectrumLookup& lookup)
{
if (!File::exists(l_out))
{
OPENMS_LOG_ERROR << "LuciPHOr2 did not produce an output file. Set debug >= 4 for additional information." << std::endl;
throw Exception::FileNotFound(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, l_out);
}
CsvFile tsvfile(l_out, '\t');
String spec_id = "";
for (Size row_count = 1; row_count < tsvfile.rowCount(); ++row_count) // skip header line
{
vector<String> elements;
if (!tsvfile.getRow(row_count, elements))
{
// this function returns a String, so report the problem as an error message (not an ExitCodes value)
return "Could not split row " + String(row_count) + " of file '" + l_out + "'.";
}
spec_id = elements[0];
LuciphorPSM l_psm = splitSpecId_(spec_id);
l_psm.scan_idx = lookup.findByScanNumber(l_psm.scan_nr);
l_psm.predicted_pep = elements[2];
l_psm.delta_score = elements[7].toDouble();
l_psm.predicted_pep_score = elements[8].toDouble();
l_psm.global_flr = elements[10].toDouble();
l_psm.local_flr = elements[11].toDouble();
if (l_psms.count(l_psm.scan_idx) > 0)
{
return "Duplicate scan number " + String(l_psm.scan_nr) + " in LuciPHOr2 output.";
}
l_psms[l_psm.scan_idx] = l_psm;
}
return "";
}
// remove all modifications which are LuciPHOr2 target modifications,
// because for these LuciPHOr2 could predict a different position.
AASequence removeLuciphorTargetMods_(const AASequence& original_seq, const map<String, String>& target_mods_conv)
{
if (!original_seq.isModified()) {
return original_seq;
}
AASequence seq_converted = AASequence::fromString(original_seq.toUnmodifiedString());
// set C-term/N-term modification
if (original_seq.hasNTerminalModification())
{
seq_converted.setNTerminalModification(original_seq.getNTerminalModificationName());
}
if (original_seq.hasCTerminalModification())
{
seq_converted.setCTerminalModification(original_seq.getCTerminalModificationName());
}
// set all modifications, which were not changed by LuciPHOr2
for (Size i = 0; i < original_seq.size(); ++i)
{
if (original_seq.getResidue(i).isModified())
{
String mod = original_seq.getResidue(i).getModificationName();
// no target modification, modification can be set
bool found = false;
for (map<String, String>::const_iterator iter = target_mods_conv.begin(); iter != target_mods_conv.end() && !found; ++iter)
{
if (mod == iter->second)
{
found = true;
}
}
if (!found)
{
seq_converted.setModification(i, mod);
}
}
}
return seq_converted;
}
// set modifications changed by LuciPHOr2
ExitCodes setLuciphorTargetMods_(AASequence& seq, String seq_luciphor, const map<String, String>& target_mods_conv)
{
for (Size i = 0; i < seq_luciphor.length(); ++i)
{
char aa = seq_luciphor[i];
if (std::islower(aa))
{
map<String, String>::const_iterator iter = target_mods_conv.find(String(aa).toUpper());
if (iter != target_mods_conv.end())
{
if (seq.getResidue(i).isModified())
{
writeLogError_("Error: ambiguous modifications on AA '" + iter->first + "' (" + seq.getResidue(i).getModificationName() + ", " + iter->second + ")");
return PARSE_ERROR;
}
else
{
seq.setModification(i, iter->second);
}
}
}
}
return EXECUTION_OK;
}
void addScoreToMetaValues_(PeptideHit& hit, const String& score_type)
{
if (!hit.metaValueExists(score_type) && !hit.metaValueExists(score_type + "_score"))
{
if (score_type.hasSubstring("score"))
{
hit.setMetaValue(score_type, hit.getScore());
}
else
{
hit.setMetaValue(score_type + "_score", hit.getScore());
}
}
}
String getSelectionMethod_(const PeptideIdentification& pep_id, String search_engine)
{
String selection_method = "";
if (pep_id.getScoreType() == "Posterior Error Probability" || pep_id.getScoreType() == "pep" || search_engine == "Percolator")
{
selection_method = score_selection_method_[0];
}
else if (search_engine == "Mascot")
{
selection_method = score_selection_method_[1];
}
else if (search_engine == "XTandem")
{
selection_method = score_selection_method_[3];
}
else
{
String msg = "SELECTION_METHOD parameter could not be set. Only Mascot, X! Tandem, or Posterior Error Probability score types are supported.";
throw Exception::RequiredParameterNotGiven(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, msg);
}
return selection_method;
}
ExitCodes main_(int, const char**) override
{
String java_executable = getStringOption_("java_executable");
if (!getFlag_("force"))
{
if (!JavaInfo::canRun(java_executable))
{
writeLogError_("Fatal error: Java is needed to run LuciPHOr2!");
return EXTERNAL_PROGRAM_ERROR;
}
}
else
{
writeLogWarn_("The installation of Java was not checked.");
}
//tmp_dir
File::TempDir tmp_dir(debug_level_ >= 2);
// create a temporary config file for LuciPHOr2 parameters
String conf_file = tmp_dir.getPath() + "luciphor2_input_template.txt";
String id = getStringOption_("id");
String in = getStringOption_("in");
String out = getStringOption_("out");
double rt_tolerance = getDoubleOption_("rt_tolerance");
FileHandler fh;
FileTypes::Type in_type = fh.getType(id);
PeptideIdentificationList pep_ids;
vector<ProteinIdentification> prot_ids;
PeakMap exp;
PeakFileOptions options;
options.clearMSLevels();
options.addMSLevel(2);
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
exp.sortSpectra(true);
// convert idXML input to pepXML if necessary
if (in_type == FileTypes::IDXML)
{
FileHandler().loadIdentifications(id, prot_ids, pep_ids, {FileTypes::IDXML});
if (!pep_ids.empty())
{
IDFilter::keepNBestHits(pep_ids, 1); // LuciPHOR2 only calculates the best hit
// Switch to PEP score type similar to BayesianProteinInferenceAlgorithm
IDScoreSwitcherAlgorithm switcher;
Size counter(0);
try
{
switcher.switchToGeneralScoreType(pep_ids, IDScoreSwitcherAlgorithm::ScoreType::PEP, counter);
}
catch (OpenMS::Exception::MissingInformation& /*e*/)
{
OPENMS_LOG_WARN << "Warning: Could not switch to PEP score type. Continuing with current score type." << std::endl;
}
}
else
{
OPENMS_LOG_WARN << "No PeptideIdentifications found in the idXML file. Please check your previous steps.\n";
}
// create a temporary pepXML file for LuciPHOR2 input
String id_file_name = FileHandler::swapExtension(File::basename(id), FileTypes::PEPXML);
id = tmp_dir.getPath() + id_file_name;
PepXMLFile().store(id, prot_ids, pep_ids, in, "", false, rt_tolerance);
}
else
{
writeLogError_("Error: Unknown input file type given. Aborting!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
vector<String> target_mods = getStringList_("target_modifications");
if (target_mods.empty())
{
writeLogError_("Error: No target modifications given.");
return ILLEGAL_PARAMETERS;
}
// initialize map
map<String, vector<String> > config_map;
String selection_method = getSelectionMethod_(pep_ids[0], prot_ids.begin()->getSearchEngine());
ExitCodes ret = parseParameters_(config_map, id, in, out, target_mods, selection_method);
if (ret != EXECUTION_OK)
{
return ret;
}
writeConfigurationFile_(conf_file, config_map);
// memory for JVM
QString java_memory = "-Xmx" + QString::number(getIntOption_("java_memory")) + "m";
int java_permgen = getIntOption_("java_permgen");
QString executable = getStringOption_("executable").toQString();
QStringList process_params; // the actual process is Java, not LuciPHOr2!
process_params << java_memory;
if (java_permgen > 0)
{
process_params << "-XX:MaxPermSize=" + QString::number(java_permgen);
}
process_params << "-jar" << executable << conf_file.toQString();
//-------------------------------------------------------------
// LuciPHOr2
//-------------------------------------------------------------
TOPPBase::ExitCodes exit_code = runExternalProcess_(java_executable.toQString(), process_params);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
SpectrumLookup lookup;
lookup.rt_tolerance = rt_tolerance;
lookup.readSpectra(exp.getSpectra());
map<int, LuciphorPSM> l_psms;
ProteinIdentification::SearchParameters search_params;
String error = parseLuciphorOutput_(out, l_psms, lookup);
if (!error.empty())
{
error = "Error: LuciPHOr2 output is not correctly formatted. " + error;
writeLogError_(error);
return PARSE_ERROR;
}
//-------------------------------------------------------------
// writing output - merge LuciPHOr2 result to idXML
//-------------------------------------------------------------
PeptideIdentificationList pep_out;
map<String, String> target_mods_conv;
ret = convertTargetModification_(target_mods, target_mods_conv);
if (ret != EXECUTION_OK)
{
return ret;
}
for (PeptideIdentification& pep : pep_ids)
{
Size scan_idx;
const String& ID_native_ids = pep.getSpectrumReference();
try
{
scan_idx = lookup.findByNativeID(ID_native_ids);
}
catch (Exception::ElementNotFound&)
{
// fall-back if native ids are missing
writeLogWarn_("Unable to map native ID '" + ID_native_ids + "' to a spectrum; falling back to RT matching.");
scan_idx = lookup.findByRT(pep.getRT());
}
vector<PeptideHit> scored_peptides;
if (!pep.getHits().empty())
{
PeptideHit scored_hit = pep.getHits()[0];
addScoreToMetaValues_(scored_hit, pep.getScoreType());
LuciphorPSM l_psm;
if (l_psms.count(scan_idx) > 0)
{
l_psm = l_psms.at(scan_idx);
AASequence original_seq = scored_hit.getSequence();
AASequence predicted_seq = removeLuciphorTargetMods_(original_seq, target_mods_conv);
ret = setLuciphorTargetMods_(predicted_seq, l_psm.predicted_pep, target_mods_conv);
if (ret != EXECUTION_OK)
{
return ret;
}
scored_hit.setMetaValue("search_engine_sequence", scored_hit.getSequence().toString());
scored_hit.setMetaValue("Luciphor_pep_score", l_psm.predicted_pep_score);
scored_hit.setMetaValue("Luciphor_global_flr", l_psm.global_flr);
scored_hit.setMetaValue("Luciphor_local_flr", l_psm.local_flr);
scored_hit.setScore(l_psm.delta_score);
scored_hit.setSequence(predicted_seq);
}
else
{
scored_hit.setScore(-1);
}
scored_peptides.push_back(scored_hit);
}
else
{
writeLogError_("Error: PeptideIdentification without hits encountered; LuciPHOr2 output cannot be matched to the idXML input.");
return PARSE_ERROR;
}
PeptideIdentification new_pep_id(pep);
new_pep_id.setScoreType("Luciphor_delta_score");
new_pep_id.setHigherScoreBetter(true);
new_pep_id.setHits(scored_peptides);
new_pep_id.sort();
pep_out.push_back(new_pep_id);
}
// store which modifications have been localized
for (auto& p : prot_ids)
{
p.getSearchParameters().setMetaValue(Constants::UserParam::LOCALIZED_MODIFICATIONS_USERPARAM, getStringList_("target_modifications"));
}
FileHandler().storeIdentifications(out, prot_ids, pep_out, {FileTypes::IDXML});
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
LuciphorAdapter tool;
return tool.main(argc, argv);
}
///@endcond
// --------------------------------------------------------------------------
// File: src/topp/ConsensusMapNormalizer.cpp (repository: OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Lars Nilse $
// $Authors: Hendrik Brauer, Oliver Kohlbacher, Johannes Junker $
// --------------------------------------------------------------------------
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/ConsensusXMLFile.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/ConsensusMapNormalizerAlgorithmThreshold.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/ConsensusMapNormalizerAlgorithmMedian.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/ConsensusMapNormalizerAlgorithmQuantile.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_ConsensusMapNormalizer ConsensusMapNormalizer
@brief Normalization of intensities in a set of maps using robust regression.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → ConsensusMapNormalizer →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_FeatureLinkerUnlabeled @n (or another feature grouping tool) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ProteinQuantifier </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_TextExporter </td>
</tr>
</table>
</CENTER>
The tool normalizes the intensities of a set of maps (consensusXML file). The following normalization algorithms are available:
- Robust regression: Maps are normalized pair-wise relative to the map with the most features. Given two maps, peptide features are classified as non-outliers (ratio_threshold < intensity ratio < 1/ratio_threshold) or outliers. From the non-outliers, an average intensity ratio is calculated and used for normalization.
- Median correction: The median of all maps is set to the median of the map with the most features.
- Quantile normalization: Performs an exact quantile normalization if the number of features is equal across all maps. Otherwise, an approximate quantile normalization using resampling is applied.
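The three strategies above differ mainly in how the per-map scale factor is derived. As a minimal illustration of the median variant (not the OpenMS implementation — ConsensusMapNormalizerAlgorithmMedian operates on ConsensusMap features, and function names here are purely illustrative), every map is multiplied by ref_median / own_median so that all medians align with a chosen reference map:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Median of a copy of the intensities (upper median for even sizes).
double median(std::vector<double> v)
{
  std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
  return v[v.size() / 2];
}

// Scale each map so its median intensity equals that of the reference map.
void scaleToReferenceMedian(std::vector<std::vector<double>>& maps, std::size_t ref)
{
  const double ref_median = median(maps[ref]); // capture before any map is modified
  for (auto& m : maps)
  {
    const double factor = ref_median / median(m);
    for (double& intensity : m) intensity *= factor;
  }
}
```

In the tool itself, the reference is the map with the most features, and 'median_shift' adds the median difference instead of multiplying by the ratio.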
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_ConsensusMapNormalizer.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_ConsensusMapNormalizer.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPConsensusMapNormalizer :
public TOPPBase
{
public:
TOPPConsensusMapNormalizer() :
TOPPBase("ConsensusMapNormalizer", "Normalizes maps of one consensusXML file")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file");
setValidFormats_("in", ListUtils::create<String>("consensusXML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("consensusXML"));
addEmptyLine_();
registerStringOption_("algorithm_type", "<type>", "robust_regression", "The normalization algorithm that is applied. 'robust_regression' scales each map by a factor computed from the ratios of non-differential background features (as determined by the ratio_threshold parameter), 'quantile' performs quantile normalization, 'median' scales all maps to the same median intensity, 'median_shift' shifts the median instead of scaling (WARNING: if you have regular, log-normal MS data, 'median_shift' is probably the wrong choice. Use only if you know what you're doing!)", false, false);
setValidStrings_("algorithm_type", ListUtils::create<String>("robust_regression,median,median_shift,quantile"));
registerDoubleOption_("ratio_threshold", "<ratio>", 0.67, "Only for 'robust_regression': the parameter is used to distinguish between non-outliers (ratio_threshold < intensity ratio < 1/ratio_threshold) and outliers.", false);
setMinFloat_("ratio_threshold", 0.001);
setMaxFloat_("ratio_threshold", 1.0);
registerStringOption_("accession_filter", "<regexp>", "", "Use only features with accessions (partially) matching this regular expression for computing the normalization factors. Useful, e.g., if you have known house keeping proteins in your samples. When this parameter is empty or the regular expression matches the empty string, all features are used (even those without an ID). No effect if quantile normalization is used.", false, true);
registerStringOption_("description_filter", "<regexp>", "", "Use only features with description (partially) matching this regular expression for computing the normalization factors. Useful, e.g., if you have known house keeping proteins in your samples. When this parameter is empty or the regular expression matches the empty string, all features are used (even those without an ID). No effect if quantile normalization is used.", false, true);
}
ExitCodes main_(int, const char **) override
{
String in = getStringOption_("in");
String out = getStringOption_("out");
String algo_type = getStringOption_("algorithm_type");
String acc_filter = getStringOption_("accession_filter");
String desc_filter = getStringOption_("description_filter");
double ratio_threshold = getDoubleOption_("ratio_threshold");
FileHandler infile;
ConsensusMap map;
infile.loadConsensusFeatures(in, map, {FileTypes::CONSENSUSXML}, log_type_);
//map normalization
if (algo_type == "robust_regression")
{
map.sortBySize();
vector<double> results = ConsensusMapNormalizerAlgorithmThreshold::computeCorrelation(map, ratio_threshold, acc_filter, desc_filter);
ConsensusMapNormalizerAlgorithmThreshold::normalizeMaps(map, results);
}
else if (algo_type == "median")
{
ConsensusMapNormalizerAlgorithmMedian::normalizeMaps(map, ConsensusMapNormalizerAlgorithmMedian::NM_SCALE, acc_filter, desc_filter);
}
else if (algo_type == "median_shift")
{
ConsensusMapNormalizerAlgorithmMedian::normalizeMaps(map, ConsensusMapNormalizerAlgorithmMedian::NM_SHIFT, acc_filter, desc_filter);
}
else if (algo_type == "quantile")
{
if (!acc_filter.empty() || !desc_filter.empty())
{
OPENMS_LOG_WARN << endl << "NOTE: Accession / description filtering is not supported in quantile normalization mode. Ignoring filters." << endl << endl;
}
ConsensusMapNormalizerAlgorithmQuantile::normalizeMaps(map);
}
else
{
cerr << "Unknown algorithm type '" << algo_type << "'." << endl;
return ILLEGAL_PARAMETERS;
}
//annotate output with data processing info and save output file
addDataProcessing_(map, getProcessingInfo_(DataProcessing::NORMALIZATION));
infile.storeConsensusFeatures(out, map, {FileTypes::CONSENSUSXML});
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPConsensusMapNormalizer tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/DeMeanderize.cpp (repository: OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Andreas Bertsch $
// --------------------------------------------------------------------------
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_DeMeanderize DeMeanderize
@brief Repairs MALDI experiments which were spotted line by line in a meandering pattern.
<CENTER>
<table>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1>
@image html DeMeanderize.png
</td>
</tr>
</table>
</CENTER>
<B>Problem Description:</B>
MALDI (Matrix-Assisted Laser Desorption/Ionization) spotting robots often apply samples to
target plates in a "meandering" or "snake-like" pattern to optimize spotting efficiency. This means:
- Row 1 is spotted from left to right: spots 1, 2, 3, ..., N
- Row 2 is spotted from right to left: spots N+1, N+2, ..., 2N (but in reverse physical order!)
- Row 3 is spotted from left to right again: spots 2N+1, 2N+2, ..., 3N
- And so on...
When the mass spectrometer reads these spots in physical order (left to right for all rows),
the spectra from alternating rows have incorrect retention time assignments, appearing in
reverse chronological order compared to when they were spotted.
<B>Solution:</B>
DeMeanderize corrects this by:
1. Identifying MS1 spectra that belong to reversed rows
2. Recalculating their pseudo-retention times (RT) to reflect the actual spotting order
3. Sorting all spectra by their corrected RT values
<B>Algorithm Details:</B>
The tool processes spectra sequentially and assigns pseudo-RT values:
- For normal rows (first, third, fifth, etc.):
@code
RT = spectrum_number * RT_distance
@endcode
- For reversed rows (second, fourth, sixth, etc.):
@code
RT = (row_start_number + (num_spots_per_row - position_in_row)) * RT_distance
@endcode
The algorithm maintains:
- A counter tracking total MS1 spectra processed
- The position within the current row (0 to num_spots_per_row-1)
- A flag indicating whether the current row should be reversed
After processing @p num_spots_per_row MS1 spectra, the algorithm switches to the next row
and toggles the reversal flag.
<B>Parameters:</B>
- @p num_spots_per_row: Number of spots in each row of the plate (default: 48)
- @p RT_distance: The pseudo-RT spacing between adjacent spots (default: 1.0)
<B>Example:</B>
For a plate with 4 spots per row and RT_distance=1.0:
@code
Original physical order: 1 2 3 4 8 7 6 5 9 10 11 12
Spotting order: 1 2 3 4 5 6 7 8 9 10 11 12
Original RTs: 1 2 3 4 5 6 7 8 9 10 11 12
Corrected RTs: 1 2 3 4 8 7 6 5 9 10 11 12
@endcode
After correction and sorting, spectra are ordered by actual spotting sequence.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_DeMeanderize.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_DeMeanderize.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPDeMeanderize :
public TOPPBase
{
public:
TOPPDeMeanderize() :
TOPPBase("DeMeanderize", "Orders the spectra of MALDI spotting plates correctly.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<mzML-file>", "", "Input experiment file, containing the wrongly sorted spectra.");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<mzML-file>", "", "Output experiment file with correctly sorted spectra.");
setValidFormats_("out", ListUtils::create<String>("mzML"));
registerIntOption_("num_spots_per_row", "<integer>", 48, "Number of spots per row, after which the next row is spotted.", false);
setMinInt_("num_spots_per_row", 1);
registerDoubleOption_("RT_distance", "<num>", 1.0, "RT distance between two adjacent spots, used to calculate the pseudo RT.", false, true);
setMinFloat_("RT_distance", 0.0);
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in(getStringOption_("in"));
String out(getStringOption_("out"));
Size num_spots_per_row(getIntOption_("num_spots_per_row"));
double RT_distance(getDoubleOption_("RT_distance"));
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
ProgressLogger pl;
pl.setLogType(log_type_);
pl.startProgress(0, exp.size(), "Assigning pseudo RTs.");
Size num_ms1(0), num_ms1_base(0), row_counter(0);
bool row_to_reverse(false);
double actual_RT(0);
for (Size i = 0; i != exp.size(); ++i)
{
pl.setProgress(i);
if (row_to_reverse)
{
actual_RT = (double)(num_ms1_base + (num_spots_per_row - row_counter)) * RT_distance;
writeDebug_("RT=" + String(actual_RT) + " (modified, row_counter=" + String(row_counter) + ")", 1);
}
else
{
actual_RT = (double)num_ms1 * RT_distance;
writeDebug_("RT=" + String(actual_RT), 1);
}
exp[i].setRT(actual_RT);
if (exp[i].getMSLevel() == 1)
{
if (++row_counter >= num_spots_per_row)
{
row_counter = 0;
row_to_reverse = !row_to_reverse;
}
++num_ms1;
if (!row_to_reverse)
{
num_ms1_base = num_ms1;
}
}
}
pl.endProgress();
// sort the spectra according to their new RT
exp.sortSpectra();
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPDeMeanderize tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/MSFraggerAdapter.cpp (repository: OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Lukas Zimmermann $
// $Authors: Lukas Zimmermann, Leon Bichmann $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <OpenMS/ANALYSIS/ID/PeptideIndexing.h>
#include <OpenMS/APPLICATIONS/SearchEngineBase.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/PepXMLFile.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ModifiedPeptideGenerator.h>
#include <OpenMS/SYSTEM/JavaInfo.h>
#include <QStringList>
#include <iostream>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MSFraggerAdapter MSFraggerAdapter
@brief Peptide Identification with MSFragger
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → MSFraggerAdapter →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any signal-/preprocessing tool @n (in mzML format)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
@em MSFragger must be installed before this adapter can be used. This adapter is fully compatible with MSFragger version 3.2,
and later versions have been tested up to version 3.5.
All MSFragger parameters (as specified in the fragger.params file) have been transcribed to parameters of this OpenMS tool.
It is not possible to provide an explicit fragger.params file to avoid redundancy with the ini file.
This adapter creates a fragger.params file prior to calling MSFragger. If the fragger.params file should be inspected, set the
-debug option to 2. MSFraggerAdapter will print the path to the working directory to standard out.
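As a rough sketch of the params-file generation step (purely illustrative — the function name, signature, and file name here are assumptions, not the adapter's actual API), serializing a key/value parameter map into MSFragger's `key = value` line format might look like:

```cpp
#include <fstream>
#include <map>
#include <string>

// Write one "key = value" line per parameter, as found in fragger.params.
// The adapter's real writer additionally handles comments and ordering.
void writeParams(const std::string& path, const std::map<std::string, std::string>& params)
{
  std::ofstream out(path);
  for (const auto& kv : params)
  {
    out << kv.first << " = " << kv.second << "\n";
  }
}
```

The generated file lives in the adapter's temporary working directory, which is printed to standard out for inspection.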
MSFragger can process multiple input files (mzML, mzXML) one after another. The number of output files specified must match
the number of input spectra files. The output file is then matched to the input file by index. The default parameters of the
adapter are the same as given by the official MSFragger manual.
Please cite:
Andy T Kong, Felipe V Leprevost, Dmitry M Avtonomov, Dattatreya Mellacheruvu & Alexey I Nesvizhskii
MSFragger: ultrafast and comprehensive peptide identification in mass spectrometry–based proteomics
Nature Methods volume 14, pages 513–520 (2017) doi:10.1038/nmeth.4256
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MSFraggerAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MSFraggerAdapter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMSFraggerAdapter final :
public SearchEngineBase
{
public:
static const String license;
static const String java_executable;
static const String java_heapmemory;
static const String executable;
static const String in;
static const String out;
static const String opt_out;
static const String database;
// tolerance
static const String precursor_mass_tolerance_lower;
static const String precursor_mass_tolerance_upper;
static const String precursor_mass_unit;
static const String precursor_true_tolerance;
static const String precursor_true_unit;
static const String fragment_mass_tolerance;
static const String fragment_mass_unit;
static const String isotope_error;
// digest
static const String search_enzyme_name;
static const String search_enzyme_cutafter;
static const String search_enzyme_nocutbefore;
static const String num_enzyme_termini;
static const String allowed_missed_cleavage;
static const String digest_min_length;
static const String digest_max_length;
static const String digest_mass_range_min;
static const String digest_mass_range_max;
// varmod
static const String clip_nterm_m;
static const String varmod_masses;
static const String varmod_syntax;
static const String varmod_enable_common;
static const String variable_modifications_unimod;
static const String not_allow_multiple_variable_mods_on_residue;
static const String max_variable_mods_per_peptide;
static const String max_variable_mods_combinations;
// spectrum
static const String minimum_peaks;
static const String use_topn_peaks;
static const String minimum_ratio;
static const String clear_mz_range_min;
static const String clear_mz_range_max;
static const String max_fragment_charge;
static const String override_charge;
static const String precursor_charge_min;
static const String precursor_charge_max;
// search
static const String track_zero_topn;
static const String zero_bin_accept_expect;
static const String zero_bin_mult_expect;
static const String add_topn_complementary;
static const String min_fragments_modeling;
static const String min_matched_fragments;
static const String output_report_topn;
static const String output_max_expect;
static const String localize_delta_mass;
// statmod
static const String add_cterm_peptide;
static const String add_nterm_peptide;
static const String add_cterm_protein;
static const String add_nterm_protein;
static const String add_G_glycine;
static const String add_A_alanine;
static const String add_S_serine;
static const String add_P_proline;
static const String add_V_valine;
static const String add_T_threonine;
static const String add_C_cysteine;
static const String add_L_leucine;
static const String add_I_isoleucine;
static const String add_N_asparagine;
static const String add_D_aspartic_acid;
static const String add_Q_glutamine;
static const String add_K_lysine;
static const String add_E_glutamic_acid;
static const String add_M_methionine;
static const String add_H_histidine;
static const String add_F_phenylalanine;
static const String add_R_arginine;
static const String add_Y_tyrosine;
static const String add_W_tryptophan;
static const String fixed_modifications_unimod;
// Log level for verbose output
static const int LOG_LEVEL_VERBOSE;
TOPPMSFraggerAdapter() :
SearchEngineBase("MSFraggerAdapter", "Peptide Identification with MSFragger.\n"
"Important note:\n"
"The Regents of the University of Michigan (\"Michigan\") grants us permission to redistribute \n"
"the MS Fragger application developed by Michigan within the OpenMS Pipeline and make available \n"
"for use on related service offerings supported by the University of Tubingen and the Center for\n"
"Integrative Bioinformatics. \n"
"Per the license agreement the use of the pipeline and associated materials is for academic \n"
"research, non-commercial or educational purposes. Any commercial use inquiries \n"
"must be directed to the University of Michigan Technology Transfer Office at \n"
"techtransfer@umich.edu. All right title and interest in MS Fragger shall remain with the \n"
"University of Michigan.\n"
"\n"
"For details, please see the supplied license file or \n"
"https://raw.githubusercontent.com/OpenMS/THIRDPARTY/master/All/MSFragger/License.txt \n"
, false,
{
{"Kong AT, Leprevost FV, Avtonomov DM, Mellacheruvu D, Nesvizhskii AI",
"MSFragger: ultrafast and comprehensive peptide identification in mass spectrometry–based proteomics",
"Nature Methods volume 14, pages 513–520 (2017)",
"doi:10.1038/nmeth.4256"}
}),
java_exe(""),
exe(""),
parameter_file_path(""),
input_file(),
output_file()
{
}
protected:
void registerOptionsAndFlags_() override
{
const StringList emptyStrings;
const std::vector< double > emptyDoubles;
const StringList validUnits = ListUtils::create<String>("Da,ppm");
const StringList zero_to_five = ListUtils::create<String>("0,1,2,3,4,5");
// License agreement
registerStringOption_(TOPPMSFraggerAdapter::license, "<license>", "", "Set to yes if you have read and agreed to the MSFragger license terms.", true, false);
setValidStrings_(TOPPMSFraggerAdapter::license, {"yes","no"});
// Java executable
registerInputFile_(TOPPMSFraggerAdapter::java_executable, "<file>", "java", "The Java executable. Usually Java is on the system PATH. If Java is not found, use this parameter to specify the full path to Java", false, false, {"skipexists"});
registerIntOption_(TOPPMSFraggerAdapter::java_heapmemory, "<num>", 3500, "Maximum Java heap size (in MB)", false);
// Handle executable
registerInputFile_(TOPPMSFraggerAdapter::executable, "<path_to_executable>", "MSFragger.jar", "Path to the MSFragger executable to use; may be empty if the executable is globally available.", true, false, {"is_executable"});
// Input file
registerInputFile_(TOPPMSFraggerAdapter::in, "<file>", "", "Input file with spectra for MSFragger");
setValidFormats_(TOPPMSFraggerAdapter::in, ListUtils::create<String>("mzML,mzXML"));
// Output file
registerOutputFile_(TOPPMSFraggerAdapter::out, "<file>", "", "MSFragger output file");
setValidFormats_(TOPPMSFraggerAdapter::out, ListUtils::create<String>("idXML"), true);
// Optional output file
registerOutputFile_(TOPPMSFraggerAdapter::opt_out, "<file>", "", "MSFragger optional output file", false);
setValidFormats_(TOPPMSFraggerAdapter::opt_out, ListUtils::create<String>("pepXML"), true);
// Path to database to search
registerInputFile_(TOPPMSFraggerAdapter::database, "<path_to_fasta>", "", "Protein FASTA database file path", true, false);
setValidFormats_(TOPPMSFraggerAdapter::database, ListUtils::create<String>("FASTA,fasta,fa,fas"), false);
// TOPP tolerance
registerTOPPSubsection_("tolerance", "Search Tolerances");
// Precursor mass tolerance and unit
_registerNonNegativeDouble(TOPPMSFraggerAdapter::precursor_mass_tolerance_lower, "<precursor_mass_tolerance>", 20.0, "Lower precursor mass tolerance", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::precursor_mass_tolerance_upper, "<precursor_mass_tolerance>", 20.0, "Upper precursor mass tolerance", false, false);
registerStringOption_(TOPPMSFraggerAdapter::precursor_mass_unit, "<precursor_mass_unit>", "ppm", "Unit of precursor mass tolerance", false, false);
setValidStrings_(TOPPMSFraggerAdapter::precursor_mass_unit, validUnits);
// Precursor true tolerance
_registerNonNegativeDouble(TOPPMSFraggerAdapter::precursor_true_tolerance, "<precursor_true_tolerance>", 0.0, "True precursor mass tolerance (window is +/- this value). Used as a tie breaker for results (in spectrally ambiguous cases) and for zero-bin boosting in open searches (0 disables these features). This option is STRONGLY recommended for open searches.", false, false);
registerStringOption_(TOPPMSFraggerAdapter::precursor_true_unit, "<precursor_true_unit>", "ppm", "Unit of precursor true tolerance", false, false);
setValidStrings_(TOPPMSFraggerAdapter::precursor_true_unit, validUnits);
// Fragment mass tolerance
_registerNonNegativeDouble(TOPPMSFraggerAdapter::fragment_mass_tolerance, "<fragment_mass_tolerance>", 20.0, "Fragment mass tolerance (window is +/- this value)", false, false);
registerStringOption_(TOPPMSFraggerAdapter::fragment_mass_unit, "<fragment_mass_unit>", "ppm", "Unit of fragment mass tolerance", false, false);
setValidStrings_(TOPPMSFraggerAdapter::fragment_mass_unit, validUnits);
// Isotope error
registerStringOption_(TOPPMSFraggerAdapter::isotope_error, "<isotope_error>", "0", "Isotope correction for MS/MS events triggered on isotopic peaks. Should be set to 0 (disabled) for open searches, or 0/1/2 for correction of narrow-window searches. Shifts the precursor mass window by multiples of this value times the C13-C12 mass difference.", false, false);
setValidStrings_(TOPPMSFraggerAdapter::isotope_error, ListUtils::create<String>("0,1,2,0/1/2"));
// TOPP digest
registerTOPPSubsection_("digest", "In-Silico Digestion Parameters");
// Enzyme
StringList enzyme_names;
ProteaseDB::getInstance()->getAllNames(enzyme_names);
registerStringOption_(TOPPMSFraggerAdapter::search_enzyme_name, "<search_enzyme_name>", "Trypsin", "Name of the enzyme to be written to the pepXML file", false, false);
setValidStrings_(TOPPMSFraggerAdapter::search_enzyme_name, enzyme_names);
// Cut after
registerStringOption_(TOPPMSFraggerAdapter::search_enzyme_cutafter, "<search_enzyme_cutafter>", "KR", "Residues after which the enzyme cuts (specified as a string of amino acids)", false , false);
// No cut before
registerStringOption_(TOPPMSFraggerAdapter::search_enzyme_nocutbefore, "<search_enzyme_nocutbefore>", "P", "Residues that the enzyme will not cut before", false, false);
// Number of enzyme termini
registerStringOption_(TOPPMSFraggerAdapter::num_enzyme_termini, "<num_enzyme_termini>", "fully", "Number of enzyme termini (non-enzymatic (0), semi (1), fully (2))", false, false);
setValidStrings_(TOPPMSFraggerAdapter::num_enzyme_termini, ListUtils::create<String>("non-enzymatic,semi,fully"));
// Allowed missed cleavages
registerStringOption_(TOPPMSFraggerAdapter::allowed_missed_cleavage, "<allowed_missed_cleavage>", "2", "Allowed number of missed cleavages", false, false);
setValidStrings_(TOPPMSFraggerAdapter::allowed_missed_cleavage, zero_to_five); // 5 is the max. allowed value according to MSFragger
// Digest min length
_registerNonNegativeInt(TOPPMSFraggerAdapter::digest_min_length, "<digest_min_length>", 7, "Minimum length of peptides to be generated during in-silico digestion", false, false);
// Digest max length
_registerNonNegativeInt(TOPPMSFraggerAdapter::digest_max_length, "<digest_max_length>", 64, "Maximum length of peptides to be generated during in-silico digestion", false, false);
// Digest min mass range
_registerNonNegativeDouble(TOPPMSFraggerAdapter::digest_mass_range_min, "<digest_mass_range_min>", 500.0, "Min mass of peptides to be generated (Da)", false, false);
// Digest max mass range
_registerNonNegativeDouble(TOPPMSFraggerAdapter::digest_mass_range_max, "<digest_mass_range_max>", 5000.0, "Max mass of peptides to be generated (Da)", false, false);
// TOPP varmod
registerTOPPSubsection_("varmod", "Variable Modification Parameters");
// Clip nterm M
registerFlag_(TOPPMSFraggerAdapter::clip_nterm_m, "Specifies the trimming of a protein N-terminal methionine as a variable modification", false);
// Modifications
registerDoubleList_(TOPPMSFraggerAdapter::varmod_masses, "<varmod1_mass .. varmod7_mass>", emptyDoubles , "Masses for variable modifications", false, false);
registerStringList_(TOPPMSFraggerAdapter::varmod_syntax, "<varmod1_syntax .. varmod7_syntax>", emptyStrings, "Syntax Strings for variable modifications", false, false);
registerStringList_(TOPPMSFraggerAdapter::variable_modifications_unimod, "<varmod1_unimod .. varmod7_unimod>", emptyStrings, "Variable modifications in unimod syntax; these are added to the mass+syntax varmod list", false, false);
registerFlag_(TOPPMSFraggerAdapter::varmod_enable_common, "Enable common variable modifications (15.9949 M and 42.0106 [^)", false);
// allow_multiple_variable_mods_on_residue
registerFlag_(TOPPMSFraggerAdapter::not_allow_multiple_variable_mods_on_residue, "Do not allow any one amino acid to be modified by multiple variable modifications", false);
// Max variable mods per mod
registerStringOption_(TOPPMSFraggerAdapter::max_variable_mods_per_peptide, "<max_variable_mods_per_peptide>", "2", "Maximum total number of variable modifications per peptide", false, false);
setValidStrings_(TOPPMSFraggerAdapter::max_variable_mods_per_peptide, zero_to_five);
// Max variable mods combinations
_registerNonNegativeInt(TOPPMSFraggerAdapter::max_variable_mods_combinations, "<max_variable_mods_combinations>", 5000, "Maximum allowed number of variably modified peptides generated from each peptide sequence (maximum of 65534). If more than the maximum are generated, only the unmodified peptide is considered", false, false);
setMaxInt_(TOPPMSFraggerAdapter::max_variable_mods_combinations, 65534);
// TOPP spectrum
registerTOPPSubsection_("spectrum", "Spectrum Processing Parameters");
_registerNonNegativeInt(TOPPMSFraggerAdapter::minimum_peaks, "<minimum_peaks>", 10, "Minimum number of peaks in experimental spectrum for matching", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::use_topn_peaks, "<use_topN_peaks>", 50, "Pre-process experimental spectrum to only use top N peaks", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::minimum_ratio, "<minimum_ratio>", 0.0, "Filters out all peaks in experimental spectrum less intense than this multiple of the base peak intensity", false, false);
setMaxFloat_(TOPPMSFraggerAdapter::minimum_ratio, 1.0);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::clear_mz_range_min, "<clear_mz_range_min>", 0.0, "Removes peaks in this m/z range prior to matching (minimum value). Useful for iTRAQ/TMT experiments (e.g. 0.0 to 150.0)", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::clear_mz_range_max, "<clear_mz_range_max>", 0.0, "Removes peaks in this m/z range prior to matching (maximum value). Useful for iTRAQ/TMT experiments (e.g. 0.0 to 150.0)", false, false);
registerStringOption_(TOPPMSFraggerAdapter::max_fragment_charge, "<max_fragment_charge>", "2", "Maximum charge state for theoretical fragments to match", false, false);
setValidStrings_(TOPPMSFraggerAdapter::max_fragment_charge, ListUtils::create<String>("1,2,3,4"));
registerFlag_(TOPPMSFraggerAdapter::override_charge, "Ignores precursor charge and uses charge state specified in precursor_charge range (parameters: spectrum:precursor_charge_min and spectrum:precursor_charge_max)" , false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::precursor_charge_min, "<precursor_charge_min>", 1, "Min charge of the precursor charge range to consider. If specified, spectrum:override_charge must also be set", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::precursor_charge_max, "<precursor_charge_max>", 4, "Max charge of the precursor charge range to consider. If specified, spectrum:override_charge must also be set", false, false);
registerTOPPSubsection_("search", "Open Search Features");
_registerNonNegativeInt(TOPPMSFraggerAdapter::track_zero_topn, "<track_zero_topn>", 0, "Track top N unmodified peptide results separately from main results internally for boosting features. Should be set to a number greater than search:output_report_topN if zero bin boosting is desired", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::zero_bin_accept_expect, "<zero_bin_accept_expect>", 0.0, "Ranks a zero-bin hit above all non-zero-bin hits if its expectation value is less than this value", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::zero_bin_mult_expect, "<zero_bin_mult_expect>", 1.0, "Multiplies expect value of PSMs in the zero-bin during results ordering (set to less than 1 for boosting)", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::add_topn_complementary, "<add_topn_complementary>", 0, "Inserts complementary ions corresponding to the top N most intense fragments in each experimental spectrum. Useful for recovery of modified peptides near C-terminus in open search. 0 disables this option", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::min_fragments_modeling, "<min_fragments_modeling>", 3, "Minimum number of matched peaks in PSM for inclusion in statistical modeling", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::min_matched_fragments, "<min_matched_fragments>", 4, "Minimum number of matched peaks for PSM to be reported. MSFragger recommends a minimum of 4 for narrow window searching and 6 for open searches", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::output_report_topn, "<output_report_topn>", 1, "Reports top N PSMs per input spectrum", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::output_max_expect, "<output_max_expect>", 50.0, "Suppresses reporting of PSM if top hit has expectation greater than this threshold", false, false);
_registerNonNegativeInt(TOPPMSFraggerAdapter::localize_delta_mass, "<localize_delta_mass>", 0, "Include fragment ions mass-shifted by unknown modifications (recommended for open and mass offset searches) (0 for OFF, 1 for ON)", false, false);
registerTOPPSubsection_("statmod", "Static Modification Parameters");
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_cterm_peptide, "<add_cterm_peptide>", 0.0, "Statically add mass in Da to C-terminal of peptide", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_nterm_peptide, "<add_nterm_peptide>", 0.0, "Statically add mass in Da to N-terminal of peptide", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_cterm_protein, "<add_cterm_protein>", 0.0, "Statically add mass in Da to C-terminal of protein", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_nterm_protein, "<add_nterm_protein>", 0.0, "Statically add mass in Da to N-terminal of protein", false, false);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_G_glycine, "<add_G_glycine>", 0.0, "Statically add mass to glycine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_A_alanine, "<add_A_alanine>", 0.0, "Statically add mass to alanine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_S_serine, "<add_S_serine>", 0.0, "Statically add mass to serine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_P_proline, "<add_P_proline>", 0.0, "Statically add mass to proline", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_V_valine, "<add_V_valine>", 0.0, "Statically add mass to valine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_T_threonine, "<add_T_threonine>", 0.0, "Statically add mass to threonine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_C_cysteine, "<add_C_cysteine>", 57.021464, "Statically add mass to cysteine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_L_leucine, "<add_L_leucine>", 0.0, "Statically add mass to leucine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_I_isoleucine, "<add_I_isoleucine>", 0.0, "Statically add mass to isoleucine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_N_asparagine, "<add_N_asparagine>", 0.0, "Statically add mass to asparagine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_D_aspartic_acid, "<add_D_aspartic_acid>", 0.0, "Statically add mass to aspartic_acid", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_Q_glutamine, "<add_Q_glutamine>", 0.0, "Statically add mass to glutamine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_K_lysine, "<add_K_lysine>", 0.0, "Statically add mass to lysine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_E_glutamic_acid, "<add_E_glutamic_acid>", 0.0, "Statically add mass to glutamic_acid", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_M_methionine, "<add_M_methionine>", 0.0, "Statically add mass to methionine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_H_histidine, "<add_H_histidine>", 0.0, "Statically add mass to histidine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_F_phenylalanine, "<add_F_phenylalanine>", 0.0, "Statically add mass to phenylalanine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_R_arginine, "<add_R_arginine>", 0.0, "Statically add mass to arginine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_Y_tyrosine, "<add_Y_tyrosine>", 0.0, "Statically add mass to tyrosine", false, true);
_registerNonNegativeDouble(TOPPMSFraggerAdapter::add_W_tryptophan, "<add_W_tryptophan>", 0.0, "Statically add mass to tryptophan", false, true);
registerStringList_(TOPPMSFraggerAdapter::fixed_modifications_unimod, "<fixedmod1_unimod .. fixedmod7_unimod>", emptyStrings, "Fixed modifications in unimod syntax if the specific mass is unknown, e.g. Carbamidomethylation (C). When multiple different masses are given for one amino acid, this parameter (unimod) takes priority.", false, false);
// register peptide indexing parameter (with defaults for this search engine) TODO: check if search engine defaults are needed
registerPeptideIndexingParameter_(PeptideIndexing().getParameters());
}
ExitCodes main_(int, const char**) override
{
if (getStringOption_(TOPPMSFraggerAdapter::license) != "yes" && !getFlag_("test"))
{
_fatalError("MSFragger may only be used upon acceptance of license terms.");
}
File::TempDir working_directory(debug_level_ >= 2);
try
{
// java executable
this->java_exe = this->getStringOption_(TOPPMSFraggerAdapter::java_executable);
if (!JavaInfo::canRun(this->java_exe, true))
{
return ExitCodes::EXTERNAL_PROGRAM_NOTFOUND;
}
// executable
this->exe = this->getStringOption_(TOPPMSFraggerAdapter::executable);
if (this->exe.empty())
{
// looks for MSFRAGGER_PATH in the environment
QString qmsfragger_path = getenv("MSFRAGGER_PATH");
if (qmsfragger_path.isEmpty())
{
std::cerr << "No executable for MSFragger could be found (also not in MSFRAGGER_PATH)!" << std::endl;
return ExitCodes::EXTERNAL_PROGRAM_NOTFOUND;
}
this->exe = qmsfragger_path;
}
// input, output, database name
const String database = File::absolutePath(this->getStringOption_(TOPPMSFraggerAdapter::database)); // the working dir will be a TMP-dir, so we need absolute paths
input_file = (this->getStringOption_(TOPPMSFraggerAdapter::in)).toQString();
output_file = this->getStringOption_(TOPPMSFraggerAdapter::out);
optional_output_file = this->getStringOption_(TOPPMSFraggerAdapter::opt_out);
// tolerance
const double arg_precursor_mass_tolerance_lower(this->getDoubleOption_(TOPPMSFraggerAdapter::precursor_mass_tolerance_lower));
const double arg_precursor_mass_tolerance_upper(this->getDoubleOption_(TOPPMSFraggerAdapter::precursor_mass_tolerance_upper));
const String & arg_precursor_mass_unit = this->getStringOption_(TOPPMSFraggerAdapter::precursor_mass_unit);
const double arg_precursor_true_tolerance(this->getDoubleOption_(TOPPMSFraggerAdapter::precursor_true_tolerance));
const String & arg_precursor_true_unit = this->getStringOption_(TOPPMSFraggerAdapter::precursor_true_unit);
const double arg_fragment_mass_tolerance(this->getDoubleOption_(TOPPMSFraggerAdapter::fragment_mass_tolerance));
const String & arg_fragment_mass_unit = this->getStringOption_(TOPPMSFraggerAdapter::fragment_mass_unit);
const String & arg_isotope_error = this->getStringOption_(TOPPMSFraggerAdapter::isotope_error);
// digest
const String & arg_search_enzyme_name = this->getStringOption_(TOPPMSFraggerAdapter::search_enzyme_name);
const String & arg_search_enzyme_cutafter = this->getStringOption_(TOPPMSFraggerAdapter::search_enzyme_cutafter);
const String & arg_search_enzyme_nocutbefore = this->getStringOption_(TOPPMSFraggerAdapter::search_enzyme_nocutbefore);
std::map< String,int > num_enzyme_termini;
num_enzyme_termini["non-enzymatic"] = 0;
num_enzyme_termini["semi"] = 1;
num_enzyme_termini["fully"] = 2;
const int arg_num_enzyme_termini = num_enzyme_termini[this->getStringOption_(TOPPMSFraggerAdapter::num_enzyme_termini)];
const String & arg_allowed_missed_cleavage = this->getStringOption_(TOPPMSFraggerAdapter::allowed_missed_cleavage);
const int arg_digest_min_length = this->getIntOption_(TOPPMSFraggerAdapter::digest_min_length);
const int arg_digest_max_length = this->getIntOption_(TOPPMSFraggerAdapter::digest_max_length);
ensureRange(arg_digest_min_length, arg_digest_max_length, "Maximum length of digest is not allowed to be smaller than minimum length of digest");
const double arg_digest_mass_range_min = this->getDoubleOption_(TOPPMSFraggerAdapter::digest_mass_range_min);
const double arg_digest_mass_range_max = this->getDoubleOption_(TOPPMSFraggerAdapter::digest_mass_range_max);
ensureRange(arg_digest_mass_range_min, arg_digest_mass_range_max, "Maximum digest mass is not allowed to be smaller than minimum digest mass!");
// varmod
const bool arg_clip_nterm_m = this->getFlag_(clip_nterm_m);
std::vector< double > arg_varmod_masses = this->getDoubleList_(TOPPMSFraggerAdapter::varmod_masses);
std::vector< String > arg_varmod_syntax = this->getStringList_(TOPPMSFraggerAdapter::varmod_syntax);
std::vector< String > arg_varmod_unimod = this->getStringList_(TOPPMSFraggerAdapter::variable_modifications_unimod);
// assignment of mass to syntax is by index, so the vectors have to be the same length
if (arg_varmod_masses.size() != arg_varmod_syntax.size())
{
_fatalError("List of arguments for the parameters 'varmod_masses' and 'varmod_syntax' must have the same length!");
}
// only up to 7 variable modifications are allowed
if (arg_varmod_masses.size() > 7)
{
_fatalError("MSFragger is restricted to at most 7 variable modifications.");
}
// add common variable modifications if requested
if (this->getFlag_(varmod_enable_common))
{
// oxidation on methionine
this->_addVarMod(arg_varmod_masses, arg_varmod_syntax, 15.9949, "M");
// N-terminal acetylation
this->_addVarMod(arg_varmod_masses, arg_varmod_syntax, 42.0106, "[^");
}
const bool arg_not_allow_multiple_variable_mods_on_residue = this->getFlag_(TOPPMSFraggerAdapter::not_allow_multiple_variable_mods_on_residue);
const String & arg_max_variable_mods_per_peptide = this->getStringOption_(TOPPMSFraggerAdapter::max_variable_mods_per_peptide);
const int arg_max_variable_mods_combinations = this->getIntOption_(TOPPMSFraggerAdapter::max_variable_mods_combinations);
// spectrum
const int arg_minimum_peaks = this->getIntOption_(TOPPMSFraggerAdapter::minimum_peaks);
const int arg_use_topn_peaks = this->getIntOption_(TOPPMSFraggerAdapter::use_topn_peaks);
const double arg_minimum_ratio = this->getDoubleOption_(TOPPMSFraggerAdapter::minimum_ratio);
const double arg_clear_mz_range_min = this->getDoubleOption_(TOPPMSFraggerAdapter::clear_mz_range_min);
const double arg_clear_mz_range_max = this->getDoubleOption_(TOPPMSFraggerAdapter::clear_mz_range_max);
ensureRange(arg_clear_mz_range_min, arg_clear_mz_range_max, "Maximum clear mz value is not allowed to be smaller than minimum clear mz value!");
const String & arg_max_fragment_charge = this->getStringOption_(TOPPMSFraggerAdapter::max_fragment_charge);
const bool arg_override_charge = this->getFlag_(TOPPMSFraggerAdapter::override_charge);
const int arg_precursor_charge_min = this->getIntOption_(TOPPMSFraggerAdapter::precursor_charge_min);
const int arg_precursor_charge_max = this->getIntOption_(TOPPMSFraggerAdapter::precursor_charge_max);
ensureRange(arg_precursor_charge_min, arg_precursor_charge_max, "Maximum precursor charge is not allowed to be smaller than minimum precursor charge!");
// ensure that the user is aware of overriding the precursor charges
if ((arg_precursor_charge_min != 1 || arg_precursor_charge_max != 4) && !arg_override_charge)
{
_fatalError("If you want to ignore the precursor charge, please also set the -" + override_charge + " flag!");
}
// search
const int arg_track_zero_topn = this->getIntOption_(TOPPMSFraggerAdapter::track_zero_topn);
const double arg_zero_bin_accept_expect = this->getDoubleOption_(TOPPMSFraggerAdapter::zero_bin_accept_expect);
const double arg_zero_bin_mult_expect = this->getDoubleOption_(TOPPMSFraggerAdapter::zero_bin_mult_expect);
const int arg_add_topn_complementary = this->getIntOption_(TOPPMSFraggerAdapter::add_topn_complementary);
const int arg_min_fragments_modeling = this->getIntOption_(TOPPMSFraggerAdapter::min_fragments_modeling);
const int arg_min_matched_fragments = this->getIntOption_(TOPPMSFraggerAdapter::min_matched_fragments);
const int arg_output_report_topn = this->getIntOption_(TOPPMSFraggerAdapter::output_report_topn);
const double arg_output_max_expect = this->getDoubleOption_(TOPPMSFraggerAdapter::output_max_expect);
const int arg_localize_delta_mass = this->getIntOption_(TOPPMSFraggerAdapter::localize_delta_mass);
// statmod
double arg_add_cterm_peptide = this->getDoubleOption_(TOPPMSFraggerAdapter::add_cterm_peptide);
double arg_add_nterm_peptide = this->getDoubleOption_(TOPPMSFraggerAdapter::add_nterm_peptide);
double arg_add_cterm_protein = this->getDoubleOption_(TOPPMSFraggerAdapter::add_cterm_protein);
double arg_add_nterm_protein = this->getDoubleOption_(TOPPMSFraggerAdapter::add_nterm_protein);
double arg_add_G_glycine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_G_glycine);
double arg_add_A_alanine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_A_alanine);
double arg_add_S_serine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_S_serine);
double arg_add_P_proline = this->getDoubleOption_(TOPPMSFraggerAdapter::add_P_proline);
double arg_add_V_valine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_V_valine);
double arg_add_T_threonine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_T_threonine);
double arg_add_C_cysteine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_C_cysteine);
double arg_add_L_leucine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_L_leucine);
double arg_add_I_isoleucine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_I_isoleucine);
double arg_add_N_asparagine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_N_asparagine);
double arg_add_D_aspartic_acid = this->getDoubleOption_(TOPPMSFraggerAdapter::add_D_aspartic_acid);
double arg_add_Q_glutamine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_Q_glutamine);
double arg_add_K_lysine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_K_lysine);
double arg_add_E_glutamic_acid = this->getDoubleOption_(TOPPMSFraggerAdapter::add_E_glutamic_acid);
double arg_add_M_methionine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_M_methionine);
double arg_add_H_histidine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_H_histidine);
double arg_add_F_phenylalanine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_F_phenylalanine);
double arg_add_R_arginine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_R_arginine);
double arg_add_Y_tyrosine = this->getDoubleOption_(TOPPMSFraggerAdapter::add_Y_tyrosine);
double arg_add_W_tryptophan = this->getDoubleOption_(TOPPMSFraggerAdapter::add_W_tryptophan);
std::vector< String > arg_fixmod_unimod = this->getStringList_(TOPPMSFraggerAdapter::fixed_modifications_unimod);
// Parameters have been read in and verified; now write them into the fragger.params file in a temporary directory
this->parameter_file_path = File::getTemporaryFile();
writeDebug_("Parameter file for MSFragger: '" + this->parameter_file_path + "'", TOPPMSFraggerAdapter::LOG_LEVEL_VERBOSE);
writeDebug_("Working Directory: '" + working_directory.getPath() + "'", TOPPMSFraggerAdapter::LOG_LEVEL_VERBOSE);
writeDebug_("If you want to keep the working directory and the parameter file, set -debug to 2", 1);
ofstream os(this->parameter_file_path.c_str());
// Write all the parameters into the file
os << "database_name = " << String(database)
<< "\nnum_threads = " << this->getIntOption_("threads")
<< "\n\nprecursor_mass_lower = " << (-arg_precursor_mass_tolerance_lower)
<< "\nprecursor_mass_upper = " << arg_precursor_mass_tolerance_upper
<< "\nprecursor_mass_units = " << (arg_precursor_mass_unit == "Da" ? 0 : 1)
<< "\nprecursor_true_tolerance = " << arg_precursor_true_tolerance
<< "\nprecursor_true_units = " << (arg_precursor_true_unit == "Da" ? 0 : 1)
<< "\nfragment_mass_tolerance = " << arg_fragment_mass_tolerance
<< "\nfragment_mass_units = " << (arg_fragment_mass_unit == "Da" ? 0 : 1)
<< "\n\nisotope_error = " << arg_isotope_error
<< "\n\nsearch_enzyme_name = " << arg_search_enzyme_name
<< "\nsearch_enzyme_cutafter = " << arg_search_enzyme_cutafter
<< "\nsearch_enzyme_butnotafter = " << arg_search_enzyme_nocutbefore
<< "\n\nnum_enzyme_termini = " << arg_num_enzyme_termini
<< "\nallowed_missed_cleavage = " << arg_allowed_missed_cleavage
<< "\n\nclip_nTerm_M = " << arg_clip_nterm_m << '\n';
// Write variable modifications from masses/syntax and unimod to unique set (and also write to log)
writeLogInfo_("Variable Modifications set to:");
std::set< std::pair< double, String > > varmods_combined;
Size i;
for (i = 0; i < arg_varmod_masses.size(); ++i)
{
std::pair <double, String> tmp_mod = std::make_pair (arg_varmod_masses[i], arg_varmod_syntax[i]);
varmods_combined.insert(tmp_mod);
}
// TODO Move to Modified Peptide Generator
if (!arg_varmod_unimod.empty())
{
// filter for terminal amino acid modifications: convert them here, remove them from the list, and process the remaining Unimod entries below
std::vector< String > n_terminal_aa_mods;
std::vector< String > c_terminal_aa_mods;
std::vector< int > n_terminal_aa_mods_toDel;
std::vector< int > c_terminal_aa_mods_toDel;
for (Size i = 0; i < arg_varmod_unimod.size(); i++)
{
// String::find returns String::npos (not -1) if the pattern is absent
const bool has_nterm = (arg_varmod_unimod[i].find(" (N-term") != String::npos);
const bool has_cterm = (arg_varmod_unimod[i].find(" (C-term") != String::npos);
if (has_nterm || has_cterm) // has terminal modification
{
if (arg_varmod_unimod[i].find("term)") == String::npos) // terminal specifier is continued with an amino acid
{
const Size j = arg_varmod_unimod[i].find("-term");
if (arg_varmod_unimod[i].substr(j + 7) != ")")
{
_fatalError("Multiple amino acids in terminal modification are not allowed");
}
String res = arg_varmod_unimod[i].substr(j + 6, 1);
String mod = arg_varmod_unimod[i].substr(0, j - 3);
String modificationString = mod.append(" (").append(res).append(")");
if (has_nterm)
{
n_terminal_aa_mods.push_back(modificationString);
n_terminal_aa_mods_toDel.push_back(i);
}
if (has_cterm)
{
c_terminal_aa_mods.push_back(modificationString);
c_terminal_aa_mods_toDel.push_back(i);
}
}
}
}
// Write the variable modification in correct syntax to a combined list and delete element from parameter list
const ModifiedPeptideGenerator::MapToResidueType n_var_mod_temp = ModifiedPeptideGenerator::getModifications(n_terminal_aa_mods);
for (auto const & r : n_var_mod_temp.val)
{
const double deltamass = r.first->getDiffMonoMass();
const String res = r.second->getOneLetterCode();
std::pair <double, String> tmp_mod = std::make_pair (deltamass, "n" + res);
varmods_combined.insert(tmp_mod);
}
const ModifiedPeptideGenerator::MapToResidueType c_var_mod_temp = ModifiedPeptideGenerator::getModifications(c_terminal_aa_mods);
for (auto const & r : c_var_mod_temp.val)
{
const double deltamass = r.first->getDiffMonoMass();
const String res = r.second->getOneLetterCode();
std::pair <double, String> tmp_mod = std::make_pair (deltamass, "c" + res);
varmods_combined.insert(tmp_mod);
}
// erase the handled entries from the parameter list; combine the index lists and erase in descending order, so earlier erasures do not invalidate the remaining indices
std::vector< int > term_mods_toDel(n_terminal_aa_mods_toDel);
term_mods_toDel.insert(term_mods_toDel.end(), c_terminal_aa_mods_toDel.begin(), c_terminal_aa_mods_toDel.end());
std::sort(term_mods_toDel.rbegin(), term_mods_toDel.rend());
for (int index : term_mods_toDel)
{
arg_varmod_unimod.erase(arg_varmod_unimod.begin() + index);
}
// Collect all other modifications and filter true terminal modifications for correct syntax in MSFragger
const ModifiedPeptideGenerator::MapToResidueType variable_mod = ModifiedPeptideGenerator::getModifications(arg_varmod_unimod);
for (auto const & r : variable_mod.val)
{
String res;
const double deltamass = r.first->getDiffMonoMass();
if (r.first->getTermSpecificity() == ResidueModification::N_TERM)
{
res = "n^";
}
else if (r.first->getTermSpecificity() == ResidueModification::C_TERM)
{
res = "c^";
}
else if (r.first->getTermSpecificity() == ResidueModification::PROTEIN_N_TERM)
{
res = "[^";
}
else if (r.first->getTermSpecificity() == ResidueModification::PROTEIN_C_TERM)
{
res = "]^";
}
else
{
res = r.second->getOneLetterCode();
}
std::pair <double, String> tmp_mod = std::make_pair (deltamass, res);
varmods_combined.insert(tmp_mod);
}
}
i = 0;
for (auto const & m : varmods_combined)
{
// MSFragger expects zero-padded parameter keys: variable_mod_01, variable_mod_02, ...
const String varmod = "variable_mod_" + String(i + 1).fillLeft('0', 2) + " = " + String(m.first) + " " + String(m.second);
os << "\n" << varmod;
writeLogInfo_(varmod);
++i;
}
// collect all Unimod fixed modifications and set the delta mass for each affected amino acid
if (!arg_fixmod_unimod.empty())
{
const ModifiedPeptideGenerator::MapToResidueType fixed_mod = ModifiedPeptideGenerator::getModifications(arg_fixmod_unimod);
for (auto const & r : fixed_mod.val)
{
const double deltamass = r.first->getDiffMonoMass();
if (r.first->getTermSpecificity() == ResidueModification::N_TERM)
{
arg_add_nterm_peptide = deltamass;
}
else if (r.first->getTermSpecificity() == ResidueModification::C_TERM)
{
arg_add_cterm_peptide = deltamass;
}
else if (r.first->getTermSpecificity() == ResidueModification::PROTEIN_N_TERM)
{
arg_add_nterm_protein = deltamass;
}
else if (r.first->getTermSpecificity() == ResidueModification::PROTEIN_C_TERM)
{
arg_add_cterm_protein = deltamass;
}
else
{
const String res = r.second->getOneLetterCode();
switch(res[0])
{
case 'G':
arg_add_G_glycine = deltamass;
break;
case 'A':
arg_add_A_alanine = deltamass;
break;
case 'S':
arg_add_S_serine = deltamass;
break;
case 'P':
arg_add_P_proline = deltamass;
break;
case 'V':
arg_add_V_valine = deltamass;
break;
case 'T':
arg_add_T_threonine = deltamass;
break;
case 'C':
arg_add_C_cysteine = deltamass;
break;
case 'L':
arg_add_L_leucine = deltamass;
break;
case 'I':
arg_add_I_isoleucine = deltamass;
break;
case 'N':
arg_add_N_asparagine = deltamass;
break;
case 'D':
arg_add_D_aspartic_acid = deltamass;
break;
case 'Q':
arg_add_Q_glutamine = deltamass;
break;
case 'K':
arg_add_K_lysine = deltamass;
break;
case 'E':
arg_add_E_glutamic_acid = deltamass;
break;
case 'M':
arg_add_M_methionine = deltamass;
break;
case 'H':
arg_add_H_histidine = deltamass;
break;
case 'F':
arg_add_F_phenylalanine = deltamass;
break;
case 'R':
arg_add_R_arginine = deltamass;
break;
case 'Y':
arg_add_Y_tyrosine = deltamass;
break;
case 'W':
arg_add_W_tryptophan = deltamass;
break;
}
}
}
}
os << std::endl
<< "\nallow_multiple_variable_mods_on_residue = " << (arg_not_allow_multiple_variable_mods_on_residue ? 0 : 1)
<< "\nmax_variable_mods_per_peptide = " << arg_max_variable_mods_per_peptide
<< "\nmax_variable_mods_combinations = " << arg_max_variable_mods_combinations
<< "\n\noutput_file_extension = " << "pepXML"
<< "\noutput_format = " << "pepXML"
<< "\noutput_report_topN = " << arg_output_report_topn
<< "\noutput_max_expect = " << arg_output_max_expect
<< "\n\nprecursor_charge = " << arg_precursor_charge_min << " " << arg_precursor_charge_max
<< "\noverride_charge = " << (arg_override_charge ? 1 : 0)
<< "\n\ndigest_min_length = " << arg_digest_min_length
<< "\ndigest_max_length = " << arg_digest_max_length
<< "\ndigest_mass_range = " << arg_digest_mass_range_min << " " << arg_digest_mass_range_max
<< "\nmax_fragment_charge = " << arg_max_fragment_charge
<< "\n\ntrack_zero_topN = " << arg_track_zero_topn
<< "\nzero_bin_accept_expect = " << arg_zero_bin_accept_expect
<< "\nzero_bin_mult_expect = " << arg_zero_bin_mult_expect
<< "\nadd_topN_complementary = " << arg_add_topn_complementary
<< "\n\nminimum_peaks = " << arg_minimum_peaks
<< "\nuse_topN_peaks = " << arg_use_topn_peaks
<< "\nlocalize_delta_mass = " << arg_localize_delta_mass
<< "\nmin_fragments_modelling = " << arg_min_fragments_modeling
<< "\nmin_matched_fragments = " << arg_min_matched_fragments
<< "\nminimum_ratio = " << arg_minimum_ratio
<< "\nclear_mz_range = " << arg_clear_mz_range_min << " " << arg_clear_mz_range_max
<< "\nadd_Cterm_peptide = " << arg_add_cterm_peptide
<< "\nadd_Nterm_peptide = " << arg_add_nterm_peptide
<< "\nadd_Cterm_protein = " << arg_add_cterm_protein
<< "\nadd_Nterm_protein = " << arg_add_nterm_protein
<< "\n\nadd_G_glycine = " << arg_add_G_glycine
<< "\nadd_A_alanine = " << arg_add_A_alanine
<< "\nadd_S_serine = " << arg_add_S_serine
<< "\nadd_P_proline = " << arg_add_P_proline
<< "\nadd_V_valine = " << arg_add_V_valine
<< "\nadd_T_threonine = " << arg_add_T_threonine
<< "\nadd_C_cysteine = " << arg_add_C_cysteine
<< "\nadd_L_leucine = " << arg_add_L_leucine
<< "\nadd_I_isoleucine = " << arg_add_I_isoleucine
<< "\nadd_N_asparagine = " << arg_add_N_asparagine
<< "\nadd_D_aspartic_acid = " << arg_add_D_aspartic_acid
<< "\nadd_Q_glutamine = " << arg_add_Q_glutamine
<< "\nadd_K_lysine = " << arg_add_K_lysine
<< "\nadd_E_glutamic_acid = " << arg_add_E_glutamic_acid
<< "\nadd_M_methionine = " << arg_add_M_methionine
<< "\nadd_H_histidine = " << arg_add_H_histidine
<< "\nadd_F_phenylalanine = " << arg_add_F_phenylalanine
<< "\nadd_R_arginine = " << arg_add_R_arginine
<< "\nadd_Y_tyrosine = " << arg_add_Y_tyrosine
<< "\nadd_W_tryptophan = " << arg_add_W_tryptophan;
os.close();
}
catch (int)
{
return ILLEGAL_PARAMETERS;
}
QStringList process_params; // the actual process is Java, not MSFragger
process_params << "-Xmx" + QString::number(this->getIntOption_(java_heapmemory)) + "m"
<< "-jar" << this->exe.toQString()
<< this->parameter_file_path.toQString()
<< input_file;
if (this->debug_level_ >= TOPPMSFraggerAdapter::LOG_LEVEL_VERBOSE)
{
writeDebug_("COMMAND LINE CALL IS:", 1);
String command_line = this->java_exe;
for (const auto& process_param : process_params)
{
command_line += (" " + process_param);
}
writeDebug_(command_line, TOPPMSFraggerAdapter::LOG_LEVEL_VERBOSE);
}
TOPPBase::ExitCodes exit_code = runExternalProcess_(java_exe.toQString(), process_params, working_directory.getPath().toQString());
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
// convert from pepXML to idXML
String pepxmlfile = FileHandler::swapExtension(input_file, FileTypes::PEPXML);
PeptideIdentificationList peptide_identifications;
std::vector<ProteinIdentification> protein_identifications;
PepXMLFile().load(pepxmlfile, protein_identifications, peptide_identifications);
for (auto& prot_id : protein_identifications)
{
prot_id.setSearchEngine("MSFragger");
// whatever the pepXML says, overwrite the origin with the input mzML
prot_id.setPrimaryMSRunPath({this->getStringOption_(TOPPMSFraggerAdapter::in)}, false);
}
// write all (!) parameters as metavalues to the search parameters
if (!protein_identifications.empty())
{
DefaultParamHandler::writeParametersToMetaValues(this->getParam_(), protein_identifications[0].getSearchParameters(), this->getToolPrefix());
}
// if the 'reindex' parameter is set to true, perform peptide reindexing
if (auto ret = reindex_(protein_identifications, peptide_identifications); ret != EXECUTION_OK) return ret;
// add percolator features
StringList feature_set;
PercolatorFeatureSetHelper::addMSFRAGGERFeatures(feature_set);
protein_identifications.front().getSearchParameters().setMetaValue("extra_features", ListUtils::concatenate(feature_set, ","));
FileHandler().storeIdentifications(output_file, protein_identifications, peptide_identifications, {FileTypes::IDXML});
// remove the msfragger pepXML output from the user location
if (optional_output_file.empty())
{
File::remove(pepxmlfile);
}
else
{ // rename the pepXML file to the opt_out
File::rename(pepxmlfile.toQString(), optional_output_file.toQString());
}
// remove ".pepindex" database file
if (this->debug_level_ < 2)
{
String db_index = this->getStringOption_(TOPPMSFraggerAdapter::database) + ".1.pepindex";
File::remove(db_index);
}
return EXECUTION_OK;
}
private:
String java_exe;
String exe;
String parameter_file_path;
QString input_file;
String output_file;
String optional_output_file;
// adds variable modification if not already present
void _addVarMod(std::vector< double > & masses, std::vector< String > & syntaxes, const double mass, const String & syntax) const
{
const std::vector< double >::iterator it1 = std::find(masses.begin(), masses.end(), mass);
const std::vector< String >::iterator it2 = std::find(syntaxes.begin(), syntaxes.end(), syntax);
// add the provided variable modification if not already present
if ( it1 == masses.end()
|| it2 == syntaxes.end()
|| std::distance(masses.begin(), it1) != std::distance(syntaxes.begin(), it2))
{
masses.push_back(mass);
syntaxes.push_back(syntax);
}
}
inline void _registerNonNegativeInt(const String & param_name, const String & argument, const int default_value, const String & description, const bool required, const bool advanced)
{
this->registerIntOption_(param_name, argument, default_value, description, required, advanced);
this->setMinInt_(param_name, 0);
}
inline void _registerNonNegativeDouble(const String & param_name, const String & argument, const double default_value, const String & description, const bool required, const bool advanced)
{
this->registerDoubleOption_(param_name, argument, default_value, description, required, advanced);
this->setMinFloat_(param_name, 0.0);
}
inline void _fatalError(const String & message)
{
OPENMS_LOG_FATAL_ERROR << "FATAL: " << message << std::endl;
throw 1;
}
void checkUnique(const StringList & elements, const String & message)
{
for (Size i = 0; i < elements.size(); ++i)
{
for (Size j = 0; j < i; ++j)
{
if (elements[i] == elements[j])
{
_fatalError(message);
}
}
}
}
inline void ensureRange(const double left, const double right, const String & message) const
{
if (right < left)
{
OPENMS_LOG_ERROR << "FATAL: " << message << std::endl;
throw 1;
}
}
};
const String TOPPMSFraggerAdapter::java_executable = "java_executable";
const String TOPPMSFraggerAdapter::java_heapmemory = "java_heapmemory";
const String TOPPMSFraggerAdapter::executable = "executable";
const String TOPPMSFraggerAdapter::in = "in";
const String TOPPMSFraggerAdapter::out = "out";
const String TOPPMSFraggerAdapter::opt_out = "opt_out";
const String TOPPMSFraggerAdapter::database = "database";
// tolerance
const String TOPPMSFraggerAdapter::precursor_mass_tolerance_lower = "tolerance:precursor_mass_tolerance_lower";
const String TOPPMSFraggerAdapter::precursor_mass_tolerance_upper = "tolerance:precursor_mass_tolerance_upper";
const String TOPPMSFraggerAdapter::precursor_mass_unit = "tolerance:precursor_mass_unit";
const String TOPPMSFraggerAdapter::precursor_true_tolerance = "tolerance:precursor_true_tolerance";
const String TOPPMSFraggerAdapter::precursor_true_unit = "tolerance:precursor_true_unit";
const String TOPPMSFraggerAdapter::fragment_mass_tolerance = "tolerance:fragment_mass_tolerance";
const String TOPPMSFraggerAdapter::fragment_mass_unit = "tolerance:fragment_mass_unit";
const String TOPPMSFraggerAdapter::isotope_error = "tolerance:isotope_error";
// digest
const String TOPPMSFraggerAdapter::search_enzyme_name = "digest:search_enzyme_name";
const String TOPPMSFraggerAdapter::search_enzyme_cutafter = "digest:search_enzyme_cutafter";
const String TOPPMSFraggerAdapter::search_enzyme_nocutbefore = "digest:search_enzyme_nocutbefore";
const String TOPPMSFraggerAdapter::num_enzyme_termini = "digest:num_enzyme_termini";
const String TOPPMSFraggerAdapter::allowed_missed_cleavage = "digest:allowed_missed_cleavage";
const String TOPPMSFraggerAdapter::digest_min_length = "digest:min_length";
const String TOPPMSFraggerAdapter::digest_max_length = "digest:max_length";
const String TOPPMSFraggerAdapter::digest_mass_range_min = "digest:mass_range_min";
const String TOPPMSFraggerAdapter::digest_mass_range_max = "digest:mass_range_max";
// varmod
const String TOPPMSFraggerAdapter::clip_nterm_m = "varmod:clip_nterm_m";
const String TOPPMSFraggerAdapter::varmod_masses = "varmod:masses";
const String TOPPMSFraggerAdapter::varmod_syntax = "varmod:syntaxes";
const String TOPPMSFraggerAdapter::varmod_enable_common = "varmod:enable_common";
const String TOPPMSFraggerAdapter::not_allow_multiple_variable_mods_on_residue = "varmod:not_allow_multiple_variable_mods_on_residue";
const String TOPPMSFraggerAdapter::max_variable_mods_per_peptide = "varmod:max_variable_mods_per_peptide";
const String TOPPMSFraggerAdapter::max_variable_mods_combinations = "varmod:max_variable_mods_combinations";
const String TOPPMSFraggerAdapter::variable_modifications_unimod = "varmod:unimod";
// spectrum
const String TOPPMSFraggerAdapter::minimum_peaks = "spectrum:minimum_peaks";
const String TOPPMSFraggerAdapter::use_topn_peaks = "spectrum:use_topn_peaks";
const String TOPPMSFraggerAdapter::minimum_ratio = "spectrum:minimum_ratio";
const String TOPPMSFraggerAdapter::clear_mz_range_min = "spectrum:clear_mz_range_min";
const String TOPPMSFraggerAdapter::clear_mz_range_max = "spectrum:clear_mz_range_max";
const String TOPPMSFraggerAdapter::max_fragment_charge = "spectrum:max_fragment_charge";
const String TOPPMSFraggerAdapter::override_charge = "spectrum:override_charge";
const String TOPPMSFraggerAdapter::precursor_charge_min = "spectrum:precursor_charge_min";
const String TOPPMSFraggerAdapter::precursor_charge_max = "spectrum:precursor_charge_max";
// search
const String TOPPMSFraggerAdapter::track_zero_topn = "search:track_zero_topn";
const String TOPPMSFraggerAdapter::zero_bin_accept_expect = "search:zero_bin_accept_expect";
const String TOPPMSFraggerAdapter::zero_bin_mult_expect = "search:zero_bin_mult_expect";
const String TOPPMSFraggerAdapter::add_topn_complementary = "search:add_topn_complementary";
const String TOPPMSFraggerAdapter::min_fragments_modeling = "search:min_fragments_modeling";
const String TOPPMSFraggerAdapter::min_matched_fragments = "search:min_matched_fragments";
const String TOPPMSFraggerAdapter::output_report_topn = "search:output_report_topn";
const String TOPPMSFraggerAdapter::output_max_expect = "search:output_max_expect";
const String TOPPMSFraggerAdapter::localize_delta_mass = "search:localize_delta_mass";
// statmod
const String TOPPMSFraggerAdapter::add_cterm_peptide = "statmod:add_cterm_peptide";
const String TOPPMSFraggerAdapter::add_nterm_peptide = "statmod:add_nterm_peptide";
const String TOPPMSFraggerAdapter::add_cterm_protein = "statmod:add_cterm_protein";
const String TOPPMSFraggerAdapter::add_nterm_protein = "statmod:add_nterm_protein";
const String TOPPMSFraggerAdapter::add_G_glycine = "statmod:add_G_glycine";
const String TOPPMSFraggerAdapter::add_A_alanine = "statmod:add_A_alanine";
const String TOPPMSFraggerAdapter::add_S_serine = "statmod:add_S_serine";
const String TOPPMSFraggerAdapter::add_P_proline = "statmod:add_P_proline";
const String TOPPMSFraggerAdapter::add_V_valine = "statmod:add_V_valine";
const String TOPPMSFraggerAdapter::add_T_threonine = "statmod:add_T_threonine";
const String TOPPMSFraggerAdapter::add_C_cysteine = "statmod:add_C_cysteine";
const String TOPPMSFraggerAdapter::add_L_leucine = "statmod:add_L_leucine";
const String TOPPMSFraggerAdapter::add_I_isoleucine = "statmod:add_I_isoleucine";
const String TOPPMSFraggerAdapter::add_N_asparagine = "statmod:add_N_asparagine";
const String TOPPMSFraggerAdapter::add_D_aspartic_acid = "statmod:add_D_aspartic_acid";
const String TOPPMSFraggerAdapter::add_Q_glutamine = "statmod:add_Q_glutamine";
const String TOPPMSFraggerAdapter::add_K_lysine = "statmod:add_K_lysine";
const String TOPPMSFraggerAdapter::add_E_glutamic_acid = "statmod:add_E_glutamic_acid";
const String TOPPMSFraggerAdapter::add_M_methionine = "statmod:add_M_methionine";
const String TOPPMSFraggerAdapter::add_H_histidine = "statmod:add_H_histidine";
const String TOPPMSFraggerAdapter::add_F_phenylalanine = "statmod:add_F_phenylalanine";
const String TOPPMSFraggerAdapter::add_R_arginine = "statmod:add_R_arginine";
const String TOPPMSFraggerAdapter::add_Y_tyrosine = "statmod:add_Y_tyrosine";
const String TOPPMSFraggerAdapter::add_W_tryptophan = "statmod:add_W_tryptophan";
const String TOPPMSFraggerAdapter::license = "license";
const String TOPPMSFraggerAdapter::fixed_modifications_unimod = "statmod:unimod";
const int TOPPMSFraggerAdapter::LOG_LEVEL_VERBOSE = 1;
int main(int argc, const char** argv)
{
TOPPMSFraggerAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
// ---------------------------------------------------------------------------
// File: src/topp/NucleicAcidSearchEngine.cpp
// ---------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg, Samuel Wein, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/ListUtils.h>
#include <OpenMS/DATASTRUCTURES/Param.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/MATH/StatisticFunctions.h> // for "median"
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/KERNEL/MSSpectrum.h>
#include <OpenMS/KERNEL/Peak1D.h>
#include <OpenMS/METADATA/SpectrumSettings.h>
#include <OpenMS/METADATA/ID/IdentificationData.h>
#include <OpenMS/METADATA/ID/IdentificationDataConverter.h>
// file types
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/FORMAT/OMSFile.h>
#include <OpenMS/FORMAT/SVOutStream.h>
// digestion enzymes
#include <OpenMS/CHEMISTRY/RNaseDigestion.h>
#include <OpenMS/CHEMISTRY/RNaseDB.h>
// ribonucleotides
#include <OpenMS/CHEMISTRY/RibonucleotideDB.h>
#include <OpenMS/CHEMISTRY/ElementDB.h>
#include <OpenMS/CHEMISTRY/ModifiedNASequenceGenerator.h>
#include <OpenMS/CHEMISTRY/NASequence.h>
// preprocessing and filtering of spectra
#include <OpenMS/PROCESSING/FILTERING/ThresholdMower.h>
#include <OpenMS/PROCESSING/FILTERING/NLargest.h>
#include <OpenMS/PROCESSING/FILTERING/WindowMower.h>
#include <OpenMS/PROCESSING/SCALING/Normalizer.h>
// spectra comparison
#include <OpenMS/CHEMISTRY/NucleicAcidSpectrumGenerator.h>
#include <OpenMS/COMPARISON/SpectrumAlignment.h>
#include <OpenMS/ANALYSIS/ID/MetaboliteSpectralMatching.h>
// post-processing of results
#include <OpenMS/ANALYSIS/ID/FalseDiscoveryRate.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <QtCore/QProcess>
#include <algorithm>
#include <iostream>
#include <vector>
#include <map>
#include <regex>
// multithreading
#ifdef _OPENMP
#include <omp.h>
#define NUMBER_OF_THREADS (omp_get_num_threads())
#else
#define NUMBER_OF_THREADS (1)
#endif
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_NucleicAcidSearchEngine NucleicAcidSearchEngine
@brief Matches tandem mass spectra to nucleic acid sequences.
Given a FASTA file containing RNA sequences (and optionally decoys) and an mzML file from a nucleic acid mass spec experiment:
- Generate a list of digestion fragments from the FASTA file (based on a specified RNase)
- Search the mzML input for MS2 spectra with parent masses corresponding to any of these sequence fragments
- Match the MS2 spectra to theoretically generated spectra
- Score the resulting matches
Output is in the form of an mzTab-like text file containing the search results.
Optionally, an idXML file suitable for visualizing search results in TOPPView (parameter @p id_out) and a "target coordinates" file for label-free quantification using FeatureFinderMetaboIdent (parameter @p lfq_out) can be generated.
Modified ribonucleotides can either be specified in the FASTA input file (if they are expected at a specific site), set as @e fixed modifications that globally replace the unmodified base, or set as @e variable modifications in the tool options.
Information on available modifications is taken from the Modomics database (http://modomics.genesilico.pl/).
In addition to these "standard" modifications, OpenMS defines "generic" and "ambiguous" ones:
<br>
A generic modification represents a group of modifications that cannot be distinguished by tandem mass spectrometry.
For example, "mA" stands for any methyladenosine (could be "m1A", "m2A", "m6A" or "m8A"), "mmA" for any dimethyladenosine (with two methyl groups on the base), and "mAm" for any 2'-O-dimethyladenosine (with one methyl group each on base and ribose).
There is no technical difference between searching for "mA" or e.g. "m1A", but the generic code better represents that no statement can be made about the position of the methyl group on the base.
<br>
In contrast, an ambiguous modification represents two isobaric modifications (or modification groups) with a methyl group on either the base or the ribose, which could in principle be distinguished based on a-B ions.
For example, "mA?" stands for methyladenosine ("mA", see above) or 2'-O-methyladenosine ("Am").
When using ambiguous modifications in a search, NucleicAcidSearchEngine can optionally try to assign the alternative that generates better a-B ion matches in a spectrum (see parameter @p modifications:resolve_ambiguities).
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_NucleicAcidSearchEngine.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_NucleicAcidSearchEngine.html
*/
class NucleicAcidSearchEngine :
public TOPPBase
{
using ConstRibonucleotidePtr = const Ribonucleotide*;
public:
NucleicAcidSearchEngine() :
TOPPBase("NucleicAcidSearchEngine", "Annotate nucleic acid identifications to MS/MS spectra."),
fragment_ion_codes_({"a-B", "a", "b", "c", "d", "w", "x", "y", "z"}),
resolve_ambiguous_mods_(false)
{
}
protected:
vector<String> fragment_ion_codes_;
map<String, String> ambiguous_mods_; ///< map: specific code -> ambiguous code
bool resolve_ambiguous_mods_;
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file: spectra");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("database", "<file>", "", "Input file: sequence database. Required unless 'digest' is set.", false);
setValidFormats_("database", ListUtils::create<String>("fasta"));
registerInputFile_("digest", "<file>", "", "Input file: pre-digested sequence database. Can be used instead of 'database'. Sets all 'oligo:...' parameters.", false);
setValidFormats_("digest", {"oms"});
registerOutputFile_("out", "<file>", "", "Output file: mzTab");
setValidFormats_("out", ListUtils::create<String>("mzTab"));
registerOutputFile_("id_out", "<file>", "", "Output file: idXML (for visualization in TOPPView)", false);
setValidFormats_("id_out", ListUtils::create<String>("idXML"));
registerOutputFile_("db_out", "<file>", "", "Output file: oms (SQLite database)", false);
setValidFormats_("db_out", ListUtils::create<String>("oms"));
registerOutputFile_("digest_out", "<file>", "", "Output file: sequence database digest. Ignored if 'digest' input is used.", false);
setValidFormats_("digest_out", {"oms"});
registerOutputFile_("lfq_out", "<file>", "", "Output file: targets for label-free quantification using FeatureFinderMetaboIdent ('id' input)", false);
setValidFormats_("lfq_out", vector<String>(1, "tsv"));
registerOutputFile_("theo_ms2_out", "<file>", "", "Output file: theoretical MS2 spectra for precursor mass matches", false, true);
setValidFormats_("theo_ms2_out", ListUtils::create<String>("mzML"));
registerOutputFile_("exp_ms2_out", "<file>", "", "Output file: experimental MS2 spectra for precursor mass matches", false, true);
setValidFormats_("exp_ms2_out", ListUtils::create<String>("mzML"));
registerFlag_("decharge_ms2", "Decharge the MS2 spectra for scoring", true);
registerTOPPSubsection_("precursor", "Precursor (parent ion) options");
registerDoubleOption_("precursor:mass_tolerance", "<tolerance>", 10.0, "Precursor mass tolerance (+/- around uncharged precursor mass)", false);
registerStringOption_("precursor:mass_tolerance_unit", "<unit>", "ppm", "Unit of precursor mass tolerance", false, false);
setValidStrings_("precursor:mass_tolerance_unit", ListUtils::create<String>("Da,ppm"));
registerIntOption_("precursor:min_charge", "<num>", -1, "Minimum precursor charge to be considered", false, false);
registerIntOption_("precursor:max_charge", "<num>", -20, "Maximum precursor charge to be considered", false, false);
registerFlag_("precursor:include_unknown_charge", "Include MS2 spectra with unknown precursor charge - try to match them in any possible charge between 'min_charge' and 'max_charge', at the risk of a higher error rate", false);
registerFlag_("precursor:use_avg_mass", "Use average instead of monoisotopic precursor masses (appropriate for low-resolution instruments)", false);
// Whether to look for precursors with salt adducts
registerFlag_("precursor:use_adducts", "Consider possible salt adducts (see 'precursor:potential_adducts') when matching precursor masses", false);
registerStringList_("precursor:potential_adducts", "<list>", ListUtils::create<String>("Na:+"), "Adducts considered to explain mass differences. Format: 'Element:Charge(+/-)', i.e. the number of '+' or '-' indicates the charge, e.g. 'Ca:++' indicates +2. Only used if 'precursor:use_adducts' is set.", false, false);
IntList isotopes = {0, 1, 2, 3, 4};
registerIntList_("precursor:isotopes", "<list>", isotopes, "Correct for mono-isotopic peak misassignments. E.g.: 1 = precursor may be misassigned to the first isotopic peak. Ignored if 'use_avg_mass' is set.", false, false);
registerTOPPSubsection_("fragment", "Fragment (product ion) options");
registerDoubleOption_("fragment:mass_tolerance", "<tolerance>", 10.0, "Fragment mass tolerance (+/- around fragment m/z)", false);
registerStringOption_("fragment:mass_tolerance_unit", "<unit>", "ppm", "Unit of fragment mass tolerance", false, false);
setValidStrings_("fragment:mass_tolerance_unit", ListUtils::create<String>("Da,ppm"));
registerStringList_("fragment:ions", "<choice>", fragment_ion_codes_, "Fragment ions to include in theoretical spectra", false);
setValidStrings_("fragment:ions", fragment_ion_codes_);
registerTOPPSubsection_("modifications", "Modification options");
// add modified ribos from database
vector<String> all_mods;
for (const auto& r : *RibonucleotideDB::getInstance())
{
if (r->isModified())
{
String code = r->getCode();
// commas aren't allowed in parameter string restrictions:
all_mods.push_back(code.remove(','));
}
}
registerStringList_("modifications:fixed", "<mods>", ListUtils::create<String>(""), "Fixed modifications, specified using modomics (https://genesilico.pl/modomics/modifications) terms.", false);
setValidStrings_("modifications:fixed", all_mods);
registerStringList_("modifications:variable", "<mods>", ListUtils::create<String>(""), "Variable modifications", false);
setValidStrings_("modifications:variable", all_mods);
registerIntOption_("modifications:variable_max_per_oligo", "<num>", 2, "Maximum number of residues carrying a variable modification per candidate oligonucleotide", false, false);
registerFlag_("modifications:resolve_ambiguities", "Attempt to resolve ambiguous modifications (e.g. 'mA?' for 'mA'/'Am') based on a-B ions.\nThis incurs a performance cost because two modifications have to be considered for each case.\nRequires a-B ions to be enabled in parameter 'fragment:ions'.");
registerTOPPSubsection_("oligo", "Oligonucleotide (digestion) options (ignored if 'digest' input is used)");
registerIntOption_("oligo:min_size", "<num>", 5, "Minimum size an oligonucleotide must have after digestion to be considered in the search", false);
registerIntOption_("oligo:max_size", "<num>", 0, "Maximum size an oligonucleotide may have after digestion to be considered in the search ('0' for no limit)", false);
registerIntOption_("oligo:missed_cleavages", "<num>", 1, "Number of missed cleavages", false, false);
StringList all_enzymes;
RNaseDB::getInstance()->getAllNames(all_enzymes);
registerStringOption_("oligo:enzyme", "<choice>", "no cleavage", "The enzyme used for RNA digestion", false);
setValidStrings_("oligo:enzyme", all_enzymes);
registerTOPPSubsection_("report", "Reporting options");
registerIntOption_("report:top_hits", "<num>", 1, "Maximum number of top-scoring hits per spectrum that are reported ('0' for all hits)", false, true);
setMinInt_("report:top_hits", 0);
registerTOPPSubsection_("fdr", "False Discovery Rate options");
registerStringOption_("fdr:decoy_pattern", "<string>", "", "String used as part of the accession to annotate decoy sequences (e.g. 'DECOY_'). Leave empty to skip the FDR/q-value calculation.", false);
registerDoubleOption_("fdr:cutoff", "<value>", 1.0, "Cut-off for FDR filtering; search hits with higher q-values will be removed", false);
setMinFloat_("fdr:cutoff", 0.0);
setMaxFloat_("fdr:cutoff", 1.0);
registerFlag_("fdr:remove_decoys", "Do not score hits to decoy sequences and remove them when filtering");
}
// relevant information about an MS2 precursor ion
struct PrecursorInfo
{
Size scan_index;
Int charge;
Size isotope;
IdentificationData::AdductOpt adduct;
PrecursorInfo(Size scan_index, Int charge, Size isotope,
const IdentificationData::AdductOpt& adduct = std::nullopt):
scan_index(scan_index), charge(charge), isotope(isotope), adduct(adduct)
{
}
};
// slimmer structure to store basic hit information
struct AnnotatedHit
{
IdentificationData::IdentifiedOligoRef oligo_ref;
NASequence sequence;
double precursor_error_ppm; // precursor mass error in ppm
vector<PeptideHit::PeakAnnotation> annotations; // peak/ion annotations
const PrecursorInfo* precursor_ref; // precursor information
};
typedef multimap<double, AnnotatedHit, greater<double>> HitsByScore;
// query modified residues from database
set<ConstRibonucleotidePtr> getModifications_(const set<String>& mod_names)
{
set<ConstRibonucleotidePtr> modifications;
auto db_ptr = RibonucleotideDB::getInstance();
std::regex double_digits(R"((\d)(?=\d))");
for (String m : mod_names)
{
ConstRibonucleotidePtr mod = nullptr;
try
{
mod = db_ptr->getRibonucleotide(m);
}
catch (Exception::ElementNotFound& /*e*/)
{
// commas between numbers were removed - try reinserting them:
m = std::regex_replace(m, double_digits, "$&,");
mod = db_ptr->getRibonucleotide(m);
}
if (resolve_ambiguous_mods_ && mod->isAmbiguous())
{
pair<ConstRibonucleotidePtr, ConstRibonucleotidePtr> alternatives =
db_ptr->getRibonucleotideAlternatives(m);
modifications.insert(alternatives.first);
modifications.insert(alternatives.second);
// keep track of reverse associations (specific -> ambiguous);
// constraint: each mod. can only occur in one ambiguity group!
ambiguous_mods_[alternatives.first->getCode()] = m;
ambiguous_mods_[alternatives.second->getCode()] = m;
}
else
{
modifications.insert(mod);
}
}
if (ambiguous_mods_.empty()) // no ambiguous mods to resolve
{
resolve_ambiguous_mods_ = false;
}
return modifications;
}
// check for minimum and maximum size
class HasInvalidLength
{
Size min_size_;
Size max_size_;
public:
explicit HasInvalidLength(Size min_size, Size max_size)
: min_size_(min_size), max_size_(max_size)
{
}
bool operator()(const NASequence& s) const { return (s.size() < min_size_ || s.size() > max_size_); }
};
// turn an adduct string (param. "precursor:potential_adducts") into a formula
// @TODO: adapt "AdductInfo::parseAdductString" and use that
AdductInfo parseAdduct_(const String& adduct)
{
StringList parts;
adduct.split(':', parts);
if (parts.size() != 2)
{
String error = "entry in parameter 'precursor:potential_adducts' does not have two parts separated by ':'";
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
error, adduct);
}
// determine charge of adduct (by number of '+' or '-')
Int pos_charge = std::count(parts[1].begin(), parts[1].end(), '+');
Int neg_charge = std::count(parts[1].begin(), parts[1].end(), '-');
OPENMS_LOG_DEBUG << "Adduct '" << adduct << "': charge " << pos_charge - neg_charge << endl;
if (pos_charge > 0 && neg_charge > 0)
{
String error = "entry in parameter 'precursor:potential_adducts' mixes positive and negative charges";
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
error, adduct);
}
EmpiricalFormula ef(parts[0]);
int charge = (pos_charge > 0) ? pos_charge : -neg_charge;
return AdductInfo(adduct, ef, charge);
}
// spectrum must not contain 0 intensity peaks and must be sorted by m/z
void deisotopeAndSingleChargeMSSpectrum_(
MSSpectrum& in,
Int min_charge,
Int max_charge,
double fragment_tolerance,
bool fragment_unit_ppm,
bool keep_only_deisotoped = false,
Size min_isopeaks = 3,
Size max_isopeaks = 10,
bool make_single_charged = true)
{
if (in.empty()) return;
MSSpectrum old_spectrum = in;
// determine charge seeds and extend them
vector<Int> mono_isotopic_peak(old_spectrum.size(), 0); // signed type: charges are negative in negative mode
vector<Int> features(old_spectrum.size(), -1);
Int feature_number = 0;
bool negative_mode = (max_charge < 0);
Int step = negative_mode ? -1 : 1;
for (Size current_peak = 0; current_peak != old_spectrum.size(); ++current_peak)
{
double current_mz = old_spectrum[current_peak].getPosition()[0];
for (Int q = max_charge; abs(q) >= abs(min_charge); q -= step) // important: test charge hypothesis from high to low (in terms of absolute values)
{
// try to extend isotopes from the mono-isotopic peak;
// if an extension of at least min_isopeaks is possible:
// - save charge q in mono_isotopic_peak[]
// - annotate all isotopic peaks with a feature number
if (features[current_peak] == -1) // only process peaks which have no assigned feature number
{
bool has_min_isopeaks = true;
vector<Size> extensions;
for (Size i = 0; i < max_isopeaks; ++i)
{
double expected_mz = current_mz + i * Constants::C13C12_MASSDIFF_U / abs(q);
Size p = old_spectrum.findNearest(expected_mz);
double tolerance_dalton = fragment_unit_ppm ? fragment_tolerance * old_spectrum[p].getPosition()[0] * 1e-6 : fragment_tolerance;
if (fabs(old_spectrum[p].getPosition()[0] - expected_mz) > tolerance_dalton) // test for missing peak
{
if (i < min_isopeaks)
{
has_min_isopeaks = false;
}
break;
}
else
{
/*
// TODO: include proper averagine model filtering. for now start at the second peak to test hypothesis
Size n_extensions = extensions.size();
if (n_extensions != 0)
{
if (old_spectrum[p].getIntensity() > old_spectrum[extensions[n_extensions - 1]].getIntensity())
{
if (i < min_isopeaks)
{
has_min_isopeaks = false;
}
break;
}
}
// averagine check passed
*/
extensions.push_back(p);
}
}
if (has_min_isopeaks)
{
//cout << "min peaks at " << current_mz << ", extensions: " << extensions.size() << endl;
mono_isotopic_peak[current_peak] = q;
for (Size i = 0; i != extensions.size(); ++i)
{
features[extensions[i]] = feature_number;
}
feature_number++;
}
}
}
}
in.clear(false);
for (Size i = 0; i != old_spectrum.size(); ++i)
{
Int z = mono_isotopic_peak[i];
if (keep_only_deisotoped)
{
if (z == 0)
{
continue;
}
// keep the peak as it is if no decharging was selected
if (!make_single_charged)
{
in.push_back(old_spectrum[i]);
}
else // make singly charged
{
Peak1D p = old_spectrum[i];
if (negative_mode) // z < 0 in this case
{
z = abs(z);
p.setMZ(p.getMZ() * z + (z - 1) * Constants::PROTON_MASS_U);
}
else
{
p.setMZ(p.getMZ() * z - (z - 1) * Constants::PROTON_MASS_U);
}
in.push_back(p);
}
}
else
{
// keep all unassigned peaks
if (features[i] < 0)
{
in.push_back(old_spectrum[i]);
continue;
}
// convert mono-isotopic peak with charge assigned by deisotoping
if (z != 0)
{
if (!make_single_charged)
{
in.push_back(old_spectrum[i]);
}
else // make singly charged
{
Peak1D p = old_spectrum[i];
if (negative_mode) // z < 0 in this case
{
z = abs(z);
p.setMZ(p.getMZ() * z + (z - 1) * Constants::PROTON_MASS_U);
}
else
{
p.setMZ(p.getMZ() * z - (z - 1) * Constants::PROTON_MASS_U);
}
in.push_back(p);
}
}
}
}
in.sortByPosition();
}
void preprocessSpectra_(PeakMap& exp, double fragment_mass_tolerance, bool fragment_mass_tolerance_unit_ppm, bool single_charge_spectra, bool negative_mode, Int min_charge, Int max_charge, bool include_unknown_charge)
{
// filter MS2 map
// remove 0 intensities
ThresholdMower threshold_mower_filter;
threshold_mower_filter.filterPeakMap(exp);
Normalizer normalizer;
normalizer.filterPeakMap(exp);
// sort by rt
exp.sortSpectra(false);
// filter settings
WindowMower window_mower_filter;
Param filter_param = window_mower_filter.getParameters();
filter_param.setValue("windowsize", 100.0, "The size of the sliding window along the m/z axis.");
// Note: we expect a higher number for NA than e.g., for peptides
filter_param.setValue("peakcount", 50, "The number of peaks that should be kept.");
filter_param.setValue("movetype", "jump", "Whether sliding window (one peak steps) or jumping window (window size steps) should be used.");
window_mower_filter.setParameters(filter_param);
// Note: we expect a higher number for NA than e.g., for peptides
NLargest nlargest_filter(1000);
Size n_zero_charge = 0, n_inferred_charge = 0;
// use a reduction to avoid a data race on the two counters incremented below:
#pragma omp parallel for reduction(+: n_zero_charge, n_inferred_charge)
for (SignedSize exp_index = 0; exp_index < (SignedSize)exp.size();
++exp_index)
{
MSSpectrum& spec = exp[exp_index];
// sort by mz
spec.sortByPosition();
if (spec.getPrecursors().empty()) continue; // this shouldn't happen
Int precursor_charge = spec.getPrecursors()[0].getCharge();
if (precursor_charge == 0) // no charge information
{
n_zero_charge++;
// maybe we are able to infer the charge state:
if (spec.getPrecursors().size() > 1) // multiplexed PRM experiment
{
// all precursors belong to the same parent, but with different charge
// states; we want to find the precursor with highest charge and infer
// its charge state:
map<double, Size> precursors; // precursor: m/z -> index
for (Size i = 0; i < spec.getPrecursors().size(); ++i)
{
precursors[spec.getPrecursors()[i].getMZ()] = i;
}
double mz1 = precursors.begin()->first;
double mz2 = (++precursors.begin())->first;
double mz_ratio = mz1 / mz2;
Int step = negative_mode ? -1 : 1;
Int inferred_charge = 0;
for (Int charge = max_charge; abs(charge) > abs(min_charge);
charge -= step)
{
double charge_ratio = (abs(charge) - 1.0) / abs(charge);
double ratios_ratio = mz_ratio / charge_ratio;
if ((ratios_ratio > 0.99) && (ratios_ratio < 1.01)) // allow 1% deviation
{
inferred_charge = charge;
break;
}
}
if (inferred_charge == 0)
{
OPENMS_LOG_ERROR
<< "Error: unable to determine charge state for spectrum '"
<< spec.getNativeID() << "' based on precursor m/z values "
<< mz1 << " and " << mz2 << endl;
}
else
{
++n_inferred_charge;
OPENMS_LOG_DEBUG << "Inferred charge state " << inferred_charge
<< " for spectrum '" << spec.getNativeID() << "'"
<< endl;
// keep only precursor with highest charge, set inferred charge:
Precursor prec = spec.getPrecursors()[precursors.begin()->second];
prec.setCharge(abs(inferred_charge));
spec.setPrecursors(vector<Precursor>(1, prec));
}
}
}
// deisotope
Int coef = negative_mode ? -1 : 1;
// @TODO: what happens here if "precursor_charge" is zero?
deisotopeAndSingleChargeMSSpectrum_(spec, coef, coef * precursor_charge, fragment_mass_tolerance, fragment_mass_tolerance_unit_ppm, false, 3, 20, single_charge_spectra);
// remove noise
window_mower_filter.filterPeakSpectrum(spec);
nlargest_filter.filterPeakSpectrum(spec);
// sort (nlargest changes order)
spec.sortByPosition();
}
if (n_zero_charge)
{
OPENMS_LOG_WARN << "Warning: no charge state information available for "
<< n_zero_charge << " out of " << exp.size()
<< " spectra." << endl;
if (n_inferred_charge)
{
OPENMS_LOG_INFO << "Inferred charge states for " << n_inferred_charge
<< " spectra." << endl;
}
if (n_zero_charge - n_inferred_charge > 0)
{
OPENMS_LOG_INFO
<< "Spectra without charge information will be "
<< (include_unknown_charge ? "included in the processing" : "skipped")
<< " (see parameter 'precursor:include_unknown_charge')" << endl;
}
}
}
double calculatePrecursorMass_(double mz, Int charge, Int isotope,
double adduct_mass, bool negative_mode)
{
// we want to calculate the unadducted (!) precursor mass at neutral charge:
double mass = mz * charge - adduct_mass;
// compensate for loss or gain of protons that confer the charge:
if (negative_mode)
{
mass += Constants::PROTON_MASS_U * charge;
}
else
{
mass -= Constants::PROTON_MASS_U * charge;
}
// correct for precursor not being the monoisotopic peak:
mass -= isotope * Constants::C13C12_MASSDIFF_U;
return mass;
}
void resolveAmbiguousMods_(HitsByScore& hits)
{
OPENMS_PRECONDITION(hits.size() > 1, "more than one hit expected");
auto previous_it = hits.begin();
// If the current hit is an ambiguity variant of the previous one, combine
// both into one hit. For example, if we have two hits with these sequences:
// 1. "AUC[mA]Gp", 2. "AUC[Am]Gp"
// The result should be: 1. "AUC[mA?]Gp" (note ambiguity code), 2. removed.
for (auto hit_it = ++hits.begin(); hit_it != hits.end(); /* no ++ here! */)
{
double previous_score = previous_it->first;
NASequence& previous_seq = previous_it->second.sequence;
const NASequence& current_seq = hit_it->second.sequence;
if ((hit_it->first != previous_score) ||
(current_seq.size() != previous_seq.size())) // different hits
{
previous_it = hit_it;
++hit_it;
continue;
}
bool remove_current = true;
NASequence replacement; // potential replacement sequence for previous hit
for (Size i = 0; i < current_seq.size(); ++i)
{
if (previous_seq[i]->getCode() == current_seq[i]->getCode()) continue;
if (const auto pos_current = ambiguous_mods_.find(current_seq[i]->getCode()); pos_current == ambiguous_mods_.end())
{
// difference is not due to an ambiguous mod. - don't combine hits:
remove_current = false;
break;
}
else
{
// is this ribonucleotide in the previous hit already ambiguous?
const String& ambig_code = pos_current->second;
if (previous_seq[i]->getCode() == ambig_code) continue;
// if not, should we replace it with an ambiguous mod.?
if (const auto pos_previous = ambiguous_mods_.find(previous_seq[i]->getCode());
(pos_previous == ambiguous_mods_.end()) ||
(pos_previous->second != ambig_code)) // mods don't match
{
remove_current = false;
break;
}
if (replacement.empty()) replacement = previous_seq;
replacement[i] = RibonucleotideDB::getInstance()->
getRibonucleotide(ambig_code);
}
}
if (remove_current) // current hit is redundant -> remove it
{
if (!replacement.empty()) previous_seq = replacement;
hit_it = hits.erase(hit_it);
}
else
{
previous_it = hit_it;
++hit_it;
}
}
}
void postProcessHits_(const PeakMap& exp,
vector<HitsByScore>& annotated_hits,
IdentificationData& id_data,
bool negative_mode)
{
IdentificationData::InputFileRef file_ref = id_data.getInputFiles().begin();
IdentificationData::ScoreTypeRef score_ref =
id_data.getScoreTypes().begin();
// @TODO: change OpenMP schedule from default ("static") to "dynamic"/"guided"?
#pragma omp parallel for
for (SignedSize scan_index = 0;
scan_index < (SignedSize)annotated_hits.size(); ++scan_index)
{
if (annotated_hits[scan_index].empty()) continue;
const MSSpectrum& spectrum = exp[scan_index];
IdentificationData::Observation obs(spectrum.getNativeID(), file_ref,
spectrum.getRT(),
spectrum.getPrecursors()[0].getMZ());
obs.setMetaValue("scan_index", static_cast<unsigned int>(scan_index));
obs.setMetaValue("precursor_intensity",
spectrum.getPrecursors()[0].getIntensity());
IdentificationData::ObservationRef obs_ref;
#pragma omp critical (id_data_access)
obs_ref = id_data.registerObservation(obs);
if (resolve_ambiguous_mods_ && (annotated_hits[scan_index].size() > 1))
{
resolveAmbiguousMods_(annotated_hits[scan_index]);
}
// create full oligo hit structure from annotated hits
for (const auto& pair : annotated_hits[scan_index])
{
double score = pair.first;
const AnnotatedHit& hit = pair.second;
OPENMS_LOG_DEBUG << "Hit sequence: " << hit.sequence.toString() << endl;
// transfer parent matches from unmodified oligo:
IdentificationData::IdentifiedOligo oligo = *hit.oligo_ref;
oligo.sequence = hit.sequence;
IdentificationData::IdentifiedOligoRef oligo_ref;
#pragma omp critical (id_data_access)
oligo_ref = id_data.registerIdentifiedOligo(oligo);
Int charge = hit.precursor_ref->charge;
if ((charge > 0) && negative_mode) charge = -charge;
IdentificationData::ObservationMatch match(oligo_ref, obs_ref, charge);
match.addScore(score_ref, score, id_data.getCurrentProcessingStep());
match.peak_annotations[id_data.getCurrentProcessingStep()] =
hit.annotations;
// @TODO: add a field for this to "IdentificationData::ObservationMatch"?
match.setMetaValue(Constants::UserParam::PRECURSOR_ERROR_PPM_USERPARAM,
hit.precursor_error_ppm);
match.setMetaValue("isotope_offset", hit.precursor_ref->isotope);
match.adduct_opt = hit.precursor_ref->adduct;
#pragma omp critical (id_data_access)
id_data.registerObservationMatch(match);
}
}
id_data.cleanup();
}
void calculateAndFilterFDR_(IdentificationData& id_data, bool only_top_hits)
{
IdentificationData::ScoreTypeRef score_ref = id_data.findScoreType("hyperscore");
FalseDiscoveryRate fdr;
Param fdr_params = fdr.getDefaults();
fdr_params.setValue("use_all_hits", only_top_hits ? "false" : "true");
bool remove_decoys = getFlag_("fdr:remove_decoys");
fdr_params.setValue("add_decoy_peptides", remove_decoys ? "false" : "true");
fdr.setParameters(fdr_params);
IdentificationData::ScoreTypeRef fdr_ref =
fdr.applyToObservationMatches(id_data, score_ref);
double fdr_cutoff = getDoubleOption_("fdr:cutoff");
if (remove_decoys) // remove references to decoys from shared oligos
{
IDFilter::removeDecoys(id_data);
}
if (fdr_cutoff < 1.0)
{
IDFilter::filterObservationMatchesByScore(id_data, fdr_ref, fdr_cutoff);
OPENMS_LOG_INFO << "Search hits after FDR filtering: "
<< id_data.getObservationMatches().size()
<< "\nIdentified spectra after FDR filtering: "
<< id_data.getObservations().size() << endl;
}
}
void generateLFQInput_(IdentificationData& id_data, const String& out_file)
{
using AdductedOligo = pair<NASequence, IdentificationData::AdductOpt>;
using PrecursorPair = pair<double, double>; // precursor intensity, RT
// mapping: charge -> list of precursors
using PrecursorsByCharge = map<Int, vector<PrecursorPair>>;
map<AdductedOligo, PrecursorsByCharge> rt_info;
for (const IdentificationData::ObservationMatch& match :
id_data.getObservationMatches())
{
const NASequence& seq =
match.identified_molecule_var.getIdentifiedOligoRef()->sequence;
auto key = make_pair(seq, match.adduct_opt);
double rt = match.observation_ref->rt;
double prec_int =
match.observation_ref->getMetaValue("precursor_intensity");
rt_info[key][match.charge].push_back(make_pair(prec_int, rt));
}
SVOutStream tsv(out_file);
tsv.modifyStrings(false);
tsv << "CompoundName" << "SumFormula" << "Mass" << "Charge"
<< "RetentionTime" << "RetentionTimeRange" << "IsoDistribution" << endl;
for (const auto& entry : rt_info)
{
String name = entry.first.first.toString();
EmpiricalFormula ef = entry.first.first.getFormula();
const IdentificationData::AdductOpt& adduct = entry.first.second;
if (adduct)
{
name += "+[" + (*adduct)->getName() + "]";
ef += (*adduct)->getEmpiricalFormula();
}
// @TODO: use charge-specific RTs?
vector<Int> charges;
vector<double> rts;
for (const auto& charge_pair : entry.second)
{
charges.push_back(charge_pair.first);
// use intensity-weighted mean of precursor RTs as "apex" RT:
double weighted_rt = 0.0, total_weight = 0.0;
for (const auto& rt_pair : charge_pair.second)
{
weighted_rt += rt_pair.first * rt_pair.second;
total_weight += rt_pair.first;
}
rts.push_back(weighted_rt / total_weight);
}
tsv << name << ef << 0 << ListUtils::concatenate(charges, ",");
// overall target RT is median over all charge states:
tsv << Math::median(rts.begin(), rts.end(), false) << 0 << 0 << endl;
}
}
ExitCodes main_(int, const char**) override
{
ProgressLogger progresslogger;
progresslogger.setLogType(log_type_);
IdentificationData id_data; // container for results
// load parameters and check validity:
String in_mzml = getStringOption_("in");
String in_db = getStringOption_("database");
String in_digest = getStringOption_("digest");
if (in_db.empty() && in_digest.empty())
{
OPENMS_LOG_ERROR << "Error: parameter 'database' or 'digest' must be set"
<< endl;
return ILLEGAL_PARAMETERS;
}
if (!in_db.empty() && !in_digest.empty())
{
OPENMS_LOG_WARN
<< "Warning: both 'database' and 'digest' are set; ignoring 'database'"
<< endl;
}
String out = getStringOption_("out");
String id_out = getStringOption_("id_out");
String db_out = getStringOption_("db_out");
String lfq_out = getStringOption_("lfq_out");
String theo_ms2_out = getStringOption_("theo_ms2_out");
String exp_ms2_out = getStringOption_("exp_ms2_out");
bool use_avg_mass = getFlag_("precursor:use_avg_mass");
Int min_charge = getIntOption_("precursor:min_charge");
Int max_charge = getIntOption_("precursor:max_charge");
// @TODO: allow zero to mean "any charge state in the data"?
if ((min_charge == 0) || (max_charge == 0))
{
OPENMS_LOG_ERROR << "Error: invalid charge state 0" << endl;
return ILLEGAL_PARAMETERS;
}
// charges can be positive or negative, depending on data acquisition mode:
if (((min_charge < 0) && (max_charge > 0)) ||
((min_charge > 0) && (max_charge < 0)))
{
OPENMS_LOG_ERROR << "Error: mixing positive and negative charges is not allowed"
<< endl;
return ILLEGAL_PARAMETERS;
}
// min./max. are based on absolute value:
if (abs(max_charge) < abs(min_charge)) swap(min_charge, max_charge);
bool negative_mode = (max_charge < 0);
Int charge_step = negative_mode ? -1 : 1;
IdentificationData::DBSearchParam search_param;
for (Int charge = min_charge; abs(charge) <= abs(max_charge);
charge += charge_step)
{
search_param.charges.insert(charge);
}
search_param.molecule_type = IdentificationData::MoleculeType::RNA;
search_param.mass_type = (use_avg_mass ?
IdentificationData::MassType::AVERAGE :
IdentificationData::MassType::MONOISOTOPIC);
search_param.precursor_mass_tolerance =
getDoubleOption_("precursor:mass_tolerance");
search_param.precursor_tolerance_ppm =
(getStringOption_("precursor:mass_tolerance_unit") == "ppm");
search_param.fragment_mass_tolerance =
getDoubleOption_("fragment:mass_tolerance");
search_param.fragment_tolerance_ppm =
(getStringOption_("fragment:mass_tolerance_unit") == "ppm");
search_param.min_length = getIntOption_("oligo:min_size");
search_param.max_length = getIntOption_("oligo:max_size");
StringList fixed_mod_names = getStringList_("modifications:fixed");
search_param.fixed_mods.insert(fixed_mod_names.begin(),
fixed_mod_names.end());
StringList var_mod_names = getStringList_("modifications:variable");
search_param.variable_mods.insert(var_mod_names.begin(),
var_mod_names.end());
resolve_ambiguous_mods_ = getFlag_("modifications:resolve_ambiguities");
set<ConstRibonucleotidePtr> fixed_modifications =
getModifications_(search_param.fixed_mods);
set<ConstRibonucleotidePtr> variable_modifications =
getModifications_(search_param.variable_mods);
// @TODO: add slots for these to "IdentificationData::DBSearchParam"?
IntList precursor_isotopes = (use_avg_mass ? vector<Int>(1, 0) :
getIntList_("precursor:isotopes"));
Size max_variable_mods_per_oligo =
getIntOption_("modifications:variable_max_per_oligo");
Size report_top_hits = getIntOption_("report:top_hits");
StringList potential_adducts =
getStringList_("precursor:potential_adducts");
// @TODO: allow different adducts with same mass?
map<double, IdentificationData::AdductOpt> adduct_masses;
adduct_masses[0.0] = std::nullopt; // always consider "no adduct"
bool use_adducts = getFlag_("precursor:use_adducts");
bool include_unknown_charge = getFlag_("precursor:include_unknown_charge");
bool single_charge_spectra = getFlag_("decharge_ms2");
if (use_adducts)
{
for (const String& adduct_name : potential_adducts)
{
AdductInfo adduct = parseAdduct_(adduct_name);
double mass = adduct.getMassShift(use_avg_mass);
IdentificationData::AdductRef ref = id_data.registerAdduct(adduct);
adduct_masses[mass] = ref;
OPENMS_LOG_DEBUG << "Added adduct: " << adduct_name << ", mass shift: "
<< mass << endl;
}
}
// load MS2 map
MSExperiment spectra;
FileHandler f;
PeakFileOptions options;
options.clearMSLevels();
options.addMSLevel(2);
f.setOptions(options);
f.loadExperiment(in_mzml, spectra, {FileTypes::MZML}, log_type_);
spectra.sortSpectra(true);
// input file meta data:
String input_name = test_mode_ ? File::basename(in_mzml) : in_mzml;
IdentificationData::InputFile input(input_name);
vector<String> primary_files;
spectra.getPrimaryMSRunPath(primary_files);
input.primary_files.insert(primary_files.begin(), primary_files.end());
IdentificationData::InputFileRef file_ref =
id_data.registerInputFile(input);
// processing software meta data:
IdentificationData::ScoreType score("hyperscore", true);
IdentificationData::ScoreTypeRef hyperscore_ref =
id_data.registerScoreType(score);
CVTerm qvalue("MS:1002354", "PSM-level q-value", "MS");
score = IdentificationData::ScoreType(qvalue, false);
IdentificationData::ScoreTypeRef qvalue_ref =
id_data.registerScoreType(score);
IdentificationData::ProcessingSoftware software(toolName_(), version_);
// in test mode just overwrite with a generic version:
if (test_mode_) software.setVersion("test");
// @TODO: which should be the "primary" (first) score?
software.assigned_scores.push_back(hyperscore_ref);
software.assigned_scores.push_back(qvalue_ref);
IdentificationData::ProcessingSoftwareRef software_ref =
id_data.registerProcessingSoftware(software);
// @TODO: add suitable data processing action
IdentificationData::ProcessingStep step(software_ref, {file_ref});
IdentificationData::ProcessingStepRef step_ref;
// get digested sequences:
String decoy_pattern = getStringOption_("fdr:decoy_pattern");
if (in_digest.empty()) // new digestion
{
search_param.database = in_db;
search_param.missed_cleavages = getIntOption_("oligo:missed_cleavages");
String enzyme_name = getStringOption_("oligo:enzyme");
search_param.digestion_enzyme =
RNaseDB::getInstance()->getEnzyme(enzyme_name);
IdentificationData::SearchParamRef search_ref =
id_data.registerDBSearchParam(search_param);
step_ref = id_data.registerProcessingStep(step, search_ref);
// reference this step in all following ID data items, if applicable:
id_data.setCurrentProcessingStep(step_ref);
RNaseDigestion digestor;
digestor.setEnzyme(search_param.digestion_enzyme);
digestor.setMissedCleavages(search_param.missed_cleavages);
// set minimum and maximum size of oligo after digestion
Size min_oligo_length = getIntOption_("oligo:min_size");
Size max_oligo_length = getIntOption_("oligo:max_size");
progresslogger.startProgress(0, 1, "loading database from FASTA file...");
vector<FASTAFile::FASTAEntry> fasta_db;
FASTAFile().load(in_db, fasta_db);
progresslogger.endProgress();
OPENMS_LOG_INFO << "Performing in-silico digestion..." << endl;
IdentificationDataConverter::importSequences(
id_data, fasta_db, IdentificationData::MoleculeType::RNA, decoy_pattern);
digestor.digest(id_data, min_oligo_length, max_oligo_length);
String digest_out = getStringOption_("digest_out");
if (!digest_out.empty())
{
OMSFile(log_type_).store(digest_out, id_data);
}
}
else // load digestion results from a previous run
{
OPENMS_LOG_INFO << "Loading pre-digested sequence data..." << endl;
OMSFile(log_type_).load(in_digest, id_data);
if (id_data.getDBSearchParams().empty())
{
OPENMS_LOG_WARN
<< "Warning: no search parameter information found in 'digest' input"
<< endl;
}
IdentificationData::SearchParamRef search_ref =
id_data.getDBSearchParams().begin();
step_ref = id_data.registerProcessingStep(step, search_ref);
// reference this step in all following ID data items:
id_data.setCurrentProcessingStep(step_ref);
}
Size n_nucleic_acids = id_data.getParentSequences().size();
if (!decoy_pattern.empty())
{
bool no_decoys = none_of(
id_data.getParentSequences().begin(),
id_data.getParentSequences().end(),
[](const IdentificationData::ParentSequence& p){ return p.is_decoy; });
if (no_decoys)
{
OPENMS_LOG_ERROR << "Error: 'fdr:decoy_pattern' is set, but no decoy sequences were found" << endl;
return ILLEGAL_PARAMETERS;
}
}
progresslogger.startProgress(0, 1, "filtering spectra...");
// @TODO: move this into the loop below (run only when checks pass)
preprocessSpectra_(spectra, search_param.fragment_mass_tolerance,
search_param.fragment_tolerance_ppm,
single_charge_spectra, negative_mode, min_charge,
max_charge, include_unknown_charge);
progresslogger.endProgress();
OPENMS_LOG_DEBUG << "preprocessed spectra: " << spectra.getNrSpectra()
<< endl;
// build multimap of precursor mass to scan index (and other information):
multimap<double, PrecursorInfo> precursor_mass_map;
for (PeakMap::ConstIterator s_it = spectra.begin(); s_it != spectra.end();
++s_it)
{
int scan_index = s_it - spectra.begin();
const vector<Precursor>& precursors = s_it->getPrecursors();
// there should be only one precursor and MS2 should contain at least a
// few peaks to be considered (at least one per nucleotide in the chain):
if ((precursors.size() != 1) || (s_it->size() < search_param.min_length))
{
continue;
}
set<Int> possible_charges;
Int precursor_charge = precursors[0].getCharge();
if (precursor_charge == 0) // charge information missing
{
if (include_unknown_charge)
{
possible_charges = search_param.charges; // try all allowed charges
}
else
{
continue; // skip
}
}
// compare to charge parameters (the charge value in mzML seems to be
// always positive, so compare by absolute value in negative mode):
else if ((negative_mode &&
((precursor_charge > abs(*search_param.charges.begin())) ||
(precursor_charge < abs(*(--search_param.charges.end()))))) ||
(!negative_mode &&
((precursor_charge < *search_param.charges.begin()) ||
(precursor_charge > *(--search_param.charges.end())))))
{
continue; // charge not in user-supplied range
}
else
{
possible_charges.insert(precursor_charge); // only one possibility
}
double precursor_mz = precursors[0].getMZ();
for (Int candidate_charge : possible_charges) // don't shadow "precursor_charge" above
{
candidate_charge = abs(candidate_charge); // adjust for neg. mode
// calculate precursor mass (optionally corrected for adducts and peak
// misassignment) and map it to MS scan index etc.:
for (const auto& adduct_pair : adduct_masses)
{
for (Int isotope_number : precursor_isotopes)
{
double precursor_mass =
calculatePrecursorMass_(precursor_mz, candidate_charge,
isotope_number, adduct_pair.first,
negative_mode);
PrecursorInfo info(scan_index, candidate_charge, isotope_number,
adduct_pair.second);
precursor_mass_map.insert(make_pair(precursor_mass, info));
}
}
}
}
// create spectrum generator:
NucleicAcidSpectrumGenerator spectrum_generator;
Param param = spectrum_generator.getParameters();
vector<String> temp = getStringList_("fragment:ions");
set<String> selected_ions(temp.begin(), temp.end());
if (resolve_ambiguous_mods_ && !selected_ions.count("a-B"))
{
OPENMS_LOG_WARN << "Warning: option 'modifications:resolve_ambiguities' requires a-B ions in parameter 'fragment:ions' - disabling the option." << endl;
resolve_ambiguous_mods_ = false;
}
for (const auto& code : fragment_ion_codes_)
{
String param_name = "add_" + code + "_ions";
if (selected_ions.count(code))
{
param.setValue(param_name, "true");
}
else
{
param.setValue(param_name, "false");
}
}
param.setValue("add_first_prefix_ion", "true");
param.setValue("add_metainfo", "true");
param.setValue("add_precursor_peaks", "false");
spectrum_generator.setParameters(param);
vector<HitsByScore> annotated_hits(spectra.size());
MSExperiment exp_ms2_spectra, theo_ms2_spectra; // debug output
String msg = "scoring oligonucleotide models against spectra...";
progresslogger.startProgress(0, id_data.getIdentifiedOligos().size(), msg);
Size hit_counter = 0;
// keep a list of (references to) oligos in the original digest:
vector<IdentificationData::IdentifiedOligoRef> digest;
digest.reserve(id_data.getIdentifiedOligos().size());
for (IdentificationData::IdentifiedOligoRef it =
id_data.getIdentifiedOligos().begin(); it !=
id_data.getIdentifiedOligos().end(); ++it)
{
digest.push_back(it);
}
Int base_charge = negative_mode ? -1 : 1;
// shorter oligos take (possibly much) less time to process than longer ones;
// due to the sorting order of "NASequence", they also appear earlier in the
// container - therefore use dynamic scheduling to distribute work evenly:
#pragma omp parallel for schedule(dynamic)
for (SignedSize index = 0; index < SignedSize(digest.size()); ++index)
{
IF_MASTERTHREAD
{
progresslogger.setProgress(index);
}
IdentificationData::IdentifiedOligoRef oligo_ref = digest[index];
vector<NASequence> all_modified_oligos;
NASequence ns = oligo_ref->sequence;
ModifiedNASequenceGenerator::applyFixedModifications(
fixed_modifications, ns);
ModifiedNASequenceGenerator::applyVariableModifications(
variable_modifications, ns, max_variable_mods_per_oligo,
all_modified_oligos, true);
OPENMS_LOG_DEBUG << "Oligo: " << ns.toString() << endl;
// group modified oligos by precursor mass - oligos with the same
// combination of mods (just different placements) will have same mass:
map<double, vector<const NASequence*>> modified_oligos_by_mass;
for (const NASequence& seq : all_modified_oligos)
{
double mass = (use_avg_mass ? seq.getAverageWeight() :
seq.getMonoWeight());
modified_oligos_by_mass[mass].push_back(&seq);
}
for (const auto& pair : modified_oligos_by_mass)
{
double candidate_mass = pair.first;
// determine MS2 precursors that match to the current mass:
double tol = search_param.precursor_mass_tolerance;
if (search_param.precursor_tolerance_ppm)
{
tol *= candidate_mass * 1e-6;
}
multimap<double, PrecursorInfo>::const_iterator low_it =
precursor_mass_map.lower_bound(candidate_mass - tol), up_it =
precursor_mass_map.upper_bound(candidate_mass + tol);
if (low_it == up_it) continue; // no matching precursor in data
// collect all relevant charge states for theoret. spectrum generation:
set<Int> precursor_charges;
for (auto prec_it = low_it; prec_it != up_it; ++prec_it) // OMS_CODING_TEST_EXCLUDE
{
precursor_charges.insert(prec_it->second.charge * base_charge);
}
for (const NASequence* seq_ptr : pair.second)
{
const NASequence& candidate = *seq_ptr;
OPENMS_LOG_DEBUG << "Candidate: " << candidate.toString() << " ("
<< float(candidate_mass) << " Da)" << endl;
// pre-generate spectra:
map<Int, MSSpectrum> theo_spectra_by_charge;
spectrum_generator.getMultipleSpectra(theo_spectra_by_charge,
candidate, precursor_charges,
base_charge);
for (auto prec_it = low_it; prec_it != up_it; ++prec_it) // OMS_CODING_TEST_EXCLUDE
{
OPENMS_LOG_DEBUG << "Matching precursor mass: "
<< float(prec_it->first) << endl;
          Size charge = prec_it->second.charge;
          // look up theoretical spectrum for this charge (use a signed
          // product so negative mode works reliably):
          MSSpectrum& theo_spectrum =
            theo_spectra_by_charge[Int(charge) * base_charge];
Size scan_index = prec_it->second.scan_index;
const MSSpectrum& exp_spectrum = spectra[scan_index];
vector<PeptideHit::PeakAnnotation> annotations;
double score = MetaboliteSpectralMatching::computeHyperScore(
search_param.fragment_mass_tolerance,
search_param.fragment_tolerance_ppm, exp_spectrum, theo_spectrum,
annotations);
if (!exp_ms2_out.empty())
{
#pragma omp critical (exp_ms2_out)
exp_ms2_spectra.addSpectrum(exp_spectrum);
}
if (!theo_ms2_out.empty())
{
theo_spectrum.setName(candidate.toString());
theo_spectrum.setRT(exp_spectrum.getRT());
#pragma omp critical (theo_ms2_out)
theo_ms2_spectra.addSpectrum(theo_spectrum);
}
if (score < 1e-16) continue; // no hit
#pragma omp atomic
++hit_counter;
OPENMS_LOG_DEBUG << "Score: " << score << endl;
#pragma omp critical (annotated_hits_access)
{
HitsByScore& scan_hits = annotated_hits[scan_index];
HitsByScore::iterator pos = scan_hits.end();
if ((report_top_hits == 0) ||
(scan_hits.size() < report_top_hits))
{
pos = scan_hits.insert(make_pair(score, AnnotatedHit()));
}
else // already have enough hits for this spectrum - replace one?
{
double worst_score = (--scan_hits.end())->first;
if (score >= worst_score)
{
pos = scan_hits.insert(make_pair(score, AnnotatedHit()));
// prune list of hits if possible (careful about tied scores):
Size n_worst = scan_hits.count(worst_score);
if (scan_hits.size() - n_worst >= report_top_hits)
{
scan_hits.erase(worst_score);
}
}
}
// add oligo hit data only if necessary (good enough score):
if (pos != scan_hits.end())
{
AnnotatedHit& ah = pos->second;
ah.oligo_ref = oligo_ref;
ah.sequence = candidate;
// @TODO: is "observed - calculated" the right way around?
ah.precursor_error_ppm =
(prec_it->first - candidate_mass) / candidate_mass * 1.0e6;
ah.annotations = annotations;
ah.precursor_ref = &(prec_it->second);
}
}
}
}
}
}
progresslogger.endProgress();
OPENMS_LOG_INFO << "Undigested nucleic acids: " << n_nucleic_acids
<< "\nOligonucleotides: "
<< id_data.getIdentifiedOligos().size()
<< "\nSearch hits (spectrum matches): " << hit_counter
<< endl;
if (!exp_ms2_out.empty())
{
FileHandler().storeExperiment(exp_ms2_out, exp_ms2_spectra, {FileTypes::MZML}, log_type_);
}
if (!theo_ms2_out.empty())
{
FileHandler().storeExperiment(theo_ms2_out, theo_ms2_spectra, {FileTypes::MZML}, log_type_);
}
progresslogger.startProgress(0, 1, "post-processing search hits...");
postProcessHits_(spectra, annotated_hits, id_data, negative_mode);
progresslogger.endProgress();
OPENMS_LOG_INFO << "Identified spectra: " << id_data.getObservations().size()
<< endl;
// FDR:
if (!decoy_pattern.empty())
{
OPENMS_LOG_INFO << "Performing FDR calculations..." << endl;
calculateAndFilterFDR_(id_data, report_top_hits == 1);
}
id_data.calculateCoverages();
// store results
if (!db_out.empty())
{
OMSFile(log_type_).store(db_out, id_data);
}
MzTab results = IdentificationDataConverter::exportMzTab(id_data);
OPENMS_LOG_DEBUG << "Nucleic acid rows: "
<< results.getNucleicAcidSectionRows().size()
<< "\nOligonucleotide rows: "
<< results.getOligonucleotideSectionRows().size()
<< "\nOligo-spectrum match rows: "
<< results.getOSMSectionRows().size() << endl;
MzTabFile().store(out, results);
// dummy "peptide" results:
if (!id_out.empty())
{
if (!digest.empty())
{
// RNA seqs. were imported from a previous search run - need to "tag"
// them with the current processing step so they get exported properly:
for (IdentificationData::ParentSequenceRef ref =
id_data.getParentSequences().begin(); ref !=
id_data.getParentSequences().end(); ++ref)
{
// @TODO: find a way to avoid the copying (modify in place?):
IdentificationData::ParentSequence copy = *ref;
copy.addProcessingStep(id_data.getCurrentProcessingStep());
id_data.registerParentSequence(copy);
}
}
vector<ProteinIdentification> proteins;
PeptideIdentificationList peptides;
IdentificationDataConverter::exportIDs(id_data, proteins, peptides);
FileHandler().storeIdentifications(id_out, proteins, peptides, {FileTypes::IDXML});
}
if (!lfq_out.empty())
{
generateLFQInput_(id_data, lfq_out);
}
return EXECUTION_OK;
}
};
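The precursor matching step in the search loop above (a tolerance optionally widened per candidate when given in ppm, then a `lower_bound`/`upper_bound` scan over a mass-sorted multimap) can be sketched in isolation; `matchPrecursors` and its payload type are hypothetical names for illustration:

```cpp
#include <map>
#include <vector>

// Return the payloads of all entries within +/- 'tol' of 'mass'.
// If 'ppm' is set, 'tol' is interpreted as parts-per-million of the
// candidate mass (mirroring the tolerance handling above).
template <typename T>
std::vector<T> matchPrecursors(const std::multimap<double, T>& by_mass,
                               double mass, double tol, bool ppm)
{
  if (ppm) tol *= mass * 1e-6;
  auto low = by_mass.lower_bound(mass - tol);
  auto up = by_mass.upper_bound(mass + tol);
  std::vector<T> out;
  for (auto it = low; it != up; ++it) out.push_back(it->second);
  return out;
}
```

As in the search loop, an empty range (`low == up`) means the candidate mass can be skipped before any theoretical spectra are generated.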
int main(int argc, const char** argv)
{
NucleicAcidSearchEngine tool;
return tool.main(argc, argv);
}
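The per-spectrum hit bookkeeping above — keep at most `report_top_hits` entries in a score-sorted multimap, keep ties with the current worst score, and prune only when enough strictly better hits remain — reduces to the following sketch (assuming, as the "worst score at the end" access suggests, descending score order; `Hit` is a simplified stand-in for the real annotated-hit payload):

```cpp
#include <cstddef>
#include <functional>
#include <iterator>
#include <map>
#include <string>

struct Hit { std::string name; };  // simplified stand-in for AnnotatedHit

// Hits sorted by descending score; the "worst" hit is the last element.
using HitsByScore = std::multimap<double, Hit, std::greater<double>>;

// Insert a scored hit, keeping at most 'top_n' hits (0 = unlimited).
void insertTopN(HitsByScore& hits, double score, Hit hit, std::size_t top_n)
{
  if (top_n == 0 || hits.size() < top_n)  // still room
  {
    hits.emplace(score, std::move(hit));
    return;
  }
  const double worst = std::prev(hits.end())->first;
  if (score < worst) return;  // not good enough to be reported
  hits.emplace(score, std::move(hit));
  // Prune only if dropping all ties at 'worst' still leaves 'top_n' hits
  // (careful about tied scores, as in the original):
  if (hits.size() - hits.count(worst) >= top_n) hits.erase(worst);
}
```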
| C++ |
3D | OpenMS/OpenMS | src/topp/Epifany.cpp | .cpp | 19,574 | 474 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Julianus Pfeuffer $
// $Authors: Julianus Pfeuffer $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/FORMAT/ExperimentalDesignFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/METADATA/ExperimentalDesign.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/SYSTEM/StopWatch.h>
#include <OpenMS/ANALYSIS/ID/BayesianProteinInferenceAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/ConsensusMapMergerAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/FalseDiscoveryRate.h>
#include <OpenMS/ANALYSIS/ID/IDMergerAlgorithm.h>
#include <OpenMS/ANALYSIS/ID/PeptideProteinResolution.h>
#include <OpenMS/ANALYSIS/ID/IDScoreSwitcherAlgorithm.h>
#include <vector>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_Epifany Epifany
@brief EPIFANY - Efficient protein inference for any peptide-protein network is a Bayesian
protein inference engine. It uses PSM (posterior) probabilities from Percolator, OpenMS' IDPosteriorErrorProbability
or similar tools to calculate posterior probabilities for proteins and protein groups.
@experimental This tool is work in progress and usage and input requirements might change.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → Epifany →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PercolatorAdapter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDPosteriorErrorProbability </td>
</tr>
</table>
</center>
<p>It is a protein inference engine based on a Bayesian network. Currently the same model as
Fido is used, with the main parameters alpha (pep_emission), beta (pep_spurious_emission) and gamma (prot_prior).
If not specified,
these parameters are trained based on their classification performance and calibration: the tool
runs a grid search over several possible combinations and evaluates each. Unless you see very extreme output
probabilities (e.g. many close to 1.0) or you know good parameters (e.g. from an earlier run),
grid search is recommended although slower. The tool will merge multiple idXML files (union of proteins
and concatenation of PSMs) when given more than one. It assumes one search engine run per input file but
might work on more. Proteins need to be indexed by OpenMS's PeptideIndexer but this is usually done before
Percolator/IDPEP since target/decoy associations are needed there already. Make sure that the input PSM
probabilities are not too extreme already (garbage in - garbage out). After merging the input probabilities
are preprocessed with a low posterior probability cutoff to neglect very unreliable matches. Then
the probabilities are aggregated with the maximum per peptide and the graph is built and split into
connected components. When compiled with the OpenMP
flag (default enabled in the release binaries) the tool is multi-threaded which can
be activated at runtime by the threads parameter. Note that peak memory requirements
may rise significantly when processing multiple components of the graph at the same time.
</p>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_Epifany.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_Epifany.html
*/
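The aggregation step described above ("the probabilities are aggregated with the maximum per peptide") amounts to a keyed max-reduction; a minimal sketch with a hypothetical `aggregateMaxPerPeptide` helper over (sequence, probability) pairs:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Reduce PSM-level posterior probabilities to peptide level by keeping
// the maximum probability observed per peptide sequence.
std::map<std::string, double>
aggregateMaxPerPeptide(const std::vector<std::pair<std::string, double>>& psms)
{
  std::map<std::string, double> best;
  for (const auto& [seq, prob] : psms)
  {
    auto [it, inserted] = best.emplace(seq, prob);
    if (!inserted) it->second = std::max(it->second, prob);
  }
  return best;
}
```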
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class Epifany :
public TOPPBase
{
public:
Epifany() :
TOPPBase("Epifany", "Runs a Bayesian protein inference.")
{
}
protected:
void registerOptionsAndFlags_() override
{
//TODO support separate runs
registerInputFileList_("in", "<file>", StringList(), "Input: identification results");
setValidFormats_("in", {"idXML","consensusXML"});
registerInputFile_("exp_design", "<file>", "", "Input: experimental design (only used when merging consensusXML input)", false);
setValidFormats_("exp_design", ListUtils::create<String>("tsv"));
registerOutputFile_("out", "<file>", "", "Output: identification results with scored/grouped proteins");
setValidFormats_("out", {"idXML","consensusXML"});
registerStringOption_("out_type", "<file>", "", "Output type: auto detected by file extension but can be overwritten here.", false);
setValidStrings_("out_type", {"idXML","consensusXML"});
registerStringOption_("protein_fdr",
"<option>",
"false",
"Additionally calculate the target-decoy FDR on protein-level based on the posteriors", false, false);
setValidStrings_("protein_fdr", {"true","false"});
registerStringOption_("conservative_fdr",
"<option>",
"true",
"Use (D+1)/(T) instead of (D+1)/(T+D) for reporting protein FDRs.", false, true);
setValidStrings_("conservative_fdr", {"true","false"});
registerStringOption_("picked_fdr",
"<option>",
"true",
"Use picked protein FDRs.", false, true);
setValidStrings_("picked_fdr", {"true","false"});
registerStringOption_("picked_decoy_string",
"<decoy_string>",
"",
"If using picked protein FDRs, which decoy string was used? Leave blank for auto-detection.", false, true);
registerStringOption_("picked_decoy_prefix",
"<option>",
"prefix",
"If using picked protein FDRs, was the decoy string a prefix or suffix? Ignored during auto-detection.", false, true);
setValidStrings_("picked_decoy_prefix", {"prefix","suffix"});
registerStringOption_("greedy_group_resolution",
"<option>",
"none",
"Post-process inference output with greedy resolution of shared peptides based on the parent protein probabilities."
" Also adds the resolved ambiguity groups to output.", false, false);
setValidStrings_("greedy_group_resolution", {"none","remove_associations_only","remove_proteins_wo_evidence"});
registerDoubleOption_("min_psms_extreme_probability",
"<float>",
0.0,
"Set PSMs with probability lower than this to this minimum probability.", false, true);
registerDoubleOption_("max_psms_extreme_probability",
"<float>",
1.0,
"Set PSMs with probability higher than this to this maximum probability.", false, false);
addEmptyLine_();
registerSubsection_("algorithm", "Parameters for the Algorithm section");
}
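The two estimators named in the `conservative_fdr` option above can be written out directly; a sketch (simple counting at a fixed cutoff — the actual tool computes q-values over all cutoffs via FalseDiscoveryRate, and `decoyFDR` is a hypothetical helper):

```cpp
// Decoy-based FDR estimate given the number of target (T) and decoy (D)
// hits passing a score cutoff: (D+1)/T if 'conservative', else (D+1)/(T+D).
double decoyFDR(int targets, int decoys, bool conservative)
{
  if (targets == 0) return 1.0;  // nothing passes: report worst case
  const double d1 = decoys + 1.0;
  return conservative ? d1 / targets : d1 / (targets + decoys);
}
```

Note that the conservative estimator can exceed 1 when decoys dominate; q-value computation would clamp and monotonize these raw estimates.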
Param getSubsectionDefaults_(const String& /*section*/) const override
{
return BayesianProteinInferenceAlgorithm().getParameters();
}
pair<double,double> checkExtremePSMScores_(PeptideIdentificationList& mergedpeps)
{
double minscore = 2.;
double maxscore = -1.;
// find the most extreme PSM scores that are not exactly 0 or 1
for (auto& pep_id : mergedpeps)
{
for (auto& pep_hit : pep_id.getHits())
{
double newScore = pep_hit.getScore();
if (newScore > 0)
{
minscore = std::min(minscore, newScore);
}
if (newScore < 1.)
{
maxscore = std::max(maxscore, newScore);
}
}
}
return {minscore, maxscore};
}
void convertPSMScores_(PeptideIdentificationList& mergedpeps)
{
//convert all scores to PPs
for (auto& pep_id : mergedpeps)
{
String score_l = pep_id.getScoreType();
score_l = score_l.toLower();
if (score_l == "pep" || score_l == "posterior error probability")
{
for (auto& pep_hit : pep_id.getHits())
{
double newScore = 1. - pep_hit.getScore();
pep_hit.setScore(newScore);
}
pep_id.setScoreType("Posterior Probability");
pep_id.setHigherScoreBetter(true);
}
else
{
if (score_l != "posterior probability")
{
throw OpenMS::Exception::InvalidParameter(
__FILE__,
__LINE__,
OPENMS_PRETTY_FUNCTION,
"Epifany needs Posterior (Error) Probabilities in the Peptide Hits. Use Percolator with PEP score "
"or run IDPosteriorErrorProbability first.");
}
}
}
}
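`convertPSMScores_` above boils down to a case-insensitive score-type check followed by the 1 − PEP flip; a standalone sketch (hypothetical helper working on a plain score vector instead of PeptideIdentification objects):

```cpp
#include <algorithm>
#include <cctype>
#include <stdexcept>
#include <string>
#include <vector>

// Convert posterior error probabilities (PEPs) to posterior probabilities
// in place; rejects any other score type, mirroring the check above.
void pepToPosterior(std::string score_type, std::vector<double>& scores)
{
  std::transform(score_type.begin(), score_type.end(), score_type.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  if (score_type != "pep" && score_type != "posterior error probability")
  {
    throw std::invalid_argument("expected a posterior (error) probability score");
  }
  for (double& s : scores) s = 1.0 - s;  // higher is now better
}
```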
void removeExtremeValues_(PeptideIdentificationList& mergedpeps, double minscore, double maxscore)
{
// clamp all scores into [minscore, maxscore]
for (auto& pep_id : mergedpeps)
{
for (auto& pep_hit : pep_id.getHits())
{
double score = pep_hit.getScore();
pep_hit.setScore(std::min(std::max(score,minscore),maxscore));
}
}
}
ExitCodes main_(int, const char**) override
{
// get parameters specific for the algorithm underneath
Param epifany_param = getParam_().copy("algorithm:", true);
bool greedy_group_resolution = getStringOption_("greedy_group_resolution") != "none";
bool remove_prots_wo_evidence = getStringOption_("greedy_group_resolution") == "remove_proteins_wo_evidence";
//writeDebug_("Parameters passed to Epifany", epifany_param, 3);
StringList files = getStringList_("in");
if (files.empty())
{
  OPENMS_LOG_ERROR << "No files given.\n";
  return ILLEGAL_PARAMETERS;
}
FileTypes::Type in_type = FileHandler::getType(files[0]);
String exp_des = getStringOption_("exp_design");
StopWatch sw;
sw.start();
String out_file = getStringOption_("out");
String out_type = getStringOption_("out_type");
if (!files.empty() && (in_type == FileTypes::CONSENSUSXML))
{
if (FileHandler::getTypeByFileName(out_file) != FileTypes::CONSENSUSXML &&
    FileTypes::nameToType(out_type) != FileTypes::CONSENSUSXML)
{
  OPENMS_LOG_FATAL_ERROR << "Error: Running on consensusXML requires output as consensusXML. Please change the "
                            "output type.\n";
  return ILLEGAL_PARAMETERS;
}
OPENMS_LOG_INFO << "Loading input..." << std::endl;
if (files.size() > 1)
{
  OPENMS_LOG_FATAL_ERROR << "Error: Multiple inputs only supported for idXML\n";
  return ILLEGAL_PARAMETERS;
}
ConsensusMapMergerAlgorithm cmerge;
ConsensusMap cmap;
FileHandler().loadConsensusFeatures(files[0], cmap, {FileTypes::CONSENSUSXML});
std::optional<const ExperimentalDesign> edopt = maybeGetExpDesign_(exp_des);
if (!exp_des.empty())
{
cmerge.mergeProteinsAcrossFractionsAndReplicates(cmap, edopt.value());
}
else
{
cmerge.mergeAllIDRuns(cmap);
}
OPENMS_LOG_INFO << "Loading took " << sw.toString() << std::endl;
sw.reset();
BayesianProteinInferenceAlgorithm bpi1(getIntOption_("debug"));
bpi1.setParameters(epifany_param);
bpi1.inferPosteriorProbabilities(cmap, greedy_group_resolution, edopt);
OPENMS_LOG_INFO << "Inference total took " << sw.toString() << std::endl;
sw.stop();
if (remove_prots_wo_evidence)
{
OPENMS_LOG_INFO << "Postprocessing: Removing proteins without associated evidence..." << std::endl;
IDFilter::removeUnreferencedProteins(cmap, true);
for (auto& run : cmap.getProteinIdentifications())
{
IDFilter::updateProteinGroups(run.getIndistinguishableProteins(), run.getHits());
}
}
for (auto& run : cmap.getProteinIdentifications())
{
std::sort(run.getHits().begin(), run.getHits().end(),
[](const ProteinHit& f,
const ProteinHit& g)
{return f.getAccession() < g.getAccession();});
//sort for output because they might have been added in a different order
std::sort(
run.getIndistinguishableProteins().begin(),
run.getIndistinguishableProteins().end(),
[](const ProteinIdentification::ProteinGroup& f,
const ProteinIdentification::ProteinGroup& g)
{return f.accessions < g.accessions;});
}
bool calc_protFDR = getStringOption_("protein_fdr") == "true";
if (calc_protFDR)
{
OPENMS_LOG_INFO << "Calculating target-decoy q-values..." << std::endl;
FalseDiscoveryRate fdr;
Param fdrparam = fdr.getParameters();
fdrparam.setValue("conservative", getStringOption_("conservative_fdr"));
fdrparam.setValue("add_decoy_proteins","true");
fdr.setParameters(fdrparam);
for (auto& run : cmap.getProteinIdentifications())
{
if (getStringOption_("picked_fdr") == "true")
{
fdr.applyPickedProteinFDR(run, getStringOption_("picked_decoy_string"), getStringOption_("picked_decoy_prefix") == "prefix");
}
else
{
fdr.applyBasic(run, true);
}
}
}
FileHandler().storeConsensusFeatures(out_file, cmap, {FileTypes::CONSENSUSXML});
}
else // ---------------------------- IdXML -------------------------------------
{
IDMergerAlgorithm merger{};
OPENMS_LOG_INFO << "Loading input..." << std::endl;
vector<ProteinIdentification> mergedprots{1};
PeptideIdentificationList mergedpeps;
if (files.size() > 1)
{
for (String& file : files)
{
vector<ProteinIdentification> prots;
PeptideIdentificationList peps;
FileHandler().loadIdentifications(file, prots, peps, {FileTypes::IDXML});
//TODO merger does not support groups yet, so clear them here right away.
// Not so easy to implement at first sight. Merge groups whenever one protein overlaps?
prots[0].getIndistinguishableProteins().clear();
prots[0].getProteinGroups().clear();
merger.insertRuns(prots, peps);
}
merger.returnResultsAndClear(mergedprots[0], mergedpeps);
}
else
{
FileHandler().loadIdentifications(files[0], mergedprots, mergedpeps, {FileTypes::IDXML});
//TODO For now we delete because we want to add new groups here.
// Think about:
// 1) keeping the groups and allow them to be used as a prior grouping (e.g. gene based)
// 2) keeping the groups and store them in a separate group object to output both and compare.
mergedprots[0].getIndistinguishableProteins().clear();
mergedprots[0].getProteinGroups().clear();
}
// Currently this is needed because otherwise there might be proteins with a previous score
// that get evaluated during FDR without a new posterior being set. (since components of size 1 are skipped)
// Alternative would be to reset scores but this does not work well if you wanna work with i.e. user priors
// However, this is done additionally in the Inference class after filtering, so maybe not necessary.
IDFilter::removeUnreferencedProteins(mergedprots, mergedpeps);
OPENMS_LOG_INFO << "Loading took " << sw.toString() << std::endl;
sw.reset();
//Check if score types are valid.
try
{
IDScoreSwitcherAlgorithm switcher;
Size c = 0;
switcher.switchToGeneralScoreType(mergedpeps, IDScoreSwitcherAlgorithm::ScoreType::PEP, c);
}
catch(Exception::MissingInformation&)
{
OPENMS_LOG_FATAL_ERROR <<
"Epifany expects a Posterior Error Probability score in all Peptide IDs." << endl;
return ExitCodes::INCOMPATIBLE_INPUT_DATA;
}
BayesianProteinInferenceAlgorithm bpi1(getIntOption_("debug"));
bpi1.setParameters(epifany_param);
bpi1.inferPosteriorProbabilities(mergedprots, mergedpeps, greedy_group_resolution);
OPENMS_LOG_INFO << "Inference total took " << sw.toString() << std::endl;
sw.stop();
if (remove_prots_wo_evidence)
{
OPENMS_LOG_INFO << "Postprocessing: Removing proteins without associated evidence..." << std::endl;
IDFilter::removeUnreferencedProteins(mergedprots, mergedpeps);
IDFilter::updateProteinGroups(mergedprots[0].getIndistinguishableProteins(), mergedprots[0].getHits());
IDFilter::updateProteinGroups(mergedprots[0].getProteinGroups(), mergedprots[0].getHits());
}
bool calc_protFDR = getStringOption_("protein_fdr") == "true";
if (calc_protFDR)
{
OPENMS_LOG_INFO << "Calculating target-decoy q-values..." << std::endl;
FalseDiscoveryRate fdr;
Param fdrparam = fdr.getParameters();
fdrparam.setValue("conservative", getStringOption_("conservative_fdr"));
fdrparam.setValue("add_decoy_proteins","true");
fdr.setParameters(fdrparam);
if (getStringOption_("picked_fdr") == "true")
{
fdr.applyPickedProteinFDR(mergedprots[0], getStringOption_("picked_decoy_string"), getStringOption_("picked_decoy_prefix") == "prefix");
}
else
{
fdr.applyBasic(mergedprots[0], true);
}
}
OPENMS_LOG_INFO << "Writing inference run as first ProteinIDRun with " <<
mergedprots[0].getHits().size() << " proteins in " <<
mergedprots[0].getIndistinguishableProteins().size() <<
" indist. groups." << std::endl;
//sort for output because they might have been added in a different order
std::sort(
mergedprots[0].getIndistinguishableProteins().begin(),
mergedprots[0].getIndistinguishableProteins().end(),
[](const ProteinIdentification::ProteinGroup& f,
const ProteinIdentification::ProteinGroup& g)
{return f.accessions < g.accessions;});
FileHandler().storeIdentifications(out_file, mergedprots, mergedpeps, {FileTypes::IDXML});
}
return ExitCodes::EXECUTION_OK;
}
// Some thoughts about how to leverage info from different runs.
//Fractions: Always merge (not much to leverage, maybe agreement at borders)
// - Think about only allowing one/the best PSM per peptidoform across fractions
//Replicates: Use matching ID and quant, also always merge
//Samples: In theory they could yield different proteins/pep-protein-associations
// 3 options:
// - don't merge: -> don't leverage peptide quant profiles (or use them repeatedly -> same as second opt.?)
// - merge and assume same proteins: -> You can use the current graph and weigh the associations
// based on deviation in profiles
// - merge and don't assume same proteins: -> We need an extended graph, that has multiple versions
// of the proteins for every sample
static std::optional<const ExperimentalDesign> maybeGetExpDesign_(const String& filename)
{
if (filename.empty()) return std::nullopt;
return std::optional<const ExperimentalDesign>(ExperimentalDesignFile::load(filename, false));
}
};
int main(int argc, const char** argv)
{
Epifany tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/MapAlignerPoseClustering.cpp | .cpp | 11,239 | 307 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Marc Sturm, Clemens Groepl, Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/MAPMATCHING/MapAlignmentAlgorithmPoseClustering.h>
#include <OpenMS/APPLICATIONS/MapAlignerBase.h>
//TODO remove when we get loadsize support in handler
#include <OpenMS/FORMAT/FeatureXMLFile.h>
#ifdef _OPENMP
#include <omp.h>
#endif
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MapAlignerPoseClustering MapAlignerPoseClustering
@brief Corrects retention time distortions between maps, using a pose clustering approach.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → MapAlignerPoseClustering →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderCentroided @n (or another feature finding algorithm) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureLinkerUnlabeled or @n @ref TOPP_FeatureLinkerUnlabeledQT </td>
</tr>
</table>
</CENTER>
This tool provides an algorithm to align the retention time scales of
multiple input files, correcting shifts and distortions between them.
Retention time adjustment may be necessary to correct for chromatography
differences e.g. before data from multiple LC-MS runs can be combined
(feature grouping), or when one run should be annotated with peptide
identifications obtained in a different run.
All map alignment tools (MapAligner...) collect retention time data from the
input files and - by fitting a model to this data - compute transformations
that map all runs to a common retention time scale. They can apply the
transformations right away and return output files with aligned time scales
(parameter @p out), and/or return descriptions of the transformations in
trafoXML format (parameter @p trafo_out). Transformations stored as trafoXML
can be applied to arbitrary files with the @ref TOPP_MapRTTransformer tool.
The map alignment tools differ in how they obtain retention time data for the
modeling of transformations, and consequently what types of data they can be
applied to. The alignment algorithm implemented here is the pose clustering
algorithm as described in doi:10.1093/bioinformatics/btm209. It is used to
find an affine transformation, which is further refined by a feature grouping
step. This algorithm can be applied to features (featureXML) and peaks
(mzML), but it has mostly been developed and tested on features. For more
details and algorithm-specific parameters (set in the INI file) see "Detailed
Description" in the @ref OpenMS::MapAlignmentAlgorithmPoseClustering "algorithm documentation".
@see @ref TOPP_MapRTTransformer
This algorithm uses an affine transformation model.
To speed up the alignment, consider reducing 'max_number_of_peaks_considered'.
If your alignment is not good enough, consider increasing this number (the alignment will take longer though).
<B>The command line parameters of this tool are:</B> @n
@verbinclude TOPP_MapAlignerPoseClustering.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MapAlignerPoseClustering.html
*/
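The affine model mentioned above maps retention times via rt_ref ≈ a + b·rt_obs. Pose clustering estimates it robustly from many feature pairs; purely as an illustration of what an affine RT model is, here is a plain least-squares fit of such a model (`fitAffine` is a hypothetical helper, not the algorithm's actual estimator):

```cpp
#include <utility>
#include <vector>

// Ordinary least-squares fit of y ~ a + b*x over (x, y) pairs.
// Returns {a, b}; assumes at least two distinct x values.
std::pair<double, double>
fitAffine(const std::vector<std::pair<double, double>>& pairs)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  const double n = static_cast<double>(pairs.size());
  for (const auto& [x, y] : pairs)
  {
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  const double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const double a = (sy - b * sx) / n;
  return {a, b};
}
```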
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMapAlignerPoseClustering :
public TOPPMapAlignerBase
{
public:
TOPPMapAlignerPoseClustering() :
TOPPMapAlignerBase("MapAlignerPoseClustering", "Corrects retention time distortions between maps using a pose clustering approach.")
{}
protected:
void registerOptionsAndFlags_() override
{
TOPPMapAlignerBase::registerOptionsAndFlagsMapAligners_("featureXML,mzML",
REF_RESTRICTED);
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String& section) const override
{
if (section == "algorithm")
{
MapAlignmentAlgorithmPoseClustering algo;
return algo.getParameters();
}
return Param(); // shouldn't happen
}
ExitCodes main_(int, const char**) override
{
ExitCodes ret = TOPPMapAlignerBase::checkParameters_();
if (ret != EXECUTION_OK)
{
return ret;
}
MapAlignmentAlgorithmPoseClustering algorithm;
Param algo_params = getParam_().copy("algorithm:", true);
algorithm.setParameters(algo_params);
algorithm.setLogType(log_type_);
StringList in_files = getStringList_("in");
if (in_files.size() == 1)
{
OPENMS_LOG_WARN << "Only one file provided as input to MapAlignerPoseClustering." << std::endl;
}
StringList out_files = getStringList_("out");
StringList out_trafos = getStringList_("trafo_out");
Size reference_index = getIntOption_("reference:index");
String reference_file = getStringOption_("reference:file");
FileTypes::Type in_type = FileHandler::getType(in_files[0]);
String file;
if (!reference_file.empty())
{
file = reference_file;
reference_index = in_files.size(); // points to invalid index
}
else if (reference_index > 0) // normal reference (index was checked before)
{
file = in_files[--reference_index]; // ref. index is 1-based in parameters, but should be 0-based here
}
else if (reference_index == 0) // no reference given
{
OPENMS_LOG_INFO << "Picking a reference (by size) ..." << std::flush;
// use map with highest number of features as reference:
Size max_count(0);
FeatureXMLFile f;
for (Size i = 0; i < in_files.size(); ++i)
{
Size s = 0;
if (in_type == FileTypes::FEATUREXML)
{
s = f.loadSize(in_files[i]);
}
else if (in_type == FileTypes::MZML) // this is expensive!
{
PeakMap exp;
FileHandler().loadExperiment(in_files[i], exp, {FileTypes::MZML}, log_type_);
exp.updateRanges();
s = exp.getSize();
}
if (s > max_count)
{
max_count = s;
reference_index = i;
}
}
OPENMS_LOG_INFO << " done" << std::endl;
file = in_files[reference_index];
}
FileHandler f_fxml;
if (out_files.empty()) // no need to store featureXML, thus we can load only minimum required information
{
f_fxml.getFeatOptions().setLoadConvexHull(false);
f_fxml.getFeatOptions().setLoadSubordinates(false);
}
if (in_type == FileTypes::FEATUREXML)
{
FeatureMap map_ref;
FileHandler f_fxml_tmp; // for the reference, we never need CH or subordinates
f_fxml_tmp.getFeatOptions().setLoadConvexHull(false);
f_fxml_tmp.getFeatOptions().setLoadSubordinates(false);
f_fxml_tmp.loadFeatures(file, map_ref, {FileTypes::FEATUREXML}, log_type_);
algorithm.setReference(map_ref);
}
else if (in_type == FileTypes::MZML)
{
PeakMap map_ref;
FileHandler().loadExperiment(file, map_ref, {}, log_type_);
algorithm.setReference(map_ref);
}
ProgressLogger plog;
plog.setLogType(log_type_);
// Collect transformations for optional spectra files
// Pre-allocated for thread-safe access in OpenMP parallel loop
vector<TransformationDescription> transformations(in_files.size());
plog.startProgress(0, in_files.size(), "Aligning input maps");
Size progress(0); // thread-safe progress
// TODO: it should all work on featureXML files, since we might need them for output anyway. Converting to consensusXML is just wasting memory!
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic, 1)
#endif
for (int i = 0; i < static_cast<int>(in_files.size()); ++i)
{
TransformationDescription trafo;
if (in_type == FileTypes::FEATUREXML)
{
FeatureMap map;
// workaround for loading: use temporary FeatureXMLFile since it is not thread-safe
FileHandler f_fxml_tmp; // do not use OMP-firstprivate, since FeatureXMLFile has no copy c'tor
f_fxml_tmp.getFeatOptions() = f_fxml.getFeatOptions();
f_fxml_tmp.loadFeatures(in_files[i], map);
if (i == static_cast<int>(reference_index))
{
trafo.fitModel("identity");
}
else
{
try
{
algorithm.align(map, trafo);
}
catch (Exception::IllegalArgument& e)
{
OPENMS_LOG_ERROR << "Aligning " << in_files[i] << " to reference " << file
<< " failed. No transformation will be applied (RT not changed for this file)." << endl;
writeLogError_("Illegal argument (" + String(e.getName()) + "): " + String(e.what()) + ".");
trafo.fitModel("identity");
}
}
if (!out_files.empty())
{
MapAlignmentTransformer::transformRetentionTimes(map, trafo);
// annotate output with data processing info
addDataProcessing_(map, getProcessingInfo_(DataProcessing::ALIGNMENT));
f_fxml_tmp.storeFeatures(out_files[i], map, {FileTypes::FEATUREXML}, log_type_);
}
}
else if (in_type == FileTypes::MZML)
{
PeakMap map;
FileHandler().loadExperiment(in_files[i], map, {FileTypes::MZML}, log_type_);
if (i == static_cast<int>(reference_index))
{
trafo.fitModel("identity");
}
else
{
algorithm.align(map, trafo);
}
if (!out_files.empty())
{
MapAlignmentTransformer::transformRetentionTimes(map, trafo);
// annotate output with data processing info
addDataProcessing_(map, getProcessingInfo_(DataProcessing::ALIGNMENT));
FileHandler().storeExperiment(out_files[i], map, {FileTypes::MZML}, log_type_);
}
}
// Store transformation for this file
transformations[i] = trafo;
if (!out_trafos.empty())
{
FileHandler().storeTransformations(out_trafos[i], trafo, {FileTypes::TRANSFORMATIONXML});
}
#ifdef _OPENMP
#pragma omp critical (MAPose_Progress)
#endif
{
plog.setProgress(++progress); // thread safe progress counter
}
}
plog.endProgress();
// Transform optional spectra files
// Note: MapAlignerPoseClustering does not support store_original_rt flag
StringList in_spectra_files = getStringList_("in_spectra_files");
StringList out_spectra_files = getStringList_("out_spectra_files");
transformSpectraFiles_(in_spectra_files, out_spectra_files, transformations, false);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPMapAlignerPoseClustering tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/SpectraFilterThresholdMower.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer $
// $Authors: $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/FILTERING/ThresholdMower.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <typeinfo>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_SpectraFilterThresholdMower SpectraFilterThresholdMower
@brief Removes all peaks below an intensity threshold.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → SpectraFilterThresholdMower →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
</table>
</CENTER>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SpectraFilterThresholdMower.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SpectraFilterThresholdMower.html
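The core operation of this filter can be sketched independently of the OpenMS classes. The struct and function names below are illustrative stand-ins, not the actual ThresholdMower API:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// A peak as (m/z, intensity); a stand-in for OpenMS Peak1D.
using Peak = std::pair<double, double>;

// Remove all peaks whose intensity is below 'threshold' -- the per-spectrum
// operation the threshold mower performs, sketched on a plain vector.
void mowBelowThreshold(std::vector<Peak>& spectrum, double threshold)
{
  spectrum.erase(std::remove_if(spectrum.begin(), spectrum.end(),
                                [threshold](const Peak& p) { return p.second < threshold; }),
                 spectrum.end());
}
```

The real tool applies this logic to every spectrum of the loaded mzML experiment, with the threshold taken from the "algorithm" parameter subsection.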
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSpectraFilterThresholdMower :
public TOPPBase
{
public:
TOPPSpectraFilterThresholdMower() :
TOPPBase("SpectraFilterThresholdMower", "Applies a threshold filter to peak spectra.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file ");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output file ");
setValidFormats_("out", ListUtils::create<String>("mzML"));
// register one section for each algorithm
registerSubsection_("algorithm", "Algorithm parameter subsection.");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
return ThresholdMower().getParameters();
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
//input/output files
String in(getStringOption_("in"));
String out(getStringOption_("out"));
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
//-------------------------------------------------------------
// if meta data arrays are present, remove them and warn
//-------------------------------------------------------------
if (exp.clearMetaDataArrays())
{
writeLogWarn_("Warning: Spectrum meta data arrays cannot be kept consistent during filtering. They have been removed.");
}
//-------------------------------------------------------------
// filter
//-------------------------------------------------------------
Param filter_param = getParam_().copy("algorithm:", true);
writeDebug_("Used filter parameters", filter_param, 3);
ThresholdMower filter;
filter.setParameters(filter_param);
filter.filterPeakMap(exp);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::FILTERING));
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPSpectraFilterThresholdMower tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/OpenSwathRewriteToFeatureXML.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/KERNEL/Feature.h>
#include <fstream>
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathRewriteToFeatureXML OpenSwathRewriteToFeatureXML
@brief Combines featureXML and mProphet tsv to FDR filtered featureXML.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathRewriteToFeatureXML.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathRewriteToFeatureXML.html
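Parsing the mProphet tsv starts by mapping the tab-separated column headers ("id", "m_score", "d_score") to column indices. A minimal standalone sketch of that step (the helper name is hypothetical, not part of the tool):

```cpp
#include <map>
#include <sstream>
#include <string>

// Map each tab-separated header name to its column index, as done
// when reading the first line of the mProphet tsv file.
std::map<std::string, int> indexHeader(const std::string& header_line)
{
  std::map<std::string, int> index;
  std::stringstream stream(header_line);
  std::string cell;
  int column = 0;
  while (std::getline(stream, cell, '\t'))
  {
    index[cell] = column++;
  }
  return index;
}
```

Each subsequent data row is split the same way and the required cells are fetched via this index map.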
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathRewriteToFeatureXML :
public TOPPBase,
public ProgressLogger
{
public:
TOPPOpenSwathRewriteToFeatureXML()
: TOPPBase("OpenSwathRewriteToFeatureXML","Combines featureXML and mProphet tsv to FDR filtered featureXML.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("csv","<file>","","mProphet tsv output file: \"all_peakgroups.xls\"", false);
setValidFormats_("csv", ListUtils::create<String>("csv"));
registerInputFile_("featureXML","<file>","","input featureXML file");
setValidFormats_("featureXML", ListUtils::create<String>("featureXML"));
registerOutputFile_("out","<file>","","output featureXML file");
setValidFormats_("out", ListUtils::create<String>("featureXML"));
registerDoubleOption_("FDR_cutoff", "<double>", -1, "FDR cutoff (e.g. to remove all features with an m_score above 0.05, use 0.05 here)", false);
}
void applyFDRcutoff(FeatureMap & feature_map, double cutoff, const String& fdr_name)
{
FeatureMap out_feature_map = feature_map;
out_feature_map.clear(false);
for (Size i = 0; i < feature_map.size(); i++)
{
if ((double)feature_map[i].getMetaValue(fdr_name) < cutoff)
{
out_feature_map.push_back(feature_map[i]);
}
}
feature_map = out_feature_map;
}
void processInput(const char * filename, FeatureMap & feature_map)
{
FeatureMap out_feature_map = feature_map;
std::map<String, int> added_already;
out_feature_map.clear(false);
std::map<String, Feature*> feature_map_ref;
for (Size i = 0; i < feature_map.size(); i++)
{
feature_map_ref[feature_map[i].getUniqueId()] = &feature_map[i];
}
std::ifstream data(filename);
std::string line;
// Read header
std::getline(data, line);
std::map<String, int> header_dict_inv;
{
std::stringstream lineStream(line);
std::string cell;
int cnt = 0;
while (std::getline(lineStream,cell,'\t'))
{
header_dict_inv[cell] = cnt;
cnt++;
}
}
if (header_dict_inv.find("id") == header_dict_inv.end() ||
header_dict_inv.find("m_score") == header_dict_inv.end() ||
header_dict_inv.find("d_score") == header_dict_inv.end() )
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: The tsv file is expected to have at least the following headers: id, m_score, d_score. " );
}
// Read file
std::vector<std::string> current_row;
std::string cell;
int line_nr = 0;
double m_score, d_score;
while (std::getline(data, line))
{
line_nr++;
current_row.clear();
std::stringstream lineStream(line);
while (std::getline(lineStream,cell,'\t'))
{
current_row.push_back(cell);
}
String id = current_row[header_dict_inv["id"]];
id = id.substitute("f_", "");
try
{
m_score = ((String)current_row[header_dict_inv["m_score"]]).toDouble();
}
catch (Exception::ConversionError& /*e*/)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: Could not convert string '" + ((String)current_row[header_dict_inv["m_score"]]) + "' on line " + String(line_nr));
}
try
{
d_score = ((String)current_row[header_dict_inv["d_score"]]).toDouble();
}
catch (Exception::ConversionError& /*e*/)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: Could not convert string '" + ((String)current_row[header_dict_inv["d_score"]]) + "' on line " + String(line_nr));
}
if (feature_map_ref.find(id) != feature_map_ref.end() )
{
Feature* feature = feature_map_ref.find(id)->second;
feature->setMetaValue("m_score", m_score);
feature->setMetaValue("d_score", d_score);
// we are not allowed to have duplicate unique ids
if (added_already.find(id) != added_already.end())
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: Duplicate id found in CSV file: " + id );
}
out_feature_map.push_back(*feature);
}
}
feature_map = out_feature_map;
}
ExitCodes main_(int , const char**) override
{
String feature_file = getStringOption_("featureXML");
String csv = getStringOption_("csv");
String out = getStringOption_("out");
double fdr_cutoff = getDoubleOption_("FDR_cutoff");
FeatureMap feature_map;
FileHandler().loadFeatures(feature_file, feature_map, {FileTypes::FEATUREXML});
if (!csv.empty())
{
processInput(csv.c_str(), feature_map);
}
if (fdr_cutoff >= 0)
{
applyFDRcutoff(feature_map, fdr_cutoff, "m_score");
}
feature_map.applyMemberFunction(&UniqueIdInterface::ensureUniqueId);
FileHandler().storeFeatures(out, feature_map, {FileTypes::FEATUREXML});
return EXECUTION_OK;
}
};
int main( int argc, const char** argv )
{
TOPPOpenSwathRewriteToFeatureXML tool;
int code = tool.main(argc,argv);
return code;
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/IDFilter.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Nico Pfeifer, Chris Bielow, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CHEMISTRY/AASequence.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDigestion.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/ANALYSIS/ID/IDRipper.h>
#include <OpenMS/ANALYSIS/ID/IDScoreSwitcherAlgorithm.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
#include <OpenMS/SYSTEM/File.h>
#include <limits>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_IDFilter IDFilter
@brief Filters peptide/protein identification results by different criteria.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=5> → IDFilter →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_MascotAdapterOnline (or other ID engines) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeptideIndexer </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFileConverter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ProteinInference </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FalseDiscoveryRate </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDMapper </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ConsensusID </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_ProteinQuantifier (for spectral counting) </td>
</tr>
</table>
</CENTER>
This tool is used to filter the identifications found by a peptide/protein identification engine like Mascot.
Different filters can be applied.
To enable any of them, just change their default value.
All active filters are applied in order.
Most filtering options should be straightforward - see the documentation of the different parameters.
For some filters that warrant further discussion, see below.
<b>Score filters</b> (@p score:peptide, @p score:protein):
Peptide or protein hits with scores at least as good as the given cut-off are retained by the filter; hits with worse scores are removed.
Whether scores should be higher or lower than the cut-off depends on the type/orientation of the score.
The score that was most recently set by a processing step is considered for filtering.
For example, it could be a Mascot score (if MascotAdapterOnline was applied) or an FDR (if FalseDiscoveryRate was applied), etc.
@ref TOPP_IDScoreSwitcher is useful to switch to a particular score before filtering.
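Conceptually, the score filter keeps hits whose score is on the "good" side of the cut-off, and which side that is depends on the score's orientation. A minimal sketch on plain structs (illustrative, not the IDFilter API):

```cpp
#include <algorithm>
#include <vector>

// A bare peptide/protein hit carrying only its score.
struct Hit { double score; };

// Keep hits with score >= cutoff if higher scores are better (e.g. a search
// engine score), or score <= cutoff otherwise (e.g. an FDR/q-value).
void filterByScore(std::vector<Hit>& hits, double cutoff, bool higher_is_better)
{
  hits.erase(std::remove_if(hits.begin(), hits.end(),
                            [=](const Hit& h) {
                              return higher_is_better ? (h.score < cutoff)
                                                      : (h.score > cutoff);
                            }),
             hits.end());
}
```

Hits whose score equals the cut-off exactly are retained in both orientations, matching the "at least as good" rule above.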
<b>Protein accession filters</b> (@p whitelist:proteins, @p whitelist:protein_accessions, @p blacklist:proteins, @p blacklist:protein_accessions):
These filters retain only peptide and protein hits that @e do (whitelist) or <em>do not</em> (blacklist) match any of the proteins from a given set.
This set of proteins can be given through a FASTA file (<tt>...:proteins</tt>) or as a list of accessions (<tt>...:protein_accessions</tt>).
Note that even in the case of a FASTA file, matching is only done by protein accession, not by sequence.
If necessary, use @ref TOPP_PeptideIndexer to generate protein references for peptide hits via sequence look-up.
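Accession-based white-/blacklisting reduces to set membership over a hit's protein references. Sketched here on plain strings (the struct and function are illustrative, not the actual IDFilter signatures):

```cpp
#include <algorithm>
#include <set>
#include <string>
#include <vector>

// A peptide hit referencing zero or more protein accessions.
struct PepHit { std::vector<std::string> accessions; };

// Whitelist behavior: keep only hits referencing at least one accession
// from 'allowed'; a blacklist would invert the predicate.
void keepHitsMatching(std::vector<PepHit>& hits, const std::set<std::string>& allowed)
{
  hits.erase(std::remove_if(hits.begin(), hits.end(),
                            [&](const PepHit& h) {
                              return std::none_of(h.accessions.begin(), h.accessions.end(),
                                                  [&](const std::string& acc) { return allowed.count(acc) > 0; });
                            }),
             hits.end());
}
```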
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_IDFilter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_IDFilter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPIDFilter :
public TOPPBase
{
public:
TOPPIDFilter() :
TOPPBase("IDFilter", "Filters results from protein or peptide identification engines based on different criteria.")
{
}
protected:
void registerOptionsAndFlags_() override
{
vector<String> all_mods;
StringList all_enzymes;
StringList specificity;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
ProteaseDB::getInstance()->getAllNames(all_enzymes);
specificity.assign(EnzymaticDigestion::NamesOfSpecificity, EnzymaticDigestion::NamesOfSpecificity + 3); //only allow none,semi,full for now
registerInputFile_("in", "<file>", "", "input file ");
setValidFormats_("in", {"idXML","consensusXML"});
registerOutputFile_("out", "<file>", "", "output file ");
setValidFormats_("out", {"idXML","consensusXML"});
registerTOPPSubsection_("precursor", "Filtering by precursor attributes (RT, m/z, charge, length)");
registerStringOption_("precursor:rt", "[min]:[max]", ":", "Retention time range to extract [s].", false);
registerStringOption_("precursor:mz", "[min]:[max]", ":", "Mass-to-charge range to extract.", false);
registerStringOption_("precursor:length", "[min]:[max]", ":", "Keep only peptide hits with a sequence length in this range.", false);
registerStringOption_("precursor:charge", "[min]:[max]", ":", "Keep only peptide hits with charge states in this range.", false);
registerTOPPSubsection_("score", "Filtering by peptide/protein score.");
auto ids = IDScoreSwitcherAlgorithm();
registerDoubleOption_("score:psm", "<score>", NAN, "The score which should be reached by a PSM (peptide-spectrum match) to be kept. (use 'NAN' to disable this filter)", false);
registerDoubleOption_("score:peptide", "<score>", NAN, "The score which should be reached by a peptide hit to be kept. (use 'NAN' to disable this filter)", false);
registerStringOption_("score:type_peptide", "<type>", "", "Score used for filtering. If empty, the main score is used.", false, true);
setValidStrings_("score:type_peptide", ids.getScoreNames());
registerDoubleOption_("score:protein", "<score>", NAN, "The score which should be reached by a protein hit to be kept. All proteins are filtered based on their singleton scores irrespective of grouping. Use in combination with 'delete_unreferenced_peptide_hits' to remove affected peptides. (use 'NAN' to disable this filter)", false);
registerStringOption_("score:type_protein", "<type>", "", "The type of the score which should be reached by a protein hit to be kept. If empty, the most recently set score is used.", false, true);
setValidStrings_("score:type_protein", ids.getScoreNames());
registerDoubleOption_("score:proteingroup", "<score>", NAN, "The score which should be reached by a protein group to be kept. Performs group level score filtering (including groups of single proteins). Use in combination with 'delete_unreferenced_peptide_hits' to remove affected peptides. (use 'NAN' to disable this filter)", false);
registerTOPPSubsection_("whitelist", "Filtering by whitelisting (only peptides/proteins from a given set can pass)");
registerInputFile_("whitelist:proteins", "<file>", "", "Filename of a FASTA file containing protein sequences.\n"
"All peptides that are not referencing a protein in this file are removed.\n"
"All proteins whose accessions are not present in this file are removed.", false);
setValidFormats_("whitelist:proteins", {"fasta"});
registerStringList_("whitelist:protein_accessions", "<accessions>", vector<String>(), "All peptides that do not reference at least one of the provided protein accession are removed.\nOnly proteins of the provided list are retained.", false);
registerInputFile_("whitelist:peptides", "<file>", "", "Only peptides with the same sequence and modification assignment as any peptide in this file are kept. Use with 'whitelist:ignore_modifications' to only compare by sequence.\n", false);
setValidFormats_("whitelist:peptides", {"idXML"});
registerFlag_("whitelist:ignore_modifications", "Compare whitelisted peptides by sequence only.", true);
registerStringList_("whitelist:modifications", "<selection>", vector<String>(), "Keep only peptides with sequences that contain (any of) the selected modification(s)", false, true);
setValidStrings_("whitelist:modifications", all_mods);
registerTOPPSubsection_("blacklist", "Filtering by blacklisting (only peptides/proteins NOT present in a given set can pass)");
registerInputFile_("blacklist:proteins", "<file>", "", "Filename of a FASTA file containing protein sequences.\n"
"All peptides that are referencing a protein in this file are removed.\n"
"All proteins whose accessions are present in this file are removed.", false);
setValidFormats_("blacklist:proteins", {"fasta"});
registerStringList_("blacklist:protein_accessions", "<accessions>", vector<String>(), "All peptides that reference at least one of the provided protein accession are removed.\nOnly proteins not in the provided list are retained.", false);
registerInputFile_("blacklist:peptides", "<file>", "", "Peptides with the same sequence and modification assignment as any peptide in this file are filtered out. Use with 'blacklist:ignore_modifications' to only compare by sequence.\n", false);
setValidFormats_("blacklist:peptides", {"idXML"});
registerFlag_("blacklist:ignore_modifications", "Compare blacklisted peptides by sequence only.", true);
registerStringList_("blacklist:modifications", "<selection>", vector<String>(), "Remove all peptides with sequences that contain (any of) the selected modification(s)", false, true);
setValidStrings_("blacklist:modifications", all_mods);
registerStringOption_("blacklist:RegEx", "<selection>", "", "Remove all peptides with (unmodified) sequences matched by the RegEx e.g. [BJXZ] removes ambiguous peptides.", false, true);
registerTOPPSubsection_("in_silico_digestion", "This filter option removes peptide hits which are not in the list of in silico peptides generated by the rules specified below");
registerInputFile_("in_silico_digestion:fasta", "<file>", "", "fasta protein sequence database.", false);
setValidFormats_("in_silico_digestion:fasta", {"fasta"});
registerStringOption_("in_silico_digestion:enzyme", "<enzyme>", "Trypsin", "enzyme used for the digestion of the sample",false, true);
setValidStrings_("in_silico_digestion:enzyme", all_enzymes);
registerStringOption_("in_silico_digestion:specificity", "<specificity>", specificity[EnzymaticDigestion::SPEC_FULL], "Specificity of the filter", false, true);
setValidStrings_("in_silico_digestion:specificity", specificity);
registerIntOption_("in_silico_digestion:missed_cleavages", "<integer>", -1,
"range of allowed missed cleavages in the peptide sequences\n"
"By default missed cleavages are ignored", false, true);
setMinInt_("in_silico_digestion:missed_cleavages", -1);
registerFlag_("in_silico_digestion:methionine_cleavage", "Allow methionine cleavage at the N-terminus of the protein.", true);
registerTOPPSubsection_("missed_cleavages", "This filter option removes peptide hits which do not confirm with the allowed missed cleavages specified below.");
registerStringOption_("missed_cleavages:number_of_missed_cleavages", "[min]:[max]", ":",
"range of allowed missed cleavages in the peptide sequences.\n"
"For example: 0:1 -> peptides with two or more missed cleavages will be removed,\n"
"0:0 -> peptides with any missed cleavages will be removed", false);
registerStringOption_("missed_cleavages:enzyme", "<enzyme>", "Trypsin", "enzyme used for the digestion of the sample", false, true);
setValidStrings_("missed_cleavages:enzyme", all_enzymes);
registerTOPPSubsection_("rt", "Filtering by RT predicted by 'RTPredict'");
registerDoubleOption_("rt:p_value", "<float>", 0.0, "Retention time filtering by the p-value predicted by RTPredict.", false, true);
registerDoubleOption_("rt:p_value_1st_dim", "<float>", 0.0, "Retention time filtering by the p-value predicted by RTPredict for first dimension.", false, true);
setMinFloat_("rt:p_value", 0);
setMaxFloat_("rt:p_value", 1);
setMinFloat_("rt:p_value_1st_dim", 0);
setMaxFloat_("rt:p_value_1st_dim", 1);
registerTOPPSubsection_("mz", "Filtering by mass error");
registerDoubleOption_("mz:error", "<float>", -1, "Filtering by deviation to theoretical mass (disabled for negative values).", false, true);
registerStringOption_("mz:unit", "<String>", "ppm", "Absolute or relative error.", false, true);
setValidStrings_("mz:unit", ListUtils::create<String>("Da,ppm"));
registerTOPPSubsection_("best", "Filtering best hits per spectrum (for peptides) or from proteins");
registerIntOption_("best:n_spectra", "<integer>", 0, "Keep only the 'n' best spectra (i.e., PeptideIdentifications) (for n > 0). A spectrum is considered better if it has a higher scoring peptide hit than the other spectrum.", false);
setMinInt_("best:n_spectra", 0);
registerIntOption_("best:n_peptide_hits", "<integer>", 0, "Keep only the 'n' highest scoring peptide hits per spectrum (for n > 0).", false);
setMinInt_("best:n_peptide_hits", 0);
registerStringOption_("best:spectrum_per_peptide", "<String>", "false", "Keep one spectrum per peptide. Value determines if same sequence but different charges or modifications are treated as separate peptides or the same peptide. (default: false = filter disabled).", false);
setValidStrings_("best:spectrum_per_peptide", {"false", "sequence", "sequence+charge", "sequence+modification", "sequence+charge+modification"});
registerIntOption_("best:n_protein_hits", "<integer>", 0, "Keep only the 'n' highest scoring protein hits (for n > 0).", false);
setMinInt_("best:n_protein_hits", 0);
registerFlag_("best:strict", "Keep only the highest scoring peptide hit.\n"
"Similar to n_peptide_hits=1, but if there are ties between two or more highest scoring hits, none are kept.");
registerStringOption_("best:n_to_m_peptide_hits", "[min]:[max]", ":", "Peptide hit rank range to extract", false, true);
registerFlag_("var_mods", "Keep only peptide hits with variable modifications (as defined in the 'SearchParameters' section of the input file).", false);
registerFlag_("remove_duplicate_psm", "Removes duplicated PSMs per spectrum and retains the one with the higher score.", true);
registerFlag_("remove_shared_peptides", "Only peptides matching exactly one protein are kept. Remember that isoforms count as different proteins!");
registerFlag_("keep_unreferenced_protein_hits", "Proteins not referenced by a peptide are retained in the IDs.");
registerFlag_("remove_decoys", "Remove proteins according to the information in the user parameters. Usually used in combination with 'delete_unreferenced_peptide_hits'.");
registerFlag_("delete_unreferenced_peptide_hits", "Peptides not referenced by any protein are deleted in the IDs. Usually used in combination with 'score:protein' or 'thresh:prot'.");
registerStringList_("remove_peptide_hits_by_metavalue", "<name> 'lt|eq|gt|ne' <value>", StringList(), "Expects a 3-tuple (=3 entries in the list), i.e. <name> 'lt|eq|gt|ne' <value>; the first is the name of meta value, followed by the comparison operator (equal, less, greater, not equal) and the value to compare to. All comparisons are done after converting the given value to the corresponding data value type of the meta value (for lists, this simply compares length, not content!)!", false, true);
}
ExitCodes main_(int, const char**) override
{
const String tmp_feature_id_metaval_ = "tmp_feature_id";
String inputfile_name = getStringOption_("in");
String outputfile_name = getStringOption_("out");
vector<ProteinIdentification> proteins;
PeptideIdentificationList peptides;
//only used for cxml
ConsensusMap cmap;
unordered_map<UInt64, ConsensusFeature*> id_to_featureref;
const auto& infiletype = FileHandler::getType(inputfile_name);
if (infiletype == FileTypes::IDXML)
{
FileHandler().loadIdentifications(inputfile_name, proteins, peptides, {FileTypes::IDXML});
}
else if (infiletype == FileTypes::CONSENSUSXML)
{
FileHandler().loadConsensusFeatures(inputfile_name, cmap, {FileTypes::CONSENSUSXML});
for (auto& f : cmap)
{
UInt64 id = f.getUniqueId();
id_to_featureref[id] = &f;
for (auto& p : f.getPeptideIdentifications())
{
p.setMetaValue(tmp_feature_id_metaval_, String(id));
peptides.push_back(std::move(p));
}
f.getPeptideIdentifications().clear();
}
auto& unassigned = cmap.getUnassignedPeptideIdentifications();
peptides.reserve(peptides.size() + unassigned.size());
std::move(std::begin(unassigned),std::end(unassigned),
std::back_inserter(peptides));
unassigned.clear();
std::swap(proteins, cmap.getProteinIdentifications());
}
Size n_prot_ids = proteins.size();
Size n_prot_hits = IDFilter::countHits(proteins);
Size n_pep_ids = peptides.size();
Size n_pep_hits = IDFilter::countHits(peptides);
// handle remove_meta
StringList meta_info = getStringList_("remove_peptide_hits_by_metavalue");
bool remove_meta_enabled = (!meta_info.empty());
if (remove_meta_enabled && meta_info.size() != 3)
{
writeLogError_("Param 'remove_peptide_hits_by_metavalue' has invalid number of arguments. Expected 3, got " + String(meta_info.size()) + ". Aborting!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (remove_meta_enabled && !(meta_info[1] == "lt" || meta_info[1] == "eq" || meta_info[1] == "gt" || meta_info[1] == "ne"))
{
writeLogError_("Param 'remove_peptide_hits_by_metavalue' has invalid second argument. Expected one of 'lt', 'eq', 'gt' or 'ne'. Got '" + meta_info[1] + "'. Aborting!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
// Filtering peptide identification according to set criteria
double rt_high = numeric_limits<double>::infinity(), rt_low = -rt_high;
if (parseRange_(getStringOption_("precursor:rt"), rt_low, rt_high))
{
OPENMS_LOG_INFO << "Filtering peptide IDs by precursor RT..." << endl;
IDFilter::filterPeptidesByRT(peptides, rt_low, rt_high);
}
double mz_high = numeric_limits<double>::infinity(), mz_low = -mz_high;
if (parseRange_(getStringOption_("precursor:mz"), mz_low, mz_high))
{
OPENMS_LOG_INFO << "Filtering peptide IDs by precursor m/z...";
IDFilter::filterPeptidesByMZ(peptides, mz_low, mz_high);
}
// Filtering peptide hits according to set criteria
if (getFlag_("remove_duplicate_psm"))
{
OPENMS_LOG_INFO << "Removing duplicated psms..." << endl;
IDFilter::removeDuplicatePeptideHits(peptides);
}
if (getFlag_("remove_shared_peptides"))
{
OPENMS_LOG_INFO << "Filtering peptides by unique match to a protein..." << endl;
IDFilter::keepUniquePeptidesPerProtein(peptides);
}
double pred_rt_pv = getDoubleOption_("rt:p_value");
if (pred_rt_pv > 0)
{
OPENMS_LOG_INFO << "Filtering by RT prediction p-value..." << endl;
IDFilter::filterPeptidesByRTPredictPValue(
peptides, "predicted_RT_p_value", pred_rt_pv);
}
double pred_rt_pv_1d = getDoubleOption_("rt:p_value_1st_dim");
if (pred_rt_pv_1d > 0)
{
OPENMS_LOG_INFO << "Filtering by RT prediction p-value (first dim.)..." << endl;
IDFilter::filterPeptidesByRTPredictPValue(
peptides, "predicted_RT_p_value_first_dim", pred_rt_pv_1d);
}
String whitelist_fasta = getStringOption_("whitelist:proteins").trim();
if (!whitelist_fasta.empty())
{
OPENMS_LOG_INFO << "Filtering by protein whitelisting (FASTA input)..." << endl;
// load protein accessions from FASTA file:
vector<FASTAFile::FASTAEntry> fasta;
FASTAFile().load(whitelist_fasta, fasta);
set<String> accessions;
for (vector<FASTAFile::FASTAEntry>::iterator it = fasta.begin();
it != fasta.end(); ++it)
{
accessions.insert(it->identifier);
}
IDFilter::keepHitsMatchingProteins(peptides, accessions);
IDFilter::keepHitsMatchingProteins(proteins, accessions);
}
vector<String> whitelist_accessions =
getStringList_("whitelist:protein_accessions");
if (!whitelist_accessions.empty())
{
OPENMS_LOG_INFO << "Filtering by protein whitelisting (accessions input)..."
<< endl;
set<String> accessions(whitelist_accessions.begin(),
whitelist_accessions.end());
IDFilter::keepHitsMatchingProteins(peptides, accessions);
IDFilter::keepHitsMatchingProteins(proteins, accessions);
}
String whitelist_peptides = getStringOption_("whitelist:peptides").trim();
if (!whitelist_peptides.empty())
{
OPENMS_LOG_INFO << "Filtering by inclusion peptide whitelisting..." << endl;
PeptideIdentificationList inclusion_peptides;
vector<ProteinIdentification> inclusion_proteins; // ignored
FileHandler().loadIdentifications(whitelist_peptides, inclusion_proteins,
inclusion_peptides, {FileTypes::IDXML});
bool ignore_mods = getFlag_("whitelist:ignore_modifications");
IDFilter::keepPeptidesWithMatchingSequences(peptides, inclusion_peptides,
ignore_mods);
}
vector<String> whitelist_mods = getStringList_("whitelist:modifications");
if (!whitelist_mods.empty())
{
OPENMS_LOG_INFO << "Filtering peptide IDs by modification whitelisting..."
<< endl;
set<String> good_mods(whitelist_mods.begin(), whitelist_mods.end());
IDFilter::keepPeptidesWithMatchingModifications(peptides, good_mods);
}
String blacklist_fasta = getStringOption_("blacklist:proteins").trim();
if (!blacklist_fasta.empty())
{
OPENMS_LOG_INFO << "Filtering by protein blacklisting (FASTA input)..." << endl;
// load protein accessions from FASTA file:
vector<FASTAFile::FASTAEntry> fasta;
FASTAFile().load(blacklist_fasta, fasta);
set<String> accessions;
for (FASTAFile::FASTAEntry& ft : fasta)
{
accessions.insert(ft.identifier);
}
IDFilter::removeHitsMatchingProteins(peptides, accessions);
IDFilter::removeHitsMatchingProteins(proteins, accessions);
}
vector<String> blacklist_accessions =
getStringList_("blacklist:protein_accessions");
if (!blacklist_accessions.empty())
{
OPENMS_LOG_INFO << "Filtering by protein blacklisting (accessions input)..."
<< endl;
set<String> accessions(blacklist_accessions.begin(),
blacklist_accessions.end());
IDFilter::removeHitsMatchingProteins(peptides, accessions);
IDFilter::removeHitsMatchingProteins(proteins, accessions);
}
String blacklist_peptides = getStringOption_("blacklist:peptides").trim();
if (!blacklist_peptides.empty())
{
OPENMS_LOG_INFO << "Filtering by exclusion peptide blacklisting..." << endl;
PeptideIdentificationList exclusion_peptides;
vector<ProteinIdentification> exclusion_proteins; // ignored
FileHandler().loadIdentifications(blacklist_peptides, exclusion_proteins,
exclusion_peptides, {FileTypes::IDXML});
bool ignore_mods = getFlag_("blacklist:ignore_modifications");
IDFilter::removePeptidesWithMatchingSequences(
peptides, exclusion_peptides, ignore_mods);
}
vector<String> blacklist_mods = getStringList_("blacklist:modifications");
if (!blacklist_mods.empty())
{
OPENMS_LOG_INFO << "Filtering peptide IDs by modification blacklisting..."
<< endl;
set<String> bad_mods(blacklist_mods.begin(), blacklist_mods.end());
IDFilter::removePeptidesWithMatchingModifications(peptides, bad_mods);
}
String blacklist_regex = getStringOption_("blacklist:RegEx");
if (!blacklist_regex.empty())
{
IDFilter::removePeptidesWithMatchingRegEx(peptides, blacklist_regex);
}
if (getFlag_("best:strict"))
{
OPENMS_LOG_INFO << "Filtering by best peptide hits..." << endl;
IDFilter::keepBestPeptideHits(peptides, true);
}
Int min_length = 0, max_length = 0;
if (parseRange_(getStringOption_("precursor:length"), min_length, max_length))
{
OPENMS_LOG_INFO << "Filtering by peptide length..." << endl;
if ((min_length < 0) || (max_length < 0))
{
OPENMS_LOG_ERROR << "Fatal error: negative values are not allowed for parameter 'precursor:length'" << endl;
return ILLEGAL_PARAMETERS;
}
IDFilter::filterPeptidesByLength(peptides, Size(min_length),
Size(max_length));
}
// Filter by digestion enzyme product
String protein_fasta = getStringOption_("in_silico_digestion:fasta").trim();
if (!protein_fasta.empty())
{
OPENMS_LOG_INFO << "Filtering peptides by digested protein (FASTA input)..." << endl;
// load protein accessions from FASTA file:
vector<FASTAFile::FASTAEntry> fasta;
FASTAFile().load(protein_fasta, fasta);
// Configure Enzymatic digestion
ProteaseDigestion digestion;
String enzyme = getStringOption_("in_silico_digestion:enzyme").trim();
if (!enzyme.empty())
{
digestion.setEnzyme(enzyme);
}
String specificity = getStringOption_("in_silico_digestion:specificity").trim();
if (!specificity.empty())
{
digestion.setSpecificity(digestion.getSpecificityByName(specificity));
}
Int missed_cleavages = getIntOption_("in_silico_digestion:missed_cleavages");
bool ignore_missed_cleavages = true;
if (missed_cleavages > -1)
{
ignore_missed_cleavages = false;
      if (digestion.getSpecificity() != EnzymaticDigestion::SPEC_FULL)
      {
        OPENMS_LOG_WARN << "Specificity is not 'full', so the missed_cleavages option is redundant" << endl;
      }
digestion.setMissedCleavages(missed_cleavages);
}
bool methionine_cleavage = false;
if (getFlag_("in_silico_digestion:methionine_cleavage"))
{
methionine_cleavage = true;
}
// Build the digest filter function
IDFilter::DigestionFilter filter(fasta,
digestion,
ignore_missed_cleavages,
methionine_cleavage);
// Filter peptides
filter.filterPeptideEvidences(peptides);
}
// Filter peptide hits by missing cleavages
Int min_cleavages, max_cleavages;
min_cleavages = max_cleavages = IDFilter::PeptideDigestionFilter::disabledValue();
if (parseRange_(getStringOption_("missed_cleavages:number_of_missed_cleavages"), min_cleavages, max_cleavages))
{
// Configure Enzymatic digestion
ProteaseDigestion digestion;
String enzyme = getStringOption_("missed_cleavages:enzyme");
if (!enzyme.empty())
{
digestion.setEnzyme(enzyme);
}
OPENMS_LOG_INFO << "Filtering peptide hits by their missed cleavages count with enzyme " << digestion.getEnzymeName() << "..." << endl;
// Build the digest filter function
IDFilter::PeptideDigestionFilter filter(digestion, min_cleavages, max_cleavages);
// Filter peptide hits
for (auto& peptide : peptides)
{
filter.filterPeptideSequences(peptide.getHits());
}
}
if (getFlag_("var_mods"))
{
OPENMS_LOG_INFO << "Filtering for variable modifications..." << endl;
// gather possible variable modifications from search parameters:
set<String> var_mods;
for (ProteinIdentification& prot : proteins)
{
const ProteinIdentification::SearchParameters& params =
prot.getSearchParameters();
      for (const String& mod : params.variable_modifications)
      {
        var_mods.insert(mod);
      }
}
IDFilter::keepPeptidesWithMatchingModifications(peptides, var_mods);
}
double psm_score = getDoubleOption_("score:psm");
if (!std::isnan(psm_score))
{
OPENMS_LOG_INFO << "Filtering by PSM score (better than " << psm_score << ")..." << endl;
IDFilter::filterHitsByScore(peptides, psm_score);
}
else
{
OPENMS_LOG_INFO << "No 'score:psm' threshold set. Not filtering by peptide score." << endl;
}
double pep_score = getDoubleOption_("score:peptide");
String score_type = getStringOption_("score:type_peptide");
if (!std::isnan(pep_score))
{
OPENMS_LOG_INFO << "Filtering by peptide score (better than " << pep_score << ")..." << endl;
if (!score_type.empty())
{
IDScoreSwitcherAlgorithm::ScoreType score_type_enum = IDScoreSwitcherAlgorithm::toScoreTypeEnum(score_type);
      IDFilter::filterHitsByScore(peptides, pep_score, score_type_enum);
}
else
{
IDFilter::filterHitsByScore(peptides, pep_score);
}
}
else
{
OPENMS_LOG_INFO << "No 'score:peptide' threshold set. Not filtering by peptide score." << endl;
}
Int min_charge = numeric_limits<Int>::min(), max_charge =
numeric_limits<Int>::max();
if (parseRange_(getStringOption_("precursor:charge"), min_charge, max_charge))
{
OPENMS_LOG_INFO << "Filtering by peptide charge..." << endl;
IDFilter::filterPeptidesByCharge(peptides, min_charge, max_charge);
}
const Size best_n_spectra = getIntOption_("best:n_spectra");
if (best_n_spectra > 0)
{
OPENMS_LOG_INFO << "Filtering by best n spectra..." << endl;
IDFilter::keepNBestSpectra(peptides, best_n_spectra);
}
Size best_n_pep = getIntOption_("best:n_peptide_hits");
if (best_n_pep > 0)
{
OPENMS_LOG_INFO << "Filtering by best n peptide hits..." << endl;
IDFilter::keepNBestHits(peptides, best_n_pep);
}
String spectrum_per_peptide = getStringOption_("best:spectrum_per_peptide");
if (spectrum_per_peptide != "false")
{
OPENMS_LOG_INFO << "Keeping best spectrum per " << spectrum_per_peptide << endl;
    if (spectrum_per_peptide == "sequence") // group by sequence and return best spectrum (-> smallest number of spectra)
    {
      IDFilter::keepBestPerPeptide(peptides, true, true, 1);
    }
    else if (spectrum_per_peptide == "sequence+modification")
    {
      IDFilter::keepBestPerPeptide(peptides, false, true, 1);
    }
    else if (spectrum_per_peptide == "sequence+charge")
    {
      IDFilter::keepBestPerPeptide(peptides, true, false, 1);
    }
    else if (spectrum_per_peptide == "sequence+charge+modification") // group by sequence, modification, charge combination and return best spectrum (-> largest number of spectra)
    {
      IDFilter::keepBestPerPeptide(peptides, false, false, 1);
    }
}
Int min_rank = 0, max_rank = 0;
if (parseRange_(getStringOption_("best:n_to_m_peptide_hits"), min_rank,
max_rank))
{
OPENMS_LOG_INFO << "Filtering by peptide hit ranks..." << endl;
if ((min_rank < 0) || (max_rank < 0))
{
OPENMS_LOG_ERROR << "Fatal error: negative values are not allowed for parameter 'best:n_to_m_peptide_hits'" << endl;
return ILLEGAL_PARAMETERS;
}
IDFilter::filterHitsByRank(peptides, Size(min_rank), Size(max_rank));
}
double mz_error = getDoubleOption_("mz:error");
if (mz_error > 0)
{
OPENMS_LOG_INFO << "Filtering by mass error..." << endl;
bool unit_ppm = (getStringOption_("mz:unit") == "ppm");
IDFilter::filterPeptidesByMZError(peptides, mz_error, unit_ppm);
}
// Filtering protein identifications according to set criteria
double prot_score = getDoubleOption_("score:protein");
String score_type_prot = getStringOption_("score:type_protein");
if (!std::isnan(prot_score))
{
OPENMS_LOG_INFO << "Filtering by protein score (better than " << prot_score << ") ..." << endl;
if (!score_type_prot.empty())
{
IDScoreSwitcherAlgorithm::ScoreType score_type_prot_enum = IDScoreSwitcherAlgorithm::toScoreTypeEnum(score_type_prot);
IDFilter::filterHitsByScore(proteins, prot_score, score_type_prot_enum);
}
else
{
IDFilter::filterHitsByScore(proteins, prot_score);
}
}
else
{
OPENMS_LOG_INFO << "No 'score:protein' threshold set. Not filtering by protein score." << endl;
}
Size best_n_prot = getIntOption_("best:n_protein_hits");
if (best_n_prot > 0)
{
OPENMS_LOG_INFO << "Filtering by best n protein hits (" << best_n_prot << ") ... " << endl;
IDFilter::keepNBestHits(proteins, best_n_prot);
}
if (getFlag_("remove_decoys"))
{
OPENMS_LOG_INFO << "Removing decoy hits..." << endl;
IDFilter::removeDecoyHits(peptides);
IDFilter::removeDecoyHits(proteins);
}
  // Filtering protein groups according to set criteria
double prot_grp_score = getDoubleOption_("score:proteingroup");
if (!std::isnan(prot_grp_score))
{
    OPENMS_LOG_INFO << "Filtering by protein group score..." << endl;
    for (auto& proteinid : proteins)
    {
      IDFilter::filterGroupsByScore(proteinid.getIndistinguishableProteins(), prot_grp_score, proteinid.isHigherScoreBetter());
      IDFilter::filterGroupsByScore(proteinid.getProteinGroups(), prot_grp_score, proteinid.isHigherScoreBetter());
    }
}
else
{
OPENMS_LOG_INFO << "No 'score:proteingroup' threshold set. Not filtering by protein group score." << endl;
}
// remove peptide hits with meta values:
if (remove_meta_enabled)
{
auto checkMVs = [this, &meta_info](PeptideHit& ph)->bool
{
if (!ph.metaValueExists(meta_info[0])) return true; // not having the meta value means passing the test
DataValue v_data = ph.getMetaValue(meta_info[0]);
DataValue v_user;
switch (v_data.valueType())
{
case DataValue::STRING_VALUE : v_user = String(meta_info[2]); break;
case DataValue::INT_VALUE : v_user = String(meta_info[2]).toInt(); break;
case DataValue::DOUBLE_VALUE : v_user = String(meta_info[2]).toDouble(); break;
case DataValue::STRING_LIST : v_user = (StringList)ListUtils::create<String>(meta_info[2]); break;
case DataValue::INT_LIST : v_user = ListUtils::create<Int>(meta_info[2]); break;
case DataValue::DOUBLE_LIST : v_user = ListUtils::create<double>(meta_info[2]); break;
case DataValue::EMPTY_VALUE : v_user = DataValue::EMPTY; break;
        default: throw Exception::ConversionError(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Type of DataValue is unknown!"); break;
}
if (meta_info[1] == "lt")
{
return !(v_data < v_user);
}
else if (meta_info[1] == "eq")
{
return !(v_data == v_user);
}
else if (meta_info[1] == "gt")
{
return !(v_data > v_user);
}
else if (meta_info[1] == "ne")
{
return (v_data == v_user);
}
else
{
writeLogError_("Internal Error. Meta value filtering got invalid comparison operator ('" + meta_info[1] + "'), which should have been caught before! Aborting!");
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Illegal meta value filtering operator!");
}
}; // of lambda
for (auto & pid : peptides)
{
vector<PeptideHit>& phs = pid.getHits();
phs.erase(remove_if(phs.begin(), phs.end(), checkMVs), phs.end());
}
}
// Clean-up:
// propagate filter from PSM level to protein level
if (!getFlag_("keep_unreferenced_protein_hits"))
{
OPENMS_LOG_INFO << "Removing unreferenced protein hits..." << endl;
IDFilter::removeUnreferencedProteins(proteins, peptides);
}
// propagate filter from protein level to protein group level
for (ProteinIdentification& prot : proteins)
{
bool valid = IDFilter::updateProteinGroups(prot.getProteinGroups(),
prot.getHits());
if (!valid)
{
OPENMS_LOG_WARN << "Warning: While updating protein groups, some proteins were removed from groups that are still present. The new grouping (especially the group probabilities) may not be completely valid any more." << endl;
}
valid = IDFilter::updateProteinGroups(
prot.getIndistinguishableProteins(), prot.getHits());
if (!valid)
{
OPENMS_LOG_WARN << "Warning: While updating indistinguishable proteins, some proteins were removed from groups that are still present. The new grouping (especially the group probabilities) may not be completely valid any more." << endl;
}
if (!std::isnan(prot_grp_score))
{
// Pass potential filtering on group level down to proteins
IDFilter::removeUngroupedProteins(prot.getIndistinguishableProteins(), prot.getHits());
}
}
// remove non-existant protein references from peptides (and optionally:
// remove peptides with no proteins):
bool rm_pep = getFlag_("delete_unreferenced_peptide_hits");
if (rm_pep)
{
OPENMS_LOG_INFO << "Removing peptide hits without protein references..." << endl;
}
IDFilter::removeDanglingProteinReferences(peptides, proteins, rm_pep);
IDFilter::removeEmptyIdentifications(peptides);
// we want to keep "empty" protein IDs because they contain search meta data
// some stats
OPENMS_LOG_INFO << "Before filtering:\n"
<< n_prot_ids << " identification runs with "
<< n_prot_hits << " proteins,\n"
<< n_pep_ids << " spectra identified with "
<< n_pep_hits << " spectrum matches.\n"
<< "After filtering:\n"
<< proteins.size() << " identification runs with "
<< IDFilter::countHits(proteins) << " proteins,\n"
<< peptides.size() << " spectra identified with "
<< IDFilter::countHits(peptides) << " spectrum matches." << endl;
if (infiletype == FileTypes::IDXML)
{
FileHandler().storeIdentifications(outputfile_name, proteins, peptides, {FileTypes::IDXML});
}
else if (infiletype == FileTypes::CONSENSUSXML)
{
for (auto& p : peptides)
{
if (p.metaValueExists(tmp_feature_id_metaval_))
{
UInt64 id(0);
std::stringstream(p.getMetaValue(tmp_feature_id_metaval_).toString()) >> id;
p.removeMetaValue(tmp_feature_id_metaval_);
auto& f = id_to_featureref[id];
f->getPeptideIdentifications().push_back(std::move(p));
}
else
{
cmap.getUnassignedPeptideIdentifications().push_back(std::move(p));
}
}
peptides.clear();
std::swap(proteins, cmap.getProteinIdentifications());
proteins.clear();
FileHandler().storeConsensusFeatures(outputfile_name, cmap, {FileTypes::CONSENSUSXML});
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPIDFilter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/QuantmsIOConverter.cpp | .cpp | 5,319 | 147 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Oliver Alka $
// $Authors: Oliver Alka $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/IdXMLFile.h>
#include <OpenMS/FORMAT/QuantmsIO.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
/// @cond WITH_PARQUET
@page TOPP_QuantmsIOConverter QuantmsIOConverter
@brief Converts IdXML files to parquet format following quantms.io PSM specification.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → QuantmsIOConverter →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> Any identification tool producing idXML </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> quantms.io analysis tools </td>
</tr>
</table>
</CENTER>
QuantmsIOConverter reads peptide and protein identifications from idXML files
and converts them to parquet format following the quantms.io PSM (Peptide
Spectrum Match) specification.
The output parquet file contains PSM data with columns following the quantms.io PSM specification:
- sequence: unmodified peptide sequence
- peptidoform: peptide sequence with modifications
- modifications: peptide modifications (null for now)
- precursor_charge: precursor charge
- posterior_error_probability: PEP score from metavalues (nullable)
- is_decoy: decoy flag (0=target, 1=decoy) based on target_decoy metavalue
- calculated_mz: theoretical m/z from sequence
- observed_mz: experimental precursor m/z
- additional_scores: additional scores (null for now)
- mp_accessions: protein accessions (null for now)
- predicted_rt: predicted retention time (null for now)
- reference_file_name: reference file name
- cv_params: CV parameters (null for now)
- scan: scan identifier
- rt: retention time in seconds (nullable)
- ion_mobility: ion mobility value (nullable, null for now)
- num_peaks: number of peaks (nullable, null for now)
- mz_array: m/z values array (null for now)
- intensity_array: intensity values array (null for now)
Only the first peptide hit per peptide identification is processed (no rank field).
PEP scores are automatically detected from metavalues using known PEP score names.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_QuantmsIOConverter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_QuantmsIOConverter.html
/// @endcond
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPQuantmsIOConverter :
public TOPPBase
{
public:
TOPPQuantmsIOConverter() :
TOPPBase("QuantmsIOConverter", "Converts IdXML files to parquet format following quantms.io PSM specification.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input idXML file");
setValidFormats_("in", ListUtils::create<String>("idXML"));
registerOutputFile_("out", "<file>", "", "Output parquet file", true);
setValidFormats_("out", ListUtils::create<String>("parquet"));
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
const String in = getStringOption_("in");
const String out = getStringOption_("out");
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
vector<ProteinIdentification> protein_identifications;
PeptideIdentificationList peptide_identifications;
OPENMS_LOG_INFO << "Loading idXML file..." << endl;
IdXMLFile idxml_file;
idxml_file.load(in, protein_identifications, peptide_identifications);
if (peptide_identifications.empty())
{
OPENMS_LOG_WARN << "No peptide identifications found in input file." << endl;
return INPUT_FILE_EMPTY;
}
OPENMS_LOG_INFO << "Found " << peptide_identifications.size()
<< " peptide identifications in "
<< protein_identifications.size()
<< " protein identification runs." << endl;
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
OPENMS_LOG_INFO << "Converting to parquet format..." << endl;
QuantmsIO quantms_io;
quantms_io.store(out, protein_identifications, peptide_identifications);
OPENMS_LOG_INFO << "Conversion completed successfully." << endl;
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPQuantmsIOConverter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/MaRaClusterAdapter.cpp | .cpp | 18,793 | 492 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Leon Bichmann $
// $Authors: Mathew The, Leon Bichmann $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/CONCEPT/Constants.h>
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/SYSTEM/File.h>
#include <iostream>
#include <cmath>
#include <string>
#include <set>
#include <map>
#include <QtCore/QProcess>
#include <boost/algorithm/clamp.hpp>
#include <typeinfo>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MaRaClusterAdapter MaRaClusterAdapter
@brief MaRaClusterAdapter prepares the input for, runs, and integrates the output of MaRaCluster.
MaRaCluster (https://github.com/statisticalbiotechnology/maracluster) is a tool for unsupervised clustering of MS2 spectra from shotgun proteomics datasets.
@experimental This tool is work in progress and usage and input requirements might change.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → MaRaClusterAdapter →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1>any signal-/preprocessing tool @n (in mzML format) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_MSGFPlusAdapter </td>
</tr>
</table>
</center>
<p>MaRaCluster depends on the input parameter pcut, which is the logarithm of the p-value cutoff.
The default value is -10; lower values result in smaller but purer clusters. Optionally, peptide search results
can be provided as idXML files, in which case the MaRaClusterAdapter annotates the cluster id as an attribute of each peptide
identification and writes the result as a merged idXML. Alternatively, a merged idXML containing only scan numbers,
cluster ids and file origin can be written without prior peptide identification searches. The assigned cluster ids in
the respective idXML are equal to the scan index of the produced clustered mzML.
</p>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MaRaClusterAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MaRaClusterAdapter.html
MaRaCluster is written by Matthew The (https://github.com/statisticalbiotechnology/maracluster;
Copyright Matthew The <matthew.the@scilifelab.se>).
Cite Publication:
MaRaCluster: A Fragment Rarity Metric for Clustering Fragment Spectra in Shotgun Proteomics
Journal of proteome research, 2016, 15(3), pp 713-720 DOI: 10.1021/acs.jproteome.5b00749
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class MaRaClusterAdapter :
public TOPPBase
{
public:
MaRaClusterAdapter() :
TOPPBase("MaRaClusterAdapter", "Facilitate input to MaRaCluster and reintegrate.", true,
{ // citation(s), specific for this tool
{ "The M and Käll L", "MaRaCluster: A Fragment Rarity Metric for Clustering Fragment Spectra in Shotgun Proteomics", "J Proteome Res 2016; 15: 3", "10.1021/acs.jproteome.5b00749"}
}
)
{
}
protected:
struct MaRaClusterResult
{
Int file_idx;
Int scan_nr;
MaRaClusterResult(const Int f, const Int s):
file_idx (f),
scan_nr (s)
{
}
explicit MaRaClusterResult(StringList& row)
{
file_idx = row[0].toInt();
scan_nr = row[1].toInt();
}
bool operator!=(const MaRaClusterResult& rhs) const
{
if (file_idx != rhs.file_idx || scan_nr != rhs.scan_nr)
{
return true;
}
return false;
}
bool operator<(const MaRaClusterResult& rhs) const
{
if (file_idx < rhs.file_idx || (file_idx == rhs.file_idx && scan_nr < rhs.scan_nr))
{
return true;
}
return false;
}
bool operator==(const MaRaClusterResult& rhs) const
{
return !(operator !=(rhs));
}
};
void registerOptionsAndFlags_() override
{
static const bool is_required(true);
static const bool is_advanced_option(true);
//input
registerInputFileList_("in", "<files>", StringList(), "Input file(s)", is_required);
setValidFormats_("in", ListUtils::create<String>("mzML,mgf"));
registerInputFileList_("id_in", "<files>", StringList(), "Optional idXML Input file(s) in the same order as mzML files - for Maracluster Cluster annotation", !is_required);
setValidFormats_("id_in", ListUtils::create<String>("idXML"));
//output
registerOutputFile_("out", "<file>", "", "Output file in idXML format", !is_required);
setValidFormats_("out", ListUtils::create<String>("idXML"));
registerOutputFile_("consensus_out", "<file>", "", "Consensus spectra in mzML format", !is_required);
setValidFormats_("consensus_out", ListUtils::create<String>("mzML"));
registerStringOption_("output_directory", "<directory>", "", "Output directory for MaRaCluster original consensus output", false);
//pvalue cutoff
registerDoubleOption_("pcut", "<value>", -10.0, "log(p-value) cutoff, has to be < 0.0. Default: -10.0.", !is_required);
setMaxFloat_("pcut", 0.0);
registerIntOption_("min_cluster_size", "<value>", 1, "minimum number of spectra in a cluster for consensus spectra", !is_required);
// minimal cluster size
setMinInt_("min_cluster_size", 1);
// executable
registerInputFile_("maracluster_executable", "<executable>",
// choose the default value according to the platform where it will be executed
#ifdef OPENMS_WINDOWSPLATFORM
"maracluster.exe",
#else
"maracluster",
#endif
"The maracluster executable. Provide a full or relative path, or make sure it can be found in your PATH environment.", is_required, !is_advanced_option, {"is_executable"}
);
// Advanced parameters
registerIntOption_("verbose", "<level>", 2, "Set verbosity of output: 0=no processing info, 5=all.", !is_required, is_advanced_option);
registerDoubleOption_("precursor_tolerance", "<tolerance>", 20.0, "Precursor monoisotopic mass tolerance", !is_required, is_advanced_option);
registerStringOption_("precursor_tolerance_units", "<choice>", "ppm", "tolerance_mass_units 0=ppm, 1=Da", !is_required, is_advanced_option);
setValidStrings_("precursor_tolerance_units", ListUtils::create<String>("ppm,Da"));
}
// read and parse clustering output csv to store specnumber and clusterid associations
void readMClusterOutputAsMap_(String mcout_file, std::map<MaRaClusterResult, Int>& specid_to_clusterid_map, const std::map<String, Int>& filename_to_idx_map)
{
CsvFile csv_file(mcout_file, '\t');
StringList row;
Int clusterid = 0;
for (Size i = 0; i < csv_file.rowCount(); ++i)
{
csv_file.getRow(i, row);
if (!row.empty())
{
row[0] = String(filename_to_idx_map.at(row[0]));
MaRaClusterResult res(row);
specid_to_clusterid_map[res] = clusterid;
}
else
{
++clusterid;
}
}
}
// replace with PercolatorAdapter function
String getScanIdentifier_(PeptideIdentificationList::iterator it, PeptideIdentificationList::iterator start)
{
// MSGF+ uses this field, is empty if not specified
String scan_identifier = it->getSpectrumReference();
if (scan_identifier.empty())
{
// XTandem uses this (integer) field
// these ids are 1-based in contrast to the index which is 0-based. This might be problematic to use for merging
if (it->metaValueExists("spectrum_id") && !it->getMetaValue("spectrum_id").toString().empty())
{
scan_identifier = "scan=" + it->getMetaValue("spectrum_id").toString();
}
else
{
scan_identifier = "index=" + String(it - start + 1);
OPENMS_LOG_WARN << "no known spectrum identifiers, using index [1,n] - use at own risk." << endl;
}
}
return scan_identifier.removeWhitespaces();
}
// replace with PercolatorAdapter function
Int getScanNumber_(String scan_identifier)
{
Int scan_number = 0;
StringList fields = ListUtils::create<String>(scan_identifier);
for (const String& st : fields)
{
// if scan number is not available, use the scan index
Size idx = 0;
if ((idx = st.find("scan=")) != string::npos)
{
scan_number = st.substr(idx + 5).toInt();
break;
}
else if ((idx = st.find("index=")) != string::npos)
{
scan_number = st.substr(idx + 6).toInt();
break;
}
else if ((idx = st.find("spectrum=")) != string::npos)
{
scan_number = st.substr(idx + 9).toInt();
}
}
return scan_number;
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
const StringList in_list = getStringList_("in");
const String maracluster_executable(getStringOption_("maracluster_executable"));
writeDebug_(String("Path to the maracluster executable: ") + maracluster_executable, 2);
String maracluster_output_directory = getStringOption_("output_directory");
const String consensus_out(getStringOption_("consensus_out"));
const String out(getStringOption_("out"));
if (in_list.empty())
{
writeLogError_("Error: no input file given (parameter 'in')");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (consensus_out.empty() && out.empty())
{
writeLogError_("Error: no output file given (parameter 'out' or 'consensus_out')");
printUsage_();
return ILLEGAL_PARAMETERS;
}
//-------------------------------------------------------------
// read input
//-------------------------------------------------------------
// create temp directory to store maracluster temporary files
File::TempDir tmp_dir(debug_level_ >= 2);
double pcut = getDoubleOption_("pcut");
String txt_designator = File::getUniqueName();
String input_file_list(tmp_dir.getPath() + txt_designator + ".file_list.txt");
String consensus_output_file(tmp_dir.getPath() + txt_designator + ".clusters_p" + String(Int(-1*pcut)) + ".tsv");
// Create simple text file with one file path per line
// TODO make a bit more exception safe
ofstream os(input_file_list.c_str());
map<String,Int> filename_to_file_idx;
Int file_idx = 0;
for (StringList::const_iterator fit = in_list.begin(); fit != in_list.end(); ++fit, ++file_idx) {
filename_to_file_idx[*fit] = file_idx;
os << *fit;
if (fit + 1 != in_list.end())
{
os << endl;
}
}
os.close();
QStringList arguments;
// Check all set parameters and get them into arguments StringList
{
arguments << "batch";
arguments << "-b" << input_file_list.toQString();
arguments << "-f" << tmp_dir.getPath().toQString();
arguments << "-a" << txt_designator.toQString();
map<String,int> precursor_tolerance_units;
precursor_tolerance_units["ppm"] = 0;
precursor_tolerance_units["Da"] = 1;
arguments << "-p" << (String(getDoubleOption_("precursor_tolerance")) + precursor_tolerance_units[getStringOption_("precursor_tolerance_units")]).toQString();
arguments << "-t" << String(pcut).toQString();
arguments << "-c" << String(pcut).toQString();
Int verbose_level = getIntOption_("verbose");
if (verbose_level != 2)
{
arguments << "-v" << String(verbose_level).toQString();
}
}
writeLogInfo_("Prepared maracluster command.");
//-------------------------------------------------------------
// run MaRaCluster for idXML output
//-------------------------------------------------------------
// MaRaCluster execution with the executable and the arguments StringList
writeLogInfo_("Executing maracluster ...");
auto exit_code = runExternalProcess_(maracluster_executable.toQString(), arguments);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
//-------------------------------------------------------------
// reintegrate clustering results
//-------------------------------------------------------------
std::map<MaRaClusterResult, Int> specid_to_clusterid_map;
readMClusterOutputAsMap_(consensus_output_file, specid_to_clusterid_map, filename_to_file_idx);
file_idx = 0;
//if specified keep original output in designated directory
if (!maracluster_output_directory.empty())
{
bool copy_status = File::copyDirRecursively(tmp_dir.getPath().toQString(), maracluster_output_directory.toQString());
if (copy_status)
{
OPENMS_LOG_INFO << "MaRaCluster original output was successfully copied to " << maracluster_output_directory << std::endl;
}
else
{
OPENMS_LOG_INFO << "MaRaCluster original output could not be copied to " << maracluster_output_directory << ". Please run MaRaClusterAdapter with debug >= 2." << std::endl;
}
}
// output idXML containing scan number and cluster id annotation
if (!out.empty())
{
const StringList id_in = getStringList_("id_in");
PeptideIdentificationList all_peptide_ids;
vector<ProteinIdentification> all_protein_ids;
if (!id_in.empty())
{
for (const String& ss : id_in) {
PeptideIdentificationList peptide_ids;
vector<ProteinIdentification> protein_ids;
FileHandler().loadIdentifications(ss, protein_ids, peptide_ids, {FileTypes::IDXML});
for (PeptideIdentificationList::iterator it = peptide_ids.begin(); it != peptide_ids.end(); ++it) {
String scan_identifier = getScanIdentifier_(it, peptide_ids.begin());
Int scan_number = getScanNumber_(scan_identifier);
MaRaClusterResult res(file_idx, scan_number);
// cluster index - 1 is equal to scan_number in consensus.mzML
Int cluster_id = specid_to_clusterid_map[res] - 1;
it->setMetaValue("cluster_id", cluster_id);
String filename = in_list[file_idx];
it->setMetaValue("file_origin", filename);
}
for (ProteinIdentification& prot : protein_ids) {
String filename = in_list[file_idx];
prot.setMetaValue("file_origin", filename);
}
all_peptide_ids.insert(all_peptide_ids.end(), peptide_ids.begin(), peptide_ids.end());
all_protein_ids.insert(all_protein_ids.end(), protein_ids.begin(), protein_ids.end());
++file_idx;
}
}
else
{
for (std::map<MaRaClusterResult,Int>::iterator sid = specid_to_clusterid_map.begin(); sid != specid_to_clusterid_map.end(); ++sid) {
Int scan_nr = sid->first.scan_nr;
Int file_id = sid->first.file_idx;
Int cluster_id = sid->second;
PeptideIdentification pid;
PeptideHit pih;
pid.insertHit(pih);
pid.setSpectrumReference("scan=" + String(scan_nr));
// cluster index - 1 is equal to scan_number in consensus.mzML
pid.setMetaValue("cluster_id", cluster_id - 1);
pid.setMetaValue("file_origin", in_list[file_id]);
all_peptide_ids.push_back(pid);
}
}
if (all_protein_ids.empty())
{
ProteinIdentification protid;
all_protein_ids.push_back(protid);
}
all_protein_ids.back().setMetaValue("maracluster", "MaRaClusterAdapter");
ProteinIdentification::SearchParameters search_parameters = all_protein_ids.back().getSearchParameters();
search_parameters.setMetaValue("MaRaCluster:pvalue-cutoff", pcut);
all_protein_ids.back().setSearchParameters(search_parameters);
writeDebug_("write idXMLFile", 1);
writeDebug_(out, 1);
// As the maracluster output file is not needed anymore, the temporary directory is going to be deleted
FileHandler().storeIdentifications(out, all_protein_ids, all_peptide_ids, {FileTypes::IDXML});
}
//output consensus mzML
if (!consensus_out.empty())
{
QStringList arguments_consensus;
// Check all set parameters and get them into arguments StringList
{
arguments_consensus << "consensus";
arguments_consensus << "-l" << consensus_output_file.toQString();
arguments_consensus << "-f" << tmp_dir.getPath().toQString();
arguments_consensus << "-o" << consensus_out.toQString();
Int min_cluster_size = getIntOption_("min_cluster_size");
arguments_consensus << "-M" << String(min_cluster_size).toQString();
Int verbose_level = getIntOption_("verbose");
if (verbose_level != 2) arguments_consensus << "-v" << String(verbose_level).toQString();
}
writeLogInfo_("Prepared maracluster-consensus command.");
//-------------------------------------------------------------
// run MaRaCluster for consensus output
//-------------------------------------------------------------
// MaRaCluster execution with the executable and the arguments StringList
TOPPBase::ExitCodes exit_code = runExternalProcess_(maracluster_executable.toQString(), arguments_consensus);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
// sort mzML
FileHandler fh;
FileTypes::Type in_type = fh.getType(consensus_out);
OPENMS_LOG_DEBUG << "Input type: " << FileTypes::typeToName(in_type) << "." << std::endl;
PeakMap exp;
fh.loadExperiment(FileHandler::stripExtension(consensus_out) + ".part1." + FileTypes::typeToName(in_type), exp, {in_type}, log_type_, true, true);
exp.sortSpectra();
fh.storeExperiment(consensus_out, exp, {FileTypes::MZML}, log_type_);
}
writeLogInfo_("MaRaClusterAdapter finished successfully!");
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
MaRaClusterAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/OpenSwathAssayGenerator.cpp | .cpp | 21038 | 437 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: George Rosenberger $
// $Authors: George Rosenberger $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/OPENSWATH/MRMAssay.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionTSVFile.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionPQPFile.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/DATASTRUCTURES/ListUtils.h>
#include <OpenMS/CHEMISTRY/AASequence.h>
#include <OpenMS/ANALYSIS/OPENSWATH/SwathWindowLoader.h>
#include <OpenMS/MATH/MathFunctions.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <iostream>
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathAssayGenerator OpenSwathAssayGenerator
@brief Generates filtered and optimized assays using TraML files.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → OpenSwathAssayGenerator →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathDecoyGenerator </td>
</tr>
</table>
</CENTER>
This module generates assays for targeted proteomics using a set of rules
that was found to improve the sensitivity and selectivity for detection
of typical peptides (Schubert et al., 2015). The tool operates on @ref
OpenMS::TraMLFile "TraML" files, which can come from @ref
TOPP_TargetedFileConverter or any other tool. In a first step, the tool will
annotate all transitions according to the predefined criteria. In a second
step, the transitions will be filtered to improve sensitivity for detection
of peptides.
Optionally, theoretical identification transitions can be generated when the
TraML will be used for IPF scoring in OpenSWATH, see @ref OpenMS::MRMAssay
"MRMAssay" for more information on the algorithm. This is recommended if
post-translational modifications are scored with OpenSWATH.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathAssayGenerator.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathAssayGenerator.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathAssayGenerator :
public TOPPBase
{
public:
TOPPOpenSwathAssayGenerator() :
TOPPBase("OpenSwathAssayGenerator", "Generates assays according to different models for a specific TraML", true)
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file");
registerStringOption_("in_type", "<type>", "", "Input file type -- default: determined from file extension or content\n", false);
String formats("tsv,mrm,pqp,TraML");
setValidFormats_("in", ListUtils::create<String>(formats));
setValidStrings_("in_type", ListUtils::create<String>(formats));
formats = "tsv,pqp,TraML";
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>(formats));
registerStringOption_("out_type", "<type>", "", "Output file type -- default: determined from file extension or content\n", false);
setValidStrings_("out_type", ListUtils::create<String>(formats));
registerIntOption_("min_transitions", "<int>", 6, "minimal number of transitions", false);
registerIntOption_("max_transitions", "<int>", 6, "maximal number of transitions", false);
registerStringOption_("allowed_fragment_types", "<type>", "b,y", "allowed fragment types", false);
registerStringOption_("allowed_fragment_charges", "<type>", "1,2,3,4", "allowed fragment charge states", false);
registerFlag_("enable_detection_specific_losses", "set this flag if specific neutral losses for detection fragment ions should be allowed");
registerFlag_("enable_detection_unspecific_losses", "set this flag if unspecific neutral losses (H2O1, H3N1, C1H2N2, C1H2N1O1) for detection fragment ions should be allowed");
registerDoubleOption_("precursor_mz_threshold", "<double>", 0.025, "MZ threshold in Thomson for precursor ion selection", false);
registerDoubleOption_("precursor_lower_mz_limit", "<double>", 400, "lower MZ limit for precursor ions", false);
registerDoubleOption_("precursor_upper_mz_limit", "<double>", 1200, "upper MZ limit for precursor ions", false);
registerDoubleOption_("product_mz_threshold", "<double>", 0.025, "MZ threshold in Thomson for fragment ion annotation", false);
registerDoubleOption_("product_lower_mz_limit", "<double>", 350, "lower MZ limit for fragment ions", false);
registerDoubleOption_("product_upper_mz_limit", "<double>", 2000, "upper MZ limit for fragment ions", false);
registerInputFile_("swath_windows_file", "<file>", "", "Tab separated file containing the SWATH windows for exclusion of fragment ions falling into the precursor isolation window: lower_offset upper_offset \\newline 400 425 \\newline ... Note that the first line is a header and will be skipped.", false, false);
setValidFormats_("swath_windows_file", ListUtils::create<String>("txt"));
registerInputFile_("unimod_file", "<file>", "", "(Modified) Unimod XML file (http://www.unimod.org/xml/unimod.xml) describing residue modifiability", false, false);
setValidFormats_("unimod_file", ListUtils::create<String>("xml"));
registerFlag_("enable_ipf", "IPF: set this flag if identification transitions should be generated for IPF. Note: Requires setting 'unimod_file'.");
registerIntOption_("max_num_alternative_localizations", "<int>", 10000, "IPF: maximum number of site-localization permutations", false, true);
registerFlag_("disable_identification_ms2_precursors", "IPF: set this flag if MS2-level precursor ions for identification should not be allowed for extraction of the precursor signal from the fragment ion data (MS2-level).", true);
registerFlag_("disable_identification_specific_losses", "IPF: set this flag if specific neutral losses for identification fragment ions should not be allowed", true);
registerFlag_("enable_identification_unspecific_losses", "IPF: set this flag if unspecific neutral losses (H2O1, H3N1, C1H2N2, C1H2N1O1) for identification fragment ions should be allowed", true);
registerFlag_("enable_swath_specifity", "IPF: set this flag if identification transitions without precursor specificity (i.e. across whole precursor isolation window instead of precursor MZ) should be generated.", true);
registerFlag_("disable_decoy_transitions", "IPF: set this flag to disable generation of decoy UIS transitions (use for faster testing or when decoys are not needed)", true);
registerIntOption_("ipf_decoy_seed", "<int>", -1, "IPF: random seed for decoy shuffle (-1 = use time-based seed)", false, true);
}
ExitCodes main_(int, const char**) override
{
FileHandler fh;
//input file type
String in = getStringOption_("in");
FileTypes::Type in_type = FileTypes::nameToType(getStringOption_("in_type"));
if (in_type == FileTypes::UNKNOWN)
{
in_type = fh.getType(in);
writeDebug_(String("Input file type: ") + FileTypes::typeToName(in_type), 2);
}
if (in_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine input file type!");
return PARSE_ERROR;
}
//output file names and types
String out = getStringOption_("out");
FileTypes::Type out_type = FileTypes::nameToType(getStringOption_("out_type"));
if (out_type == FileTypes::UNKNOWN)
{
out_type = fh.getTypeByFileName(out);
}
if (out_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine output file type!");
return PARSE_ERROR;
}
Int min_transitions = getIntOption_("min_transitions");
Int max_transitions = getIntOption_("max_transitions");
String allowed_fragment_types_string = getStringOption_("allowed_fragment_types");
String allowed_fragment_charges_string = getStringOption_("allowed_fragment_charges");
bool enable_detection_specific_losses = getFlag_("enable_detection_specific_losses");
bool enable_detection_unspecific_losses = getFlag_("enable_detection_unspecific_losses");
bool enable_identification_specific_losses = !getFlag_("disable_identification_specific_losses");
bool enable_identification_unspecific_losses = getFlag_("enable_identification_unspecific_losses");
bool enable_identification_ms2_precursors = !getFlag_("disable_identification_ms2_precursors");
bool enable_ipf = getFlag_("enable_ipf");
bool enable_swath_specifity = getFlag_("enable_swath_specifity");
size_t max_num_alternative_localizations = getIntOption_("max_num_alternative_localizations");
double precursor_mz_threshold = getDoubleOption_("precursor_mz_threshold");
double precursor_lower_mz_limit = getDoubleOption_("precursor_lower_mz_limit");
double precursor_upper_mz_limit = getDoubleOption_("precursor_upper_mz_limit");
double product_mz_threshold = getDoubleOption_("product_mz_threshold");
double product_lower_mz_limit = getDoubleOption_("product_lower_mz_limit");
double product_upper_mz_limit = getDoubleOption_("product_upper_mz_limit");
String swath_windows_file = getStringOption_("swath_windows_file");
String unimod_file = getStringOption_("unimod_file");
bool is_test = getFlag_("test");
// Get IPF decoy parameters from command line
int uis_seed = getIntOption_("ipf_decoy_seed");
bool disable_decoy_transitions = getFlag_("disable_decoy_transitions");
// In test mode, use fixed seed and disable decoy transitions for reproducibility
if (is_test)
{
if (uis_seed == -1)
{
uis_seed = 42;
}
disable_decoy_transitions = true;
}
std::vector<String> allowed_fragment_types;
allowed_fragment_types_string.split(",", allowed_fragment_types);
std::vector<String> allowed_fragment_charges_string_vector;
std::vector<size_t> allowed_fragment_charges;
allowed_fragment_charges_string.split(",", allowed_fragment_charges_string_vector);
for (size_t i = 0; i < allowed_fragment_charges_string_vector.size(); i++)
{
size_t charge = std::atoi(allowed_fragment_charges_string_vector.at(i).c_str());
allowed_fragment_charges.push_back(charge);
}
// Require Unimod XML file when running IPF to prevent accidental mistakes
if (enable_ipf && unimod_file.empty())
{
throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Please provide a valid Unimod XML file for IPF.");
}
// Load Unimod file
if (!unimod_file.empty())
{
if (!ModificationsDB::isInstantiated()) // We need to ensure that ModificationsDB was not instantiated before!
{
ModificationsDB* ptr = ModificationsDB::initializeModificationsDB(unimod_file, String(""), String(""));
OPENMS_LOG_INFO << "Unimod XML: " << ptr->getNumberOfModifications() << " modification types and residue specificities imported from file: " << unimod_file << std::endl;
}
else
{
throw Exception::Precondition(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "ModificationsDB has been instantiated before and can not be generated from the provided Unimod XML file.");
}
}
std::vector<std::pair<double, double> > swathes;
// Check swath window input
if (!swath_windows_file.empty())
{
OPENMS_LOG_INFO << "Validate provided Swath windows file:" << std::endl;
std::vector<double> swath_prec_lower;
std::vector<double> swath_prec_upper;
SwathWindowLoader::readSwathWindows(swath_windows_file, swath_prec_lower, swath_prec_upper);
OPENMS_LOG_INFO << "Read Swath maps file with " << swath_prec_lower.size() << " windows." << std::endl;
for (Size i = 0; i < swath_prec_lower.size(); i++)
{
swathes.push_back(std::make_pair(swath_prec_lower[i], swath_prec_upper[i]));
OPENMS_LOG_DEBUG << "Read lower swath window " << swath_prec_lower[i] << " and upper window " << swath_prec_upper[i] << std::endl;
}
}
// Use memory-efficient Light path for TSV/PQP → TSV/PQP workflows.
// This includes IPF (identifying transitions) which is now supported via uisTransitionsLight().
bool use_light_path = (in_type == FileTypes::TSV || in_type == FileTypes::MRM || in_type == FileTypes::PQP)
&& (out_type == FileTypes::TSV || out_type == FileTypes::PQP);
if (use_light_path)
{
// Memory-efficient Light path
OpenSwath::LightTargetedExperiment light_exp;
OPENMS_LOG_INFO << "Loading " << in << " (Light path)" << std::endl;
if (in_type == FileTypes::TSV || in_type == FileTypes::MRM)
{
Param reader_parameters = getParam_().copy("algorithm:", true);
TransitionTSVFile tsv_reader;
tsv_reader.setLogType(log_type_);
tsv_reader.setParameters(reader_parameters);
tsv_reader.convertTSVToTargetedExperiment(in.c_str(), in_type, light_exp);
}
else if (in_type == FileTypes::PQP)
{
TransitionPQPFile pqp_reader;
Param reader_parameters = getParam_().copy("algorithm:", true);
pqp_reader.setLogType(log_type_);
pqp_reader.setParameters(reader_parameters);
pqp_reader.convertPQPToTargetedExperiment(in.c_str(), light_exp);
}
MRMAssay assays;
assays.setLogType(ProgressLogger::CMD);
OPENMS_LOG_INFO << "Annotating transitions (Light)" << std::endl;
assays.reannotateTransitionsLight(light_exp, precursor_mz_threshold, product_mz_threshold,
allowed_fragment_types, allowed_fragment_charges,
enable_detection_specific_losses, enable_detection_unspecific_losses);
OPENMS_LOG_INFO << "Filtering and selecting detecting transitions (Light)" << std::endl;
assays.restrictTransitionsLight(light_exp, product_lower_mz_limit, product_upper_mz_limit, swathes);
assays.detectingTransitionsLight(light_exp, min_transitions, max_transitions);
if (enable_ipf)
{
// Generate UIS SWATH windows (same logic as heavy path)
std::vector<std::pair<double, double>> uis_swathes;
if (!enable_swath_specifity || swathes.empty())
{
int num_precursor_windows = static_cast<int>(Math::round((precursor_upper_mz_limit - precursor_lower_mz_limit) / precursor_mz_threshold));
for (int i = 0; i < num_precursor_windows; i++)
{
uis_swathes.push_back(std::make_pair((precursor_lower_mz_limit + (i * precursor_mz_threshold)),
(precursor_lower_mz_limit + ((i + 1) * precursor_mz_threshold))));
}
}
else
{
uis_swathes = swathes;
}
OPENMS_LOG_INFO << "Generating identifying transitions for IPF (Light)" << std::endl;
assays.uisTransitionsLight(light_exp, allowed_fragment_types, allowed_fragment_charges,
enable_identification_specific_losses, enable_identification_unspecific_losses,
enable_identification_ms2_precursors, product_mz_threshold, uis_swathes,
-4, max_num_alternative_localizations, uis_seed, disable_decoy_transitions);
// Restrict transitions to product m/z limits (same as heavy path)
std::vector<std::pair<double, double>> empty_swathes;
assays.restrictTransitionsLight(light_exp, product_lower_mz_limit, product_upper_mz_limit, empty_swathes);
}
OPENMS_LOG_INFO << "Writing assays " << out << std::endl;
if (out_type == FileTypes::TSV)
{
TransitionTSVFile tsv_writer;
tsv_writer.setLogType(log_type_);
tsv_writer.convertLightTargetedExperimentToTSV(out.c_str(), light_exp);
}
else if (out_type == FileTypes::PQP)
{
TransitionPQPFile pqp_writer;
pqp_writer.setLogType(log_type_);
pqp_writer.convertLightTargetedExperimentToPQP(out.c_str(), light_exp);
}
}
else
{
// Heavy path for TraML input or output
TargetedExperiment targeted_exp;
OPENMS_LOG_INFO << "Loading " << in << std::endl;
if (in_type == FileTypes::TSV || in_type == FileTypes::MRM)
{
const char* tr_file = in.c_str();
Param reader_parameters = getParam_().copy("algorithm:", true);
TransitionTSVFile tsv_reader = TransitionTSVFile();
tsv_reader.setLogType(log_type_);
tsv_reader.setParameters(reader_parameters);
tsv_reader.convertTSVToTargetedExperiment(tr_file, in_type, targeted_exp);
tsv_reader.validateTargetedExperiment(targeted_exp);
}
else if (in_type == FileTypes::PQP)
{
const char* tr_file = in.c_str();
TransitionPQPFile pqp_reader = TransitionPQPFile();
Param reader_parameters = getParam_().copy("algorithm:", true);
pqp_reader.setLogType(log_type_);
pqp_reader.setParameters(reader_parameters);
pqp_reader.convertPQPToTargetedExperiment(tr_file, targeted_exp);
pqp_reader.validateTargetedExperiment(targeted_exp);
}
else if (in_type == FileTypes::TRAML)
{
FileHandler().loadTransitions(in, targeted_exp, {FileTypes::TRAML});
}
MRMAssay assays = MRMAssay();
assays.setLogType(ProgressLogger::CMD);
OPENMS_LOG_INFO << "Annotating transitions" << std::endl;
assays.reannotateTransitions(targeted_exp, precursor_mz_threshold, product_mz_threshold, allowed_fragment_types, allowed_fragment_charges, enable_detection_specific_losses, enable_detection_unspecific_losses);
OPENMS_LOG_INFO << "Annotating detecting transitions" << std::endl;
assays.restrictTransitions(targeted_exp, product_lower_mz_limit, product_upper_mz_limit, swathes);
assays.detectingTransitions(targeted_exp, min_transitions, max_transitions);
if (enable_ipf)
{
std::vector<std::pair<double, double> > uis_swathes;
// Generate default UIS SWATH windows if swath specificity is disabled
// or if no swathes were provided (same logic as light path)
if (!enable_swath_specifity || swathes.empty())
{
int num_precursor_windows = static_cast<int>(Math::round((precursor_upper_mz_limit - precursor_lower_mz_limit) / precursor_mz_threshold));
for (int i = 0; i < num_precursor_windows; i++)
{
uis_swathes.push_back(std::make_pair((precursor_lower_mz_limit + (i * precursor_mz_threshold)),
(precursor_lower_mz_limit + ((i + 1) * precursor_mz_threshold))));
}
}
else
{
uis_swathes = swathes;
}
OPENMS_LOG_INFO << "Generating identifying transitions for IPF" << std::endl;
assays.uisTransitions(targeted_exp, allowed_fragment_types, allowed_fragment_charges, enable_identification_specific_losses, enable_identification_unspecific_losses, enable_identification_ms2_precursors, product_mz_threshold, uis_swathes, -4, max_num_alternative_localizations, uis_seed, disable_decoy_transitions);
std::vector<std::pair<double, double> > empty_swathes;
assays.restrictTransitions(targeted_exp, product_lower_mz_limit, product_upper_mz_limit, empty_swathes);
}
OPENMS_LOG_INFO << "Writing assays " << out << std::endl;
if (out_type == FileTypes::TSV)
{
const char* tr_file = out.c_str();
TransitionTSVFile tsv_writer;
tsv_writer.setLogType(log_type_);
tsv_writer.convertTargetedExperimentToTSV(tr_file, targeted_exp);
}
else if (out_type == FileTypes::PQP)
{
const char* tr_file = out.c_str();
TransitionPQPFile pqp_writer;
pqp_writer.setLogType(log_type_);
pqp_writer.convertTargetedExperimentToPQP(tr_file, targeted_exp);
}
else if (out_type == FileTypes::TRAML)
{
FileHandler().storeTransitions(out, targeted_exp, {FileTypes::TRAML});
}
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPOpenSwathAssayGenerator gen;
return gen.main(argc, argv);
}
/// @endcond
| C++ |
3D | OpenMS/OpenMS | src/topp/CometAdapter.cpp | .cpp | 47403 | 829 |
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Leon Bichmann, Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/SearchEngineBase.h>
#include <OpenMS/ANALYSIS/ID/PercolatorFeatureSetHelper.h>
#include <OpenMS/ANALYSIS/ID/PeptideIndexing.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
// TODO remove this once we have handler transform support
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/PepXMLFile.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/HANDLERS/IndexedMzMLDecoder.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
#include <OpenMS/METADATA/SpectrumMetaDataLookup.h>
#include <OpenMS/KERNEL/MSSpectrum.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/ResidueDB.h>
#include <OpenMS/CHEMISTRY/ResidueModification.h>
#include <OpenMS/SYSTEM/File.h>
#include <fstream>
#include <QStringList>
#include <QRegularExpression>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_CometAdapter CometAdapter
@brief Identifies peptides in MS/MS spectra via Comet.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → CometAdapter →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any signal-/preprocessing tool @n (in mzML format)</td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter or @n any protein/peptide processing tool</td>
</tr>
</table>
</CENTER>
@em Comet must be installed/downloaded before this wrapper can be used. OpenMS installers ship with Comet.
@warning We recommend to use Comet 2019.01 rev. 5 or later, due to a serious "empty result" bug in earlier versions (which occurs frequently on Windows; Linux seems not/less affected).
@warning Skip over 'Comet v2024.01.0', since it contains several bugs (see https://github.com/UWPR/Comet/issues/63).
Comet settings not exposed by this adapter can be adjusted directly via a param file, which can be generated using 'comet.exe -p'.
All (!) parameters set explicitly in this param file take precedence over the wrapper parameters.
Parameter names have been changed to match those of other search engine adapters; however, some are Comet-specific.
For a detailed description of all available parameters check the Comet documentation at https://uwpr.github.io/Comet/parameters/
The default parameters are set for a high resolution instrument.
@note This adapter supports 15N labeling by specifying the 20 AA modifications 'Label:15N(x)' as fixed modifications.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_CometAdapter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_CometAdapter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPCometAdapter :
public SearchEngineBase
{
public:
TOPPCometAdapter() :
SearchEngineBase("CometAdapter", "Annotates MS/MS spectra using Comet.", true,
{
{"Eng, Jimmy K. and Jahan, Tahmina A. and Hoopmann, Michael R.",
"Comet: An open-source MS/MS sequence database search tool",
"PROTEOMICS 2013; 13-1: 22--24",
"10.1002/pmic.201200439"}
})
{
}
protected:
map<string,int> num_enzyme_termini {{"semi",1},{"fully",2},{"C-term unspecific", 8},{"N-term unspecific",9}};
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file");
setValidFormats_("in", { "mzML" } );
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", { "idXML"} );
registerInputFile_("database", "<file>", "", "FASTA file", true, false, {"skipexists"});
setValidFormats_("database", { "FASTA" } );
registerInputFile_("comet_executable", "<executable>",
// choose the default value according to the platform where it will be executed
"comet.exe", // this is the name on ALL platforms currently...
"The Comet executable. Provide a full or relative path, or make sure it can be found in your PATH environment.", true, false, {"is_executable"});
//
// Optional parameters
//
//Files
registerOutputFile_("pin_out", "<file>", "", "Output file - for Percolator input", false);
setValidFormats_("pin_out", ListUtils::create<String>("tsv"));
registerInputFile_("default_params_file", "<file>", "", "Default Comet params file. All parameters of this take precedence. A template file can be generated using 'comet.exe -p'", false, false, ListUtils::create<String>("skipexists"));
setValidFormats_("default_params_file", ListUtils::create<String>("txt"));
//Masses
registerDoubleOption_("precursor_mass_tolerance", "<tolerance>", 10.0, "Precursor monoisotopic mass tolerance (Comet parameter: peptide_mass_tolerance). See also precursor_error_units to set the unit.",false);
registerStringOption_("precursor_error_units", "<choice>", "ppm", "Unit of precursor monoisotopic mass tolerance for parameter precursor_mass_tolerance (Comet parameter: peptide_mass_units)", false);
setValidStrings_("precursor_error_units", ListUtils::create<String>("amu,ppm,Da"));
//registerIntOption_("mass_type_parent", "<num>", 1, "0=average masses, 1=monoisotopic masses", false, true);
//registerIntOption_("mass_type_fragment", "<num>", 1, "0=average masses, 1=monoisotopic masses", false, true);
//registerIntOption_("precursor_tolerance_type", "<num>", 0, "0=average masses, 1=monoisotopic masses", false, false);
registerStringOption_(Constants::UserParam::ISOTOPE_ERROR, "<choice>", "off", "This parameter controls whether the peptide_mass_tolerance takes into account possible isotope errors in the precursor mass measurement. Use -8/-4/0/4/8 only for SILAC.", false, false);
setValidStrings_(Constants::UserParam::ISOTOPE_ERROR, ListUtils::create<String>("off,0/1,0/1/2,0/1/2/3,-8/-4/0/4/8,-1/0/1/2/3"));
//Fragment Ions
registerDoubleOption_("fragment_mass_tolerance", "<tolerance>", 0.01,
"This is half the bin size, which is used to segment the MS/MS spectrum. Thus, the value should be a bit higher than for other search engines, since the bin might not be centered around the peak apex (see 'fragment_bin_offset'). "
"CAUTION: Low tolerances have a heavy impact on RAM usage (since Comet uses a lot of bins in this case). Consider using use_sparse_matrix and/or spectrum_batch_size.", false);
setMinFloat_("fragment_mass_tolerance", 0.0001);
registerStringOption_("fragment_error_units", "<unit>", "Da", "Fragment monoisotopic mass error units", false);
setValidStrings_("fragment_error_units", { "Da" }); // only Da allowed
registerDoubleOption_("fragment_bin_offset", "<fraction>", 0.0, "Offset of fragment bins. Recommended by Comet: low-res: 0.4, high-res: 0.0", false);
setMinFloat_("fragment_bin_offset", 0.0);
setMaxFloat_("fragment_bin_offset", 1.0);
registerStringOption_("instrument", "<choice>", "high_res", "Comet's theoretical_fragment_ions parameter: theoretical fragment ion peak representation; high-res: sum of intensities plus flanking bins, ion trap (low-res) MS/MS: sum of intensities of central M bin only", false);
setValidStrings_("instrument", ListUtils::create<String>("low_res,high_res"));
registerStringOption_("use_A_ions", "<num>", "false", "use A ions for PSM", false, true);
setValidStrings_("use_A_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_B_ions", "<num>", "true", "use B ions for PSM", false, true);
setValidStrings_("use_B_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_C_ions", "<num>", "false", "use C ions for PSM", false, true);
setValidStrings_("use_C_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_X_ions", "<num>", "false", "use X ions for PSM", false, true);
setValidStrings_("use_X_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_Y_ions", "<num>", "true", "use Y ions for PSM", false, true);
setValidStrings_("use_Y_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_Z_ions", "<num>", "false", "use Z ions for PSM", false, true);
setValidStrings_("use_Z_ions", ListUtils::create<String>("true,false"));
registerStringOption_("use_NL_ions", "<num>", "false", "use neutral loss (NH3, H2O) ions from b/y for PSM", false, true);
setValidStrings_("use_NL_ions", ListUtils::create<String>("true,false"));
//Search Enzyme
vector<String> all_enzymes;
ProteaseDB::getInstance()->getAllCometNames(all_enzymes);
registerStringOption_("enzyme", "<cleavage site>", "Trypsin", "The enzyme used for peptide digestion.", false, false);
setValidStrings_("enzyme", all_enzymes);
registerStringOption_("second_enzyme", "<cleavage site>", "", "Additional enzyme used for peptide digestion.", false, true);
setValidStrings_("second_enzyme", all_enzymes);
registerStringOption_("num_enzyme_termini", "<choice>", "fully", "Specify the termini where the cleavage rule has to match", false, false);
setValidStrings_("num_enzyme_termini", { "semi", "fully", "C-term unspecific", "N-term unspecific" } );
registerIntOption_("missed_cleavages", "<num>", 1, "Number of possible cleavage sites missed by the enzyme. It has no effect if enzyme is unspecific cleavage.", false, false);
setMinInt_("missed_cleavages", 0);
setMaxInt_("missed_cleavages", 5);
registerIntOption_("min_peptide_length", "<num>", 5, "Minimum peptide length to consider.", false);
setMinInt_("min_peptide_length", 5);
setMaxInt_("min_peptide_length", 63);
registerIntOption_("max_peptide_length", "<num>", 63, "Maximum peptide length to consider.", false);
setMinInt_("max_peptide_length", 5);
setMaxInt_("max_peptide_length", 63);
//Output
registerIntOption_("num_hits", "<num>", 1, "Number of peptide hits (PSMs) per spectrum in output file", false, false);
//mzXML/mzML parameters
registerStringOption_("precursor_charge", "[min]:[max]", "0:0", "Precursor charge range to search (if spectrum is not annotated with a charge or if override_charge!=keep any known): 0:[num] == search all charges, 2:6 == from +2 to +6, 3:3 == +3", false, false);
registerStringOption_("override_charge", "<choice>", "keep known search unknown", "_keep any known_: keep any precursor charge state (from input), _ignore known_: ignore known precursor charge state and use precursor_charge parameter, _ignore outside range_: ignore precursor charges outside precursor_charge range, _keep known search unknown_: keep any known precursor charge state. For unknown charge states, search as singly charged if there is no signal above the precursor m/z or use the precursor_charge range", false, false);
setValidStrings_("override_charge", ListUtils::create<String>("keep any known,ignore known,ignore outside range,keep known search unknown"));
registerIntOption_("ms_level", "<num>", 2, "MS level to analyze, valid are levels 2 (default) or 3", false, false);
setMinInt_("ms_level", 2);
setMaxInt_("ms_level", 3);
registerStringOption_("activation_method", "<method>", "ALL", "If not ALL, only searches spectra of the given method", false, false);
setValidStrings_("activation_method", ListUtils::create<String>("ALL,CID,ECD,ETD,PQD,HCD,IRMPD"));
//Misc. parameters
//scan range
registerStringOption_("digest_mass_range", "[min]:[max]", "600:5000", "MH+ peptide mass range to analyze", false, true);
registerIntOption_("max_fragment_charge", "<posnum>", 3, "Set maximum fragment charge state to analyze as long as still lower than precursor charge - 1. (Allowed max 5)", false, false);
setMinInt_("max_fragment_charge", 1);
setMaxInt_("max_fragment_charge", 5);
registerIntOption_("max_precursor_charge", "<posnum>", 5, "set maximum precursor charge state to analyze (allowed max 9)", false, true);
setMinInt_("max_precursor_charge", 1);
setMaxInt_("max_precursor_charge", 9);
registerStringOption_("clip_nterm_methionine", "<bool>", "false", "If set to true, also considers the peptide sequence w/o N-term methionine separately and applies appropriate N-term mods to it", false, false);
setValidStrings_("clip_nterm_methionine", ListUtils::create<String>("true,false"));
registerIntOption_("spectrum_batch_size", "<posnum>", 20000, "max. number of spectra to search at a time; use 0 to search the entire scan range in one batch", false, true);
setMinInt_("spectrum_batch_size", 0);
registerDoubleList_("mass_offsets", "<doubleoffset1, doubleoffset2,...>", {0.0}, "One or more mass offsets to search (values subtracted from deconvoluted precursor mass). Has to include 0.0 if you want the default mass to be searched.", false, true);
// spectral processing
registerIntOption_("minimum_peaks", "<posnum>", 10, "Required minimum number of peaks in spectrum to search (default 10)", false, true);
registerDoubleOption_("minimum_intensity", "<posfloat>", 0.0, "Minimum intensity value to read in", false, true);
setMinFloat_("minimum_intensity", 0.0);
registerStringOption_("remove_precursor_peak", "<choice>", "no", "no = no removal, yes = remove all peaks around precursor m/z, charge_reduced = remove all charge reduced precursor peaks (for ETD/ECD). phosphate_loss = remove the HPO3 (-80) and H3PO4 (-98) precursor phosphate neutral loss peaks. See also remove_precursor_tolerance", false, true);
setValidStrings_("remove_precursor_peak", ListUtils::create<String>("no,yes,charge_reduced,phosphate_loss"));
registerDoubleOption_("remove_precursor_tolerance", "<posfloat>", 1.5, "one-sided tolerance for precursor removal in Thompson", false, true);
registerStringOption_("clear_mz_range", "[minfloatmz]:[maxfloatmz]", "0:0", "for iTRAQ/TMT type data; will clear out all peaks in the specified m/z range, if not 0:0", false, true);
//Modifications
vector<String> all_mods;
ModificationsDB::getInstance()->getAllSearchModifications(all_mods);
registerStringList_("fixed_modifications", "<mods>", ListUtils::create<String>("Carbamidomethyl (C)", ','), "Fixed modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'", false);
setValidStrings_("fixed_modifications", all_mods);
registerStringList_("variable_modifications", "<mods>", ListUtils::create<String>("Oxidation (M)", ','), "Variable modifications, specified using Unimod (www.unimod.org) terms, e.g. 'Carbamidomethyl (C)' or 'Oxidation (M)'", false);
setValidStrings_("variable_modifications", all_mods);
registerIntList_("binary_modifications", "<mods>", {},
"List of modification group indices. Indices correspond to the binary modification index used by comet to group individually searched lists of variable modifications.\n"
"Note: if set, both variable_modifications and binary_modifications need to have the same number of entries as the N-th entry corresponds to the N-th variable_modification.\n"
" if left empty (default), all entries are internally set to 0 generating all permutations of modified and unmodified residues.\n"
" For a detailed explanation please see the parameter description in the Comet help.",
false);
registerIntOption_("max_variable_mods_in_peptide", "<num>", 5, "Set a maximum number of variable modifications per peptide", false, true);
registerStringOption_("require_variable_mod", "<bool>", "false", "If true, requires at least one variable modification per peptide", false, true);
setValidStrings_("require_variable_mod", ListUtils::create<String>("true,false"));
// register peptide indexing parameter (with defaults for this search engine) TODO: check if search engine defaults are needed
registerPeptideIndexingParameter_(PeptideIndexing().getParameters());
}
const vector<const ResidueModification*> getModifications_(const StringList& modNames)
{
vector<const ResidueModification*> modifications;
// iterate over modification names and add to vector
for (const auto& modification : modNames)
{
if (modification.empty()) // skip empty modification names
{
continue;
}
modifications.push_back(ModificationsDB::getInstance()->getModification(modification));
}
return modifications;
}
ExitCodes createParamFile_(ostream& os, const String& comet_version)
{
os << comet_version << "\n"; // required as first line in the param file
os << "# Comet MS/MS search engine parameters file.\n";
os << "# Everything following the '#' symbol is treated as a comment.\n";
os << "database_name = " << getStringOption_("database") << "\n";
os << "decoy_search = " << 0 << "\n"; // 0=no (default), 1=concatenated search, 2=separate search
os << "peff_format = 0\n"; // 0=no (normal fasta, default), 1=PEFF PSI-MOD, 2=PEFF Unimod
os << "peff_obo =\n"; // path to PSI Mod or Unimod OBO file
os << "num_threads = " << getIntOption_("threads") << "\n"; // 0=poll CPU to set num threads; else specify num threads directly (max 64)
// masses
map<String,int> precursor_error_units;
precursor_error_units["amu"] = 0;
precursor_error_units["mmu"] = 1;
precursor_error_units["ppm"] = 2;
map<string,int> isotope_error;
isotope_error["off"] = 0;
isotope_error["0/1"] = 1;
isotope_error["0/1/2"] = 2;
isotope_error["0/1/2/3"] = 3;
isotope_error["-8/-4/0/4/8"] = 4;
isotope_error["-1/0/1/2/3"] = 5;
// comet_version is something like "# comet_version 2017.01 rev. 1"
QRegularExpression comet_version_regex("(\\d{4})\\.(\\d*)rev");
if (auto match = comet_version_regex.match(comet_version.toQString().remove(' ')); match.hasMatch())
{
const int comet_year = match.captured(1).toInt();
if (comet_version.hasSubstring("2024.01 rev. 0"))
{
OPENMS_LOG_WARN << "Comet v2024.01.0 is known to have several bugs (see https://github.com/UWPR/Comet/issues/63). Please use a different version if possible." << std::endl;
}
// Comet v2024.01.0 introduces "peptide_mass_tolerance_lower" and "peptide_mass_tolerance_upper" parameters
// and deprecates "peptide_mass_tolerance" (which is buggy in this version, see https://github.com/UWPR/Comet/issues/59)
// We need to use the new parameters from this version onwards
double precursor_mass_tolerance = getDoubleOption_("precursor_mass_tolerance");
if (comet_year >= 2024)
{
os << "peptide_mass_tolerance_lower = " << -precursor_mass_tolerance << "\n";
os << "peptide_mass_tolerance_upper = " << precursor_mass_tolerance << "\n";
}
else
{ // for Comet versions before 2024, we use the old parameter
os << "peptide_mass_tolerance = " << precursor_mass_tolerance << "\n";
}
}
else
{
throw OpenMS::Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Error: Could not extract year from Comet version string: " + comet_version + ". Please report this to the OpenMS team.");
}
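The year extraction above relies on Qt's QRegularExpression. A stand-alone sketch of the same logic (using std::regex instead of Qt, so it can be compiled without the Qt dependency; the helper name is illustrative, not part of the adapter):

```cpp
#include <algorithm>
#include <cassert>
#include <regex>
#include <string>

// Sketch of how the Comet release year is pulled out of the first line of
// 'comet.params.new', e.g. "# comet_version 2024.01 rev. 1". Returns -1 if
// no year can be extracted.
int extractCometYear(std::string version_line)
{
  // drop all spaces, mirroring comet_version.toQString().remove(' ')
  version_line.erase(std::remove(version_line.begin(), version_line.end(), ' '),
                     version_line.end());
  static const std::regex version_regex("(\\d{4})\\.(\\d*)rev");
  std::smatch match;
  if (std::regex_search(version_line, match, version_regex))
  {
    return std::stoi(match[1]);
  }
  return -1;
}
```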
os << "peptide_mass_units = " << precursor_error_units[getStringOption_("precursor_error_units")] << "\n"; // 0=amu, 1=mmu, 2=ppm
os << "mass_type_parent = " << 1 << "\n"; // 0=average masses, 1=monoisotopic masses
os << "mass_type_fragment = " << 1 << "\n"; // 0=average masses, 1=monoisotopic masses
os << "precursor_tolerance_type = " << 1 << "\n"; // 0=MH+ (default), 1=precursor m/z; only valid for amu/mmu tolerances
os << "isotope_error = " << isotope_error[getStringOption_(Constants::UserParam::ISOTOPE_ERROR)] << "\n"; // 0=off, 1=0/1 (C13 error), 2=0/1/2, 3=0/1/2/3, 4=-8/-4/0/4/8 (for +4/+8 labeling)
// search enzyme
String enzyme_name = getStringOption_("enzyme");
String enzyme_number = String(ProteaseDB::getInstance()->getEnzyme(enzyme_name)->getCometID());
String second_enzyme_name = getStringOption_("second_enzyme");
String enzyme2_number = "0";
if (!second_enzyme_name.empty())
{
enzyme2_number = String(ProteaseDB::getInstance()->getEnzyme(second_enzyme_name)->getCometID());
}
os << "search_enzyme_number = " << enzyme_number << "\n"; // choose from list at end of this params file
os << "search_enzyme2_number = " << enzyme2_number << "\n"; // second enzyme; set to 0 if no second enzyme
os << "num_enzyme_termini = " << num_enzyme_termini[getStringOption_("num_enzyme_termini")] << "\n"; // 1 (semi-digested), 2 (fully digested, default), 8 C-term unspecific , 9 N-term unspecific
os << "allowed_missed_cleavage = " << getIntOption_("missed_cleavages") << "\n"; // maximum value is 5; for enzyme search
// Up to 9 variable modifications are supported
// # format: <mass> <residues> <0=variable/else binary> <max_mods_per_peptide> <term_distance> <n/c-term> <required> <neutral_loss>
// e.g. 79.966331 STY 0 3 -1 0 0 97.976896
vector<String> variable_modifications_names = getStringList_("variable_modifications");
const vector<const ResidueModification*> variable_modifications = getModifications_(variable_modifications_names);
if (variable_modifications.size() > 9)
{
throw OpenMS::Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: Comet supports at most 9 variable modifications. " + String(variable_modifications.size()) + " provided.");
}
IntList binary_modifications = getIntList_("binary_modifications");
if (!binary_modifications.empty() && binary_modifications.size() != variable_modifications.size())
{
throw OpenMS::Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: List of binary modifications needs to have same size as variable modifications.");
}
int max_variable_mods_in_peptide = getIntOption_("max_variable_mods_in_peptide");
Size var_mod_index = 0;
// write out user specified modifications
for (; var_mod_index < variable_modifications.size(); ++var_mod_index)
{
const ResidueModification* mod = variable_modifications[var_mod_index];
double mass = mod->getDiffMonoMass();
String residues = mod->getOrigin();
// support for binary groups, e.g. for SILAC
int binary_group{0};
if (!binary_modifications.empty())
{
binary_group = binary_modifications[var_mod_index];
}
//TODO support mod-specific limit (default for now is the overall max per peptide)
int max_current_mod_per_peptide = max_variable_mods_in_peptide;
//TODO support term-distances?
int term_distance = -1;
int nc_term = 0;
//TODO support agglomeration of Modifications to same AA. Watch out for nc_term value then.
if (mod->getTermSpecificity() == ResidueModification::C_TERM)
{
if (mod->getOrigin() == 'X')
{
residues = "c";
} // else stays mod.getOrigin()
term_distance = 0;
// Since users need to specify mods that apply to multiple residues/terms separately
// 3 and -1 should be equal for now.
nc_term = 3;
}
else if (mod->getTermSpecificity() == ResidueModification::N_TERM)
{
if (mod->getOrigin() == 'X')
{
residues = "n";
} // else stays mod.getOrigin()
term_distance = 0;
// Since users need to specify mods that apply to multiple residues/terms separately
// 2 and -1 should be equal for now.
nc_term = 2;
}
else if (mod->getTermSpecificity() == ResidueModification::PROTEIN_N_TERM)
{
if (mod->getOrigin() == 'X')
{
residues = "n";
} // else stays mod.getOrigin()
term_distance = 0;
nc_term = 0;
}
else if (mod->getTermSpecificity() == ResidueModification::PROTEIN_C_TERM)
{
if (mod->getOrigin() == 'X')
{
residues = "c";
} // else stays mod.getOrigin()
term_distance = 0;
nc_term = 1;
}
//TODO support required variable mods
bool required = false;
os << "variable_mod0" << var_mod_index+1 << " = "
<< mass << " " << residues << " "
<< binary_group << " "
<< max_current_mod_per_peptide << " "
<< term_distance << " "
<< nc_term << " "
<< required << " "
<< "0.0" // TODO: add neutral losses (from Residue or user defined?)
<< "\n";
}
// fill remaining modification slots (if any) in Comet with "no modification"
for (; var_mod_index < 9; ++var_mod_index)
{
os << "variable_mod0" << var_mod_index+1 << " = " << "0.0 X 0 3 -1 0 0 0.0" << "\n";
}
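For reference, the loop above emits one whitespace-separated line per modification slot. A minimal sketch of the line format (hypothetical helper; field order follows the Comet docs: mass, residues, binary group, max mods per peptide, term distance, n/c-term code, required flag, neutral loss):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative formatter for a single 'variable_modNN' parameter line,
// matching the stream output in the loop above.
std::string formatVariableModLine(int index, double mass, const std::string& residues,
                                  int binary_group, int max_per_peptide,
                                  int term_distance, int nc_term, bool required)
{
  std::ostringstream os;
  os << "variable_mod0" << index << " = " << mass << " " << residues << " "
     << binary_group << " " << max_per_peptide << " " << term_distance << " "
     << nc_term << " " << required << " " << "0.0";
  return os.str();
}
```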
os << "max_variable_mods_in_peptide = " << getIntOption_("max_variable_mods_in_peptide") << "\n";
os << "require_variable_mod = " << (int) (getStringOption_("require_variable_mod") == "true") << "\n";
// fragment ion defaults
// ion trap ms/ms: 1.0005 tolerance, 0.4 offset (mono masses), theoretical_fragment_ions = 1
// high res ms/ms: 0.02 tolerance, 0.0 offset (mono masses), theoretical_fragment_ions = 0
String instrument = getStringOption_("instrument");
double bin_tol = getDoubleOption_("fragment_mass_tolerance") * 2; // convert 1-sided tolerance to bin size
double bin_offset = getDoubleOption_("fragment_bin_offset");
if (instrument == "low_res" && (bin_tol < 0.8 || bin_offset <= 0.2))
{
OPENMS_LOG_ERROR << "Fragment bin size (== 2x 'fragment_mass_tolerance') or offset is quite low for low-res instruments (Comet recommends 1.0005 Da bin size & 0.4 Da offset). "
<< "Current values: fragment bin size = " << bin_tol << " (= 2x " << bin_tol/2 << ") and offset = " << bin_offset << ". Use the '-force' flag to continue anyway." << std::endl;
if (!getFlag_("force"))
{
return ExitCodes::ILLEGAL_PARAMETERS;
}
OPENMS_LOG_ERROR << "The '-force' flag was set. Continuing anyway." << std::endl;
}
else if (instrument == "high_res" && (bin_tol > 0.1 || bin_offset > 0.1))
{
OPENMS_LOG_ERROR << "Fragment bin size (== 2x 'fragment_mass_tolerance') or offset is quite high for high-res instruments (Comet recommends 0.02 Da bin size & 0.0 Da offset). "
<< "Current values: fragment bin size = " << bin_tol << " (= 2x " << bin_tol / 2 << ") and offset = " << bin_offset << ". Use the '-force' flag to continue anyway." << std::endl;
if (!getFlag_("force"))
{
return ExitCodes::ILLEGAL_PARAMETERS;
}
OPENMS_LOG_ERROR << "The '-force' flag was set. Continuing anyway." << std::endl;
}
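The two sanity checks above boil down to a simple rule: a one-sided fragment tolerance is doubled into a bin size, which should roughly match the instrument resolution. A stand-alone sketch of that rule (hypothetical helper, thresholds copied from the checks above):

```cpp
#include <cassert>
#include <string>

// Converts a one-sided fragment mass tolerance into a Comet bin size (2x the
// tolerance) and checks it against the instrument type, mirroring the
// plausibility checks above (Comet recommends ~1.0005 Da / 0.4 offset for
// low-res and 0.02 Da / 0.0 offset for high-res).
bool binSettingsPlausible(const std::string& instrument,
                          double one_sided_tolerance, double bin_offset)
{
  const double bin_tol = one_sided_tolerance * 2.0;
  if (instrument == "low_res")
  {
    return bin_tol >= 0.8 && bin_offset > 0.2;
  }
  // high_res
  return bin_tol <= 0.1 && bin_offset <= 0.1;
}
```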
os << "fragment_bin_tol = " << bin_tol << "\n"; // binning to use on fragment ions
os << "fragment_bin_offset = " << bin_offset << "\n"; // offset position to start the binning (0.0 to 1.0)
os << "theoretical_fragment_ions = " << (int)(instrument == "low_res") << "\n"; // 0=use flanking bins as well; 1=use M bin only
os << "use_A_ions = " << (int)(getStringOption_("use_A_ions")=="true") << "\n";
os << "use_B_ions = " << (int)(getStringOption_("use_B_ions")=="true") << "\n";
os << "use_C_ions = " << (int)(getStringOption_("use_C_ions")=="true") << "\n";
os << "use_X_ions = " << (int)(getStringOption_("use_X_ions")=="true") << "\n";
os << "use_Y_ions = " << (int)(getStringOption_("use_Y_ions")=="true") << "\n";
os << "use_Z_ions = " << (int)(getStringOption_("use_Z_ions")=="true") << "\n";
os << "use_NL_ions = " << (int)(getStringOption_("use_NL_ions")=="true") << "\n"; // 0=no, 1=yes to consider NH3/H2O neutral loss peaks
// output
os << "output_sqtstream = " << 0 << "\n"; // 0=no, 1=yes write sqt to standard output
os << "output_sqtfile = " << 0 << "\n"; // 0=no, 1=yes write sqt file
os << "output_txtfile = " << 0 << "\n"; // 0=no, 1=yes write tab-delimited txt file
os << "output_pepxmlfile = " << 1 << "\n"; // 0=no, 1=yes write pep.xml file
os << "export_additional_pepxml_scores = " << 1 << "\n"; // Hidden parameter of comet that adds additional comet scores to the pep.xml
os << "output_percolatorfile = " << !getStringOption_("pin_out").empty() << "\n"; // 0=no, 1=yes write Percolator tab-delimited input file
os << "print_expect_score = " << 1 << "\n"; // 0=no, 1=yes to replace Sp with expect in out & sqt
os << "num_output_lines = " << getIntOption_("num_hits") << "\n"; // num peptide results to show
os << "show_fragment_ions = " << 0 << "\n"; // 0=no, 1=yes for out files only
os << "sample_enzyme_number = " << enzyme_number << "\n"; // Sample enzyme which is possibly different than the one applied to the search.
// mzXML parameters
map<string,int> override_charge;
override_charge["keep any known"] = 0;
override_charge["ignore known"] = 1;
override_charge["ignore outside range"] = 2;
override_charge["keep known search unknown"] = 3;
int precursor_charge_min(0), precursor_charge_max(0);
if (!parseRange_(getStringOption_("precursor_charge"), precursor_charge_min, precursor_charge_max))
{
OPENMS_LOG_INFO << "precursor_charge range not set. Defaulting to 0:0 (disable charge filtering)." << endl;
}
os << "scan_range = " << "0 0" << "\n"; // start and end scan range to search; 0 as 1st entry ignores parameter
os << "precursor_charge = " << precursor_charge_min << " " << precursor_charge_max << "\n"; // precursor charge range to analyze; does not override any existing charge; 0 as 1st entry ignores parameter
os << "override_charge = " << override_charge[getStringOption_("override_charge")] << "\n"; // 0=no, 1=override precursor charge states, 2=ignore precursor charges outside precursor_charge range, 3=see online
os << "ms_level = " << getIntOption_("ms_level") << "\n"; // MS level to analyze, valid are levels 2 (default) or 3
os << "activation_method = " << getStringOption_("activation_method") << "\n"; // activation method; used if activation method set; allowed ALL, CID, ECD, ETD, PQD, HCD, IRMPD
// misc parameters
double digest_mass_range_min(600.0), digest_mass_range_max(5000.0);
if (!parseRange_(getStringOption_("digest_mass_range"), digest_mass_range_min, digest_mass_range_max))
{
OPENMS_LOG_INFO << "digest_mass_range not set. Defaulting to 600.0 5000.0." << endl;
}
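Range options like 'digest_mass_range' and 'precursor_charge' are parsed from "[min]:[max]" strings. A minimal sketch of that parsing (hypothetical helper, simplified from TOPPBase::parseRange_):

```cpp
#include <cassert>
#include <string>

// Parses strings such as "600:5000" or "2:6" into a min/max pair.
// Returns false (leaving the outputs untouched on failure paths that throw)
// if the string is not a valid "[min]:[max]" range.
bool parseMinMax(const std::string& s, double& min_val, double& max_val)
{
  const std::size_t colon = s.find(':');
  if (colon == std::string::npos) return false;
  try
  {
    min_val = std::stod(s.substr(0, colon));
    max_val = std::stod(s.substr(colon + 1));
  }
  catch (...)
  {
    return false;
  }
  return true;
}
```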
os << "digest_mass_range = " << digest_mass_range_min << " " << digest_mass_range_max << "\n"; // MH+ peptide mass range to analyze
os << "num_results = " << 100 << "\n"; // number of search hits to store internally
os << "skip_researching = " << 1 << "\n"; // for '.out' file output only, 0=search everything again (default), 1=don't search if .out exists
os << "max_fragment_charge = " << getIntOption_("max_fragment_charge") << "\n"; // set maximum fragment charge state to analyze (allowed max 5)
os << "max_precursor_charge = " << getIntOption_("max_precursor_charge") << "\n"; // set maximum precursor charge state to analyze (allowed max 9)
os << "nucleotide_reading_frame = " << 0 << "\n"; // 0=proteinDB, 1-6, 7=forward three, 8=reverse three, 9=all six
os << "clip_nterm_methionine = " << (int)(getStringOption_("clip_nterm_methionine")=="true") << "\n"; // 0=leave sequences as-is; 1=also consider sequence w/o N-term methionine
os << "peptide_length_range = " << getIntOption_("min_peptide_length") << " " << getIntOption_("max_peptide_length") << "\n"; // minimum and maximum peptide length to analyze (default 5 63; max length 63)
os << "spectrum_batch_size = " << getIntOption_("spectrum_batch_size") << "\n"; // max. number of spectra to search at a time; 0 to search the entire scan range in one batch
os << "max_duplicate_proteins = 20\n"; // maximum number of protein names to report for each peptide identification; -1 reports all duplicates
os << "equal_I_and_L = 1\n";
os << "output_suffix =\n"; // add a suffix to output base names i.e. suffix "-C" generates base-C.pep.xml from base.mzXML input
os << "mass_offsets = " << ListUtils::concatenate(getDoubleList_("mass_offsets"), " ") << "\n"; // one or more mass offsets to search (values subtracted from deconvoluted precursor mass)
os << "precursor_NL_ions =\n"; // one or more precursor neutral loss masses, will be added to xcorr analysis
// spectral processing
map<string,int> remove_precursor_peak;
remove_precursor_peak["no"] = 0;
remove_precursor_peak["yes"] = 1;
remove_precursor_peak["charge_reduced"] = 2;
remove_precursor_peak["phosphate_loss"] = 3;
double clear_mz_range_min(0.0), clear_mz_range_max(0.0);
if (!parseRange_(getStringOption_("clear_mz_range"), clear_mz_range_min, clear_mz_range_max))
{
OPENMS_LOG_INFO << "clear_mz_range not set. Defaulting to 0:0 (disable m/z filter)." << endl;
}
os << "minimum_peaks = " << getIntOption_("minimum_peaks") << "\n"; // required minimum number of peaks in spectrum to search (default 10)
os << "minimum_intensity = " << getDoubleOption_("minimum_intensity") << "\n"; // minimum intensity value to read in
os << "remove_precursor_peak = " << remove_precursor_peak[getStringOption_("remove_precursor_peak")] << "\n"; // 0=no, 1=yes, 2=all charge reduced precursor peaks (for ETD), 3=phosphate neutral loss peaks
os << "remove_precursor_tolerance = " << getDoubleOption_("remove_precursor_tolerance") << "\n"; // +- Da tolerance for precursor removal
os << "clear_mz_range = " << clear_mz_range_min << " " << clear_mz_range_max << "\n"; // for iTRAQ/TMT type data; will clear out all peaks in the specified m/z range
// write fixed modifications - if not specified residue parameter is zero
// Aminoacid:
// add_AA.OneletterCode_AA.ThreeLetterCode = xxx
// Terminus:
// add_N/Cterm_peptide = xxx protein not available yet
vector<String> fixed_modifications_names = getStringList_("fixed_modifications");
const vector<const ResidueModification*> fixed_modifications = getModifications_(fixed_modifications_names);
// merge duplicates, targeting the same AA
std::map<String, double> mods;
// Comet applies Carbamidomethyl (C) as a fixed modification by default, even if not specified.
// Therefore it needs to be reset to 0 here, unless requested explicitly (see loop below).
mods["add_C_cysteine"] = 0;
for (const auto& fm : fixed_modifications)
{
// check modification (amino acid or terminal)
String AA = fm->getOrigin(); // X (constructor) or amino acid (e.g. K)
String term_specificity = fm->getTermSpecificityName(); // N-term, C-term, none
if ((AA != "X") && (term_specificity == "none"))
{
const Residue* r = ResidueDB::getInstance()->getResidue(AA);
String name = r->getName();
mods["add_" + r->getOneLetterCode() + "_" + name.toLower()] += fm->getDiffMonoMass();
}
else if (term_specificity == "N-term" || term_specificity == "C-term")
{
mods["add_" + term_specificity.erase(1,1) + "_peptide"] += fm->getDiffMonoMass();
}
else if (term_specificity == "Protein N-term" || term_specificity == "Protein C-term")
{
term_specificity.erase(0,8); // remove "Protein "
mods["add_" + term_specificity.erase(1,1) + "_protein"] += fm->getDiffMonoMass();
}
}
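The terminal branches of the loop above map an OpenMS term-specificity name onto a Comet fixed-modification parameter name via small string edits. An isolated sketch of that mapping (hypothetical helper, mirroring the erase() calls above):

```cpp
#include <cassert>
#include <string>

// Maps an OpenMS term specificity name to the corresponding Comet
// fixed-modification parameter key. Residue-specific modifications are
// handled elsewhere via "add_<one-letter code>_<residue name>".
std::string cometFixedModKey(std::string term_specificity)
{
  if (term_specificity == "N-term" || term_specificity == "C-term")
  {
    // "N-term" -> "Nterm"
    return "add_" + term_specificity.erase(1, 1) + "_peptide";
  }
  if (term_specificity == "Protein N-term" || term_specificity == "Protein C-term")
  {
    term_specificity.erase(0, 8); // remove "Protein "
    return "add_" + term_specificity.erase(1, 1) + "_protein";
  }
  return "";
}
```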
for (const auto& mod : mods)
{
os << mod.first << " = " << mod.second << "\n";
}
// COMET_ENZYME_INFO _must_ be at the end of this parameters file
os << "[COMET_ENZYME_INFO]" << "\n";
os << "0. No_enzyme 0 - -" << "\n";
os << "1. Trypsin 1 KR P" << "\n";
os << "2. Trypsin/P 1 KR -" << "\n";
os << "3. Lys_C 1 K P" << "\n";
os << "4. Lys_N 0 K -" << "\n";
os << "5. Arg_C 1 R P" << "\n";
os << "6. Asp_N 0 D -" << "\n";
os << "7. CNBr 1 M -" << "\n";
os << "8. Glu_C 1 DE P" << "\n";
os << "9. PepsinA 1 FL P" << "\n";
os << "10. Chymotrypsin 1 FWYL P" << "\n";
os << "11. No_cut 1 @ @" << "\n";
os << "12. Arg-C/P 1 R -" << "\n";
os << "13. Lys-C/P 1 K -" << "\n";
os << "14. Leukocyte_elastase 1 ALIV -" << "\n";
os << "15. Chymotrypsin/P 1 FWYL -" << "\n";
os << "16. Asp-N/B 0 D -" << "\n";
os << "17. Asp-N_ambic 0 DE -" << "\n";
os << "18. Formic_acid 1 D -" << "\n";
os << "19. TrypChymo 1 FYWLKR P" << "\n";
os << "20. V8-DE 1 DE P" << "\n";
os << "21. V8-E 1 E P" << "\n";
os << "22. proline_endopeptidase 1 P -" << "\n";
os << "23. Alpha-lytic_protease 1 TASV -" << "\n";
os << "24. 2-iodobenzoate 1 W -" << "\n";
os << "25. iodosobenzoate 1 W -" << "\n";
os << "26. staphylococcal_protease/D 1 E -" << "\n";
os << "27. proline-endopeptidase/HKR 1 P -" << "\n";
os << "28. Glu-CP 1 DE P" << "\n";
os << "29. PepsinA__P 1 FL P" << "\n";
os << "30. cyanogen-bromide 1 M -" << "\n";
os << "31. Clostripain/P 1 R -" << "\n";
os << "32. elastase-trypsin-chymotrypsin 1 ALIVKRWFY P" << "\n";
return ExitCodes::EXECUTION_OK;
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
// do this early, to see if comet is installed
String comet_executable = getStringOption_("comet_executable");
File::TempDir tmp_dir(debug_level_ >= 2);
writeDebug_("Comet is writing the default parameter file...", 1);
TOPPBase::ExitCodes exit_code = runExternalProcess_(comet_executable.toQString(), QStringList() << "-p", tmp_dir.getPath().toQString());
if (exit_code != EXECUTION_OK)
{
return exit_code; // will do the right thing, since it's correctly mapping TOPPBase exit codes
}
// the first line of 'comet.params.new' contains a string like: "# comet_version 2017.01 rev. 1"
String comet_version;
{
std::ifstream ifs(tmp_dir.getPath() + "/comet.params.new");
getline(ifs, comet_version);
}
writeDebug_("Comet version extracted is: '" + comet_version + "'", 2);
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
int ms_level = getIntOption_("ms_level");
String inputfile_name = getRawfileName(ms_level);
String out = getStringOption_("out");
String db_name = getDBFilename();
// tmp_dir
String tmp_pepxml = tmp_dir.getPath() + "result.pep.xml";
String tmp_pin = tmp_dir.getPath() + "result.pin";
String default_params = getStringOption_("default_params_file");
String tmp_file;
// default params given or to be written
if (default_params.empty())
{
tmp_file = tmp_dir.getPath() + "param.txt";
ofstream os(tmp_file.c_str());
auto ret = createParamFile_(os, comet_version);
os.close();
if (ret != EXECUTION_OK)
{
return ret;
}
}
else
{
tmp_file = default_params;
}
// check for mzML index (comet requires one)
MSExperiment exp;
MzMLFile mzml_file{};
String input_file_with_index = inputfile_name;
if (!mzml_file.hasIndex(inputfile_name))
{
OPENMS_LOG_WARN << "The mzML file provided to CometAdapter is not indexed, but comet requires one. "
<< "We will add an index by writing a temporary file. If you run this analysis more often, consider indexing your mzML in advance!" << std::endl;
// Low memory conversion
// write mzML with index again
auto tmp_file = File::getTemporaryFile() + ".mzML";
PlainMSDataWritingConsumer consumer(tmp_file);
consumer.getOptions().addMSLevel(ms_level); // only write spectra of the searched MS level
bool skip_full_count = true;
mzml_file.transform(inputfile_name, &consumer, skip_full_count);
input_file_with_index = tmp_file;
}
// Load spectra metadata to map to idXML
mzml_file.getOptions().setMetadataOnly(false);
mzml_file.getOptions().setFillData(false);
mzml_file.getOptions().clearMSLevels();
// Ion mobility data is currently stored in MS2
mzml_file.getOptions().addMSLevel(2);
mzml_file.load(inputfile_name, exp);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
String paramP = "-P" + tmp_file;
String paramN = "-N" + FileHandler::stripExtension(FileHandler::stripExtension(tmp_pepxml));
QStringList arguments;
arguments << paramP.toQString() << paramN.toQString() << input_file_with_index.toQString();
//-------------------------------------------------------------
// run comet
//-------------------------------------------------------------
// Comet execution with the executable and the arguments StringList
exit_code = runExternalProcess_(comet_executable.toQString(), arguments);
if (exit_code != EXECUTION_OK)
{
return exit_code;
}
//-------------------------------------------------------------
// writing IdXML output
//-------------------------------------------------------------
// read the pep.xml output of Comet and convert it to idXML
vector<String> fixed_modifications_names = getStringList_("fixed_modifications");
vector<String> variable_modifications_names = getStringList_("variable_modifications");
PeptideIdentificationList peptide_identifications;
vector<ProteinIdentification> protein_identifications;
writeDebug_("load PepXMLFile", 1);
PepXMLFile pepfile{};
pepfile.setPreferredFixedModifications(getModifications_(fixed_modifications_names));
pepfile.setPreferredVariableModifications(getModifications_(variable_modifications_names));
pepfile.load(tmp_pepxml, protein_identifications, peptide_identifications);
writeDebug_("write idXMLFile", 1);
writeDebug_(out, 1);
//Whatever the pepXML says, overwrite origin as the input mzML
protein_identifications[0].setPrimaryMSRunPath({inputfile_name}, exp);
// seems like version is not correctly parsed from pepXML. Overwrite it here.
protein_identifications[0].setSearchEngineVersion(comet_version);
// TODO let this be parsed by the pepXML parser if this info is present there.
protein_identifications[0].getSearchParameters().enzyme_term_specificity =
static_cast<EnzymaticDigestion::Specificity>(num_enzyme_termini[getStringOption_("num_enzyme_termini")]);
protein_identifications[0].getSearchParameters().charges = getStringOption_("precursor_charge");
protein_identifications[0].getSearchParameters().db = getStringOption_("database");
// write all (!) parameters as metavalues to the search parameters
if (!protein_identifications.empty())
{
DefaultParamHandler::writeParametersToMetaValues(this->getParam_(), protein_identifications[0].getSearchParameters(), this->getToolPrefix());
}
// if "reindex" parameter is set to true will perform reindexing
if (auto ret = reindex_(protein_identifications, peptide_identifications); ret != EXECUTION_OK) return ret;
// Parse ion mobility information if present
bool all_ids_have_im = SpectrumMetaDataLookup::addMissingIMToPeptideIDs(peptide_identifications, exp);
if (all_ids_have_im)
{
protein_identifications[0].setMetaValue(Constants::UserParam::IM, exp.getSpectrum(0).getDriftTimeUnitAsString());
}
// Parse FAIMS compensation voltage if present
SpectrumMetaDataLookup::addMissingFAIMSToPeptideIDs(peptide_identifications, exp);
// remove base_name meta value from peptide identifications
for (auto& peptide_identification : peptide_identifications)
{
peptide_identification.removeMetaValue("base_name");
}
// add percolator features
StringList feature_set;
PercolatorFeatureSetHelper::addCOMETFeatures(peptide_identifications, feature_set);
protein_identifications.front().getSearchParameters().setMetaValue("extra_features", ListUtils::concatenate(feature_set, ","));
FileHandler().storeIdentifications(out, protein_identifications, peptide_identifications, {FileTypes::IDXML});
//-------------------------------------------------------------
// create (move) optional pin output
//-------------------------------------------------------------
String pin_out = getStringOption_("pin_out");
if (!pin_out.empty())
{ // move the temporary file to the actual destination:
if (!File::rename(tmp_pin, pin_out))
{
return CANNOT_WRITE_OUTPUT_FILE;
}
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPCometAdapter tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/MzTabExporter.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/MATH/MathFunctions.h>
#include <OpenMS/KERNEL/ConsensusMap.h>
#include <OpenMS/PROCESSING/ID/IDFilter.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/METADATA/MetaInfoInterfaceUtils.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/METADATA/ProteinHit.h>
#include <OpenMS/METADATA/PeptideEvidence.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/FORMAT/MzTab.h>
#include <vector>
#include <algorithm>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MzTabExporter MzTabExporter
@brief This application converts several %OpenMS XML formats (featureXML, consensusXML, and idXML) to mzTab.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → MzTabExporter →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> Any tool producing one of the input formats </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> External tools (MS Excel, OpenOffice, Notepad)</td>
</tr>
</table>
</CENTER>
See the mzTab specification for details on the format.
@experimental This algorithm and underlying format is work in progress and might change.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_MzTabExporter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MzTabExporter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wshadow"
#endif
namespace OpenMS
{
class TOPPMzTabExporter :
public TOPPBase
{
public:
TOPPMzTabExporter() :
TOPPBase("MzTabExporter", "Exports various XML formats to an mzTab file.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file used to generate the mzTab file.", false);
setValidFormats_("in", ListUtils::create<String>("featureXML,consensusXML,idXML,mzid"));
registerOutputFile_("out", "<file>", "", "Output file (mzTab)", true);
setValidFormats_("out", ListUtils::create<String>("mzTab"));
registerFlag_("first_run_inference_only", "Does the first IdentificationRun in the file "
"only represent (protein) inference results? If so, read peptide information only "
"from the second run onwards.", true);
registerFlag_("export_all_psms", "Export all PSMs instead of only the best per spectrum", true);
registerStringList_("opt_columns", "<mods>", {"subfeatures"}, "Add optional columns which are not part of the mzTab standard.", false);
setValidStrings_("opt_columns", {"subfeatures"});
}
ExitCodes main_(int, const char**) override
{
// parameter handling
String in = getStringOption_("in");
FileTypes::Type in_type = FileHandler::getType(in);
String out = getStringOption_("out");
StringList optional_columns = getStringList_("opt_columns");
bool export_subfeatures = ListUtils::contains(optional_columns, "subfeatures");
MzTab mztab;
if (in_type == FileTypes::FEATUREXML)
{
// For featureXML we export a "Summary Quantification" file. This means we don't need to report feature quantification values at the assay level,
// but only at the (single) study variable level.
// load featureXML
FeatureMap feature_map;
FileHandler().loadFeatures(in, feature_map, {FileTypes::FEATUREXML});
// calculate coverage
PeptideIdentificationList pep_ids;
vector<ProteinIdentification> prot_ids = feature_map.getProteinIdentifications();
for (Size i = 0; i < feature_map.size(); ++i) // collect all (assigned and unassigned to a feature) peptide ids
{
const PeptideIdentificationList& pep_ids_bf = feature_map[i].getPeptideIdentifications();
pep_ids.insert(pep_ids.end(), pep_ids_bf.begin(), pep_ids_bf.end());
}
pep_ids.insert(pep_ids.end(), feature_map.getUnassignedPeptideIdentifications().begin(), feature_map.getUnassignedPeptideIdentifications().end());
try // might throw Exception::MissingInformation()
{
for (Size i = 0; i < prot_ids.size(); ++i)
{
prot_ids[i].computeCoverage(pep_ids);
}
}
catch (Exception::MissingInformation& e)
{
OPENMS_LOG_WARN << "Non-critical exception: " << e.what() << "\n";
}
feature_map.setProteinIdentifications(prot_ids);
mztab = MzTab::exportFeatureMapToMzTab(feature_map, in);
}
// export identification data from idXML
if (in_type == FileTypes::IDXML)
{
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
FileHandler().loadIdentifications(in, prot_ids, pep_ids, {FileTypes::IDXML});
MzTabFile().store(out,
prot_ids,
pep_ids,
getFlag_("first_run_inference_only"),
false,
getFlag_("export_all_psms"));
return EXECUTION_OK;
}
// export identification data from mzIdentML
if (in_type == FileTypes::MZIDENTML)
{
String document_id;
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
FileHandler().loadIdentifications(in, prot_ids, pep_ids, {FileTypes::MZIDENTML});
MzTabFile().store(out,
prot_ids,
pep_ids,
getFlag_("first_run_inference_only"),
false,
getFlag_("export_all_psms"));
return EXECUTION_OK;
}
// export quantification data
if (in_type == FileTypes::CONSENSUSXML)
{
ConsensusMap consensus_map;
FileHandler().loadConsensusFeatures(in, consensus_map, {FileTypes::CONSENSUSXML});
IDFilter::removeEmptyIdentifications(consensus_map); // MzTab stream exporter currently doesn't support IDs with empty hits.
MzTabFile().store(out,
consensus_map,
getFlag_("first_run_inference_only"),
true,
true,
export_subfeatures,
false,
getFlag_("export_all_psms")); // direct stream to disc
return EXECUTION_OK;
}
MzTabFile().store(out, mztab);
return EXECUTION_OK;
}
};
} //namespace OpenMS
#ifdef __clang__
#pragma clang diagnostic pop
#endif
int main(int argc, const char** argv)
{
TOPPMzTabExporter t;
return t.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/TextExporter.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: $
// $Authors: Clemens Groepl, Andreas Bertsch, Chris Bielow, Marc Sturm, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/MATH/MathFunctions.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/KERNEL/ConsensusMap.h>
#include <OpenMS/FORMAT/SVOutStream.h>
#include <OpenMS/METADATA/MetaInfoInterfaceUtils.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <boost/math/special_functions/fpclassify.hpp>
#include <vector>
#include <algorithm>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_TextExporter TextExporter
@brief This application converts several %OpenMS XML formats (featureXML, consensusXML, and idXML) to text files.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → TextExporter →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> almost any TOPP tool </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> external tools (MS Excel, OpenOffice, Notepad)</td>
</tr>
</table>
</CENTER>
The goal of this tool is to create output in a table format that is easily readable in Excel or OpenOffice. Lines in the output correspond to rows in the table; the individual columns are delineated by a separator, e.g. tab (default, TSV format) or comma (CSV format).
Output files begin with comment lines, starting with the special character "#". The last such line(s) will be a header with column names, but this may be preceded by more general comments.
Because the OpenMS XML formats contain different kinds of data in a hierarchical structure, TextExporter produces somewhat unusual TSV/CSV files for many inputs: Different lines in the output may belong to different types of data, and the number of columns and the meanings of the individual fields depend on the type. In such cases, the first column always contains an indicator (in capital letters) for the data type of the current line. In addition, some lines have to be understood relative to a previous line, if there is a hierarchical relationship in the data. (See below for details and examples.)
Missing values are represented by "-1" or "nan" in numeric fields and by blanks in character/text fields.
Depending on the input and the parameters, the output contains the following columns:
<B>featureXML input:</B>
- first column: @p RUN / @p PROTEIN / @p UNASSIGNEDPEPTIDE / @p FEATURE / @p PEPTIDE (indicator for the type of data in the current row)
- a @p RUN line contains information about a protein identification run; further columns: @p run_id, @p score_type, @p score_direction, @p date_time, @p search_engine_version, @p parameters
- a @p PROTEIN line contains data of a protein identified in the previously listed run; further columns: @p score, @p rank, @p accession, @p coverage, @p sequence
- an @p UNASSIGNEDPEPTIDE line contains data of a peptide hit that was not assigned to any feature; further columns: @p rt, @p mz, @p score, @p rank, @p sequence, @p charge, @p aa_before, @p aa_after, @p score_type, @p search_identifier, @p accessions
- a @p FEATURE line contains data of a single feature; further columns: @p rt, @p mz, @p intensity, @p charge, @p width, @p quality, @p rt_quality, @p mz_quality, @p rt_start, @p rt_end
- a @p PEPTIDE line contains data of a peptide hit annotated to the previous feature; further columns: same as for @p UNASSIGNEDPEPTIDE
With the @p no_ids flag, only @p FEATURE lines (without the @p FEATURE indicator) are written.
With the @p feature:minimal flag, only the @p rt, @p mz, and @p intensity columns of @p FEATURE lines are written.
<B>consensusXML input:</B>
Output format produced for the @p out parameter:
- first column: @p MAP / @p RUN / @p PROTEIN / @p UNASSIGNEDPEPTIDE / @p CONSENSUS / @p PEPTIDE (indicator for the type of data in the current row)
- a @p MAP line contains information about a sub-map; further columns: @p id, @p filename, @p label, @p size (potentially followed by further columns containing meta data, depending on the input)
- a @p CONSENSUS line contains data of a single consensus feature; further columns: @p rt_cf, @p mz_cf, @p intensity_cf, @p charge_cf, @p width_cf, @p quality_cf, @p rt_X0, @p mz_X0, ..., rt_X1, mz_X1, ...
- @p "..._cf" columns refer to the consensus feature itself, @p "..._Xi" columns refer to a sub-feature from the map with ID "Xi" (no @p quality column in this case); missing sub-features are indicated by "nan" values
- see above for the formats of @p RUN, @p PROTEIN, @p UNASSIGNEDPEPTIDE, @p PEPTIDE lines
With the @p no_ids flag, only @p MAP and @p CONSENSUS lines are written.
Output format produced for the @p consensus_centroids parameter:
- one line per consensus centroid
- columns: @p rt, @p mz, @p intensity, @p charge, @p width, @p quality
Output format produced for the @p consensus_elements parameter:
- one line per sub-feature (element) of a consensus feature
- first column: @p H / @p L (indicator for new/repeated element)
- @p H indicates a new element, @p L indicates the replication of the first element of the current consensus feature (for plotting)
- further columns: @p rt, @p mz, @p intensity, @p charge, @p width, @p rt_cf, @p mz_cf, @p intensity_cf, @p charge_cf, @p width_cf, @p quality_cf
- @p "..._cf" columns refer to the consensus feature, the other columns refer to the sub-feature
With the @p consensus:add_metavalues flag, meta values for each consensus feature are written.
Output format produced for the @p consensus_features parameter:
- one line per consensus feature (suitable for processing with e.g. <a href="http://www.r-project.org">R</a>)
- columns: same as for a @p CONSENSUS line above, followed by additional columns for identification data
- additional columns: @p peptide_N0, @p n_diff_peptides_N0, @p protein_N0, @p n_diff_proteins_N0, @p peptide_N1, ...
- @p "..._Ni" columns refer to the identification run with index "Ni", @p n_diff_... stands for "number of different ..."; different peptides/proteins in one column are separated by "/"
With the @p no_ids flag, the additional columns are not included.
<B>idXML input:</B>
- first column: @p RUN / @p PROTEIN / @p PEPTIDE (indicator for the type of data in the current row)
- see above for the formats of @p RUN, @p PROTEIN, @p PEPTIDE lines
- additional column for @p PEPTIDE lines: @p predicted_rt (predicted retention time)
- additional column for @p PEPTIDE lines: @p predicted_pt (predicted proteotypicity)
With the @p id:proteins_only flag, only @p RUN and @p PROTEIN lines are written.
With the @p id:peptides_only flag, only @p PEPTIDE lines (without the @p PEPTIDE indicator) are written.
With the @p id:first_dim_rt flag, the additional columns @p rt_first_dim and @p predicted_rt_first_dim are included for @p PEPTIDE lines.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_TextExporter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_TextExporter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
namespace OpenMS
{
// write data from a feature to the output stream
void writeFeature(SVOutStream& out, Peak2D::CoordinateType rt,
Peak2D::CoordinateType mz, Peak2D::IntensityType intensity,
Int charge, BaseFeature::WidthType width)
{
out.writeValueOrNan(rt);
out.writeValueOrNan(mz);
out.writeValueOrNan(intensity);
out << String(charge);
out.writeValueOrNan(width);
}
// stream output operator for FeatureHandle
SVOutStream& operator<<(SVOutStream& out, const FeatureHandle& feature)
{
writeFeature(out, feature.getRT(), feature.getMZ(), feature.getIntensity(),
feature.getCharge(), feature.getWidth());
return out;
}
// general stream output operator for features and consensus features
SVOutStream& operator<<(SVOutStream& out, const BaseFeature& feature)
{
writeFeature(out, feature.getRT(), feature.getMZ(), feature.getIntensity(),
feature.getCharge(), feature.getWidth());
out.writeValueOrNan(feature.getQuality());
return out;
}
// stream output operator for consensus features
SVOutStream& operator<<(SVOutStream& out, const ConsensusFeature& feature)
{
return out << static_cast<const BaseFeature&>(feature);
}
// stream output operator for features
SVOutStream& operator<<(SVOutStream& out, const Feature& feature)
{
return out << static_cast<const BaseFeature&>(feature);
}
// write the header for feature data
void writeFeatureHeader(SVOutStream& out, const String& suffix = "",
bool incl_quality = true, bool comment = true)
{
StringList elements = ListUtils::create<String>("#rt,mz,intensity,charge,width");
if (!comment)
{
elements[0] = "rt";
}
if (incl_quality)
{
elements.push_back("quality");
}
bool old = out.modifyStrings(false);
for (const String& str : elements)
{
out << str + suffix;
}
out.modifyStrings(old);
}
// write meta value keys in header
void writeMetaValueKeysHeader(SVOutStream& out, const std::set<String>& meta_value_keys = {})
{
for (const auto& key: meta_value_keys)
{
out << key;
}
}
// write the header for exporting consensusXML
void writeConsensusHeader(SVOutStream& out, const String& what,
const String& infile, const String& now,
const StringList& add_comments = StringList())
{
out.write("#" + what + " extracted from " + infile + " on " + now + "\n");
for (const String& str : add_comments)
{
out.write("#" + str + "\n");
}
}
// write the header for run data
void writeRunHeader(SVOutStream& out)
{
bool old = out.modifyStrings(false);
out << "#RUN" << "run_id" << "score_type" << "score_direction"
<< "date_time" << "search_engine_version" << "parameters" << nl;
out.modifyStrings(old);
}
// write the header for protein data
void writeProteinHeader(SVOutStream& out)
{
bool old = out.modifyStrings(false);
out << "#PROTEIN" << "score" << "rank" << "accession" << "protein_description" << "coverage"
<< "sequence";
out.modifyStrings(old);
}
// write the header for protein data
void writeProteinGroupHeader(SVOutStream& out)
{
bool old = out.modifyStrings(false);
out << "#PROTEINGROUP" << "score" << "accessions" << nl;
out.modifyStrings(old);
}
void writeMetaValuesHeader(SVOutStream& output, const StringList& meta_keys)
{
if (!meta_keys.empty())
{
for (const String& str : meta_keys)
{
output << str;
}
}
}
template<typename T>
void writeMetaValues(SVOutStream& output, const T& meta_value_provider, const StringList& meta_keys)
{
if (!meta_keys.empty())
{
for (const String& str : meta_keys)
{
if (meta_value_provider.metaValueExists(str))
{
output << String(meta_value_provider.getMetaValue(str));
}
else
{
output << "";
}
}
}
}
// stream output operator for a ProteinHit
SVOutStream& operator<<(SVOutStream& out, const ProteinHit& hit)
{
out << String(hit.getScore()) << hit.getRank() << hit.getAccession() << hit.getDescription()
<< String(hit.getCoverage()) << hit.getSequence();
return out;
}
// stream output operator for a ProteinGroup
SVOutStream& operator<<(SVOutStream& out, const ProteinIdentification::ProteinGroup& grp)
{
out << String(grp.probability);
String grpaccs = grp.accessions[0];
for (Size s = 1; s < grp.accessions.size(); s++)
{
grpaccs += "," + grp.accessions[s];
}
out << grpaccs;
return out;
}
// stream output operator for SearchParameters
SVOutStream& operator<<(SVOutStream& out,
const ProteinIdentification::SearchParameters& sp)
{
String param_line = "db=" + sp.db + ", db_version=" + sp.db_version +
", taxonomy=" + sp.taxonomy + ", charges=" + sp.charges + ", mass_type=";
if (sp.mass_type == ProteinIdentification::PeakMassType::MONOISOTOPIC)
{
param_line += "monoisotopic";
}
else
{
param_line += "average";
}
param_line += ", fixed_modifications=";
for (vector<String>::const_iterator mit = sp.fixed_modifications.begin();
mit != sp.fixed_modifications.end(); ++mit)
{
if (mit != sp.fixed_modifications.begin())
{
param_line += ";";
}
param_line += *mit;
}
param_line += ", variable_modifications=";
for (vector<String>::const_iterator mit = sp.variable_modifications.begin();
mit != sp.variable_modifications.end(); ++mit)
{
if (mit != sp.variable_modifications.begin())
{
param_line += ";";
}
param_line += *mit;
}
param_line += ", enzyme=";
param_line += sp.digestion_enzyme.getName();
param_line += ", missed_cleavages=" + String(sp.missed_cleavages) +
", peak_mass_tolerance=" + String(sp.fragment_mass_tolerance) +
", precursor_mass_tolerance=" + String(sp.precursor_mass_tolerance);
out << param_line;
return out;
}
void writeProteinHit(SVOutStream& out, const ProteinHit& phit, const StringList& protein_hit_meta_keys)
{
out << "PROTEIN" << phit;
writeMetaValues(out, phit, protein_hit_meta_keys);
out << nl;
}
// write a protein identification to the output stream
void writeProteinId(SVOutStream& out, const ProteinIdentification& pid, const StringList& protein_hit_meta_keys)
{
// protein id header
out << "RUN" << pid.getIdentifier() << pid.getScoreType();
if (pid.isHigherScoreBetter())
{
out << "higher-score-better";
}
else
{
out << "lower-score-better";
}
// using ISODate ensures that TOPP tests will run through regardless of
// locale setting
out << pid.getDateTime().toString() << pid.getSearchEngineVersion();
// search parameters
ProteinIdentification::SearchParameters sp = pid.getSearchParameters();
out << sp << nl;
for (const ProteinHit& hit : pid.getHits())
{
writeProteinHit(out, hit, protein_hit_meta_keys);
}
}
// write a protein identification to the output stream
void writeProteinGroups(SVOutStream& out, const vector<ProteinIdentification::ProteinGroup>& pgroups)
{
for (const ProteinIdentification::ProteinGroup& grp : pgroups)
{
out << "PROTEINGROUP" << grp << nl;
}
}
// write the header for peptide data
void writePeptideHeader(SVOutStream& out, const String& what = "PEPTIDE",
bool incl_pred_rt = false,
bool incl_pred_pt = false,
bool incl_first_dim = false,
bool incl_peak_annotations = false)
{
bool old = out.modifyStrings(false);
if (what.empty())
{
out << "#rt";
}
else
{
out << "#" + what << "rt";
}
out << "mz" << "score" << "rank" << "sequence" << "charge" << "aa_before"
<< "aa_after" << "score_type" << "search_identifier" << "accessions"
<< "start" << "end";
if (incl_pred_rt)
{
out << "predicted_rt";
}
if (incl_first_dim)
{
out << "rt_first_dim" << "predicted_rt_first_dim";
}
if (incl_pred_pt)
{
out << "predicted_pt";
}
if (incl_peak_annotations)
{
out << "peak_annotations";
}
out.modifyStrings(old);
}
// stream output operator for a PeptideHit
// TODO: output of multiple peptide evidences
SVOutStream& operator<<(SVOutStream& out, const PeptideHit& hit)
{
vector<PeptideEvidence> pes = hit.getPeptideEvidences();
if (!pes.empty())
{
out << String(hit.getScore()) << hit.getRank() << hit.getSequence()
<< hit.getCharge() << pes[0].getAABefore() << pes[0].getAAAfter();
}
else
{
out << String(hit.getScore()) << hit.getRank() << hit.getSequence()
<< hit.getCharge() << PeptideEvidence::UNKNOWN_AA << PeptideEvidence::UNKNOWN_AA;
}
return out;
}
// write a peptide identification to the output stream
void writePeptideId(SVOutStream& out, const PeptideIdentification& pid,
const String& what = "PEPTIDE", bool incl_pred_rt = false, bool incl_pred_pt = false,
bool incl_first_dim = false, bool incl_peak_annotations = false, const StringList& peptide_id_meta_keys = StringList(), const StringList& peptide_hit_meta_keys = StringList())
{
for (const PeptideHit& hit : pid.getHits())
{
if (!what.empty())
{
out << what;
}
if (pid.hasRT())
{
out << String(pid.getRT());
}
else
{
out << "-1";
}
if (pid.hasMZ())
{
out << String(pid.getMZ());
}
else
{
out << "-1";
}
out << hit << pid.getScoreType() << pid.getIdentifier();
// For each accession/evidence, print the protein, the start and end position in one col each
String accessions;
String start;
bool printStart = false;
String end;
bool printEnd = false;
vector<PeptideEvidence> evid = hit.getPeptideEvidences();
for (vector<PeptideEvidence>::const_iterator evid_it = evid.begin(); evid_it != evid.end(); ++evid_it)
{
if (evid_it != evid.begin())
{
accessions += ";";
start += ";";
end += ";";
}
accessions += evid_it->getProteinAccession();
if (evid_it->getStart() != PeptideEvidence::UNKNOWN_POSITION)
{
start += String(evid_it->getStart());
printStart = true;
}
if (evid_it->getEnd() != PeptideEvidence::UNKNOWN_POSITION)
{
end += String(evid_it->getEnd());
printEnd = true;
}
}
out << accessions;
// do not just print a bunch of semicolons
if (printStart)
{
out << start;
}
else
{
out << "";
}
if (printEnd)
{
out << end;
}
else
{
out << "";
}
if (incl_pred_rt)
{
if (hit.metaValueExists("predicted_RT"))
{
out << String(hit.getMetaValue("predicted_RT"));
}
else out << "-1";
}
if (incl_first_dim)
{
if (pid.metaValueExists("first_dim_rt"))
{
out << String(pid.getMetaValue("first_dim_rt"));
}
else out << "-1";
if (hit.metaValueExists("predicted_RT_first_dim"))
{
out << String(hit.getMetaValue("predicted_RT_first_dim"));
}
else out << "-1";
}
if (incl_pred_pt)
{
if (hit.metaValueExists("predicted_PT"))
{
out << String(hit.getMetaValue("predicted_PT"));
}
else out << "-1";
}
if (incl_peak_annotations)
{
// only write the column if it was announced in the header, otherwise
// an unconditional "-1" would misalign all following columns
if (!hit.getPeakAnnotations().empty())
{
String pa;
PeptideHit::PeakAnnotation::writePeakAnnotationsString_(pa, hit.getPeakAnnotations());
out << pa;
}
else out << "-1";
}
writeMetaValues(out, pid, peptide_id_meta_keys);
writeMetaValues(out, hit, peptide_hit_meta_keys);
out << nl;
}
}
class TOPPTextExporter :
public TOPPBase
{
public:
TOPPTextExporter() :
TOPPBase("TextExporter", "Exports various XML formats to a text file.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file ");
setValidFormats_("in", ListUtils::create<String>("featureXML,consensusXML,idXML,mzML"));
registerOutputFile_("out", "<file>", "", "Output file.");
setValidFormats_("out", ListUtils::create<String>("tsv,csv,txt"));
registerStringOption_("out_type", "<type>", "", "Output file type -- default: determined from file extension, ambiguous file extensions are interpreted as tsv", false);
setValidStrings_("out_type", ListUtils::create<String>("tsv,csv,txt"));
registerStringOption_("replacement", "<string>", "_", "Used to replace occurrences of the separator in strings before writing, if 'quoting' is 'none'", false);
registerStringOption_("quoting", "<method>", "none", "Method for quoting of strings: 'none' for no quoting, 'double' for quoting with doubling of embedded quotes,\n'escape' for quoting with backslash-escaping of embedded quotes", false);
setValidStrings_("quoting", ListUtils::create<String>("none,double,escape"));
registerFlag_("no_ids", "Suppresses output of identification data.");
addEmptyLine_();
registerTOPPSubsection_("feature", "Options for featureXML input files");
registerFlag_("feature:minimal", "Set this flag to write only three attributes: RT, m/z, and intensity.");
registerIntOption_("feature:add_metavalues", "<min_frequency>", -1, "Add columns for meta values which occur with a certain frequency (0-100%). Set to -1 to omit meta values (default).", false);
setMinInt_("feature:add_metavalues", -1);
setMaxInt_("feature:add_metavalues", 100);
addEmptyLine_();
registerTOPPSubsection_("id", "Options for idXML input files");
registerFlag_("id:proteins_only", "Set this flag if you want only protein information from an idXML file");
registerFlag_("id:peptides_only", "Set this flag if you want only peptide information from an idXML file");
registerFlag_("id:protein_groups", "Set this flag if you want to also write indist. group information from an idXML file");
registerFlag_("id:first_dim_rt", "If this flag is set the first_dim RT of the peptide hits will also be printed (if present).");
registerIntOption_("id:add_metavalues", "<min_frequency>", -1, "Add columns for meta values of PeptideID (=spectrum) entries which occur with a certain frequency (0-100%). Set to -1 to omit meta values (default).", false);
setMinInt_("id:add_metavalues", -1);
setMaxInt_("id:add_metavalues", 100);
registerIntOption_("id:add_hit_metavalues", "<min_frequency>", -1, "Add columns for meta values of PeptideHit (=PSM) entries which occur with a certain frequency (0-100%). Set to -1 to omit meta values (default).", false);
setMinInt_("id:add_hit_metavalues", -1);
setMaxInt_("id:add_hit_metavalues", 100);
registerIntOption_("id:add_protein_hit_metavalues", "<min_frequency>", -1, "Add columns for meta values on protein level which occur with a certain frequency (0-100%). Set to -1 to omit meta values (default).", false);
setMinInt_("id:add_protein_hit_metavalues", -1);
setMaxInt_("id:add_protein_hit_metavalues", 100);
registerStringOption_("id:annotations", "<method>", "none", "Format of peak annotations.", false);
setValidStrings_("id:annotations", ListUtils::create<String>("none,default"));
addEmptyLine_();
registerTOPPSubsection_("consensus", "Options for consensusXML input files");
registerOutputFile_("consensus:centroids", "<file>", "", "Output file for centroids of consensus features", false);
setValidFormats_("consensus:centroids", ListUtils::create<String>("csv"));
registerOutputFile_("consensus:elements", "<file>", "", "Output file for elements of consensus features", false);
setValidFormats_("consensus:elements", ListUtils::create<String>("csv"));
registerOutputFile_("consensus:features", "<file>", "", "Output file for consensus features and contained elements from all maps (writes 'nan's if elements are missing)", false);
setValidFormats_("consensus:features", ListUtils::create<String>("csv"));
registerStringOption_("consensus:sorting_method", "<method>", "none", "Sorting options can be combined. The precedence is: sort_by_size, sort_by_maps, sorting_method", false);
setValidStrings_("consensus:sorting_method", ListUtils::create<String>("none,RT,MZ,RT_then_MZ,intensity,quality_decreasing,quality_increasing"));
registerFlag_("consensus:sort_by_maps", "Apply a stable sort by the covered maps, lexicographically", false);
registerFlag_("consensus:sort_by_size", "Apply a stable sort by decreasing size (i.e., the number of elements)", false);
registerFlag_("consensus:add_metavalues", "Add columns for ConsensusFeature meta values.", false);
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
bool no_ids = getFlag_("no_ids");
bool first_dim_rt = getFlag_("id:first_dim_rt");
int add_feature_metavalues = getIntOption_("feature:add_metavalues");
int add_id_metavalues = getIntOption_("id:add_metavalues");
int add_hit_metavalues = getIntOption_("id:add_hit_metavalues");
int add_protein_hit_metavalues = getIntOption_("id:add_protein_hit_metavalues");
// output file names and types
FileTypes::Type out_type = FileTypes::nameToType(getStringOption_("out_type"));
if (out_type == FileTypes::UNKNOWN)
{
out_type = FileHandler::getTypeByFileName(out);
}
String sep;
if (out_type == FileTypes::CSV)
{
sep = ",";
}
else
{
sep = "\t";
}
String replacement = getStringOption_("replacement");
String quoting = getStringOption_("quoting");
String::QuotingMethod quoting_method;
if (quoting == "none")
{
quoting_method = String::NONE;
}
else if (quoting == "double")
{
quoting_method = String::DOUBLE;
}
else
{
quoting_method = String::ESCAPE;
}
// input file type
FileTypes::Type in_type = FileHandler::getType(in);
writeDebug_(String("Input file type: ") +
FileTypes::typeToName(in_type), 2);
if (in_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine input file type!");
return PARSE_ERROR;
}
StringList meta_keys;
if (in_type == FileTypes::FEATUREXML)
{
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
FeatureMap feature_map;
FileHandler().loadFeatures(in, feature_map, {FileTypes::FEATUREXML}, log_type_);
// extract common id and hit meta values
StringList peptide_id_meta_keys;
StringList peptide_hit_meta_keys;
StringList protein_hit_meta_keys;
PeptideIdentificationList pids;
if (add_id_metavalues >= 0 || add_hit_metavalues >= 0)
{
const PeptideIdentificationList& uapids = feature_map.getUnassignedPeptideIdentifications();
pids.insert(pids.end(), uapids.begin(), uapids.end());
for (const Feature& cm : feature_map)
{
const PeptideIdentificationList& cpids = cm.getPeptideIdentifications();
pids.insert(pids.end(), cpids.begin(), cpids.end());
}
if (add_id_metavalues >= 0)
{
peptide_id_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<PeptideIdentificationList, StringList>(pids.begin(), pids.end(), add_id_metavalues);
// currently there is some hardcoded logic to create extra columns for these meta values so remove them to prevent duplication
std::erase(peptide_id_meta_keys, "predicted_RT");
std::erase(peptide_id_meta_keys, "predicted_RT_first_dim");
std::erase(peptide_id_meta_keys, "first_dim_rt");
std::erase(peptide_id_meta_keys, "predicted_PT");
std::erase(peptide_id_meta_keys, Constants::UserParam::SIGNIFICANCE_THRESHOLD);
}
if (add_hit_metavalues >= 0)
{
vector<PeptideHit> temp_hits;
for (Size i = 0; i != pids.size(); ++i)
{
const vector<PeptideHit>& hits = pids[i].getHits();
temp_hits.insert(temp_hits.end(), hits.begin(), hits.end());
}
// as above: extract the meta keys common to all peptide hits
peptide_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<PeptideHit>, StringList>(temp_hits.begin(), temp_hits.end(), add_hit_metavalues);
}
}
if (add_feature_metavalues >= 0)
{
meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<FeatureMap, StringList>(feature_map.begin(), feature_map.end(), add_feature_metavalues);
}
vector<ProteinIdentification> prot_ids = feature_map.getProteinIdentifications();
if (add_protein_hit_metavalues >= 0)
{
//TODO also iterate over all protein ID runs.
if (prot_ids.size() == 1)
{
protein_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<ProteinHit>, StringList>(prot_ids[0].getHits().begin(), prot_ids[0].getHits().end(), add_protein_hit_metavalues);
}
}
// text output
ofstream outstr(out.c_str());
SVOutStream output(outstr, sep, replacement, quoting_method);
bool minimal = getFlag_("feature:minimal");
no_ids |= minimal; // "minimal" implies "no_ids"
// write header:
output.modifyStrings(false);
bool comment = true;
if (!no_ids)
{
writeRunHeader(output);
writeProteinHeader(output);
writeMetaValuesHeader(output, protein_hit_meta_keys);
output << nl;
writePeptideHeader(output, "UNASSIGNEDPEPTIDE");
writeMetaValuesHeader(output, peptide_id_meta_keys);
writeMetaValuesHeader(output, peptide_hit_meta_keys);
output << nl;
output << "#FEATURE";
comment = false;
}
if (minimal)
{
output << "#rt" << "mz" << "intensity";
}
else
{
writeFeatureHeader(output, "", true, comment);
output << "rt_quality" << "mz_quality" << "rt_start" << "rt_end";
}
writeMetaValuesHeader(output, meta_keys);
output << nl;
if (!no_ids)
{
writePeptideHeader(output);
writeMetaValuesHeader(output, peptide_id_meta_keys);
writeMetaValuesHeader(output, peptide_hit_meta_keys);
output << nl;
}
output.modifyStrings(true);
if (!no_ids)
{
for (const ProteinIdentification& prot : prot_ids)
{
writeProteinId(output, prot, protein_hit_meta_keys);
}
for (const PeptideIdentification& pep : feature_map.getUnassignedPeptideIdentifications())
{
writePeptideId(output, pep, "UNASSIGNEDPEPTIDE", false, false, false, false, peptide_id_meta_keys, peptide_hit_meta_keys);
}
}
for (const Feature& feat : feature_map)
{
if (!no_ids)
{
output << "FEATURE";
}
if (minimal)
{
output << String(feat.getRT()) << String(feat.getMZ())
<< String(feat.getIntensity());
}
else
{
output << feat << String(feat.getQuality(0)) << String(feat.getQuality(1));
if (!feat.getConvexHulls().empty())
{
output << String(feat.getConvexHulls().begin()->getBoundingBox().minX())
<< String(feat.getConvexHulls().begin()->getBoundingBox().maxX());
}
else
{
output << "-1" << "-1";
}
}
writeMetaValues(output, feat, meta_keys);
output << nl;
// peptide ids
if (!no_ids)
{
for (const PeptideIdentification& pep : feat.getPeptideIdentifications())
{
writePeptideId(output, pep, "PEPTIDE", false, false, false, false, peptide_id_meta_keys, peptide_hit_meta_keys);
}
}
}
outstr.close();
}
else if (in_type == FileTypes::CONSENSUSXML)
{
String consensus_centroids = getStringOption_("consensus:centroids");
String consensus_elements = getStringOption_("consensus:elements");
String consensus_features = getStringOption_("consensus:features");
String sorting_method = getStringOption_("consensus:sorting_method");
bool sort_by_maps = getFlag_("consensus:sort_by_maps");
bool sort_by_size = getFlag_("consensus:sort_by_size");
bool add_metavalues = getFlag_("consensus:add_metavalues");
ConsensusMap consensus_map;
FileHandler().loadConsensusFeatures(in, consensus_map, {FileTypes::CONSENSUSXML}, log_type_);
// for optional export of ConsensusFeature meta values, collect all possible meta value keys
std::set<String> meta_value_keys;
if (add_metavalues)
{
for (const auto& cf: consensus_map)
{
std::vector<String> cf_meta_value_keys;
cf.getKeys(cf_meta_value_keys);
for (const auto& key: cf_meta_value_keys)
{
meta_value_keys.insert(key);
}
}
}
// extract common id and hit meta values
StringList peptide_id_meta_keys;
StringList peptide_hit_meta_keys;
StringList protein_hit_meta_keys;
PeptideIdentificationList pids;
if (add_id_metavalues >= 0 || add_hit_metavalues >= 0)
{
const PeptideIdentificationList& uapids = consensus_map.getUnassignedPeptideIdentifications();
pids.insert(pids.end(), uapids.begin(), uapids.end());
for (const ConsensusFeature& cm : consensus_map)
{
const PeptideIdentificationList& cpids = cm.getPeptideIdentifications();
pids.insert(pids.end(), cpids.begin(), cpids.end());
}
if (add_id_metavalues >= 0)
{
peptide_id_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<PeptideIdentificationList, StringList>(pids.begin(), pids.end(), add_id_metavalues);
// currently there is some hardcoded logic to create extra columns for these meta values so remove them to prevent duplication
std::erase(peptide_id_meta_keys, "predicted_RT");
std::erase(peptide_id_meta_keys, "predicted_RT_first_dim");
std::erase(peptide_id_meta_keys, "first_dim_rt");
std::erase(peptide_id_meta_keys, "predicted_PT");
std::erase(peptide_id_meta_keys, Constants::UserParam::SIGNIFICANCE_THRESHOLD);
}
if (add_hit_metavalues >= 0)
{
vector<PeptideHit> temp_hits;
for (Size i = 0; i != pids.size(); ++i)
{
const vector<PeptideHit>& hits = pids[i].getHits();
temp_hits.insert(temp_hits.end(), hits.begin(), hits.end());
}
// as above: extract the meta keys common to all peptide hits
peptide_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<PeptideHit>, StringList>(temp_hits.begin(), temp_hits.end(), add_hit_metavalues);
}
}
if (add_protein_hit_metavalues >= 0)
{
const vector<ProteinIdentification>& prot_ids = consensus_map.getProteinIdentifications();
//TODO also iterate over all protein ID runs.
if (prot_ids.size() == 1)
{
protein_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<ProteinHit>, StringList>(prot_ids[0].getHits().begin(), prot_ids[0].getHits().end(), add_protein_hit_metavalues);
}
}
if (sorting_method == "none")
{
// don't sort in this case
}
else if (sorting_method == "RT")
{
consensus_map.sortByRT();
}
else if (sorting_method == "MZ")
{
consensus_map.sortByMZ();
}
else if (sorting_method == "RT_then_MZ")
{
consensus_map.sortByPosition();
}
else if (sorting_method == "intensity")
{
consensus_map.sortByIntensity();
}
else if (sorting_method == "quality_decreasing")
{
consensus_map.sortByQuality(true);
}
else if (sorting_method == "quality_increasing")
{
consensus_map.sortByQuality(false);
}
if (sort_by_maps)
{
consensus_map.sortByMaps();
}
if (sort_by_size)
{
consensus_map.sortBySize();
}
String date_time_now = DateTime::now().get();
// -------------------------------------------------------------------
if (!consensus_centroids.empty())
{
std::ofstream consensus_centroids_file(consensus_centroids.c_str());
if (!consensus_centroids_file)
{
throw Exception::UnableToCreateFile(__FILE__, __LINE__,
OPENMS_PRETTY_FUNCTION,
consensus_centroids);
}
SVOutStream output(consensus_centroids_file, sep, replacement,
quoting_method);
writeConsensusHeader(output, "Centroids of consensus features", in,
date_time_now);
writeFeatureHeader(output);
output << nl;
for (ConsensusMap::const_iterator cmit = consensus_map.begin();
cmit != consensus_map.end(); ++cmit)
{
output << *cmit << nl;
}
consensus_centroids_file.close();
}
// -------------------------------------------------------------------
if (!consensus_elements.empty())
{
std::ofstream consensus_elements_file(consensus_elements.c_str());
if (!consensus_elements_file)
{
throw Exception::UnableToCreateFile(__FILE__, __LINE__,
OPENMS_PRETTY_FUNCTION,
consensus_elements);
}
SVOutStream output(consensus_elements_file, sep, replacement,
quoting_method);
output.modifyStrings(false);
writeConsensusHeader(output, "Elements of consensus features", in,
date_time_now);
output << "#HL";
writeFeatureHeader(output, "", false, false);
writeFeatureHeader(output, "_cf", true, false);
output << nl;
output.modifyStrings(true);
for (ConsensusMap::const_iterator cmit = consensus_map.begin();
cmit != consensus_map.end(); ++cmit)
{
for (ConsensusFeature::const_iterator cfit = cmit->begin();
cfit != cmit->end(); ++cfit)
{
output << "H" << *cfit << *cmit << nl;
}
// We repeat the first feature handle at the end of the list.
// This way you can generate closed line drawings.
// See Gnuplot's 'set datafile commentschars'.
output << "L" << *cmit->begin() << *cmit << nl;
}
consensus_elements_file.close();
}
// -------------------------------------------------------------------
if (!consensus_features.empty())
{
std::ofstream consensus_features_file(consensus_features.c_str());
if (!consensus_features_file)
{
throw Exception::UnableToCreateFile(__FILE__, __LINE__,
OPENMS_PRETTY_FUNCTION,
consensus_features);
}
SVOutStream output(consensus_features_file, sep, replacement,
quoting_method);
std::map<Size, Size> map_id_to_map_num;
std::vector<Size> map_num_to_map_id;
FeatureHandle feature_handle_NaN;
feature_handle_NaN.setRT(
std::numeric_limits<FeatureHandle::CoordinateType>::quiet_NaN());
feature_handle_NaN.setMZ(
std::numeric_limits<FeatureHandle::CoordinateType>::quiet_NaN());
feature_handle_NaN.setIntensity(
std::numeric_limits<FeatureHandle::IntensityType>::quiet_NaN());
// feature_handle_NaN.setCharge(std::numeric_limits<Int>::max());
for (ConsensusMap::ColumnHeaders::const_iterator fdit =
consensus_map.getColumnHeaders().begin();
fdit != consensus_map.getColumnHeaders().end(); ++fdit)
{
map_id_to_map_num[fdit->first] = map_num_to_map_id.size();
map_num_to_map_id.push_back(fdit->first);
}
map<String, Size> prot_runs;
Size max_prot_run = 0;
StringList comments;
if (!no_ids)
{
String pep_line = "Protein identification runs associated with peptide/protein columns below: ";
for (vector<ProteinIdentification>::const_iterator prot_it =
consensus_map.getProteinIdentifications().begin();
prot_it != consensus_map.getProteinIdentifications().end();
++prot_it, ++max_prot_run)
{
String run_id = prot_it->getIdentifier();
// add to comment:
if (max_prot_run > 0)
{
pep_line += ", ";
}
pep_line += String(max_prot_run) + ": '" + run_id + "'";
map<String, Size>::iterator pos = prot_runs.find(run_id);
if (pos != prot_runs.end())
{
cerr << "Warning while exporting '" << in
<< "': protein identification run ID '" << run_id
<< "' occurs more than once" << endl;
}
else prot_runs[run_id] = max_prot_run;
}
if (max_prot_run > 0)
{
--max_prot_run; // increased beyond max. at end of for-loop
}
comments.push_back(pep_line);
}
writeConsensusHeader(output, "Consensus features", in,
date_time_now, comments);
writeFeatureHeader(output, "_cf");
output.modifyStrings(false);
for (Size fhindex = 0; fhindex < map_num_to_map_id.size();
++fhindex)
{
Size map_id = map_num_to_map_id[fhindex];
writeFeatureHeader(output, "_" + String(map_id), false, false);
}
if (!no_ids)
{
for (Size i = 0; i <= max_prot_run; ++i)
{
output << "peptide_" + String(i)
<< "n_diff_peptides_" + String(i)
<< "protein_" + String(i)
<< "n_diff_proteins_" + String(i);
}
}
// append column header for each meta value key
if (add_metavalues)
{
writeMetaValueKeysHeader(output, meta_value_keys);
}
output << nl;
output.modifyStrings(true);
for (ConsensusMap::const_iterator cmit = consensus_map.begin();
cmit != consensus_map.end(); ++cmit)
{
output << *cmit;
std::vector<FeatureHandle> feature_handles(map_num_to_map_id.size(),
feature_handle_NaN);
for (ConsensusFeature::const_iterator cfit = cmit->begin();
cfit != cmit->end(); ++cfit)
{
feature_handles[map_id_to_map_num[cfit->getMapIndex()]] = *cfit;
}
for (Size fhindex = 0; fhindex < feature_handles.size();
++fhindex)
{
output << feature_handles[fhindex];
}
if (!no_ids)
{
vector<set<String> > peptides_by_source(max_prot_run + 1),
proteins_by_source(max_prot_run + 1);
for (PeptideIdentificationList::const_iterator pep_it =
cmit->getPeptideIdentifications().begin(); pep_it !=
cmit->getPeptideIdentifications().end(); ++pep_it)
{
Size index = prot_runs[pep_it->getIdentifier()];
for (vector<PeptideHit>::const_iterator hit_it = pep_it->
getHits().begin(); hit_it != pep_it->getHits().end();
++hit_it)
{
peptides_by_source[index].insert(hit_it->getSequence().toString());
set<String> protein_accessions = hit_it->extractProteinAccessionsSet();
proteins_by_source[index].insert(protein_accessions.begin(), protein_accessions.end());
}
}
vector<set<String> >::iterator pep_it = peptides_by_source.begin(), prot_it = proteins_by_source.begin();
for (; pep_it != peptides_by_source.end(); ++pep_it, ++prot_it)
{
StringList seqs(vector<String>(pep_it->begin(),
pep_it->end())),
accs(vector<String>(prot_it->begin(), prot_it->end()));
for (StringList::iterator acc_it = accs.begin();
acc_it != accs.end(); ++acc_it)
{
acc_it->substitute('/', '_');
}
output << ListUtils::concatenate(seqs, "/") << seqs.size()
<< ListUtils::concatenate(accs, "/") << accs.size();
}
}
// append meta values for each ConsensusFeature
if (add_metavalues)
{
for (const auto& key: meta_value_keys)
{
output << cmit->getMetaValue(key, "");
}
}
output << nl;
}
consensus_features_file.close();
}
// -------------------------------------------------------------------
if (!out.empty())
{
std::ofstream outstr(out.c_str());
if (!outstr)
{
throw Exception::UnableToCreateFile(__FILE__, __LINE__,
OPENMS_PRETTY_FUNCTION, out);
}
SVOutStream output(outstr, sep, replacement, quoting_method);
output.modifyStrings(false);
writeConsensusHeader(output, "Consensus features", in,
date_time_now);
std::map<Size, Size> map_id_to_map_num;
std::vector<Size> map_num_to_map_id;
FeatureHandle feature_handle_NaN;
feature_handle_NaN.setRT(std::numeric_limits<
FeatureHandle::CoordinateType>::quiet_NaN());
feature_handle_NaN.setMZ(std::numeric_limits<
FeatureHandle::CoordinateType>::quiet_NaN());
feature_handle_NaN.setIntensity(std::numeric_limits<FeatureHandle::IntensityType>::quiet_NaN());
feature_handle_NaN.setWidth(std::numeric_limits<
FeatureHandle::WidthType>::quiet_NaN());
feature_handle_NaN.setCharge(0); // just to be sure...
// alternative?:
// feature_handle_NaN.setCharge(std::numeric_limits<Int>::max());
// It's hard to predict which meta keys will be used in file
// descriptions. So we assemble a list each time. Represent keys
// by String, not UInt, for implicit sorting.
std::set<String> all_file_desc_meta_keys;
std::vector<UInt> tmp_meta_keys;
for (ConsensusMap::ColumnHeaders::const_iterator fdit =
consensus_map.getColumnHeaders().begin();
fdit != consensus_map.getColumnHeaders().end(); ++fdit)
{
map_id_to_map_num[fdit->first] = map_num_to_map_id.size();
map_num_to_map_id.push_back(fdit->first);
fdit->second.getKeys(tmp_meta_keys);
for (std::vector<UInt>::const_iterator kit =
tmp_meta_keys.begin(); kit != tmp_meta_keys.end(); ++kit)
{
all_file_desc_meta_keys.insert(
MetaInfoInterface::metaRegistry().getName(*kit));
}
}
// headers (same order as the content of the output):
output << "#MAP" << "id" << "filename" << "label" << "size";
for (std::set<String>::const_iterator kit =
all_file_desc_meta_keys.begin(); kit !=
all_file_desc_meta_keys.end(); ++kit)
{
output << *kit;
}
output << nl;
if (!no_ids)
{
writeRunHeader(output);
writeProteinHeader(output);
writeMetaValuesHeader(output, protein_hit_meta_keys);
output << nl;
writePeptideHeader(output, "UNASSIGNEDPEPTIDE");
writeMetaValuesHeader(output, peptide_id_meta_keys);
writeMetaValuesHeader(output, peptide_hit_meta_keys);
output << nl;
}
output << "#CONSENSUS";
writeFeatureHeader(output, "_cf", true, false);
for (Size fhindex = 0; fhindex < map_num_to_map_id.size();
++fhindex)
{
Size map_id = map_num_to_map_id[fhindex];
writeFeatureHeader(output, "_" + String(map_id), false, false);
}
// append column header for each meta value key
if (add_metavalues)
{
writeMetaValueKeysHeader(output, meta_value_keys);
}
output << nl;
if (!no_ids)
{
writePeptideHeader(output, "PEPTIDE");
writeMetaValuesHeader(output, peptide_id_meta_keys);
writeMetaValuesHeader(output, peptide_hit_meta_keys);
output << nl;
}
output.modifyStrings(true);
// list of maps (intentionally at the beginning, contrary to order in consensusXML)
for (ConsensusMap::ColumnHeaders::const_iterator fdit =
consensus_map.getColumnHeaders().begin(); fdit !=
consensus_map.getColumnHeaders().end(); ++fdit)
{
output << "MAP" << fdit->first << fdit->second.filename
<< fdit->second.label << fdit->second.size;
for (std::set<String>::const_iterator kit =
all_file_desc_meta_keys.begin(); kit !=
all_file_desc_meta_keys.end(); ++kit)
{
if (fdit->second.metaValueExists(*kit))
{
output << String(fdit->second.getMetaValue(*kit));
}
else output << "";
}
output << nl;
}
// proteins and unassigned peptides
if (!no_ids) // proteins
{
for (vector<ProteinIdentification>::const_iterator it =
consensus_map.getProteinIdentifications().begin(); it !=
consensus_map.getProteinIdentifications().end(); ++it)
{
writeProteinId(output, *it, protein_hit_meta_keys);
}
// unassigned peptides
for (PeptideIdentificationList::const_iterator pit = consensus_map.getUnassignedPeptideIdentifications().begin(); pit != consensus_map.getUnassignedPeptideIdentifications().end(); ++pit)
{
writePeptideId(output, *pit, "UNASSIGNEDPEPTIDE", false, false, false, false, peptide_id_meta_keys, peptide_hit_meta_keys);
// first_dim_... stuff not supported for now
}
}
// consensus features (incl. peptide annotations):
for (ConsensusMap::const_iterator cmit = consensus_map.begin();
cmit != consensus_map.end(); ++cmit)
{
std::vector<FeatureHandle> feature_handles(map_num_to_map_id.size(),
feature_handle_NaN);
output << "CONSENSUS" << *cmit;
for (ConsensusFeature::const_iterator cfit = cmit->begin();
cfit != cmit->end(); ++cfit)
{
feature_handles[map_id_to_map_num[cfit->getMapIndex()]] = *cfit;
}
for (Size fhindex = 0; fhindex < feature_handles.size(); ++fhindex)
{
output << feature_handles[fhindex];
}
// append meta values for each ConsensusFeature
if (add_metavalues)
{
for (const auto& key: meta_value_keys)
{
output << cmit->getMetaValue(key, "");
}
}
output << nl;
// peptide ids
if (!no_ids)
{
for (PeptideIdentificationList::const_iterator pit =
cmit->getPeptideIdentifications().begin(); pit !=
cmit->getPeptideIdentifications().end(); ++pit)
{
writePeptideId(output, *pit, "PEPTIDE", false, false, false, false, peptide_id_meta_keys, peptide_hit_meta_keys);
}
}
}
}
return EXECUTION_OK;
}
else if (in_type == FileTypes::IDXML)
{
vector<ProteinIdentification> prot_ids;
PeptideIdentificationList pep_ids;
FileHandler().loadIdentifications(in, prot_ids, pep_ids, {FileTypes::IDXML}, log_type_);
StringList peptide_id_meta_keys;
StringList peptide_hit_meta_keys;
StringList protein_hit_meta_keys;
if (add_id_metavalues >= 0)
{
peptide_id_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<PeptideIdentificationList, StringList>(pep_ids.begin(), pep_ids.end(), add_id_metavalues);
// currently there is some hardcoded logic to create extra columns for these meta values so remove them to prevent duplication
std::erase(peptide_id_meta_keys, "predicted_RT");
std::erase(peptide_id_meta_keys, "predicted_RT_first_dim");
std::erase(peptide_id_meta_keys, "first_dim_rt");
std::erase(peptide_id_meta_keys, "predicted_PT");
std::erase(peptide_id_meta_keys, Constants::UserParam::SIGNIFICANCE_THRESHOLD);
}
if (add_hit_metavalues >= 0)
{
vector<PeptideHit> temp_hits;
for (Size i = 0; i != pep_ids.size(); ++i)
{
const vector<PeptideHit>& hits = pep_ids[i].getHits();
temp_hits.insert(temp_hits.end(), hits.begin(), hits.end());
}
peptide_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<PeptideHit>, StringList>(temp_hits.begin(), temp_hits.end(), add_hit_metavalues);
}
if (add_protein_hit_metavalues >= 0)
{
//TODO also iterate over all protein ID runs.
if (prot_ids.size() == 1)
protein_hit_meta_keys = MetaInfoInterfaceUtils::findCommonMetaKeys<vector<ProteinHit>, StringList>(prot_ids[0].getHits().begin(), prot_ids[0].getHits().end(), add_protein_hit_metavalues);
}
ofstream txt_out(out.c_str());
SVOutStream output(txt_out, sep, replacement, quoting_method);
bool proteins_only = getFlag_("id:proteins_only");
bool peptides_only = getFlag_("id:peptides_only");
bool groups = getFlag_("id:protein_groups");
if (proteins_only && peptides_only)
{
throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "'id:proteins_only' and 'id:peptides_only' cannot be used together");
}
String what = peptides_only ? "" : "PEPTIDE";
if (!peptides_only)
{
writeRunHeader(output);
if (groups)
{
writeProteinGroupHeader(output);
}
writeProteinHeader(output);
writeMetaValuesHeader(output, protein_hit_meta_keys);
output << nl;
}
if (!proteins_only)
{
writePeptideHeader(output, what, true, true, first_dim_rt, true);
writeMetaValuesHeader(output, peptide_id_meta_keys);
writeMetaValuesHeader(output, peptide_hit_meta_keys);
output << nl;
}
for (vector<ProteinIdentification>::const_iterator it =
prot_ids.begin(); it != prot_ids.end(); ++it)
{
String actual_id = it->getIdentifier();
if (!peptides_only)
{
if (groups)
{
writeProteinGroups(output, it->getIndistinguishableProteins());
}
writeProteinId(output, *it, protein_hit_meta_keys);
}
if (!proteins_only)
{
// slight improvement on big idXML files with many different runs:
// index the identifiers and peptide ids to avoid running over
// them again and again (TODO)
for (PeptideIdentificationList::const_iterator pit =
pep_ids.begin(); pit != pep_ids.end(); ++pit)
{
if (pit->getIdentifier() == actual_id)
{
writePeptideId(output, *pit, what, true, true, first_dim_rt, true, peptide_id_meta_keys, peptide_hit_meta_keys);
}
}
}
}
txt_out.close();
}
else if (in_type == FileTypes::MZML)
{
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_, false, false);
if (exp.getSpectra().empty() && exp.getChromatograms().empty())
{
writeLogError_("Error: File does not contain spectra or chromatograms.");
return INCOMPATIBLE_INPUT_DATA;
}
ofstream outstr(out.c_str());
SVOutStream output(outstr, sep, replacement, quoting_method);
output.modifyStrings(false);
{
if (exp.getSpectra().empty())
{
writeLogWarn_("Warning: File does not contain spectra. No output for spectra generated!");
}
Size output_count(0);
output << "#MS" << "level" << "rt" << "mz" << "charge" << "peaks" << "index" << "name" << nl;
for (PeakMap::const_iterator it = exp.getSpectra().begin(); it != exp.getSpectra().end(); ++it)
{
int index = (it - exp.getSpectra().begin());
String name = it->getName();
if (it->getMSLevel() == 1)
{
++output_count;
output << "MS" << it->getMSLevel() << String(it->getRT()) << "" << "" << it->size() << index << name << nl;
}
else if (it->getMSLevel() == 2)
{
double precursor_mz = -1;
int precursor_charge = -1;
if (!it->getPrecursors().empty())
{
precursor_mz = it->getPrecursors()[0].getMZ();
precursor_charge = it->getPrecursors()[0].getCharge();
}
++output_count;
output << "MS" << it->getMSLevel() << String(it->getRT()) << precursor_mz << precursor_charge << it->size() << index << name << nl;
}
}
if (output_count != 0)
{
writeLogInfo_("Exported " + String(output_count) + " spectra!");
}
}
{
if (exp.getChromatograms().empty())
{
writeLogWarn_("Warning: File does not contain chromatograms. No output for chromatograms generated!");
}
Size output_count(0);
Size unsupported_chromatogram_count(0);
for (vector<MSChromatogram >::const_iterator it = exp.getChromatograms().begin(); it != exp.getChromatograms().end(); ++it)
{
if (it->getChromatogramType() == ChromatogramSettings::ChromatogramType::SELECTED_REACTION_MONITORING_CHROMATOGRAM)
{
++output_count;
output << "MRM Q1=" << String(it->getPrecursor().getMZ()) << " Q3=" << String(it->getProduct().getMZ()) << nl;
for (MSChromatogram::ConstIterator cit = it->begin(); cit != it->end(); ++cit)
{
output << String(cit->getRT()) << " " << String(cit->getIntensity()) << nl;
}
output << nl;
}
else
{
++unsupported_chromatogram_count;
}
}
if (output_count != 0)
{
writeLogInfo_("Exported " + String(output_count) + " SRM spectra!");
}
if (unsupported_chromatogram_count != 0)
{
writeLogInfo_("Ignored " + String(unsupported_chromatogram_count) + " chromatograms not supported by TextExporter!");
}
}
output << nl;
outstr.close();
}
return EXECUTION_OK;
}
};
}
int main(int argc, const char** argv)
{
TOPPTextExporter t;
return t.main(argc, argv);
}
/// @endcond
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/OPENSWATH/MRMFeatureFinderScoring.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/DataAccessHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathHelper.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <fstream>
#include <memory>
using namespace OpenMS;
using namespace std;
//
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathAnalyzer OpenSwathAnalyzer
@brief Executes a peak-picking and scoring algorithm on MRM/SRM data.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → OpenSwathAnalyzer →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathChromatogramExtractor </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathFeatureXMLToTSV </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_MRMMapper </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathConfidenceScoring </td>
</tr>
</table>
</CENTER>
The idea of the OpenSwath Analyzer is to analyze a series of chromatograms
together with the associated meta information (stored in TraML format) in
order to determine likely places of elution of a peptide in targeted
proteomics data (derived from SWATH-MS or MRM/SRM). This tool performs
peak picking on the chromatograms and scoring in a single step; if you only
want the peak picking, see the @ref TOPP_MRMTransitionGroupPicker tool.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathAnalyzer.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathAnalyzer.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathAnalyzer : public TOPPBase
{
public:
TOPPOpenSwathAnalyzer() :
TOPPBase("OpenSwathAnalyzer",
"Picks peaks and finds features in an SWATH-MS or SRM experiment.", true)
{
}
protected:
typedef PeakMap MapType;
void registerModelOptions_(const String &default_model)
{
registerTOPPSubsection_("model", "Options to control the modeling of retention time transformations from data");
registerStringOption_("model:type", "<name>", default_model, "Type of model", false, true);
StringList model_types;
TransformationDescription::getModelTypes(model_types);
if (!ListUtils::contains(model_types, default_model))
{
model_types.insert(model_types.begin(), default_model);
}
setValidStrings_("model:type", model_types);
registerFlag_("model:symmetric_regression", "Only for 'linear' model: Perform linear regression on 'y - x' vs. 'y + x', instead of on 'y' vs. 'x'.", true);
}
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "",
"input file containing the chromatograms." /* , false */);
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("tr", "<file>", "", "transition file");
setValidFormats_("tr", ListUtils::create<String>("traML"));
registerInputFile_("rt_norm", "<file>", "",
"RT normalization file (how to map the RTs of this run to the ones stored in the library)",
false);
setValidFormats_("rt_norm", ListUtils::create<String>("trafoXML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("featureXML"));
registerFlag_("no-strict",
"run in non-strict mode and allow some chromatograms to not be mapped.");
addEmptyLine_();
registerInputFileList_("swath_files", "<files>", StringList(),
"[applies only if you have full MS2 spectra maps] "
"Swath files that were used to extract the transitions. "
"If present, SWATH specific scoring will be used.",
false);
setValidFormats_("swath_files", ListUtils::create<String>("mzML"));
registerDoubleOption_("min_upper_edge_dist", "<double>", 0.0,
"[applies only if you have full MS2 spectra maps] "
"Minimal distance to the edge to still consider a precursor, in Thomson (only in SWATH)",
false);
registerModelOptions_("linear");
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String &) const override
{
return MRMFeatureFinderScoring().getDefaults();
}
ExitCodes main_(int, const char **) override
{
StringList file_list = getStringList_("swath_files");
String in = getStringOption_("in");
String tr_file = getStringOption_("tr");
String out = getStringOption_("out");
double min_upper_edge_dist = getDoubleOption_("min_upper_edge_dist");
bool nostrict = getFlag_("no-strict");
// If we have a transformation file, trafo will transform the RT in the
// scoring according to the model. If we don't have one, it will apply the
// null transformation.
String trafo_in = getStringOption_("rt_norm");
TransformationDescription trafo;
if (!trafo_in.empty())
{
String model_type = getStringOption_("model:type");
Param model_params = getParam_().copy("model:", true);
FileHandler().loadTransformations(trafo_in, trafo, true, {FileTypes::TRANSFORMATIONXML});
trafo.fitModel(model_type, model_params);
}
Param feature_finder_param = getParam_().copy("algorithm:", true);
// Create the output map, load the input TraML file and the chromatograms
std::shared_ptr<MapType> exp (new MapType());
FeatureMap out_featureFile;
OpenSwath::LightTargetedExperiment transition_exp;
std::cout << "Loading TraML file" << std::endl;
{
TargetedExperiment transitions_exp_tmp;
FileHandler().loadTransitions(tr_file, transitions_exp_tmp, {FileTypes::TRAML});
OpenSwathDataAccessHelper::convertTargetedExp(transitions_exp_tmp, transition_exp);
}
FileHandler().loadExperiment(in, *exp.get(), {FileTypes::MZML}, log_type_);
// If there are no SWATH files, it's just regular SRM/MRM Scoring
if (file_list.empty())
{
MRMFeatureFinderScoring featureFinder;
featureFinder.setParameters(feature_finder_param);
featureFinder.setLogType(log_type_);
featureFinder.setStrictFlag(!nostrict);
OpenMS::MRMFeatureFinderScoring::TransitionGroupMapType transition_group_map;
OpenSwath::SpectrumAccessPtr chromatogram_ptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(exp);
std::vector< OpenSwath::SwathMap > empty_maps;
featureFinder.pickExperiment(chromatogram_ptr, out_featureFile,
transition_exp, trafo, empty_maps, transition_group_map);
out_featureFile.ensureUniqueId();
addDataProcessing_(out_featureFile, getProcessingInfo_(DataProcessing::QUANTITATION));
FileHandler().storeFeatures(out, out_featureFile, {FileTypes::FEATUREXML});
return EXECUTION_OK;
}
// Here we deal with SWATH files (can be multiple files)
// Only in OpenMP 3.0 are unsigned loop variables allowed
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (SignedSize i = 0; i < boost::numeric_cast<SignedSize>(file_list.size()); ++i)
{
MRMFeatureFinderScoring featureFinder;
std::shared_ptr<MapType> swath_map (new MapType());
FeatureMap featureFile;
std::cout << "Loading file " << file_list[i] << std::endl;
// no progress log on the console in parallel
featureFinder.setLogType(log_type_);
FileHandler().loadExperiment(file_list[i], *swath_map.get(), {FileTypes::MZML}, log_type_);
// Logging and output to the console
#ifdef _OPENMP
#pragma omp critical (featureFinder)
#endif
{
std::cout << "Doing file " << file_list[i]
#ifdef _OPENMP
<< " (" << i << " out of " << file_list.size() / omp_get_num_threads() << " -- total for all threads: " << file_list.size() << ")" << std::endl;
#else
<< " (" << i << " out of " << file_list.size() << ")" << std::endl;
#endif
}
OpenSwath::LightTargetedExperiment transition_exp_used;
bool do_continue = OpenSwathHelper::checkSwathMapAndSelectTransitions(*swath_map.get(), transition_exp, transition_exp_used, min_upper_edge_dist);
if (do_continue)
{
featureFinder.setParameters(feature_finder_param);
featureFinder.setStrictFlag(!nostrict);
OpenMS::MRMFeatureFinderScoring::TransitionGroupMapType transition_group_map;
OpenSwath::SpectrumAccessPtr swath_ptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(swath_map);
OpenSwath::SpectrumAccessPtr chromatogram_ptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(exp);
std::vector< OpenSwath::SwathMap > swath_maps(1);
swath_maps[0].sptr = swath_ptr;
featureFinder.pickExperiment(chromatogram_ptr, featureFile,
transition_exp_used, trafo, swath_maps, transition_group_map);
// write all features and the protein identifications from tmp_featureFile into featureFile
#ifdef _OPENMP
#pragma omp critical (featureFinder)
#endif
{
for (const Feature& feature : featureFile)
{
out_featureFile.push_back(feature);
}
for (const ProteinIdentification& protid : featureFile.getProteinIdentifications())
{
out_featureFile.getProteinIdentifications().push_back(protid);
}
}
} // end of do_continue
} // end of loop over all files / end of OpenMP
addDataProcessing_(out_featureFile, getProcessingInfo_(DataProcessing::QUANTITATION));
out_featureFile.ensureUniqueId();
FileHandler().storeFeatures(out, out_featureFile, {FileTypes::FEATUREXML});
return EXECUTION_OK;
}
};
int main(int argc, const char **argv)
{
TOPPOpenSwathAnalyzer tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/OpenSwathWorkflow.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
// Consumers
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataSqlConsumer.h>
// Files
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/SwathFile.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataTransformingConsumer.h>
#include <OpenMS/ANALYSIS/OPENSWATH/SwathWindowLoader.h>
#include <OpenMS/ANALYSIS/OPENSWATH/SwathQC.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionTSVFile.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionPQPFile.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathOSWWriter.h>
#include <OpenMS/SYSTEM/File.h>
// Kernel and implementations
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SpectrumAccessOpenMS.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SpectrumAccessTransforming.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SpectrumAccessOpenMSInMemory.h>
#include <OpenMS/OPENSWATHALGO/DATAACCESS/SwathMap.h>
// Helpers
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/DataAccessHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
// Algorithms
#include <OpenMS/ANALYSIS/OPENSWATH/MRMRTNormalizer.h>
#include <OpenMS/ANALYSIS/OPENSWATH/ChromatogramExtractor.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMFeatureFinderScoring.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMTransitionGroupPicker.h>
#include <OpenMS/ANALYSIS/OPENSWATH/SwathMapMassCorrection.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathWorkflow.h>
#include <cassert>
#include <limits>
// #define OPENSWATH_WORKFLOW_DEBUG
using namespace OpenMS;
// OpenMS base classes
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/APPLICATIONS/OpenSwathBase.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <QDir>
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathWorkflow OpenSwathWorkflow
@brief Complete workflow to run OpenSWATH
This implements the OpenSWATH workflow as described in Rost and Rosenberger
et al. (Nature Biotechnology, 2014) and provides a complete, integrated
analysis tool without the need to run multiple tools consecutively. See also
http://openswath.org/ for additional documentation.
It executes the following steps in order, which is implemented in @ref OpenMS::OpenSwathWorkflow "OpenSwathWorkflow":
<ul>
<li>Reading of input files, which can be provided as one single mzML or multiple "split" mzMLs (one per SWATH)</li>
<li>Computing the retention time transformation, mass-to-charge and ion mobility correction using calibrant peptides</li>
<li>Reading of the transition list</li>
<li>Extracting the specified transitions</li>
<li>Scoring the peak groups in the extracted ion chromatograms (XIC)</li>
<li>Reporting the peak groups and the chromatograms</li>
</ul>
See below or have a look at the INI file (via "OpenSwathWorkflow -write_ini myini.ini") for available parameters and more functionality.
<h3>Input: SWATH maps and assay library (transition list) </h3>
SWATH maps can be provided as mzML files, either as single file directly from
the machine (this assumes that the SWATH method has 1 MS1 and then n MS2
spectra which are ordered the same way for each cycle). E.g. a valid method
would be MS1, MS2 [400-425], MS2 [425-450], MS1, MS2 [400-425], MS2 [425-450]
while an invalid method would be MS1, MS2 [400-425], MS2 [425-450], MS1, MS2
[425-450], MS2 [400-425] where MS2 [xx-yy] indicates an MS2 scan with an
isolation window starting at xx and ending at yy. OpenSwathWorkflow will try
to read the SWATH windows from the data; if this is not possible, please
provide a tab-separated list with the correct windows using the
-swath_windows_file parameter (this is recommended). Note that the software
expects extraction windows (e.g. which peptides to extract from
which window) which cannot have overlaps, otherwise peptides will be
extracted from two different windows.
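For example, a minimal swath_windows_file matching the valid method above
could look like the following (tab-separated; the header line is skipped,
the values are purely illustrative):

```
lower_offset	upper_offset
400	425
425	450
```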
Alternatively, a set of split files (n+1 mzML files) can be provided, each
containing one SWATH map (or MS1 map).
Since the file size can become rather large, it is recommended to not load the
whole file into memory but rather cache it somewhere on the disk using a
fast-access data format. This can be specified using the -readOptions cacheWorkingInMemory
parameter (this is recommended!).
The assay library (transition list) is provided through the @p -tr parameter and can be in one of the following formats:
<ul>
<li> @ref OpenMS::TraMLFile "TraML" </li>
<li> @ref OpenMS::TransitionTSVFile "OpenSWATH TSV transition lists" </li>
<li> @ref OpenMS::TransitionPQPFile "OpenSWATH PQP SQLite files" </li>
<li> SpectraST MRM transition lists </li>
<li> Skyline transition lists </li>
<li> Spectronaut transition lists </li>
</ul>
<h3>Parameters</h3>
The current parameters are optimized for 2 hour gradients on SCIEX 5600 /
6600 TripleTOF instruments with a peak width of around 30 seconds using iRT
peptides. If your chromatography differs, please consider adjusting
@p -Scoring:TransitionGroupPicker:min_peak_width to allow for smaller or larger
peaks and adjust the @p -rt_extraction_window to use a different extraction
window for the retention time. In m/z domain, consider adjusting
@p -mz_extraction_window to your instrument resolution, which can be in Th or
ppm.
Furthermore, if you wish to use MS1 information, use the @p -enable_ms1 flag
and provide an MS1 map in addition to the SWATH data.
If you encounter issues with peak picking, try to disable peak filtering by
setting @p -Scoring:TransitionGroupPicker:compute_peak_quality false which will
disable the filtering of peaks by chromatographic quality. Furthermore, you
can adjust the smoothing parameters for the peak picking, by adjusting
@p -Scoring:TransitionGroupPicker:PeakPickerChromatogram:sgolay_frame_length or using a
Gaussian smoothing based on your estimated peak width. Adjusting the signal
to noise threshold will make the peaks wider or smaller.
<h3>Output: Feature list and chromatograms </h3>
The output of the OpenSwathWorkflow is a feature list, either as FeatureXML
or as an @ref OpenMS::OSWFile "OpenSWATH SQLite file" (use @p -out_features); the latter is more memory
friendly and can be used directly as input to other tools such as pyProphet, a Python
re-implementation of the mProphet software tool (see Reiter et al., Nature
Methods, 2011).
If you analyze large datasets or perform downstream analysis (e.g. using pyProphet), the @ref OpenMS::OSWFile "OSWFile format" is recommended.
In addition, the extracted chromatograms can be written out using the
@p -out_chrom parameter.
<h4> Feature list output format </h4>
For more information on the feature tables in the @ref OpenMS::OSWFile "OpenSWATH SQLite file output", see @ref OpenMS::OpenSwathOSWWriter "the OpenSwathOSWWriter class".
<h3>Execution flow:</h3>
The overall execution flow for this tool is implemented in @ref OpenMS::OpenSwathWorkflow "OpenSwathWorkflow".
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathWorkflow.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathWorkflow.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathWorkflow
: public TOPPOpenSwathBase
{
public:
TOPPOpenSwathWorkflow()
: TOPPOpenSwathBase("OpenSwathWorkflow", "Complete workflow to run OpenSWATH", true,
{
{"Roest, H.L. et al.",
"OpenSWATH enables automated, targeted analysis of data-independent acquisition MS data",
"Nature Biotechnology volume 32, pages 219–223 (2014)",
"https://doi.org/10.1038/nbt.2841"},
{"Rosenberger, G. et al.",
"Inference and quantification of peptidoforms in large sample cohorts by SWATH-MS",
"Nature Biotechnology volume 35, pages 781–788 (2017)",
"https://doi.org/10.1038/nbt.3908"},
{"Meier, F. et al.",
"diaPASEF: parallel accumulation–serial fragmentation combined with data-independent acquisition",
"Nature Methods volume 17, pages 1229–1236 (2020)",
"https://doi.org/10.1038/s41592-020-00998-0"}
})
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "Input files separated by blank");
setValidFormats_("in", ListUtils::create<String>("mzML,mzXML,sqMass"));
registerInputFile_("tr", "<file>", "", "transition file ('TraML','tsv','pqp')");
setValidFormats_("tr", ListUtils::create<String>("traML,tsv,pqp"));
registerStringOption_("tr_type", "<type>", "", "input file type -- default: determined from file extension or content\n", false);
setValidStrings_("tr_type", ListUtils::create<String>("traML,tsv,pqp"));
// iRT calibration
registerStringOption_("auto_irt", "<true|false>", "true",
"Whether to sample iRTs on-the-fly (true) from the input targeted transition file (instead of passing specific iRT files). This may be useful if standard iRTs (Biognosys iRT kit) were not spiked in. If set to false, and no additional iRT files are provided via `-tr_irt` / `-tr_irt_nonlinear`, and no transformation is provided via `-rt_norm`, then no calibration is performed.", false, true);
setValidStrings_("auto_irt", ListUtils::create<String>("true,false"));
registerInputFile_("swath_windows_file", "<file>", "", "Optional, tab-separated file containing the SWATH windows for extraction: lower_offset upper_offset. Note that the first line is a header and will be skipped.", false);
registerFlag_("sort_swath_maps", "Sort input SWATH files when matching to SWATH windows from swath_windows_file", true);
registerStringOption_("enable_ms1", "<true|false>", "true", "Extract the precursor ion trace(s) and use for scoring if present", false, true);
setValidStrings_("enable_ms1", ListUtils::create<String>("true,false"));
registerStringOption_("enable_ipf", "<true|false>", "true", "Enable additional scoring of identification assays using IPF (see online documentation)", false, true);
setValidStrings_("enable_ipf", ListUtils::create<String>("true,false"));
registerOutputFile_("out_features", "<file>", "", "feature output file, either .osw (PyProphet-compatible SQLite file) or .featureXML", false);
setValidFormats_("out_features", ListUtils::create<String>("osw,featureXML"));
registerStringOption_("out_features_type", "<type>", "", "output file type -- default: determined from file extension or content\n", false);
setValidStrings_("out_features_type", {"osw","featureXML"});
registerOutputFile_("out_chrom", "<file>", "", "Also output all computed chromatograms output in mzML (chrom.mzML) or sqMass (SQLite format)", false, true);
setValidFormats_("out_chrom", ListUtils::create<String>("mzML,sqMass"));
// additional QC data
registerOutputFile_("out_qc", "<file>", "", "Optional QC meta data (charge distribution in MS1). Only works with mzML input files.", false, true);
setValidFormats_("out_qc", ListUtils::create<String>("json"));
// misc options
registerDoubleOption_("min_upper_edge_dist", "<double>", 0.0, "Minimal distance to the upper edge of a Swath window to still consider a precursor, in Thomson", false, true);
registerFlag_("pasef", "data is PASEF data");
// RT, mz and IM windows
registerStringOption_("estimate_extraction_windows", "<all|none|rt[,mz][,im]>", "all", "Choose which extraction windows to estimate during iRT calibration. 'all' = estimate RT, m/z, and IM windows; 'none' = use user-set windows; or a comma-separated list from {rt,mz,im}.", false);
registerDoubleOption_("rt_estimation_padding_factor", "<double>", 1.3, "A padding factor to multiply the estimated RT window by. For example, a factor of 1.3 will add a 30% padding to the estimated RT window, so if the estimated RT window is 144, then 43 will be added for a total estimated RT window of 187 seconds. A factor of 1.0 will not add any padding to the estimated window.", false);
setMinFloat_("rt_estimation_padding_factor", 1.0);
registerDoubleOption_("im_estimation_padding_factor", "<double>", 1.0, "A padding factor to multiply the estimated ion_mobility window by. For example, a factor of 1.3 will add a 30% padding to the estimated ion_mobility window, so if the estimated ion_mobility window is 0.03, then 0.009 will be added for a total estimated ion_mobility window of 0.039. A factor of 1.0 will not add any padding to the estimated window.", false);
setMinFloat_("im_estimation_padding_factor", 1.0);
registerDoubleOption_("mz_estimation_padding_factor", "<double>", 1.0, "A padding factor to multiply the estimated m/z window by. For example, a factor of 1.3 will add a 30% padding to the estimated m/z window, so if the estimated m/z window is 18, then 5.4 will be added for a total estimated m/z window of 23.4. A factor of 1.0 will not add any padding to the estimated window.", false);
setMinFloat_("mz_estimation_padding_factor", 1.0);
registerDoubleOption_("rt_extraction_window", "<double>", 600.0, "Only extract RT around this value (-1 means extract over the whole range, a value of 600 means to extract around +/- 300 s of the expected elution).", false);
registerDoubleOption_("extra_rt_extraction_window", "<double>", 0.0, "Output an XIC with a RT-window by this much larger (e.g. to visually inspect a larger area of the chromatogram)", false, true);
setMinFloat_("extra_rt_extraction_window", 0.0);
registerDoubleOption_("ion_mobility_window", "<double>", -1, "Extraction window in ion mobility dimension (in 1/k0 or milliseconds depending on library). This is the full window size, e.g. a value of 10 milliseconds would extract 5 milliseconds on either side. -1 means extract over the whole range or ion mobility is not present. (Default for diaPASEF data: 0.06 1/k0)", false);
registerDoubleOption_("mz_extraction_window", "<double>", 50, "Extraction window in Thomson or ppm (see mz_extraction_window_unit)", false);
setMinFloat_("mz_extraction_window", 0.0);
registerStringOption_("mz_extraction_window_unit", "<name>", "ppm", "Unit for mz extraction", false, true);
setValidStrings_("mz_extraction_window_unit", ListUtils::create<String>("Th,ppm"));
// MS1 mz windows and ion mobility
registerDoubleOption_("mz_extraction_window_ms1", "<double>", 50, "Extraction window used in MS1 in Thomson or ppm (see mz_extraction_window_ms1_unit)", false);
setMinFloat_("mz_extraction_window_ms1", 0.0);
registerStringOption_("mz_extraction_window_ms1_unit", "<name>", "ppm", "Unit of the MS1 m/z extraction window", false, true);
setValidStrings_("mz_extraction_window_ms1_unit", ListUtils::create<String>("ppm,Th"));
registerDoubleOption_("im_extraction_window_ms1", "<double>", -1, "Extraction window in ion mobility dimension for MS1 (in 1/k0 or milliseconds depending on library). -1 means this is not ion mobility data.", false);
registerStringOption_("use_ms1_ion_mobility", "<name>", "true", "Also perform precursor extraction using the same ion mobility window as for fragment ion extraction", false, true);
setValidStrings_("use_ms1_ion_mobility", ListUtils::create<String>("true,false"));
registerStringOption_("matching_window_only", "<name>", "false", "Assume the input data is targeted / PRM-like data with potentially overlapping DIA windows. Will only attempt to extract each assay from the *best* matching DIA window (instead of all matching windows).", false, true);
setValidStrings_("matching_window_only", ListUtils::create<String>("true,false"));
// iRT mz and IM windows
registerDoubleOption_("irt_mz_extraction_window", "<double>", 50, "Extraction window used for iRT and m/z correction in Thomson or ppm (see irt_mz_extraction_window_unit)", false, true);
setMinFloat_("irt_mz_extraction_window", 0.0);
registerStringOption_("irt_mz_extraction_window_unit", "<name>", "ppm", "Unit for mz extraction", false, true);
setValidStrings_("irt_mz_extraction_window_unit", ListUtils::create<String>("Th,ppm"));
registerDoubleOption_("irt_im_extraction_window", "<double>", -1, "Ion mobility extraction window used for iRT (in 1/K0 or milliseconds depending on library). -1 means do not perform ion mobility calibration", false, true);
registerDoubleOption_("irt_nonlinear_rt_extraction_window", "<double>", 600.0, "Only extract RT around this value for non linear iRT calibration (-1 means extract over the whole range, a value of 600 means to extract around +/- 300 s of the expected elution).", false, true);
setMinFloat_("irt_nonlinear_rt_extraction_window", -1.0); // means extract over the whole range
registerDoubleOption_("min_rsq", "<double>", 0.95, "Minimum r-squared of RT peptides regression", false, true);
registerDoubleOption_("min_coverage", "<double>", 0.6, "Minimum relative amount of RT peptides to keep", false, true);
registerFlag_("split_file_input", "The input files each contain one single SWATH (alternatively: all SWATH are in separate files)", true);
registerFlag_("use_elution_model_score", "Turn on elution model score (EMG fit to peak)", true);
registerStringOption_("readOptions", "<name>", "normal", "Whether to run OpenSWATH directly on the input data, cache data to disk first or perform a data reduction step first. If you choose cache, make sure to also set tempDirectory", false, true);
setValidStrings_("readOptions", ListUtils::create<String>("normal,cache,cacheWorkingInMemory,workingInMemory"));
registerStringOption_("mz_correction_function", "<name>", "none", "Use the retention time normalization peptide MS2 masses to perform a mass correction (linear, weighted by intensity linear or quadratic) of all spectra.", false, true);
setValidStrings_("mz_correction_function", ListUtils::create<String>("none,regression_delta_ppm,unweighted_regression,weighted_regression,quadratic_regression,weighted_quadratic_regression,weighted_quadratic_regression_delta_ppm,quadratic_regression_delta_ppm"));
registerStringOption_("tempDirectory", "<tmp>", File::getTempDirectory(), "Temporary directory to store cached files for example", false, true);
registerStringOption_("extraction_function", "<name>", "tophat", "Function used to extract the signal", false, true);
setValidStrings_("extraction_function", ListUtils::create<String>("tophat,bartlett"));
registerIntOption_("batchSize", "<number>", 1000, "The batch size of chromatograms to process (0 means to only have one batch, sensible values are around 250-1000)", false, true);
setMinInt_("batchSize", 0);
registerIntOption_("outer_loop_threads", "<number>", -1, "How many threads should be used for the outer loop (-1 use all threads, use 4 to analyze 4 SWATH windows in memory at once).", false, true);
registerIntOption_("ms1_isotopes", "<number>", 3, "The number of MS1 isotopes used for extraction", false, true);
setMinInt_("ms1_isotopes", 0);
registerSubsection_("Scoring", "Scoring parameters section");
registerSubsection_("Library", "Library parameters section");
registerSubsection_("Calibration", "Parameters for calibrant iRT peptides for RT normalization and mass / ion mobility correction.");
registerSubsection_("Calibration:RTNormalization", "Parameters for the RTNormalization for iRT peptides. This specifies how the RT alignment is performed and how outlier detection is applied. Outlier detection can be done iteratively (by default) which removes one outlier per iteration or using the RANSAC algorithm.");
registerSubsection_("Calibration:MassIMCorrection", "Parameters for the m/z and ion mobility calibration.");
registerTOPPSubsection_("Debugging", "Debugging");
registerOutputFile_("Debugging:irt_mzml", "<file>", "", "Chromatogram mzML containing the iRT peptides", false);
setValidFormats_("Debugging:irt_mzml", ListUtils::create<String>("mzML"));
registerOutputFile_("Debugging:irt_trafo", "<file>", "", "Transformation file for RT transform", false);
setValidFormats_("Debugging:irt_trafo", ListUtils::create<String>("trafoXML"));
}
Param getSubsectionDefaults_(const String& name) const override
{
if (name == "Scoring")
{
// set sensible default parameters
Param feature_finder_param = MRMFeatureFinderScoring().getDefaults();
feature_finder_param.remove("rt_extraction_window");
feature_finder_param.setValue("stop_report_after_feature", 5);
feature_finder_param.setValue("rt_normalization_factor", 100.0); // for iRT peptides between 0 and 100 (more or less)
feature_finder_param.setValue("Scores:use_ms1_mi", "true");
feature_finder_param.setValue("Scores:use_mi_score", "true");
feature_finder_param.setValue("TransitionGroupPicker:min_peak_width", -1.0);
feature_finder_param.setValue("TransitionGroupPicker:recalculate_peaks", "true");
feature_finder_param.setValue("TransitionGroupPicker:compute_peak_quality", "false");
feature_finder_param.setValue("TransitionGroupPicker:minimal_quality", -1.5);
feature_finder_param.setValue("TransitionGroupPicker:background_subtraction", "none");
feature_finder_param.setValue("TransitionGroupPicker:compute_peak_shape_metrics", "false");
feature_finder_param.remove("TransitionGroupPicker:stop_after_intensity_ratio");
// Peak Picker
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:use_gauss", "false");
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:sgolay_polynomial_order", 3);
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:sgolay_frame_length", 11);
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:peak_width", -1.0);
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:remove_overlapping_peaks", "true");
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:write_sn_log_messages", "false"); // no log messages
// TODO it seems that the legacy method produces slightly larger peaks, e.g. it will not cut off peaks too early
// however the same can be achieved by using a relatively low SN cutoff in the -Scoring:TransitionGroupPicker:PeakPickerChromatogram:signal_to_noise 0.5
feature_finder_param.setValue("TransitionGroupPicker:recalculate_peaks_max_z", 0.75);
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:method", "corrected");
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:signal_to_noise", 0.1);
feature_finder_param.setValue("TransitionGroupPicker:PeakPickerChromatogram:gauss_width", 30.0);
feature_finder_param.setValue("uis_threshold_sn", -1);
feature_finder_param.setValue("uis_threshold_peak_area", 0);
feature_finder_param.remove("TransitionGroupPicker:PeakPickerChromatogram:sn_win_len");
feature_finder_param.remove("TransitionGroupPicker:PeakPickerChromatogram:sn_bin_count");
feature_finder_param.remove("TransitionGroupPicker:PeakPickerChromatogram:stop_after_feature");
// EMG Scoring - turn off by default since it is very CPU-intensive
feature_finder_param.remove("Scores:use_elution_model_score");
feature_finder_param.setValue("EMGScoring:max_iteration", 10);
feature_finder_param.remove("EMGScoring:interpolation_step");
feature_finder_param.remove("EMGScoring:tolerance_stdev_bounding_box");
feature_finder_param.remove("EMGScoring:deltaAbsError");
// remove these parameters
feature_finder_param.remove("EMGScoring:statistics:mean");
feature_finder_param.remove("EMGScoring:statistics:variance");
return feature_finder_param;
}
else if (name == "Library")
{
return TransitionTSVFile().getDefaults();
}
else if (name == "Calibration")
{
Param p;
p.setValue("irt_bins", 100, "Number of RT bins for sampling. (When `auto_irt` is set to 'true')");
p.setMinInt("irt_bins", 5);
p.setValue("irt_peptides_per_bin", 5, "Peptides sampled per bin. (When `auto_irt` is set to 'true')");
p.setMinInt("irt_peptides_per_bin", 1);
p.setValue("irt_seed", 5489, "RNG seed (0 = non-deterministic). (When `auto_irt` is set to 'true')");
p.setMinInt("irt_seed", 0);
p.setValue("irt_bins_nonlinear", 2000, "Number of RT bins for sampling for additional nonlinear calibration. (When `auto_irt` is set to 'true')");
p.setMinInt("irt_bins_nonlinear", 5);
p.setValue("irt_peptides_per_bin_nonlinear", 50, "Peptides sampled per bin for additional nonlinear calibration. If 0, nonlinear calibration will not be performed. (When `auto_irt` is set to 'true')");
p.setMinInt("irt_peptides_per_bin_nonlinear", 0);
// one of the following two needs to be set
p.setValue("tr_irt", "", "transition file ('TraML') for linear iRTs. Takes precedence even when `auto_irt` is set to 'true'");
p.setValue("tr_irt_nonlinear", "", "additional nonlinear transition file ('TraML'). Takes precedence even when `auto_irt` is set to 'true'");
// priority peptides for sampling
p.setValue("tr_irt_priority_sampling", "", "Optional custom transition file (TSV format only) containing additional priority peptides for iRT sampling. These peptides will be prioritized alongside the built-in irtkit and cirtkit peptides when `auto_irt` is enabled. Useful for including project-specific or custom iRT peptides.");
p.setValue("rt_norm", "", "RT normalization file (how to map the RTs of this run to the ones stored in the library). If set, tr_irt may be omitted.");
return p;
}
else if (name == "Calibration:RTNormalization")
{
Param p;
p.setValue("alignmentMethod", "linear", "How to perform the alignment to the normalized RT space using anchor points. 'linear': perform linear regression (for few anchor points). 'interpolated': interpolate between anchor points (for few, noise-free anchor points). 'lowess': use local regression (for many, noisy anchor points). 'b_spline': use b-splines for smoothing.");
p.setValidStrings("alignmentMethod", {"linear","interpolated","lowess","b_spline"});
p.setValue("lowess:auto_span", "true", "If true, or if 'span' is 0, automatically select LOWESS span by cross-validation.");
p.setValidStrings("lowess:auto_span", {"true","false"});
p.setValue("lowess:span", 0.05, "Span parameter for lowess");
p.setMinFloat("lowess:span", 0.0);
p.setMaxFloat("lowess:span", 1.0);
p.setValue("lowess:auto_span_min", 0.15,"Lower bound for auto-selected span.");
p.setMinFloat("lowess:auto_span_min", 0.001);
p.setValue("lowess:auto_span_max", 0.80,"Upper bound for auto-selected span.");
p.setMaxFloat("lowess:auto_span_max", 0.99);
p.setValue("lowess:auto_span_grid", "0.005,0.01,0.05,0.15,0.25,0.30,0.50,0.70,0.90", "Optional explicit grid of span candidates in (0,1]. Comma-separated list, e.g. '0.2,0.3,0.5'. If empty, a default grid is used.");
p.setValue("b_spline:num_nodes", 5, "Number of nodes for b spline");
p.setMinInt("b_spline:num_nodes", 0);
p.setValue("outlierMethod", "iter_residual", "Which outlier detection method to use (valid: 'iter_residual', 'iter_jackknife', 'ransac', 'none'). Iterative methods remove one outlier at a time. Jackknife approach optimizes for maximum r-squared improvement while 'iter_residual' removes the datapoint with the largest residual error (removal by residual is computationally cheaper, use this with lots of peptides).");
p.setValidStrings("outlierMethod", {"iter_residual","iter_jackknife","ransac","none"});
p.setValue("useIterativeChauvenet", "false", "Whether to use Chauvenet's criterion when using iterative methods. This should be used if the algorithm removes too many datapoints but it may lead to true outliers being retained.");
p.setValidStrings("useIterativeChauvenet", {"true","false"});
p.setValue("RANSACMaxIterations", 1000, "Maximum iterations for the RANSAC outlier detection algorithm.");
p.setValue("RANSACMaxPercentRTThreshold", 3, "Maximum threshold in RT dimension for the RANSAC outlier detection algorithm (in percent of the total gradient). Default is set to 3% which is around +/- 4 minutes on a 120 minute gradient.");
p.setValue("RANSACSamplingSize", 10, "Sampling size of data points per iteration for the RANSAC outlier detection algorithm.");
p.setValue("estimateBestPeptides", "false", "Whether the algorithms should try to choose the best peptides based on their peak shape for normalization. Use this option if you do not expect all your peptides to be detected in a sample and too many 'bad' peptides enter the outlier removal step (e.g. due to them being endogenous peptides or using a less curated list of peptides).");
p.setValidStrings("estimateBestPeptides", {"true","false"});
p.setValue("InitialQualityCutoff", 0.5, "The initial overall quality cutoff for a peak to be scored (range ca. -2 to 2)");
p.setValue("OverallQualityCutoff", 5.5, "The overall quality cutoff for a peak to go into the retention time estimation (range ca. 0 to 10)");
p.setValue("NrRTBins", 10, "Number of RT bins to use to compute coverage. This option should be used to ensure that there is a complete coverage of the RT space (this should detect cases where only a part of the RT gradient is actually covered by normalization peptides)");
p.setValue("MinPeptidesPerBin", 1, "Minimal number of peptides that are required for a bin to be counted as 'covered'");
p.setValue("MinBinsFilled", 8, "Minimal number of bins required to be covered");
return p;
}
else if (name == "Calibration:MassIMCorrection")
{
Param p = SwathMapMassCorrection().getDefaults();
return p;
}
else
{
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Unknown subsection", name);
}
}
/**
@brief Selection flags for using auto-estimated extraction windows.
This POD struct indicates for which coordinates (RT, m/z, ion mobility)
the automatically estimated extraction windows should be applied.
*/
struct EstimateWindowsChoice
{
bool rt{false};
bool mz{false};
bool im{false};
};
/**
@brief Parse the user option for selecting estimated extraction windows.
Interprets the option value as one of: \n
- @c "all" → enable all: RT, m/z, and ion mobility \n
- @c "none" → enable none (default; keeps user-specified fixed windows) \n
- a comma-separated list drawn from @c {"rt","mz","im"}, e.g. @c "rt,mz" \n
Parsing is case-insensitive and tolerant of surrounding whitespace.
Unknown tokens or an empty/malformed value raise an exception.
@param[in] estimate_windows_option_str The option string (e.g. "all", "none", "rt,mz").
@return An @c EstimateWindowsChoice with the requested flags set.
@throws Exception::InvalidParameter
If the string is empty/malformed or contains unknown tokens.
*/
EstimateWindowsChoice parseEstimateExtractionWindows_(String estimate_windows_option_str)
{
EstimateWindowsChoice out;
const String s = estimate_windows_option_str.trim().toLower();
if (s == "all")
{
out.rt = out.mz = out.im = true;
return out;
}
if (s == "none")
{
return out; // all false, don't use estimated extraction windows
}
StringList toks;
s.split(',', toks);
if (toks.empty())
{
throw OpenMS::Exception::InvalidParameter(
__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"estimate_extraction_windows: value is empty or malformed (expected all|none|rt[,mz][,im])");
}
for (String t : toks)
{
t.trim();
t.toLower();
if (t == "rt") { out.rt = true; }
else if (t == "mz") { out.mz = true; }
else if (t == "im") { out.im = true; }
else if (!t.empty())
{
throw OpenMS::Exception::InvalidParameter(
__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"estimate_extraction_windows: unknown token '" + t +
"'. Allowed: all, none, or a comma-separated value from {rt,mz,im}.");
}
}
return out;
}
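The parsing above can be exercised in isolation; here is a minimal std-only sketch (using `std::string` instead of `OpenMS::String`, with hypothetical `normalize`/`parseChoice` helpers) of the same split-trim-lowercase token handling:

```cpp
#include <algorithm>
#include <cctype>
#include <sstream>
#include <stdexcept>
#include <string>

struct Choice { bool rt = false, mz = false, im = false; };

// Lowercase a token and strip surrounding whitespace.
static std::string normalize(std::string t)
{
  auto not_space = [](unsigned char c) { return !std::isspace(c); };
  t.erase(t.begin(), std::find_if(t.begin(), t.end(), not_space));
  t.erase(std::find_if(t.rbegin(), t.rend(), not_space).base(), t.end());
  std::transform(t.begin(), t.end(), t.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  return t;
}

// Parse "all" | "none" | a comma-separated subset of {rt,mz,im}.
static Choice parseChoice(const std::string& value)
{
  Choice out;
  const std::string s = normalize(value);
  if (s == "all") { out.rt = out.mz = out.im = true; return out; }
  if (s == "none") { return out; }
  std::istringstream iss(s);
  std::string tok;
  while (std::getline(iss, tok, ','))
  {
    tok = normalize(tok);
    if (tok == "rt") { out.rt = true; }
    else if (tok == "mz") { out.mz = true; }
    else if (tok == "im") { out.im = true; }
    else if (!tok.empty()) { throw std::invalid_argument("unknown token: " + tok); }
  }
  return out;
}
```

Throwing on unknown tokens (rather than silently ignoring them), as `parseEstimateExtractionWindows_` does, surfaces typos such as a semicolon-separated list early.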
/**
@brief Validate an auto-estimated extraction window.
A window is considered valid if it is finite and strictly greater than a
small positive threshold. This guards against denormals (e.g., ~1e-310),
zeros, negative values, and NaNs/Inf.
Typical units:
- RT window: seconds
- m/z window: ppm
- IM window: native instrument units
@param[in] v The candidate window value.
@param[in] min_positive Minimum strictly-positive threshold (default: 1e-9).
Estimates <= this threshold are deemed invalid.
@return True if the window is usable; false otherwise.
*/
inline bool is_valid_win(double v, double min_positive = 1e-9) noexcept
{
return std::isfinite(v) && (v > min_positive);
}
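A few spot checks make the guard's intent concrete; the same logic as `is_valid_win`, restated std-only so it can be tested in isolation:

```cpp
#include <cmath>
#include <limits>

// Same guard as is_valid_win: finite and strictly above a small epsilon,
// so zeros, negatives, denormals (~1e-310), NaN, and Inf are all rejected.
static bool validWindow(double v, double min_positive = 1e-9)
{
  return std::isfinite(v) && (v > min_positive);
}
```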
/**
@brief Validate, log, and optionally apply an auto-estimated extraction window.
Behavior:
- If @p applicable is false (e.g., no IM data), logs an INFO and leaves @p dst_param unchanged.
- If the estimate is invalid, logs a WARN and leaves @p dst_param unchanged.
- If the estimate is valid and @p commit is true, logs an INFO and assigns @p dst_param = @p estimate.
- If the estimate is valid and @p commit is false, logs an INFO that reports the estimate and that the user value is kept.
Typical usage:
- RT window (seconds)
- MS2 m/z window (ppm)
- MS1 m/z window (ppm)
- IM window (1/k0), only when applicable (e.g., PASEF/IM data)
@param[in] label Human-readable label used in logs (e.g., "RT", "MS2 m/z (ppm)", "MS1 ion mobility (1/k0)").
@param[in] estimate Auto-estimated window value to consider.
@param[out] dst_param Destination parameter to update on success (by reference).
@param[in] user_value The current/user-specified value (reported in logs).
@param[in] applicable Whether this window is applicable for the current run/config.
If false, the value is not applied and a note is logged.
@param[in] commit Whether to apply the estimate. If false, only logs the estimate vs. user value.
Default: true (backwards compatible).
*/
void apply_window(const char* label,
double estimate,
double& dst_param,
const double user_value,
bool applicable = true,
bool commit = true)
{
if (!applicable)
{
OPENMS_LOG_INFO << "[Estimated] " << label
<< " window: not applicable; keeping user value "
<< user_value << std::endl;
return;
}
if (!is_valid_win(estimate))
{
OPENMS_LOG_WARN << "[Estimated] " << label
<< " window estimate invalid (estimated=" << estimate
<< "); keeping user value " << user_value << std::endl;
return;
}
if (commit)
{
OPENMS_LOG_INFO << "[Estimated] " << label
<< " window applied: " << estimate
<< " (was " << user_value << ")" << std::endl;
dst_param = estimate;
}
else
{
OPENMS_LOG_INFO << "[Estimated] " << label
<< " window estimated: " << estimate
<< "; keeping user value " << user_value << std::endl;
// leave dst_param unchanged
}
}
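The decision table of `apply_window` (applicable / valid / commit) can be captured as a pure function; this is a simplified sketch with a hypothetical name, omitting the logging:

```cpp
#include <cmath>

// Resolve the value a parameter ends up with, using the same precedence as
// apply_window: not applicable -> keep user; invalid estimate -> keep user;
// valid + commit -> take estimate; valid + no commit -> keep user.
static double resolveWindow(double estimate, double user_value,
                            bool applicable, bool commit,
                            double min_positive = 1e-9)
{
  const bool valid = std::isfinite(estimate) && (estimate > min_positive);
  if (applicable && valid && commit) { return estimate; }
  return user_value;
}
```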
/**
@brief Load priority peptide sequences from TSV files (irtkit and cirtkit)
Loads peptide sequences from the specified TSV files and returns them as a
set for quick lookup. Used to prioritize common iRT peptides during sampling.
@param[in] tsv_files Vector of file paths to TSV files to load
@param[in] tsv_reader_param Parameters for the TSV reader
@return Set of unique peptide sequences from the loaded files
*/
std::unordered_set<std::string> loadPriorityPeptideSequences(
const std::vector<String>& tsv_files,
const Param& tsv_reader_param)
{
std::unordered_set<std::string> priority_sequences;
for (const auto& tsv_file : tsv_files)
{
if (tsv_file.empty() || !File::exists(tsv_file))
{
OPENMS_LOG_WARN << "Priority peptide file not found: " << tsv_file << std::endl;
continue;
}
try
{
FileTypes::Type file_type = FileHandler::getType(tsv_file);
OpenSwath::LightTargetedExperiment priority_exp = loadTransitionList(file_type, tsv_file, tsv_reader_param);
for (const auto& compound : priority_exp.getCompounds())
{
if (!compound.sequence.empty())
{
priority_sequences.insert(compound.sequence);
}
}
OPENMS_LOG_INFO << "Loaded " << priority_exp.getCompounds().size()
<< " compounds from priority file: " << tsv_file << std::endl;
}
catch (const Exception::BaseException& e)
{
OPENMS_LOG_WARN << "Failed to load priority peptide file " << tsv_file
<< ": " << e.what() << std::endl;
}
}
OPENMS_LOG_INFO << "Total unique priority peptide sequences: "
<< priority_sequences.size() << std::endl;
return priority_sequences;
}
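The deduplication behavior of the loader reduces to merging per-file sequence lists into one set; a std-only sketch (hypothetical `collectSequences`, operating on pre-parsed lists instead of TSV files):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Merge sequences from several sources into one deduplicated lookup set,
// skipping empty entries, as loadPriorityPeptideSequences does per file.
static std::unordered_set<std::string>
collectSequences(const std::vector<std::vector<std::string>>& sources)
{
  std::unordered_set<std::string> out;
  for (const auto& src : sources)
  {
    for (const auto& seq : src)
    {
      if (!seq.empty()) { out.insert(seq); }
    }
  }
  return out;
}
```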
ExitCodes main_(int, const char **) override
{
///////////////////////////////////
// Prepare Parameters
///////////////////////////////////
StringList file_list = getStringList_("in");
String tr_file = getStringOption_("tr");
String out_features = getStringOption_("out_features");
//tr_file input file type
FileTypes::Type tr_type = FileTypes::nameToType(getStringOption_("tr_type"));
if (tr_type == FileTypes::UNKNOWN)
{
tr_type = FileHandler::getType(tr_file);
writeDebug_(String("Input file type (-tr): ") + FileTypes::typeToName(tr_type), 2);
}
if (tr_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine input file type for '-tr' !");
return PARSE_ERROR;
}
// out_features output file type
FileTypes::Type out_features_type = FileTypes::nameToType(getStringOption_("out_features_type"));
if (out_features_type == FileTypes::UNKNOWN)
{
out_features_type = FileHandler::getType(out_features);
writeDebug_(String("Output file type (-out_features): ") + FileTypes::typeToName(out_features_type), 2);
}
if (out_features_type == FileTypes::UNKNOWN)
{
writeLogError_("Error: Could not determine output file type for '-out_features' !");
return PARSE_ERROR;
}
}
String out_qc = getStringOption_("out_qc");
bool auto_irt = (getStringOption_("auto_irt") == "true");
Param irt_calibration_params = getParam_().copy("Calibration:", true);
UInt irt_seed = irt_calibration_params.getValue("irt_seed");
UInt irt_bins_lin = irt_calibration_params.getValue("irt_bins");
UInt irt_pep_lin = irt_calibration_params.getValue("irt_peptides_per_bin");
UInt irt_bins_nl = irt_calibration_params.getValue("irt_bins_nonlinear");
UInt irt_pep_nl = irt_calibration_params.getValue("irt_peptides_per_bin_nonlinear");
String irt_tr_file = irt_calibration_params.getValue("tr_irt").toString();
String nonlinear_irt_tr_file = irt_calibration_params.getValue("tr_irt_nonlinear").toString();
String priority_sampling_irt_tr_file = irt_calibration_params.getValue("tr_irt_priority_sampling").toString();
String trafo_in = irt_calibration_params.getValue("rt_norm").toString();
String swath_windows_file = getStringOption_("swath_windows_file");
String out_chrom = getStringOption_("out_chrom");
bool split_file = getFlag_("split_file_input");
bool use_emg_score = getFlag_("use_elution_model_score");
bool force = getFlag_("force");
bool pasef = getFlag_("pasef");
bool sort_swath_maps = getFlag_("sort_swath_maps");
bool use_ms1_traces = getStringOption_("enable_ms1") == "true";
bool enable_uis_scoring = getStringOption_("enable_ipf") == "true";
int batchSize = (int)getIntOption_("batchSize");
int outer_loop_threads = (int)getIntOption_("outer_loop_threads");
int ms1_isotopes = (int)getIntOption_("ms1_isotopes");
Size debug_level = (Size)getIntOption_("debug");
double min_rsq = getDoubleOption_("min_rsq");
double min_coverage = getDoubleOption_("min_coverage");
Param debug_params = getParam_().copy("Debugging:", true);
String readoptions = getStringOption_("readOptions");
String mz_correction_function = getStringOption_("mz_correction_function");
// make sure tmp is a directory with proper separator at the end (downstream methods simply do path + filename)
// (do not use QDir::separator(), since it's platform-specific (/ or \) while absolutePath() will always use '/')
String tmp_dir = String(QDir(getStringOption_("tempDirectory").c_str()).absolutePath()).ensureLastChar('/');
///////////////////////////////////
// Parameter validation
///////////////////////////////////
bool load_into_memory = false;
if (readoptions == "cacheWorkingInMemory")
{
readoptions = "cache";
load_into_memory = true;
}
else if (readoptions == "workingInMemory")
{
readoptions = "normal";
load_into_memory = true;
}
bool is_sqmass_input = (FileHandler::getTypeByFileName(file_list[0]) == FileTypes::SQMASS);
if (is_sqmass_input && !load_into_memory)
{
std::cout << "When using sqMass input files, it is highly recommended to use the workingInMemory option as otherwise data access will be very slow." << std::endl;
}
if (trafo_in.empty() && irt_tr_file.empty() && !auto_irt)
{
std::cout << "Since neither rt_norm nor tr_irt nor auto_irt is set, OpenSWATH will " <<
"not use RT-transformation (rather a null transformation will be applied)" << std::endl;
}
// -----------------------------------------------------------------
// Validate auto_irt parameters
// -----------------------------------------------------------------
if (auto_irt)
{
// linear sampling must have at least one bin and one peptide per bin
if (irt_bins_lin == 0)
{
writeLogError_("Parameter error: --irt_bins must be > 0 when auto_irt is enabled.");
return PARSE_ERROR;
}
if (irt_pep_lin == 0)
{
writeLogError_("Parameter error: --irt_peptides_per_bin must be > 0 when auto_irt is enabled.");
return PARSE_ERROR;
}
}
// Validate priority iRT sampling file format if provided
if (!priority_sampling_irt_tr_file.empty())
{
if (!File::exists(priority_sampling_irt_tr_file))
{
writeLogError_("Parameter error: Priority iRT file does not exist: " + priority_sampling_irt_tr_file);
return PARSE_ERROR;
}
FileTypes::Type priority_file_type = FileHandler::getType(priority_sampling_irt_tr_file);
if (priority_file_type != FileTypes::TSV)
{
writeLogError_("Parameter error: Priority iRT file must be in TSV format. Provided: " +
FileTypes::typeToName(priority_file_type));
return PARSE_ERROR;
}
}
// Check swath window input
if (!swath_windows_file.empty())
{
OPENMS_LOG_INFO << "Validate provided Swath windows file:" << std::endl;
std::vector<double> swath_prec_lower;
std::vector<double> swath_prec_upper;
SwathWindowLoader::readSwathWindows(swath_windows_file, swath_prec_lower, swath_prec_upper);
for (Size i = 0; i < swath_prec_lower.size(); i++)
{
OPENMS_LOG_DEBUG << "Read lower swath window " << swath_prec_lower[i] << " and upper window " << swath_prec_upper[i] << std::endl;
}
}
double min_upper_edge_dist = getDoubleOption_("min_upper_edge_dist");
bool use_ms1_im = getStringOption_("use_ms1_ion_mobility") == "true";
bool prm = getStringOption_("matching_window_only") == "true";
EstimateWindowsChoice use_est_window_choices = parseEstimateExtractionWindows_(getStringOption_("estimate_extraction_windows"));
ChromExtractParams cp;
cp.min_upper_edge_dist = min_upper_edge_dist;
cp.mz_extraction_window = getDoubleOption_("mz_extraction_window");
cp.ppm = getStringOption_("mz_extraction_window_unit") == "ppm";
cp.rt_extraction_window = getDoubleOption_("rt_extraction_window");
cp.im_extraction_window = getDoubleOption_("ion_mobility_window");
cp.extraction_function = getStringOption_("extraction_function");
cp.extra_rt_extract = getDoubleOption_("extra_rt_extraction_window");
ChromExtractParams cp_irt = cp;
cp_irt.rt_extraction_window = -1; // extract the whole RT range for iRT measurements
cp_irt.mz_extraction_window = getDoubleOption_("irt_mz_extraction_window");
cp_irt.im_extraction_window = getDoubleOption_("irt_im_extraction_window");
if ((cp_irt.im_extraction_window == -1) && (cp.im_extraction_window != -1))
{
OPENMS_LOG_WARN << "Warning: -irt_im_extraction_window is not set, this will lead to no ion mobility calibration" << std::endl;
}
cp_irt.ppm = getStringOption_("irt_mz_extraction_window_unit") == "ppm";
ChromExtractParams cp_ms1 = cp;
cp_ms1.mz_extraction_window = getDoubleOption_("mz_extraction_window_ms1");
cp_ms1.ppm = getStringOption_("mz_extraction_window_ms1_unit") == "ppm";
cp_ms1.im_extraction_window = (use_ms1_im) ? getDoubleOption_("im_extraction_window_ms1") : -1;
Param feature_finder_param = getParam_().copy("Scoring:", true);
feature_finder_param.setValue("use_ms1_ion_mobility", getStringOption_("use_ms1_ion_mobility"));
Param tsv_reader_param = getParam_().copy("Library:", true);
if (use_emg_score)
{
feature_finder_param.setValue("Scores:use_elution_model_score", "true");
}
else
{
feature_finder_param.setValue("Scores:use_elution_model_score", "false");
}
if (use_ms1_traces)
{
feature_finder_param.setValue("Scores:use_ms1_correlation", "true");
feature_finder_param.setValue("Scores:use_ms1_fullscan", "true");
}
if (enable_uis_scoring)
{
feature_finder_param.setValue("Scores:use_uis_scores", "true");
}
bool compute_peak_shape_metrics = feature_finder_param.getValue("TransitionGroupPicker:compute_peak_shape_metrics").toBool();
if (compute_peak_shape_metrics)
{
feature_finder_param.setValue("Scores:use_peak_shape_metrics", "true");
}
///////////////////////////////////
// Load the transitions
///////////////////////////////////
OpenSwath::LightTargetedExperiment transition_exp = loadTransitionList(tr_type, tr_file, tsv_reader_param);
OPENMS_LOG_INFO << "Loaded " << transition_exp.getProteins().size() << " proteins, " <<
transition_exp.getCompounds().size() << " compounds with " << transition_exp.getTransitions().size() << " transitions." << std::endl;
if (out_features_type == FileTypes::OSW)
{
if (tr_type == FileTypes::PQP)
{
// copy the PQP file and name it OSW file
std::ifstream src(tr_file.c_str(), std::ios::binary);
std::ofstream dst(out_features.c_str(), std::ios::binary | std::ios::trunc);
dst << src.rdbuf();
}
else if (tr_type == FileTypes::TSV)
{
// Convert TSV to .PQP
TransitionTSVFile tsv_reader;
TargetedExperiment transition_exp_heavy;
tsv_reader.setParameters(tsv_reader_param);
tsv_reader.convertTSVToTargetedExperiment(tr_file.c_str(), tr_type, transition_exp_heavy);
TransitionPQPFile().convertTargetedExperimentToPQP(out_features.c_str(), transition_exp_heavy);
// instead of reloading - edit the already loaded transition_exp to be compatible with .pqp format
// read the PQP to traMLID mapping
auto precursor_traml_to_pqp = TransitionPQPFile().getPQPIDToTraMLIDMap(out_features.c_str(), "PRECURSOR");
auto transition_traml_to_pqp = TransitionPQPFile().getPQPIDToTraMLIDMap(out_features.c_str(), "TRANSITION");
// convert tramlID in transitionExp to PQP ID
for (auto & prec : transition_exp.getCompounds())
{
if (auto id = precursor_traml_to_pqp.find(prec.id); id != precursor_traml_to_pqp.end())
{
prec.id = id->second;
}
}
for (auto & tr : transition_exp.getTransitions())
{
// convert transition tramlID peptide reference in transitionExp to PQP ID
auto pep = precursor_traml_to_pqp.find(tr.getPeptideRef());
if (pep != precursor_traml_to_pqp.end())
{
tr.peptide_ref = pep->second;
}
// Update transition id
auto id = transition_traml_to_pqp.find(tr.transition_name);
if (id != transition_traml_to_pqp.end())
{
tr.transition_name = id->second;
}
}
}
else if (tr_type == FileTypes::TRAML)
{
// we already know out_features_type == FileTypes::OSW here
throw Exception::InvalidParameter(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, String("Conversion from TraML to OSW is not supported."));
}
}
// If pasef flag is set, validate that IM is present
if (pasef)
{
auto transitions = transition_exp.getTransitions();
for ( Size k=0; k < (Size)transitions.size(); k++ )
{
if (transitions[k].precursor_im == -1)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Error: Transition " + transitions[k].getNativeID() + " does not have a valid IM value, this must be set to use the -pasef flag");
}
}
}
///////////////////////////////////
// Load the SWATH files
///////////////////////////////////
std::shared_ptr<ExperimentalSettings> exp_meta(new ExperimentalSettings);
std::vector< OpenSwath::SwathMap > swath_maps;
// collect some QC data
if (!out_qc.empty())
{
OpenSwath::SwathQC qc(30, 0.04);
MSDataTransformingConsumer qc_consumer; // apply some transformation
qc_consumer.setSpectraProcessingFunc(qc.getSpectraProcessingFunc());
qc_consumer.setExperimentalSettingsFunc(qc.getExpSettingsFunc());
if (!loadSwathFiles(file_list, exp_meta, swath_maps, split_file, tmp_dir, readoptions,
swath_windows_file, min_upper_edge_dist, force,
sort_swath_maps, prm, pasef, &qc_consumer))
{
return PARSE_ERROR;
}
qc.storeJSON(out_qc);
}
else
{
if (!loadSwathFiles(file_list, exp_meta, swath_maps, split_file, tmp_dir, readoptions,
swath_windows_file, min_upper_edge_dist, force,
sort_swath_maps, prm, pasef))
{
return PARSE_ERROR;
}
}
///////////////////////////////////
// Get the transformation information (using iRT peptides)
///////////////////////////////////
String irt_trafo_out = debug_params.getValue("irt_trafo").toString();
String irt_mzml_out = debug_params.getValue("irt_mzml").toString();
Param irt_detection_param = getParam_().copy("Calibration:RTNormalization:", true);
Param calibration_param = getParam_().copy("Calibration:MassIMCorrection:", true);
calibration_param.setValue("mz_extraction_window", cp_irt.mz_extraction_window);
calibration_param.setValue("mz_extraction_window_ppm", cp_irt.ppm ? "true" : "false");
calibration_param.setValue("im_extraction_window", cp_irt.im_extraction_window);
calibration_param.setValue("im_estimation_padding_factor", getDoubleOption_("im_estimation_padding_factor"));
calibration_param.setValue("mz_estimation_padding_factor", getDoubleOption_("mz_estimation_padding_factor"));
calibration_param.setValue("mz_correction_function", mz_correction_function);
// Load priority peptide sequences from irtkit and cirtkit if auto_irt is enabled
std::unordered_set<std::string> priority_peptides;
if (auto_irt)
{
String data_path = File::getOpenMSDataPath();
std::vector<String> priority_files;
String irtkit_path = data_path + "/CHEMISTRY/irtkit.tsv";
String cirtkit_path = data_path + "/CHEMISTRY/cirtkit.tsv";
if (File::exists(irtkit_path))
{
priority_files.push_back(irtkit_path);
}
else
{
OPENMS_LOG_WARN << "irtkit.tsv not found at: " << irtkit_path << std::endl;
}
if (File::exists(cirtkit_path))
{
priority_files.push_back(cirtkit_path);
}
else
{
OPENMS_LOG_WARN << "cirtkit.tsv not found at: " << cirtkit_path << std::endl;
}
// Add custom priority iRT file if provided
if (!priority_sampling_irt_tr_file.empty())
{
if (File::exists(priority_sampling_irt_tr_file))
{
priority_files.push_back(priority_sampling_irt_tr_file);
OPENMS_LOG_DEBUG << "Including custom priority iRT file: " << priority_sampling_irt_tr_file << std::endl;
}
}
if (!priority_files.empty())
{
Param priority_tsv_param = TransitionTSVFile().getDefaults();
priority_peptides = loadPriorityPeptideSequences(priority_files, priority_tsv_param);
}
else
{
OPENMS_LOG_WARN << "No priority peptide files found. Continuing without priority sampling." << std::endl;
}
}
// 1) Prepare in-memory iRT experiments for linear + nonlinear
OpenSwath::LightTargetedExperiment lin_irt_exp;
if (!irt_tr_file.empty())
{
// user-supplied linear iRT file takes absolute precedence
FileTypes::Type irt_tr_type = FileHandler::getType(irt_tr_file);
Param irt_tsv_reader_param = TransitionTSVFile().getDefaults();
lin_irt_exp = loadTransitionList(irt_tr_type, irt_tr_file, irt_tsv_reader_param);
}
else if (auto_irt)
{
OPENMS_LOG_INFO << "Linear iRT Calibration: Sampling input transition experiment for " << irt_bins_lin << " bins across the RT range with " << irt_pep_lin << " peptides per bin" << std::endl;
// sample transition_exp on-the-fly
// Note 1: We sort the targeted experiment's peptides by aggregated total intensity (i.e. the sum of fragment library intensities per peptide)
//         to reduce the sampling space to the top fraction of highly intense peptides.
// Note 2: For linear iRTs we set the top fraction to 40%, i.e. we sample only from the 40% most intense peptides.
//         The space is restricted more strongly for linear iRTs to ensure we sample the most intense peptides,
//         which are the most likely to be detected.
lin_irt_exp = OpenSwathHelper::sampleExperiment(
transition_exp,
irt_bins_lin,
irt_pep_lin,
irt_seed,
true,
0.4,
priority_peptides
);
}
OpenSwath::LightTargetedExperiment nl_irt_exp;
if (!nonlinear_irt_tr_file.empty())
{
// user-supplied nonlinear iRT file
FileTypes::Type irt_nl_tr_type = FileHandler::getType(nonlinear_irt_tr_file);
Param irt_nl_tsv_reader_param = TransitionTSVFile().getDefaults();
nl_irt_exp = loadTransitionList(irt_nl_tr_type, nonlinear_irt_tr_file, irt_nl_tsv_reader_param);
}
else if (auto_irt && irt_pep_nl > 0)
{
OPENMS_LOG_INFO << "NonLinear iRT Calibration: Sampling input transition experiment for " << irt_bins_nl << " bins across the RT range with " << irt_pep_nl << " peptides per bin" << std::endl;
// sample transition_exp on-the-fly for the nonlinear correction (only if irt_peptides_per_bin_nonlinear > 0)
// Note 1: We sort the targeted experiment's peptides by aggregated total intensity (i.e. the sum of fragment library intensities per peptide)
//         to reduce the sampling space to the top fraction of highly intense peptides.
// Note 2: For the additional nonlinear iRTs we set the top fraction to 70%, i.e. we sample from the 70% most intense peptides.
//         The sampling space is less restricted here because we can be more liberal about the quality of the nonlinear iRT peptides.
nl_irt_exp = OpenSwathHelper::sampleExperiment(
transition_exp,
irt_bins_nl,
irt_pep_nl,
irt_seed,
true,
0.7,
priority_peptides
);
}
// 2) Launch either just linear or linear+nonlinear
TransformationDescription trafo_rtnorm;
double estimated_rt_extraction_window = -1.0; // set by whichever calibration branch runs below
double rt_estimation_padding_factor = getDoubleOption_("rt_estimation_padding_factor");
if (nl_irt_exp.getTransitions().empty())
{
// --- single, linear calibration ---
auto calibration_result = performCalibration(
trafo_in,
lin_irt_exp,
swath_maps,
min_rsq,
min_coverage,
feature_finder_param,
cp_irt,
irt_detection_param,
calibration_param,
debug_level,
pasef,
load_into_memory,
irt_trafo_out,
irt_mzml_out);
// We need to set trafo_rtnorm to the calibration result
trafo_rtnorm = calibration_result.rt_trafo;
// Use the 0.99 quantile so the window covers ~99% of residuals, ignoring rare extremes (those that are potential outliers).
estimated_rt_extraction_window = calibration_result.rt_trafo.estimateWindow(0.99, true, true, rt_estimation_padding_factor);
// RT (seconds)
apply_window("RT",
estimated_rt_extraction_window,
/*dst*/ cp.rt_extraction_window,
/*user*/ cp.rt_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.rt);
// MS2 m/z (ppm)
apply_window("MS2 m/z (ppm)",
calibration_result.ms2_mz_window_ppm,
cp.mz_extraction_window, cp.mz_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.mz && cp.ppm);
if (use_est_window_choices.mz && !cp.ppm)
{
OPENMS_LOG_WARN
<< "[Auto-calibration] MS2 m/z window not applied: user selected Thomson (Th) as unit, "
<< "but the estimated window is " << calibration_result.ms2_mz_window_ppm << " ppm. "
<< "Keeping the user-set value " << cp.mz_extraction_window << " Th. "
<< std::endl;
}
// MS2 ion mobility (1/k0)
apply_window("MS2 ion mobility (1/k0)",
calibration_result.ms2_im_window,
cp.im_extraction_window, cp.im_extraction_window,
/*applicable=*/pasef,
/*commit=*/use_est_window_choices.im);
// MS1 m/z (ppm)
apply_window("MS1 m/z (ppm)",
calibration_result.ms1_mz_window_ppm,
cp_ms1.mz_extraction_window, cp_ms1.mz_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.mz && cp_ms1.ppm);
if (use_est_window_choices.mz && !cp_ms1.ppm)
{
OPENMS_LOG_WARN
<< "[Auto-calibration] MS1 m/z window not applied: user selected Thomson (Th) as unit, "
<< "but the estimated window is " << calibration_result.ms1_mz_window_ppm << " ppm. "
<< "Keeping the user-set value " << cp_ms1.mz_extraction_window << " Th. "
<< std::endl;
}
// MS1 ion mobility (1/k0)
apply_window("MS1 ion mobility (1/k0)",
calibration_result.ms1_im_window,
cp_ms1.im_extraction_window, cp_ms1.im_extraction_window,
/*applicable=*/pasef && use_ms1_im,
/*commit=*/use_est_window_choices.im);
}
else
{
///////////////////////////////////
// First perform a simple linear transform, then do a second, nonlinear one
///////////////////////////////////
OPENMS_LOG_INFO << "Performing iRT linear transform..." << std::endl;
Param linear_irt = irt_detection_param;
linear_irt.setValue("alignmentMethod", "linear");
Param no_calibration = calibration_param;
no_calibration.setValue("mz_correction_function", "none");
auto calibration_result = performCalibration(trafo_in, lin_irt_exp, swath_maps,
min_rsq, min_coverage, feature_finder_param,
cp_irt, linear_irt, no_calibration,
debug_level, pasef, load_into_memory,
irt_trafo_out, irt_mzml_out);
trafo_rtnorm = calibration_result.rt_trafo;
cp_irt.rt_extraction_window = getDoubleOption_("irt_nonlinear_rt_extraction_window"); // extract some substantial part of the RT range (should be covered by linear correction)
///////////////////////////////////
// Get the secondary transformation (nonlinear)
///////////////////////////////////
OPENMS_LOG_INFO << "Performing additional iRT nonlinear transform..." << std::endl;
OpenSwathCalibrationWorkflow wf;
wf.setLogType(log_type_);
std::vector<OpenMS::MSChromatogram> chroms;
wf.simpleExtractChromatograms_(
swath_maps,
nl_irt_exp,
chroms,
trafo_rtnorm,
cp_irt,
pasef,
load_into_memory);
Param nl_param = irt_detection_param;
nl_param.setValue("estimateBestPeptides", "true");
TransformationDescription im_trafo;
trafo_rtnorm = wf.doDataNormalization_(
nl_irt_exp,
chroms,
im_trafo,
swath_maps,
min_rsq,
min_coverage,
feature_finder_param,
nl_param,
calibration_param,
pasef);
// apply IM-correction back to the library
if (!irt_trafo_out.empty())
{
String nonlinear_path = irt_trafo_out;
const String ext = ".trafoXML";
if (nonlinear_path.hasSuffix(ext))
{
nonlinear_path = nonlinear_path.prefix(nonlinear_path.size() - ext.size());
}
nonlinear_path += "_nonlinear.trafoXML";
FileHandler().storeTransformations(nonlinear_path, trafo_rtnorm, { FileTypes::TRANSFORMATIONXML });
}
}
// Use the 0.99 quantile so the window covers ~99% of residuals, ignoring rare extremes (those that are potential outliers).
estimated_rt_extraction_window = trafo_rtnorm.estimateWindow(0.99, true, true, rt_estimation_padding_factor);
TransformationDescription im_trafo_inv = im_trafo;
im_trafo_inv.invert();
for (auto & cmp : transition_exp.getCompounds())
{
cmp.drift_time = im_trafo_inv.apply(cmp.drift_time);
}
double estimated_mz_extraction_window = wf.getEstimatedMzWindow();
double estimated_im_extraction_window = wf.getEstimatedImWindow();
double estimated_ms1_mz_extraction_window = wf.getEstimatedMs1MzWindow();
double estimated_ms1_im_extraction_window = wf.getEstimatedMs1ImWindow();
// RT (seconds)
apply_window("RT",
estimated_rt_extraction_window,
/*dst*/ cp.rt_extraction_window,
/*user*/ cp.rt_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.rt);
// MS2 m/z (ppm)
apply_window("MS2 m/z (ppm)",
estimated_mz_extraction_window,
cp.mz_extraction_window, cp.mz_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.mz && cp.ppm);
if (use_est_window_choices.mz && !cp.ppm)
{
OPENMS_LOG_WARN
<< "[Auto-calibration] MS2 m/z window not applied: user selected Thomson (Th) as unit, "
<< "but the estimated window is " << estimated_mz_extraction_window << " ppm. "
<< "Keeping the user-set value " << cp.mz_extraction_window << " Th."
<< std::endl;
}
// MS2 ion mobility (1/k0)
apply_window("MS2 ion mobility (1/k0)",
estimated_im_extraction_window,
cp.im_extraction_window, cp.im_extraction_window,
/*applicable=*/pasef,
/*commit=*/use_est_window_choices.im);
// MS1 m/z (ppm)
apply_window("MS1 m/z (ppm)",
estimated_ms1_mz_extraction_window,
cp_ms1.mz_extraction_window, cp_ms1.mz_extraction_window,
/*applicable=*/true,
/*commit=*/use_est_window_choices.mz && cp_ms1.ppm);
if (use_est_window_choices.mz && !cp_ms1.ppm)
{
OPENMS_LOG_WARN
<< "[Auto-calibration] MS1 m/z window not applied: user selected Thomson (Th) as unit, "
<< "but the estimated window is " << estimated_ms1_mz_extraction_window << " ppm. "
<< "Keeping the user-set value " << cp_ms1.mz_extraction_window << " Th."
<< std::endl;
}
// MS1 ion mobility (1/k0)
apply_window("MS1 ion mobility (1/k0)",
estimated_ms1_im_extraction_window,
cp_ms1.im_extraction_window, cp_ms1.im_extraction_window,
/*applicable=*/pasef && use_ms1_im,
/*commit=*/use_est_window_choices.im);
}
///////////////////////////////////
// Set up chromatogram output
// Either use chrom.mzML or sqliteDB (sqMass)
///////////////////////////////////
Interfaces::IMSDataConsumer* chromatogramConsumer = nullptr;
UInt64 run_id = OpenMS::UniqueIdGenerator::getUniqueId();
prepareChromOutput(&chromatogramConsumer, exp_meta, transition_exp, out_chrom, run_id);
///////////////////////////////////
// Set up peakgroup file output .osw file
///////////////////////////////////
FeatureMap out_featureFile;
// store features if not writing to .featureXML
bool store_features = (out_features_type != FileTypes::FEATUREXML);
String osw_out_filename = store_features ? out_features : "";
OpenSwathOSWWriter oswwriter(osw_out_filename, run_id, file_list[0], enable_uis_scoring);
OpenSwathWorkflow wf(use_ms1_traces, use_ms1_im, prm, pasef, outer_loop_threads);
wf.setLogType(log_type_);
wf.performExtraction(swath_maps, trafo_rtnorm, cp, cp_ms1, feature_finder_param, transition_exp,
out_featureFile, true, oswwriter, chromatogramConsumer, batchSize, ms1_isotopes, load_into_memory);
if ( out_features_type == FileTypes::FEATUREXML )
{
std::cout << "Writing features ..." << std::endl;
addDataProcessing_(out_featureFile, getProcessingInfo_(DataProcessing::QUANTITATION));
out_featureFile.ensureUniqueId();
FileHandler().storeFeatures(out_features, out_featureFile, {FileTypes::FEATUREXML});
}
delete chromatogramConsumer;
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPOpenSwathWorkflow tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/SpectraFilterNormalizer.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer$
// $Authors: $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/SCALING/Normalizer.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <typeinfo>
using namespace OpenMS;
using namespace std;
/**
@page TOPP_SpectraFilterNormalizer SpectraFilterNormalizer
@brief Scale intensities per spectrum to either sum to 1 or have a maximum of 1.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → SpectraFilterNormalizer →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
</table>
</CENTER>
Normalization is performed for each spectrum independently.
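The two scaling modes can be illustrated with a minimal stand-alone sketch (plain C++; the function names are illustrative and this is not the code path of the OpenMS Normalizer class):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Scale a spectrum so that its most intense peak becomes 1.
void normalizeToMax(std::vector<double>& intensities)
{
  if (intensities.empty()) return;
  const double max_int = *std::max_element(intensities.begin(), intensities.end());
  if (max_int <= 0.0) return; // all-zero spectrum: nothing to scale
  for (double& i : intensities) { i /= max_int; }
}

// Scale a spectrum so that its intensities sum to 1 (TIC = total ion current).
void normalizeToTIC(std::vector<double>& intensities)
{
  const double tic = std::accumulate(intensities.begin(), intensities.end(), 0.0);
  if (tic <= 0.0) return;
  for (double& i : intensities) { i /= tic; }
}
```

Each spectrum of the experiment is scaled on its own, exactly as stated above.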
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SpectraFilterNormalizer.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SpectraFilterNormalizer.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSpectraFilterNormalizer :
public TOPPBase
{
public:
TOPPSpectraFilterNormalizer() :
TOPPBase("SpectraFilterNormalizer", "Scale intensities per spectrum to either sum to 1 or have a maximum of 1.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input file");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("mzML"));
// register one section for each algorithm
registerSubsection_("algorithm", "Algorithm parameter subsection.");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
return Normalizer().getParameters();
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
//input/output files
String in(getStringOption_("in"));
String out(getStringOption_("out"));
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
//-------------------------------------------------------------
// filter
//-------------------------------------------------------------
Param filter_param = getParam_().copy("algorithm:", true);
writeDebug_("Used filter parameters", filter_param, 3);
Normalizer filter;
filter.setParameters(filter_param);
filter.filterPeakMap(exp);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::FILTERING));
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPSpectraFilterNormalizer tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/PeakPickerIterative.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest $
// $Authors: Hannes Roest $
// --------------------------------------------------------------------------
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/CENTROIDING/PeakPickerIterative.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_PeakPickerIterative PeakPickerIterative
@brief A tool for peak detection in profile data. Executes the peak picking with @ref OpenMS::PeakPickerIterative "high_res" algorithm.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=4> → PeakPickerIterative →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_BaselineFilter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=3> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_NoiseFilterGaussian </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_NoiseFilterSGolay </td>
</tr>
</table>
</CENTER>
The conversion of the "raw" ion count data acquired
by the instrument into peak lists for further processing
is usually called peak picking. The choice of algorithm
should mainly depend on the resolution of the data.
As the name implies, the @ref OpenMS::PeakPickerIterative "high_res"
algorithm is suited for high-resolution data.
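The core of the centroiding idea can be sketched as a search for local intensity maxima above a noise threshold. This deliberately minimal, stand-alone illustration is not the "high_res" algorithm itself, which does considerably more (e.g. signal-to-noise estimation and iterative refinement of peak positions):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Report every profile point that is a strict local maximum above a
// noise threshold -- the simplest possible form of peak picking.
std::vector<std::size_t> localMaxima(const std::vector<double>& intensity, double threshold)
{
  std::vector<std::size_t> peaks;
  for (std::size_t i = 1; i + 1 < intensity.size(); ++i)
  {
    if (intensity[i] > threshold
        && intensity[i] > intensity[i - 1]
        && intensity[i] > intensity[i + 1])
    {
      peaks.push_back(i);
    }
  }
  return peaks;
}
```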
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_PeakPickerIterative.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_PeakPickerIterative.html
*/
typedef PeakMap MapType;
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPPeakPickerIterative
: public TOPPBase
{
public:
TOPPPeakPickerIterative()
: TOPPBase("PeakPickerIterative","Finds mass spectrometric peaks in profile mass spectra.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in","<file>","","input file ");
setValidFormats_("in",ListUtils::create<String>("mzML"));
registerOutputFile_("out","<file>","","output file");
setValidFormats_("out",ListUtils::create<String>("mzML"));
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String &) const override
{
return PeakPickerIterative().getDefaults();
}
ExitCodes main_(int , const char**) override
{
String in = getStringOption_("in");
String out = getStringOption_("out");
MapType exp;
MapType out_exp;
Param picker_param = getParam_().copy("algorithm:", true);
FileHandler().loadExperiment(in,exp, {FileTypes::MZML}, log_type_);
PeakPickerIterative pp;
pp.setParameters(picker_param);
pp.setLogType(log_type_);
pp.pickExperiment(exp, out_exp);
addDataProcessing_(out_exp, getProcessingInfo_(DataProcessing::PEAK_PICKING));
FileHandler().storeExperiment(out,out_exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main( int argc, const char** argv )
{
TOPPPeakPickerIterative tool;
return tool.main(argc,argv);
}
///@endcond
// File: src/topp/MapAlignerTreeGuided.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Julia Thueringer $
// $Authors: Julia Thueringer $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/MAPMATCHING/MapAlignmentAlgorithmTreeGuided.h>
#include <OpenMS/APPLICATIONS/MapAlignerBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/ML/CLUSTERING/ClusterAnalyzer.h> // to print newick tree on cml
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_MapAlignerTreeGuided MapAlignerTreeGuided
@brief Corrects retention time distortions between maps, using information from peptides identified in different maps.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN= "middle" ROWSPAN=2> → MapAlignerTreeGuided →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDMapper @n (or any other source of FDR-filtered
featureXMLs) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureLinkerUnlabeledKD or @n @ref TOPP_FeatureLinkerUnlabeledQT </td>
</tr>
</table>
</CENTER>
This tool provides an algorithm to align the retention time scales of multiple input files, correcting shifts and
distortions between them. Retention time adjustment may be necessary to correct for chromatography differences e.g.
before data from multiple LC-MS runs can be combined (feature grouping), or when one run should be annotated with
peptide identifications obtained in a different run.
All map alignment tools (MapAligner...) collect retention time data from the input files and - by fitting a model to
this data - compute transformations that map all runs to a common retention time scale. They can apply the transformations
right away and return output files with aligned time scales (parameter @p out), and/or return descriptions of the
transformations in trafoXML format (parameter @p trafo_out). Transformations stored as trafoXML can be applied to
arbitrary files with the @ref TOPP_MapRTTransformer tool.
The map alignment tools differ in how they obtain retention time data for the modeling of transformations, and
consequently what types of data they can be applied to. The alignment algorithm implemented here is based on peptide
identifications and is applicable to annotated featureXML files. It finds peptide sequences that each pair of input files
has in common, uses them as points of correspondence between the inputs, and evaluates from them the distances between the
maps for hierarchical clustering. Following the resulting tree, the alignment of each cluster pair is performed with the method align() of the
@ref OpenMS::MapAlignmentAlgorithmIdentification. For more details and algorithm-specific parameters (set in the INI file)
see "Detailed Description" in the @ref OpenMS::MapAlignmentAlgorithmTreeGuided "algorithm documentation".
@see @ref TOPP_MapAlignerIdentification @ref TOPP_MapAlignerPoseClustering @ref TOPP_MapRTTransformer
Note that alignment is based on the sequence including modifications, thus an exact match is required. I.e., a peptide with oxidised methionine will not be matched to its unmodified version. This behavior is generally desired since (some) modifications can cause retention time shifts.
Also note that convex hulls are removed for alignment and are therefore missing in the output files.
Since %OpenMS 1.8, the extraction of data for the alignment has been separated from the modeling of RT transformations based on that data. It is now possible to use different models independently of the chosen algorithm. This algorithm has been tested with the "b_spline" model. The available models are:
- @ref OpenMS::TransformationModelLinear "linear": Linear model.
- @ref OpenMS::TransformationModelBSpline "b_spline": Smoothing spline (non-linear).
- @ref OpenMS::TransformationModelLowess "lowess": Local regression (non-linear).
- @ref OpenMS::TransformationModelInterpolated "interpolated": Different types of interpolation.
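As an illustration of what such a transformation model does, the "linear" case amounts to an ordinary least-squares fit of reference RTs against observed RTs over the shared peptides. A conceptual, stand-alone sketch only (assuming at least two distinct RTs; OpenMS performs the actual fit inside its TransformationModel classes):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Ordinary least-squares fit of rt_ref = slope * rt_obs + intercept over the
// retention times of peptides shared between two maps.
void fitLinearRT(const std::vector<double>& rt_obs, const std::vector<double>& rt_ref,
                 double& slope, double& intercept)
{
  const std::size_t n = rt_obs.size(); // requires n >= 2 and non-constant rt_obs
  double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
  for (std::size_t i = 0; i < n; ++i)
  {
    sx += rt_obs[i];
    sy += rt_ref[i];
    sxx += rt_obs[i] * rt_obs[i];
    sxy += rt_obs[i] * rt_ref[i];
  }
  slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  intercept = (sy - slope * sx) / n;
}
```

The non-linear models (b_spline, lowess, interpolated) replace this single global line with locally fitted curves, which is why they cope better with non-uniform RT distortions.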
<B>The command line parameters of this tool are:</B> @n
@verbinclude TOPP_MapAlignerTreeGuided.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_MapAlignerTreeGuided.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPMapAlignerTreeGuided :
public TOPPMapAlignerBase
{
public:
TOPPMapAlignerTreeGuided() :
TOPPMapAlignerBase("MapAlignerTreeGuided", "Tree guided correction of retention time distortions between maps.")
{
}
private:
template <typename MapType>
void loadInputMaps_(vector<MapType>& maps, StringList& ins, FileHandler& fxml_file)
{
ProgressLogger progresslogger;
progresslogger.setLogType(TOPPMapAlignerBase::log_type_);
progresslogger.startProgress(0, ins.size(), "loading input files");
for (Size i = 0; i < ins.size(); ++i)
{
progresslogger.setProgress(i);
fxml_file.loadFeatures(ins[i], maps[i], {FileTypes::FEATUREXML});
}
progresslogger.endProgress();
}
void storeFeatureXMLs_(vector<FeatureMap>& feature_maps, const StringList& out_files, FileHandler& fxml_file)
{
ProgressLogger progresslogger;
progresslogger.setLogType(TOPPMapAlignerBase::log_type_);
progresslogger.startProgress(0, feature_maps.size(), "writing output files");
for (Size i = 0; i < out_files.size(); ++i)
{
progresslogger.setProgress(i);
// annotate output with data processing info
addDataProcessing_(feature_maps[i], getProcessingInfo_(DataProcessing::ALIGNMENT));
fxml_file.storeFeatures(out_files[i], feature_maps[i], {FileTypes::FEATUREXML});
}
progresslogger.endProgress();
}
void storeTransformationDescriptions_(const vector<TransformationDescription>& transformations, StringList& trafos)
{
ProgressLogger progresslogger;
progresslogger.setLogType(TOPPMapAlignerBase::log_type_);
progresslogger.startProgress(0, trafos.size(),
"writing transformation files");
for (Size i = 0; i < trafos.size(); ++i)
{
progresslogger.setProgress(i);
FileHandler().storeTransformations(trafos[i], transformations[i], {FileTypes::TRANSFORMATIONXML});
}
progresslogger.endProgress();
}
void registerOptionsAndFlags_() override
{
TOPPMapAlignerBase::registerOptionsAndFlagsMapAligners_("featureXML",
REF_NONE);
registerSubsection_("algorithm", "Algorithm parameters section");
registerStringOption_("copy_data", "String", "true", "Copy data (faster, more memory required) or reload data (slower, less memory required) when aligning many files.", false, false);
setValidStrings_("copy_data", {"true","false"});
}
Param getSubsectionDefaults_(const String& section) const override
{
if (section == "algorithm")
{
MapAlignmentAlgorithmTreeGuided algo;
return algo.getParameters();
}
return Param(); // shouldn't happen
}
ExitCodes main_(int, const char**) override
{
ExitCodes ret = checkParameters_();
if (ret != EXECUTION_OK)
{
return ret;
}
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
StringList in_files = getStringList_("in");
StringList out_files = getStringList_("out");
StringList out_trafos = getStringList_("trafo_out");
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
Size in_files_size = in_files.size();
FileHandler fxml_file;
// define here because needed to load and store
FeatureFileOptions param = fxml_file.getFeatOptions();
// to save memory don't load convex hulls and subordinates
param.setLoadSubordinates(false);
param.setLoadConvexHull(false);
fxml_file.setFeatOptions(param);
vector<FeatureMap> feature_maps(in_files_size);
loadInputMaps_(feature_maps, in_files, fxml_file);
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
// constructing tree
vector<vector<double>> maps_ranges(in_files_size); // to save ranges for alignment (larger rt_range -> reference)
std::vector<BinaryTreeNode> tree; // to construct tree with pearson coefficient
MapAlignmentAlgorithmTreeGuided algoTree;
Param algo_params = getParam_().copy("algorithm:", true);
algoTree.setParameters(algo_params);
OpenMS::MapAlignmentAlgorithmTreeGuided::buildTree(feature_maps, tree, maps_ranges);
// print tree
ClusterAnalyzer ca;
OPENMS_LOG_INFO << " Alignment follows Newick tree: " << ca.newickTree(tree, true) << endl;
// alignment
vector<Size> trafo_order;
FeatureMap map_transformed;
// depending on the selected parameter, the input data for the alignment are copied or reloaded after alignment
if (getStringOption_("copy_data") == "true")
{
vector<FeatureMap> copied_maps = feature_maps;
algoTree.treeGuidedAlignment(tree, copied_maps, maps_ranges, map_transformed, trafo_order);
}
else
{
algoTree.treeGuidedAlignment(tree, feature_maps, maps_ranges, map_transformed, trafo_order);
// load() of FeatureXMLFile clears featureMap, so we don't have to care
loadInputMaps_(feature_maps, in_files, fxml_file);
}
//-------------------------------------------------------------
// generating output
//-------------------------------------------------------------
vector<TransformationDescription> transformations(in_files_size); // for trafo_out
algoTree.computeTrafosByOriginalRT(feature_maps, map_transformed, transformations, trafo_order);
OpenMS::MapAlignmentAlgorithmTreeGuided::computeTransformedFeatureMaps(feature_maps, transformations);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// store transformed feature_maps
storeFeatureXMLs_(feature_maps, out_files, fxml_file);
// store transformations
storeTransformationDescriptions_(transformations, out_trafos);
// Transform optional spectra files
// Note: MapAlignerTreeGuided does not support store_original_rt flag
StringList in_spectra_files = getStringList_("in_spectra_files");
StringList out_spectra_files = getStringList_("out_spectra_files");
transformSpectraFiles_(in_spectra_files, out_spectra_files, transformations, false);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPMapAlignerTreeGuided tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/BaselineFilter.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: $
// --------------------------------------------------------------------------
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/PROCESSING/BASELINE/MorphologicalFilter.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_BaselineFilter BaselineFilter
@brief Executes the top-hat filter to remove the baseline of an MS experiment.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → BaselineFilter →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_NoiseFilterSGolay, @n @ref TOPP_NoiseFilterGaussian </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes @n (or ID engines on MS/MS data) </td>
</tr>
</table>
</CENTER>
This nonlinear filter, known as the top-hat operator in mathematical
morphology (see Soille, "Morphological Image Analysis"), is independent
of the underlying baseline shape. It can detect a local over-intensity
even if the background is not uniform. The principle is the subtraction
of the signal's opening (an erosion followed by a dilation) from the
signal itself. The size of the structuring element (here a flat line)
is determined by the width of the feature to be detected (in our case
the maximum width of a mass spectrometric peak).
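The top-hat operation itself is compact enough to sketch directly: erosion is a sliding-window minimum, dilation a sliding-window maximum, and the filtered signal is the input minus its opening. The naive O(n·k) sketch below is for illustration only, not the (much faster) implementation in the MorphologicalFilter class:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sliding-window minimum (erosion) or maximum (dilation) with a flat
// structuring element of odd length k; the window is clipped at the borders.
std::vector<double> slide(const std::vector<double>& s, int k, bool take_min)
{
  const int n = static_cast<int>(s.size());
  const int h = k / 2;
  std::vector<double> out(n);
  for (int i = 0; i < n; ++i)
  {
    const int lo = std::max(0, i - h);
    const int hi = std::min(n - 1, i + h);
    double v = s[lo];
    for (int j = lo + 1; j <= hi; ++j)
    {
      v = take_min ? std::min(v, s[j]) : std::max(v, s[j]);
    }
    out[i] = v;
  }
  return out;
}

// tophat(s) = s - opening(s) = s - dilate(erode(s)): peaks narrower than the
// structuring element survive, while the broad baseline is removed.
std::vector<double> tophat(const std::vector<double>& s, int k)
{
  const std::vector<double> opened = slide(slide(s, k, true), k, false);
  std::vector<double> out(s.size());
  for (std::size_t i = 0; i < s.size(); ++i) { out[i] = s[i] - opened[i]; }
  return out;
}
```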
@note The top-hat filter works only on roughly uniform data!
To generate equally-spaced data you can use the @ref TOPP_Resampler.
@note The length of the structuring element (given in Thomson) should be greater than the
maximum peak width in the raw data.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_BaselineFilter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_BaselineFilter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPBaselineFilter :
public TOPPBase
{
public:
TOPPBaselineFilter() :
TOPPBase("BaselineFilter", "Removes the baseline from profile spectra using a top-hat filter.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input raw data file ");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output raw data file ");
setValidFormats_("out", ListUtils::create<String>("mzML"));
registerDoubleOption_("struc_elem_length", "<size>", 3, "Length of the structuring element (should be wider than maximal peak width - see documentation).", false);
registerStringOption_("struc_elem_unit", "<unit>", "Thomson", "Unit of 'struc_elem_length' parameter.", false);
setValidStrings_("struc_elem_unit", ListUtils::create<String>("Thomson,DataPoints"));
registerStringOption_("method", "<string>", "tophat", "The name of the morphological filter to be applied. If you are unsure, use the default.", false);
setValidStrings_("method", ListUtils::create<String>("identity,erosion,dilation,opening,closing,gradient,tophat,bothat,erosion_simple,dilation_simple"));
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
PeakMap ms_exp;
FileHandler().loadExperiment(in, ms_exp, {FileTypes::MZML}, log_type_);
if (ms_exp.empty())
{
OPENMS_LOG_WARN << "The given file does not contain any conventional peak data, but might"
" contain chromatograms. This tool currently cannot handle them, sorry.";
return INCOMPATIBLE_INPUT_DATA;
}
// check for peak type (raw data required)
if (ms_exp[0].getType(true) == SpectrumSettings::SpectrumType::CENTROID)
{
writeLogWarn_("Warning: OpenMS peak type estimation indicates that this is not raw data!");
}
//check if spectra are sorted
for (Size i = 0; i < ms_exp.size(); ++i)
{
if (!ms_exp[i].isSorted())
{
writeLogError_("Error: Not all spectra are sorted according to peak m/z positions. Use FileFilter to sort the input!");
return INCOMPATIBLE_INPUT_DATA;
}
}
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
MorphologicalFilter morph_filter;
morph_filter.setLogType(log_type_);
Param parameters;
parameters.setValue("struc_elem_length", getDoubleOption_("struc_elem_length"));
parameters.setValue("struc_elem_unit", getStringOption_("struc_elem_unit"));
parameters.setValue("method", getStringOption_("method"));
morph_filter.setParameters(parameters);
morph_filter.filterExperiment(ms_exp);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(ms_exp, getProcessingInfo_(DataProcessing::BASELINE_REDUCTION));
FileHandler().storeExperiment(out, ms_exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPBaselineFilter tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/INIUpdater.cpp
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/INIUpdater.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/ParamXMLFile.h>
#include <OpenMS/VISUAL/TOPPASScene.h>
#include <QApplication>
#include <QFileInfo>
#include <QFile>
#include <QDir>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_INIUpdater INIUpdater
@brief Update INI and TOPPAS files from previous versions of OpenMS/TOPP
This tool can update old INI files in order to
- make them compatible with new versions of %OpenMS
- show new parameters introduced with a new %OpenMS version
- delete old parameters which no longer have any effect
The new INI files can be created in-place (with the -i option), which will overwrite the
existing file but create a backup copy named [filename]_v[version].ini,
e.g.
@code
INIUpdater -in FileFilter.ini -i
@endcode
will create a backup file <tt>FileFilter_v1.8.ini</tt> if the old INI version was 1.8.
No backup will be created if -out is used, as the original files are not touched (unless you name them
the same).
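Conceptually, the update performed for every tool section boils down to Param::update(): start from the new tool's defaults and carry over the value of every parameter that still exists. A simplified stand-alone sketch of that merge (std::map as a stand-in for Param; names are illustrative):

```cpp
#include <cassert>
#include <map>
#include <string>

// Keep the key set of the new defaults; copy over values for keys that the
// old configuration also had. Outdated keys are dropped, newly introduced
// keys keep their defaults.
std::map<std::string, std::string> updateDefaults(std::map<std::string, std::string> new_defaults,
                                                  const std::map<std::string, std::string>& old_values)
{
  for (auto& kv : new_defaults)
  {
    auto it = old_values.find(kv.first);
    if (it != old_values.end()) { kv.second = it->second; }
  }
  return new_defaults;
}
```

In the tool itself, the new defaults are obtained by running the (possibly renamed) tool with -write_ini and the old values come from the INI/TOPPAS file being updated.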
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_INIUpdater.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_INIUpdater.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
using namespace OpenMS;
class TOPPINIUpdater :
public TOPPBase
{
public:
TOPPINIUpdater() :
TOPPBase("INIUpdater", "Update INI and TOPPAS files to new OpenMS version."),
failed_(),
tmp_files_()
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "INI/TOPPAS files that need updating.");
setValidFormats_("in", ListUtils::create<String>("ini,toppas"));
registerFlag_("i", "in-place: Override given INI/TOPPAS files with new content (not compatible with -out)");
registerOutputFileList_("out", "<files>", StringList(), "Optional list of output files (not compatible with -i).", false, false);
setValidFormats_("out", ListUtils::create<String>("ini,toppas"));
}
void updateTOPPAS(const String& infile, const String& outfile)
{
Int this_instance = getIntOption_("instance");
INIUpdater updater;
String tmp_ini_file = File::getTempDirectory() + "/" + File::getUniqueName() + "_INIUpdater.ini";
tmp_files_.push_back(tmp_ini_file);
String path = File::getExecutablePath();
ParamXMLFile paramFile;
Param p;
paramFile.load(infile, p);
// get version of TOPPAS file
String version = "Unknown";
if (!p.exists("info:version"))
{
writeLogWarn_("No OpenMS version information found in file " + infile + "! Assuming OpenMS 1.8 and below.");
version = "1.8.0";
}
else
{
version = p.getValue("info:version").toString();
// TODO: return on newer version?!
}
Int vertices = p.getValue("info:num_vertices");
// update sections
writeDebug_("#Vertices: " + String(vertices), 1);
bool update_success = true;
for (Int v = 0; v < vertices; ++v)
{
String sec_inst = "vertices:" + String(v) + ":";
// check for default instance
if (!p.exists(sec_inst + "toppas_type"))
{
writeLogWarn_("Update for file " + infile + " failed because the vertex #" + String(v) + " does not have a 'toppas_type' node. Check INI file for corruption!");
update_success = false;
break;
}
if (p.getValue(sec_inst + "toppas_type") != "tool") // not a tool (but input/output/merge node)
{
continue;
}
if (!p.exists(sec_inst + "tool_name"))
{
writeLogWarn_("Update for file " + infile + " failed because the vertex #" + String(v) + " does not have a 'tool_name' node. Check INI file for corruption!");
update_success = false;
break;
}
String old_name = p.getValue(sec_inst + "tool_name").toString();
String new_tool;
String ttype;
// find mapping to new tool (might be the same name)
if (p.exists(sec_inst + "tool_type")) ttype = p.getValue(sec_inst + "tool_type").toString();
if (!updater.getNewToolName(old_name, ttype, new_tool))
{
String type_text = ((ttype.empty()) ? "" : " with type '" + ttype + "' ");
writeLogWarn_("Update for file " + infile + " failed because the tool '" + old_name + "'" + type_text + "is unknown. TOPPAS file seems to be corrupted!");
update_success = false;
break;
}
// set new tool name
p.setValue(sec_inst + "tool_name", new_tool);
// delete TOPPAS type
if (new_tool != "GenericWrapper")
{
p.setValue(sec_inst + "tool_type", "");
}
// get defaults of new tool by calling it
QProcess pr;
QStringList arguments;
arguments << "-write_ini";
arguments << tmp_ini_file.toQString();
arguments << "-instance";
arguments << String(this_instance).toQString();
pr.start((path + "/" + new_tool).toQString(), arguments);
if (!pr.waitForFinished(-1))
{
writeLogWarn_("Update for file " + infile + " failed because the tool '" + new_tool + "' returned with an error! Check if the tool works properly.");
update_success = false;
break;
}
// update defaults with old values
Param new_param;
paramFile.load(tmp_ini_file, new_param);
new_param = new_param.copy(new_tool + ":1", true);
Param old_param = p.copy(sec_inst + "parameters", true);
new_param.update(old_param);
// push back changes
p.remove(sec_inst + "parameters:");
p.insert(sec_inst + "parameters", new_param);
}
if (!update_success)
{
failed_.push_back(infile);
return;
}
paramFile.store(tmp_ini_file, p);
// update internal structure (e.g. edges format changed from 1.8 to 1.9)
int argc = 1;
const char* c = "IniUpdater";
const char** argv = &c;
QApplication app(argc, const_cast<char**>(argv), false);
String tmp_dir = File::getTempDirectory() + "/" + File::getUniqueName();
QDir d;
d.mkpath(tmp_dir.toQString());
TOPPASScene ts(nullptr, tmp_dir.toQString(), false);
paramFile.store(tmp_ini_file, p);
ts.load(tmp_ini_file);
ts.store(tmp_ini_file);
paramFile.load(tmp_ini_file, p);
// STORE
if (outfile.empty()) // create a backup
{
QFileInfo fi(infile.toQString());
String new_name = String(fi.path()) + "/" + fi.completeBaseName() + "_v" + version + ".toppas";
QFile::rename(infile.toQString(), new_name.toQString());
// write new file
paramFile.store(infile, p);
}
else
{
paramFile.store(outfile, p);
}
}
void updateINI(const String& infile, const String& outfile)
{
Int this_instance = getIntOption_("instance");
INIUpdater updater;
String tmp_ini_file = File::getTempDirectory() + "/" + File::getUniqueName() + "_INIUpdater.ini";
tmp_files_.push_back(tmp_ini_file);
String path = File::getExecutablePath();
Param p;
ParamXMLFile paramFile;
paramFile.load(infile, p);
// get sections (usually there is only one - or the user has merged INI files manually)
StringList sections = updater.getToolNamesFromINI(p);
if (sections.empty())
{
writeLogWarn_("Update for file " + infile + " failed because tool section does not exist. Check INI file for corruption!");
failed_.push_back(infile);
return;
}
// get version of first section
String version_old = "Unknown";
if (!p.exists(sections[0] + ":version"))
{
writeLogWarn_("No OpenMS version information found in file " + infile + "! Cannot update!");
failed_.push_back(infile);
return;
}
else
{
version_old = p.getValue(sections[0] + ":version").toString();
// TODO: return on newer version?!
}
// update sections
writeDebug_("Section names: " + ListUtils::concatenate(sections, ", "), 1);
bool update_success = true;
for (Size s = 0; s < sections.size(); ++s)
{
String sec_inst = sections[s] + ":" + String(this_instance) + ":";
// check for default instance
if (!p.exists(sec_inst + "debug"))
{
writeLogWarn_("Update for file '" + infile + "' failed because the instance section '" + sec_inst + "' does not exist. Use -instance or check INI file for corruption!");
update_success = false;
break;
}
String new_tool;
String ttype;
// find mapping to new tool (might be the same name)
if (p.exists(sec_inst + "type")) ttype = p.getValue(sec_inst + "type").toString();
if (!updater.getNewToolName(sections[s], ttype, new_tool))
{
String type_text = ((ttype.empty()) ? " " : " with type '" + ttype + "' ");
writeLogWarn_("Update for file '" + infile + "' failed because the tool '" + sections[s] + "'" + type_text + "is unknown. TOPPAS file seems to be corrupted!");
update_success = false;
break;
}
// get defaults of new tool by calling it
QProcess pr;
QStringList arguments;
arguments << "-write_ini";
arguments << tmp_ini_file.toQString();
arguments << "-instance";
arguments << String(this_instance).toQString();
pr.start((path + "/" + new_tool).toQString(), arguments);
if (!pr.waitForFinished(-1))
{
writeLogWarn_("Update for file '" + infile + "' failed because the tool '" + new_tool + "' returned with an error! Check if the tool works properly.");
update_success = false;
break;
}
// update defaults with old values
Param new_param;
paramFile.load(tmp_ini_file, new_param);
new_param = new_param.copy(new_tool, true);
Param old_param = p.copy(sections[s], true);
new_param.update(old_param);
// push back changes
p.remove(sections[s] + ":");
p.insert(new_tool, new_param);
}
if (!update_success)
{
failed_.push_back(infile);
return;
}
// STORE
if (outfile.empty()) // create a backup
{
QFileInfo fi(infile.toQString());
String backup_filename = String(fi.path()) + "/" + fi.completeBaseName() + "_v" + version_old + ".ini";
QFile::rename(infile.toQString(), backup_filename.toQString());
std::cout << "Backup of input file created: " << backup_filename << std::endl;
// write updated/new file
paramFile.store(infile, p);
}
else
{
paramFile.store(outfile, p);
}
}
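/*
The migration step above follows a simple merge rule: start from the new
tool's defaults (obtained via -write_ini), then overwrite each value the user
had set in the old section, provided the key still exists; this is what
Param::update() does. A minimal self-contained sketch of that rule using
std::map instead of OpenMS::Param (names are illustrative, not OpenMS API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Start from the new defaults; keep a user value only for keys that still exist.
std::map<std::string, std::string> migrateParams(std::map<std::string, std::string> new_defaults,
                                                 const std::map<std::string, std::string>& old_values)
{
  for (auto& kv : new_defaults)
  {
    auto it = old_values.find(kv.first);
    if (it != old_values.end()) kv.second = it->second; // user setting survives
  }
  return new_defaults; // dropped keys vanish, newly added keys keep their defaults
}
```

Keys removed in the new tool version silently disappear; keys added in the
new version keep their defaults.
*/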
ExitCodes main_(int, const char**) override
{
StringList in = getStringList_("in");
StringList out = getStringList_("out");
bool inplace = getFlag_("i");
// consistency checks
if (out.empty() && !inplace)
{
writeLogError_("Cannot write output files, as neither -out nor -i is given. Use one of them (but not both)!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (!out.empty() && inplace)
{
writeLogError_("Two incompatible arguments given (-out and -i). Use either of them, but not both!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
if (!inplace && out.size() != in.size())
{
writeLogError_("Output and input file list length must be equal!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
// do the conversion!
FileHandler fh;
for (Size i = 0; i < in.size(); ++i)
{
FileTypes::Type f_type = fh.getType(in[i]);
if (f_type == FileTypes::INI) updateINI(in[i], inplace ? "" : out[i]);
else if (f_type == FileTypes::TOPPAS) updateTOPPAS(in[i], inplace ? "" : out[i]);
}
for (Size i = 0; i < tmp_files_.size(); ++i)
{
// clean up
File::remove(tmp_files_[i]);
}
if (!failed_.empty())
{
writeLogError_("The following INI/TOPPAS files could not be updated:\n " + ListUtils::concatenate(failed_, "\n "));
return INPUT_FILE_CORRUPT;
}
return EXECUTION_OK;
}
StringList failed_; // list of failed INI/TOPPAS files
StringList tmp_files_;
};
int main(int argc, const char** argv)
{
TOPPINIUpdater tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/NoiseFilterSGolay.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Eva Lange $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/PROCESSING/SMOOTHING/SavitzkyGolayFilter.h>
#include <OpenMS/FORMAT/FileHandler.h>
// TODO: remove this include; currently needed for MzMLFile::transform() in the low-memory path
#include <OpenMS/FORMAT/MzMLFile.h>
#include <OpenMS/FORMAT/DATAACCESS/MSDataWritingConsumer.h>
#include <OpenMS/KERNEL/MSExperiment.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_NoiseFilterSGolay NoiseFilterSGolay
@brief Applies a Savitzky-Golay filter to reduce noise in an MS experiment.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </td>
<td VALIGN="middle" ROWSPAN=4> → NoiseFilterSGolay →</td>
<th ALIGN = "center"> pot. successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FileConverter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_Resampler </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_BaselineFilter</td>
</tr>
</table>
</center>
The idea of the Savitzky-Golay filter is to find filter coefficients
that preserve higher moments, i.e. to approximate the underlying
function within the moving window by a polynomial of higher order
(typically quadratic or quartic); see A. Savitzky and M. J. E. Golay,
''Smoothing and Differentiation of Data by Simplified Least Squares Procedures''.
@note The Savitzky-Golay filter works only on uniform data (to generate equally spaced data use the @ref TOPP_Resampler tool).
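For intuition, the classic 5-point quadratic window can be written out with
fixed coefficients. This is a self-contained sketch of the moment-preserving
idea only; the OpenMS SavitzkyGolayFilter computes coefficients for arbitrary
frame lengths and polynomial orders:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Classic 5-point quadratic Savitzky-Golay smoothing for uniformly spaced data.
// The window coefficients (-3, 12, 17, 12, -3) / 35 preserve moments up to
// order 2, so any locally quadratic signal passes through unchanged while
// high-frequency noise is damped. Border points are left untouched here.
std::vector<double> sgolay5(const std::vector<double>& y)
{
  static const double c[5] = {-3.0, 12.0, 17.0, 12.0, -3.0};
  std::vector<double> out(y);
  for (std::size_t i = 2; i + 2 < y.size(); ++i)
  {
    double s = 0.0;
    for (int k = -2; k <= 2; ++k) s += c[k + 2] * y[i + k];
    out[i] = s / 35.0;
  }
  return out;
}
```

Running it on a quadratic input reproduces the input exactly (up to rounding),
which is the defining property described above.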
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_NoiseFilterSGolay.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_NoiseFilterSGolay.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPNoiseFilterSGolay :
public TOPPBase
{
public:
TOPPNoiseFilterSGolay() :
TOPPBase("NoiseFilterSGolay", "Removes noise from profile spectra by using a Savitzky-Golay filter (on uniform, i.e. equidistant, data).")
{
}
/**
@brief Helper class for the Low Memory Noise filtering
*/
class NFSGolayMzMLConsumer :
public MSDataWritingConsumer
{
public:
NFSGolayMzMLConsumer(const String& filename, const SavitzkyGolayFilter& sgf) :
MSDataWritingConsumer(filename)
{
sgf_ = sgf;
}
void processSpectrum_(MapType::SpectrumType& s) override
{
sgf_.filter(s);
}
void processChromatogram_(MapType::ChromatogramType& c) override
{
sgf_.filter(c);
}
private:
SavitzkyGolayFilter sgf_;
};
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "input raw data file ");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "output raw data file ");
setValidFormats_("out", ListUtils::create<String>("mzML"));
registerStringOption_("processOption", "<name>", "inmemory", "Whether to load all data and process them in-memory or whether to process the data on the fly (lowmemory) without loading the whole file into memory first", false, true);
setValidStrings_("processOption", ListUtils::create<String>("inmemory,lowmemory"));
registerSubsection_("algorithm", "Algorithm parameters section");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
return SavitzkyGolayFilter().getDefaults();
}
ExitCodes doLowMemAlgorithm(const SavitzkyGolayFilter& sgolay)
{
///////////////////////////////////
// Create the consumer object, add data processing
///////////////////////////////////
NFSGolayMzMLConsumer sgolayConsumer(out, sgolay);
sgolayConsumer.addDataProcessing(getProcessingInfo_(DataProcessing::SMOOTHING));
///////////////////////////////////
// Create new MSDataReader and set our consumer
///////////////////////////////////
MzMLFile mz_data_file;
mz_data_file.setLogType(log_type_);
mz_data_file.transform(in, &sgolayConsumer);
return EXECUTION_OK;
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
in = getStringOption_("in");
out = getStringOption_("out");
String process_option = getStringOption_("processOption");
Param filter_param = getParam_().copy("algorithm:", true);
writeDebug_("Parameters passed to filter", filter_param, 3);
SavitzkyGolayFilter sgolay;
sgolay.setLogType(log_type_);
sgolay.setParameters(filter_param);
if (process_option == "lowmemory")
{
return doLowMemAlgorithm(sgolay);
}
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
FileHandler mz_data_file;
PeakMap exp;
mz_data_file.loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
if (exp.empty() && exp.getChromatograms().empty())
{
OPENMS_LOG_WARN << "The given file does not contain any conventional peak data, but might"
" contain chromatograms. This tool currently cannot handle them, sorry.";
return INCOMPATIBLE_INPUT_DATA;
}
//check for peak type (profile data required)
if (!exp.empty() && exp[0].getType(true) == SpectrumSettings::SpectrumType::CENTROID)
{
writeLogWarn_("Warning: OpenMS peak type estimation indicates that this is not profile data!");
}
//check if spectra are sorted
for (Size i = 0; i < exp.size(); ++i)
{
if (!exp[i].isSorted())
{
writeLogError_("Error: Not all spectra are sorted according to peak m/z positions. Use FileFilter to sort the input!");
return INCOMPATIBLE_INPUT_DATA;
}
}
//check if chromatograms are sorted
for (Size i = 0; i < exp.getChromatograms().size(); ++i)
{
if (!exp.getChromatogram(i).isSorted())
{
writeLogError_("Error: Not all chromatograms are sorted according to peak m/z positions. Use FileFilter to sort the input!");
return INCOMPATIBLE_INPUT_DATA;
}
}
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
sgolay.filterExperiment(exp);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::SMOOTHING));
mz_data_file.storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
String in;
String out;
};
int main(int argc, const char ** argv)
{
TOPPNoiseFilterSGolay tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/OpenSwathRTNormalizer.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hannes Roest, George Rosenberger $
// $Authors: Hannes Roest, George Rosenberger $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/CONCEPT/ProgressLogger.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMFeatureFinderScoring.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/DataAccessHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/DATAACCESS/SimpleOpenMSSpectraAccessFactory.h>
#include <OpenMS/ANALYSIS/OPENSWATH/OpenSwathHelper.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMRTNormalizer.h>
using namespace OpenMS;
//
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenSwathRTNormalizer OpenSwathRTNormalizer
@brief The OpenSwathRTNormalizer finds retention time normalization peptides in the data.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </td>
<td VALIGN="middle" ROWSPAN=3> → OpenSwathRTNormalizer →</td>
<th ALIGN = "center"> potential successor tools </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2></td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathAnalyzer </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_OpenSwathWorkflow </td>
</tr>
</table>
</CENTER>
This tool will find retention time normalization peptides in data and use them
to generate a transformation between the experimental RT space and the
normalized RT space. The output is a transformation file on how to transform
the RT space into the normalized space.
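By default the fitted model is linear, i.e. conceptually an ordinary
least-squares fit over the (experimental RT, normalized RT) pairs that survive
outlier removal. A self-contained sketch of that fit (illustrative only; the
tool itself uses TransformationDescription::fitModel):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Ordinary least-squares fit y = a + b * x over (experimental RT, library RT)
// pairs; returns {a, b}. At least two distinct x values are required.
std::pair<double, double> fitLinear(const std::vector<std::pair<double, double>>& pairs)
{
  double n = static_cast<double>(pairs.size());
  double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
  for (const auto& p : pairs)
  {
    sx += p.first;
    sy += p.second;
    sxx += p.first * p.first;
    sxy += p.first * p.second;
  }
  double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
  double a = (sy - b * sx) / n;                         // intercept
  return {a, b};
}
```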
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenSwathRTNormalizer.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenSwathRTNormalizer.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPOpenSwathRTNormalizer : public TOPPBase
{
public:
TOPPOpenSwathRTNormalizer() :
TOPPBase("OpenSwathRTNormalizer",
"Generate a transformation file on how to transform the RT space into the normalized space given a description of RT peptides and their normalized retention time.",
true)
{
}
protected:
typedef PeakMap MapType;
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<files>", StringList(), "Input files separated by blank");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFile_("tr", "<file>", "", "transition file with the RT peptides ('TraML' or 'csv')");
setValidFormats_("tr", ListUtils::create<String>("csv,traML"));
registerOutputFile_("out", "<file>", "", "output file");
setValidFormats_("out", ListUtils::create<String>("trafoXML"));
registerInputFile_("rt_norm", "<file>", "", "RT normalization file (how to map the RTs of this run to the ones stored in the library)", false);
setValidFormats_("rt_norm", ListUtils::create<String>("trafoXML"));
registerDoubleOption_("min_rsq", "<double>", 0.95, "Minimum r-squared of RT peptides regression", false);
registerDoubleOption_("min_coverage", "<double>", 0.6, "Minimum relative amount of RT peptides to keep", false);
registerFlag_("estimateBestPeptides", "Whether the algorithms should try to choose the best peptides based on their peak shape for normalization. Use this option if you do not expect all your peptides to be detected in a sample and too many 'bad' peptides would enter the outlier removal step (e.g. due to them being endogenous peptides or due to using a less curated list of peptides).", false);
registerSubsection_("algorithm", "Algorithm parameters section");
registerSubsection_("peptideEstimation", "Parameters for the peptide estimation (use -estimateBestPeptides to enable).");
registerSubsection_("RTNormalization", "Parameters for the RTNormalization. RT normalization and outlier detection can be done iteratively (by default) which removes one outlier per iteration or using the RANSAC algorithm.");
}
Param getSubsectionDefaults_(const String & section) const override
{
if (section == "algorithm")
{
return MRMFeatureFinderScoring().getDefaults();
}
else if (section == "peptideEstimation")
{
Param p;
p.setValue("InitialQualityCutoff", 0.5, "The initial overall quality cutoff for a peak to be scored (range ca. -2 to 2)");
p.setValue("OverallQualityCutoff", 5.5, "The overall quality cutoff for a peak to go into the retention time estimation (range ca. 0 to 10)");
p.setValue("NrRTBins", 10, "Number of RT bins to use to compute coverage. This option should be used to ensure that there is a complete coverage of the RT space (this should detect cases where only a part of the RT gradient is actually covered by normalization peptides)");
p.setValue("MinPeptidesPerBin", 1, "Minimal number of peptides that are required for a bin to counted as 'covered'");
p.setValue("MinBinsFilled", 8, "Minimal number of bins required to be covered");
return p;
}
else if (section == "RTNormalization")
{
Param p;
p.setValue("outlierMethod", "iter_residual", "Which outlier detection method to use (valid: 'iter_residual', 'iter_jackknife', 'ransac', 'none'). Iterative methods remove one outlier at a time. Jackknife approach optimizes for maximum r-squared improvement while 'iter_residual' removes the datapoint with the largest residual error (removal by residual is computationally cheaper, use this with lots of peptides).");
p.setValidStrings("outlierMethod", {"iter_residual","iter_jackknife","ransac","none"});
p.setValue("useIterativeChauvenet", "false", "Whether to use Chauvenet's criterion when using iterative methods. This should be used if the algorithm removes too many datapoints but it may lead to true outliers being retained.");
p.setValidStrings("useIterativeChauvenet", {"true","false"});
p.setValue("RANSACMaxIterations", 1000, "Maximum iterations for the RANSAC outlier detection algorithm.");
p.setValue("RANSACMaxPercentRTThreshold", 3, "Maximum threshold in RT dimension for the RANSAC outlier detection algorithm (in percent of the total gradient). The default of 3% corresponds to about +/- 4 minutes on a 120 minute gradient.");
p.setValue("RANSACSamplingSize", 10, "Sampling size of data points per iteration for the RANSAC outlier detection algorithm.");
return p;
}
return Param();
}
ExitCodes main_(int, const char **) override
{
///////////////////////////////////
// Read input files and parameters
///////////////////////////////////
StringList file_list = getStringList_("in");
String tr_file_str = getStringOption_("tr");
String out = getStringOption_("out");
double min_rsq = getDoubleOption_("min_rsq");
double min_coverage = getDoubleOption_("min_coverage");
bool estimateBestPeptides = getFlag_("estimateBestPeptides");
const char * tr_file = tr_file_str.c_str();
MapType all_xic_maps; // all XICs from all files
OpenSwath::LightTargetedExperiment targeted_exp;
std::cout << "Loading TraML file" << std::endl;
{
TargetedExperiment transition_exp_;
FileHandler().loadTransitions(tr_file, transition_exp_, {FileTypes::TRAML});
OpenSwathDataAccessHelper::convertTargetedExp(transition_exp_, targeted_exp);
}
Param pepEstimationParams = getParam_().copy("peptideEstimation:", true);
Param RTNormParams = getParam_().copy("RTNormalization:", true);
String outlier_method = RTNormParams.getValue("outlierMethod").toString();
// 1. Estimate the retention time range of the whole experiment
std::pair<double,double> RTRange = OpenSwathHelper::estimateRTRange(targeted_exp);
std::cout << "Detected retention time range from " << RTRange.first << " to " << RTRange.second << std::endl;
// 2. Store the peptide retention times in an intermediate map
std::map<std::string, double> PeptideRTMap;
for (Size i = 0; i < targeted_exp.getCompounds().size(); i++)
{
PeptideRTMap[targeted_exp.getCompounds()[i].id] = targeted_exp.getCompounds()[i].rt;
}
TransformationDescription trafo;
// If we have a transformation file, trafo will transform the RT in the
// scoring according to the model. If we don't have one, it will apply the
// null transformation.
if (!getStringOption_("rt_norm").empty())
{
String trafo_in = getStringOption_("rt_norm");
String model_type = "linear"; //getStringOption_("model:type");
FileHandler().loadTransformations(trafo_in, trafo, true, {FileTypes::TRANSFORMATIONXML});
}
///////////////////////////////////
// Start computation
///////////////////////////////////
// 3. Extract the RT pairs from the input data
std::vector<std::pair<double, double> > pairs;
for (Size i = 0; i < file_list.size(); ++i)
{
std::shared_ptr<MapType> swath_map (new MapType()); // the map with the extracted ion chromatograms
std::shared_ptr<MapType> xic_map (new MapType());
FeatureMap featureFile;
std::cout << "RT Normalization working on " << file_list[i] << std::endl;
FileHandler().loadExperiment(file_list[i], *xic_map.get(), {FileTypes::MZML}, log_type_);
// Initialize the featureFile and set its parameters (e.g. disable the
// RT score, since we do not yet know the RT transformation)
MRMFeatureFinderScoring featureFinder;
Param scoring_params = getParam_().copy("algorithm:", true);
scoring_params.setValue("Scores:use_rt_score", "false");
scoring_params.setValue("Scores:use_elution_model_score", "false");
if (estimateBestPeptides)
{
scoring_params.setValue("TransitionGroupPicker:compute_peak_quality", "true");
scoring_params.setValue("TransitionGroupPicker:minimal_quality", pepEstimationParams.getValue("InitialQualityCutoff"));
}
featureFinder.setParameters(scoring_params);
featureFinder.setStrictFlag(false);
std::vector< OpenSwath::SwathMap > swath_maps(1);
swath_maps[0].sptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(swath_map);
OpenSwath::SpectrumAccessPtr chromatogram_ptr = SimpleOpenMSSpectraFactory::getSpectrumAccessOpenMSPtr(xic_map);
OpenMS::MRMFeatureFinderScoring::TransitionGroupMapType transition_group_map;
featureFinder.pickExperiment(chromatogram_ptr, featureFile, targeted_exp, trafo, swath_maps, transition_group_map);
// add all the chromatograms to the output
for (Size k = 0; k < xic_map->getChromatograms().size(); k++)
{
all_xic_maps.addChromatogram(xic_map->getChromatograms()[k]);
}
// find most likely correct feature for each group and add it to the
// "pairs" vector by computing pairs of iRT and real RT
std::map<std::string, double> res = OpenSwathHelper::simpleFindBestFeature(transition_group_map,
estimateBestPeptides, pepEstimationParams.getValue("OverallQualityCutoff"));
for (std::map<std::string, double>::iterator it = res.begin(); it != res.end(); ++it)
{
pairs.push_back(std::make_pair(it->second, PeptideRTMap[it->first])); // pair<exp_rt, theor_rt>
}
}
// 4. Perform the outlier detection
std::vector<std::pair<double, double> > pairs_corrected;
if (outlier_method == "iter_residual" || outlier_method == "iter_jackknife")
{
pairs_corrected = MRMRTNormalizer::removeOutliersIterative(pairs, min_rsq, min_coverage,
RTNormParams.getValue("useIterativeChauvenet").toBool(), outlier_method);
}
else if (outlier_method == "ransac")
{
// First, estimate the maximum deviation in RT that is tolerated:
// Because a 120 min gradient can show around 4 min of elution shift, we use
// a default value of 3% of the gradient as the upper RT threshold (3.6 min).
double pcnt_rt_threshold = RTNormParams.getValue("RANSACMaxPercentRTThreshold");
double max_rt_threshold = (RTRange.second - RTRange.first) * pcnt_rt_threshold / 100.0;
pairs_corrected = MRMRTNormalizer::removeOutliersRANSAC(pairs, min_rsq, min_coverage,
RTNormParams.getValue("RANSACMaxIterations"), max_rt_threshold,
RTNormParams.getValue("RANSACSamplingSize"));
}
else if (outlier_method == "none")
{
pairs_corrected = pairs;
}
else
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
String("Illegal argument '") + outlier_method + "' used for outlierMethod (valid: 'iter_residual', 'iter_jackknife', 'ransac', 'none').");
}
// 5. Check whether the found peptides fulfill the binned coverage criteria
// set by the user.
bool enoughPeptides = MRMRTNormalizer::computeBinnedCoverage(RTRange, pairs_corrected,
pepEstimationParams.getValue("NrRTBins"),
pepEstimationParams.getValue("MinPeptidesPerBin"),
pepEstimationParams.getValue("MinBinsFilled") );
if (estimateBestPeptides && !enoughPeptides)
{
throw Exception::IllegalArgument(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"There were not enough bins with the minimal number of peptides");
}
///////////////////////////////////
// Write output
///////////////////////////////////
TransformationDescription trafo_out;
trafo_out.setDataPoints(pairs_corrected);
Param model_params;
model_params.setValue("symmetric_regression", "false");
String model_type = "linear";
trafo_out.fitModel(model_type, model_params);
FileHandler().storeTransformations(out, trafo_out, {FileTypes::TRANSFORMATIONXML});
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPOpenSwathRTNormalizer tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/OpenMSDatabasesInfo.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Julianus Pfeuffer $
// $Authors: Julianus Pfeuffer $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/CHEMISTRY/ModificationsDB.h>
#include <OpenMS/CHEMISTRY/ProteaseDB.h>
#include <OpenMS/CHEMISTRY/DigestionEnzymeProtein.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_OpenMSDatabasesInfo OpenMSDatabasesInfo
@brief Information about OpenMS' internal databases
This util prints the content of OpenMS' enzyme and modification databases to a TSV file.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_OpenMSDatabasesInfo.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_OpenMSDatabasesInfo.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class OpenMSDatabasesInfo final :
public TOPPBase
{
public:
OpenMSDatabasesInfo() :
TOPPBase("OpenMSDatabasesInfo", "Prints the content of OpenMS' enzyme and modification databases to TSV")
{
}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
// Output CSV file
registerOutputFile_("enzymes_out", "<out>", "", "Currently supported enzymes as TSV", true, false);
setValidFormats_("enzymes_out", ListUtils::create<String>("tsv"));
registerOutputFile_("mods_out", "<out>", "", "Currently supported modifications as TSV", true, false);
setValidFormats_("mods_out", ListUtils::create<String>("tsv"));
}
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char**) final
{
auto* enz_db = ProteaseDB::getInstance();
enz_db->writeTSV(getStringOption_("enzymes_out"));
auto* mod_db = ModificationsDB::getInstance();
mod_db->writeTSV(getStringOption_("mods_out"));
return EXECUTION_OK;
}
};
// the actual main function needed to create an executable
int main(int argc, const char **argv)
{
OpenMSDatabasesInfo tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/DatabaseFilter.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Oliver Alka $
// $Authors: Oliver Alka $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/CONCEPT/LogStream.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_DatabaseFilter DatabaseFilter
@brief The DatabaseFilter tool filters a protein database in fasta format according to one or multiple filtering criteria.
The resulting database is written as output. Depending on the reporting method (method="whitelist" or "blacklist"), only entries are retained that passed all filters ("whitelist") or failed at least one filter ("blacklist").
Implemented filter criteria:
accession: Filter database according to the set of protein accessions contained in an identification file (idXML, mzIdentML)
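The retention rule reduces to a single boolean comparison: an entry is kept
iff (found in the ID accession set) equals (method is "whitelist"). A
self-contained sketch with std::set standing in for the parsed identification
data (illustrative only, not the tool's FASTAFile types):

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Keep an accession if membership in the ID set matches the reporting method:
// whitelist keeps entries found in the set, blacklist keeps the rest.
std::vector<std::string> filterAccessions(const std::vector<std::string>& db,
                                          const std::set<std::string>& id_accessions,
                                          bool whitelist)
{
  std::vector<std::string> kept;
  for (const std::string& acc : db)
  {
    const bool found = id_accessions.count(acc) > 0;
    if (found == whitelist) kept.push_back(acc);
  }
  return kept;
}
```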
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_DatabaseFilter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_DatabaseFilter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPDatabaseFilter :
public TOPPBase
{
public:
TOPPDatabaseFilter() :
TOPPBase("DatabaseFilter", "Filters a protein database (FASTA format) based on identified proteins")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input FASTA file, containing a protein database.");
setValidFormats_("in", {"fasta"});
registerInputFile_("id", "<file>", "", "Input file containing identified peptides and proteins.");
setValidFormats_("id", {"idXML", "mzid"});
registerStringOption_("method", "<choice>", "whitelist", "Switch between white-/blacklisting of protein IDs", false);
setValidStrings_("method", {"whitelist", "blacklist"});
registerOutputFile_("out", "<file>", "", "Output FASTA file where the reduced database will be written to.");
setValidFormats_("out", {"fasta"});
}
void filterByProteinAccessions_(const vector<FASTAFile::FASTAEntry>& db,
const PeptideIdentificationList& peptide_identifications,
bool whitelist,
vector<FASTAFile::FASTAEntry>& db_new)
{
set<String> id_accessions;
for (const auto& pep_id : peptide_identifications)
{
for (const auto& hit : pep_id.getHits())
{
for (const auto& ev : hit.getPeptideEvidences())
{
const String& id_accession = ev.getProteinAccession();
id_accessions.insert(id_accession);
}
}
}
OPENMS_LOG_INFO << "Number of Protein IDs: " << id_accessions.size() << endl;
for (const auto& entry : db)
{
const String& fasta_accession = entry.identifier;
const bool found = id_accessions.find(fasta_accession) != id_accessions.end();
if ((found && whitelist) || (! found && ! whitelist)) // either found in the whitelist or not found in the blacklist
{
db_new.push_back(entry);
}
}
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in(getStringOption_("in"));
String ids(getStringOption_("id"));
String method(getStringOption_("method"));
bool whitelist = (method == "whitelist");
String out(getStringOption_("out"));
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
vector<FASTAFile::FASTAEntry> db;
FASTAFile().load(in, db);
// Check if no filter criteria was given
// If you add a new filter please check if it was set here as well
if (ids.empty())
{
FASTAFile().store(out, db);
return EXECUTION_OK; // no filter criterion given: write the database unchanged
}
vector<FASTAFile::FASTAEntry> db_new;
if (!ids.empty()) // filter by protein accessions in id files
{
FileHandler fh;
FileTypes::Type ids_type = fh.getType(ids);
vector<ProteinIdentification> protein_identifications;
PeptideIdentificationList peptide_identifications;
if (ids_type == FileTypes::IDXML || ids_type == FileTypes::MZIDENTML)
{
FileHandler().loadIdentifications(ids, protein_identifications, peptide_identifications, {FileTypes::IDXML, FileTypes::MZIDENTML});
}
else
{
writeLogError_("Error: Unknown input file type given. Aborting!");
printUsage_();
return ILLEGAL_PARAMETERS;
}
OPENMS_LOG_INFO << "Peptide identifications: " << peptide_identifications.size() << endl;
// run filter
filterByProteinAccessions_(db, peptide_identifications, whitelist, db_new);
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
OPENMS_LOG_INFO << "Database entries (before / after): " << db.size() << " / " << db_new.size() << endl;
FASTAFile().store(out, db_new);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPDatabaseFilter tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/AssayGeneratorMetabo.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Oliver Alka, Axel Walter $
// $Authors: Oliver Alka $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/ID/SiriusExportAlgorithm.h>
#include <OpenMS/ANALYSIS/OPENSWATH/MRMAssay.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionPQPFile.h>
#include <OpenMS/ANALYSIS/OPENSWATH/TransitionTSVFile.h>
#include <OpenMS/ANALYSIS/TARGETED/MetaboTargetedAssay.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/CALIBRATION/PrecursorCorrection.h>
#include <OpenMS/PROCESSING/DEISOTOPING/Deisotoper.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/RangeUtils.h>
using namespace OpenMS;
//-------------------------------------------------------------
//Doxygen docu
//----------------------------------------------------------
/**
@page TOPP_AssayGeneratorMetabo AssayGeneratorMetabo
@brief Generates an assay library using DDA data (Metabolomics)
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → AssayGeneratorMetabo →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderMetabo </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> OpenSWATH pipeline </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_AccurateMassSearch </td>
</tr>
</table>
</CENTER>
Build an assay library from DDA data (MS and MS/MS) (mzML).
Please provide a list of features found in the data (featureXML).
Features can be detected using FeatureFinderMetabo (FFM), and identification information
can be added using the AccurateMassSearch featureXML output.
If the FeatureFinderMetabo featureXML does not contain any identifications, the "use_known_unknowns" flag is enabled automatically.
Internal procedure AssayGeneratorMetabo: \n
1. Input mzML and featureXML \n
2. Reannotate precursor mz and intensity \n
3. Filter feature by number of masstraces \n
4. Assign precursors to specific feature (FeatureMapping) \n
5. Extract feature meta information (if possible) \n
6. Find MS2 spectrum with highest intensity precursor for one feature \n
7. Either the MS2 spectrum with the highest intensity precursor or a consensus spectrum is used for the transition extraction. \n
8. Calculate thresholds (maximum and minimum intensity for transition peak) \n
9. Extract and write transitions (tsv, traml)
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_AssayGeneratorMetabo.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_AssayGeneratorMetabo.html
*/
/// @cond TOPPCLASSES
class TOPPAssayGeneratorMetabo :
public TOPPBase,
private TransitionTSVFile
{
public:
TOPPAssayGeneratorMetabo() :
TOPPBase("AssayGeneratorMetabo", "Assay library generation from DDA data (Metabolomics)")
{}
private:
SiriusExportAlgorithm sirius_export_algorithm;
protected:
void registerOptionsAndFlags_() override
{
registerInputFileList_("in", "<file(s)>", StringList(), "MzML input file(s) used for assay library generation");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerInputFileList_("in_featureinfo", "<file(s)>", StringList(), "FeatureXML input file(s) containing identification information (e.g. AccurateMassSearch)");
setValidFormats_("in_featureinfo", ListUtils::create<String>("featureXML"));
registerOutputFile_("out", "<file>", "", "Assay library output file");
setValidFormats_("out", ListUtils::create<String>("tsv,traML,pqp"));
registerDoubleOption_("ambiguity_resolution_mz_tolerance", "<num>", 10.0, "Mz tolerance for the resolution of identification ambiguity over multiple files", false);
registerStringOption_("ambiguity_resolution_mz_tolerance_unit", "<choice>", "ppm", "Unit of the ambiguity_resolution_mz_tolerance", false, true);
setValidStrings_("ambiguity_resolution_mz_tolerance_unit", ListUtils::create<String>("ppm,Da"));
registerDoubleOption_("ambiguity_resolution_rt_tolerance", "<num>", 10.0, "RT tolerance in seconds for the resolution of identification ambiguity over multiple files", false);
registerDoubleOption_("total_occurrence_filter", "<num>", 0.1, "Filter compound based on total occurrence in analysed samples", false);
setMinFloat_("total_occurrence_filter", 0.0);
setMaxFloat_("total_occurrence_filter", 1.0);
registerStringOption_("method", "<choice>", "highest_intensity", "Spectrum with the highest precursor intensity or a consensus spectrum is used for assay library construction (if no fragment annotation is used).",false);
setValidStrings_("method", ListUtils::create<String>("highest_intensity,consensus_spectrum"));
registerFlag_("exclude_ms2_precursor", "Excludes precursor in ms2 from transition list", false);
registerFlag_("use_known_unknowns", "Use features without identification information", false);
// transition extraction
registerIntOption_("min_transitions", "<int>", 3, "Minimal number of transitions", false);
registerIntOption_("max_transitions", "<int>", 6, "Maximal number of transitions", false);
registerDoubleOption_("cosine_similarity_threshold", "<num>", 0.98, "Threshold for cosine similarity of MS2 spectra from the same precursor used in consensus spectrum creation", false);
registerDoubleOption_("transition_threshold", "<num>", 5, "Further transitions need at least x% of the maximum intensity (default 5%)", false);
registerDoubleOption_("min_fragment_mz", "<num>", 0.0, "Minimal m/z of a fragment ion chosen as a transition", false, true);
registerDoubleOption_("max_fragment_mz", "<num>", 2000.0, "Maximal m/z of a fragment ion chosen as a transition", false, true);
// precursor
addEmptyLine_();
registerDoubleOption_("precursor_mz_distance", "<num>", 0.0001, "Max m/z distance of the precursor entries of two spectra to be merged in [Da].", false);
registerDoubleOption_("precursor_recalibration_window", "<num>", 0.01, "Tolerance window for precursor selection (Annotation of precursor mz and intensity)", false, true);
registerStringOption_("precursor_recalibration_window_unit", "<choice>", "Da", "Unit of the precursor_recalibration_window", false, true);
setValidStrings_("precursor_recalibration_window_unit", ListUtils::create<String>("Da,ppm"));
registerDoubleOption_("precursor_consensus_spectrum_rt_tolerance", "<num>", 5, "Tolerance window (left and right) for precursor selection [seconds], for consensus spectrum generation (only available without fragment annotation)", false);
addEmptyLine_();
registerFlag_("deisotoping_use_deisotoper", "Use Deisotoper (if no fragment annotation is used)", false);
registerDoubleOption_("deisotoping_fragment_tolerance", "<num>", 1, "Tolerance used to match isotopic peaks", false);
registerStringOption_("deisotoping_fragment_unit", "<choice>", "ppm", "Unit of the fragment tolerance", false);
setValidStrings_("deisotoping_fragment_unit", ListUtils::create<String>("ppm,Da"));
registerIntOption_("deisotoping_min_charge", "<num>", 1, "The minimum charge considered", false);
setMinInt_("deisotoping_min_charge", 1);
registerIntOption_("deisotoping_max_charge", "<num>", 1, "The maximum charge considered", false);
setMinInt_("deisotoping_max_charge", 1);
registerIntOption_("deisotoping_min_isopeaks", "<num>", 2, "The minimum number of isotopic peaks (at least 2) required for an isotopic cluster", false);
setMinInt_("deisotoping_min_isopeaks", 2);
registerIntOption_("deisotoping_max_isopeaks", "<num>", 3, "The maximum number of isotopic peaks (at least 2) considered for an isotopic cluster", false);
setMinInt_("deisotoping_max_isopeaks", 3);
registerFlag_("deisotoping_keep_only_deisotoped", "Only monoisotopic peaks of fragments with isotopic pattern are retained", false);
registerFlag_("deisotoping_annotate_charge", "Annotate the charge to the peaks", false);
addEmptyLine_();
auto defaults = sirius_export_algorithm.getDefaults();
defaults.remove("isotope_pattern_iterations");
defaults.remove("no_masstrace_info_isotope_pattern");
registerFullParam_(defaults);
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// Parsing parameters
//-------------------------------------------------------------
// param AssayGeneratorMetabo
StringList in = getStringList_("in");
StringList id = getStringList_("in_featureinfo");
String out = getStringOption_("out");
String method = getStringOption_("method");
double ar_mz_tol = getDoubleOption_("ambiguity_resolution_mz_tolerance");
String ar_mz_tol_unit_res = getStringOption_("ambiguity_resolution_mz_tolerance_unit");
double ar_rt_tol = getDoubleOption_("ambiguity_resolution_rt_tolerance");
double total_occurrence_filter = getDoubleOption_("total_occurrence_filter");
bool method_consensus_spectrum = (method == "consensus_spectrum");
bool exclude_ms2_precursor = getFlag_("exclude_ms2_precursor");
int min_transitions = getIntOption_("min_transitions");
int max_transitions = getIntOption_("max_transitions");
double min_fragment_mz = getDoubleOption_("min_fragment_mz");
double max_fragment_mz = getDoubleOption_("max_fragment_mz");
double consensus_spectrum_precursor_rt_tolerance = getDoubleOption_("precursor_consensus_spectrum_rt_tolerance");
double pre_recal_win = getDoubleOption_("precursor_recalibration_window");
String pre_recal_win_unit = getStringOption_("precursor_recalibration_window_unit");
bool ppm_recal = (pre_recal_win_unit == "ppm");
double precursor_mz_distance = getDoubleOption_("precursor_mz_distance");
double cosine_sim_threshold = getDoubleOption_("cosine_similarity_threshold");
double transition_threshold = getDoubleOption_("transition_threshold");
bool use_known_unknowns = getFlag_("use_known_unknowns");
// param deisotoper
bool use_deisotoper = getFlag_("deisotoping_use_deisotoper");
double fragment_tolerance = getDoubleOption_("deisotoping_fragment_tolerance");
String fragment_unit = getStringOption_("deisotoping_fragment_unit");
bool fragment_unit_ppm = (fragment_unit == "ppm");
int min_charge = getIntOption_("deisotoping_min_charge");
int max_charge = getIntOption_("deisotoping_max_charge");
unsigned int min_isopeaks = getIntOption_("deisotoping_min_isopeaks");
unsigned int max_isopeaks = getIntOption_("deisotoping_max_isopeaks");
bool keep_only_deisotoped = getFlag_("deisotoping_keep_only_deisotoped");
bool annotate_charge = getFlag_("deisotoping_annotate_charge");
writeDebug_("Parameters passed to SiriusExportAlgorithm", sirius_export_algorithm.getParameters(), 3);
//-------------------------------------------------------------
// input and check
//-------------------------------------------------------------
// check size of .mzML & .featureXML input
if (in.size() != id.size())
{
throw Exception::MissingInformation(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION,
"Number of .mzML files does not match the number of .featureXML files.\nPlease check and provide the corresponding files.");
}
std::vector<MetaboTargetedAssay> v_mta;
// iterate over all the files
for (unsigned file_counter = 0; file_counter < in.size(); file_counter++)
{
// load mzML
PeakMap spectra;
FileHandler().loadExperiment(in[file_counter], spectra, {FileTypes::MZML}, log_type_);
// load featurexml
FeatureMap feature_map;
FileHandler().loadFeatures(id[file_counter], feature_map, {FileTypes::FEATUREXML}, log_type_);
// check if featureXML corresponds to mzML
StringList featurexml_primary_path;
feature_map.getPrimaryMSRunPath(featurexml_primary_path);
if (in[file_counter] != featurexml_primary_path[0]) // featureXML should only have one primary path
{
OPENMS_LOG_WARN << "Warning: Original paths of the mzML files do not correspond to the featureXML files. Please check and provide the corresponding files." << std::endl;
OPENMS_LOG_WARN << "Input MzML: " << in[file_counter] << std::endl;
OPENMS_LOG_WARN << "Input FeatureXML: " << id[file_counter] << std::endl;
OPENMS_LOG_WARN << "Original paths: " << std::endl;
for (const String& it_fpp : featurexml_primary_path)
{
OPENMS_LOG_WARN << " " << it_fpp << std::endl;
}
}
// determine type of spectral data (profile or centroided)
if (!spectra[0].empty())
{
SpectrumSettings::SpectrumType spectrum_type = spectra[0].getType();
if (spectrum_type == SpectrumSettings::SpectrumType::PROFILE)
{
if (!getFlag_("force"))
{
throw OpenMS::Exception::FileEmpty(__FILE__,
__LINE__,
__FUNCTION__,
"Error: Profile data provided but centroided spectra expected. ");
}
}
}
//-------------------------------------------------------------
// Processing
//-------------------------------------------------------------
// sort spectra
spectra.sortSpectra();
// check if the correct featureXML is given and set the use_known_unknowns parameter if no id information is available
const std::vector<DataProcessing> &processing = feature_map.getDataProcessing();
for (auto it = processing.begin(); it != processing.end(); ++it)
{
if (it->getSoftware().getName() == "FeatureFinderMetabo")
{
// if id information is missing set use_known_unknowns to true
if (feature_map.getProteinIdentifications().empty())
{
use_known_unknowns = true;
OPENMS_LOG_INFO << "Due to the use of data without previous identification "
<< "use_known_unknowns will be switched on." << std::endl;
}
}
}
// annotate and recalibrate precursor mz and intensity
std::vector<double> delta_mzs, mzs, rts;
PrecursorCorrection::correctToHighestIntensityMS1Peak(spectra, pre_recal_win, ppm_recal, delta_mzs, mzs, rts);
// always use preprocessing:
// run masstrace filter and feature mapping
FeatureMapping::FeatureMappingInfo fm_info;
FeatureMapping::FeatureToMs2Indices feature_mapping; // reference to *basefeature in vector<FeatureMap>
sirius_export_algorithm.preprocessing(id[file_counter],
spectra,
fm_info,
feature_mapping);
// filter known_unknowns based on description (UNKNOWN) (AMS)
std::map<const BaseFeature*, std::vector<size_t>> feature_ms2_spectra_map = feature_mapping.assignedMS2;
std::map<const BaseFeature*, std::vector<size_t>> known_features;
if (!use_known_unknowns)
{
for (auto it = feature_ms2_spectra_map.begin(); it != feature_ms2_spectra_map.end(); ++it)
{
const BaseFeature *feature = it->first;
if (!(feature->getPeptideIdentifications().empty()) &&
!(feature->getPeptideIdentifications()[0].getHits().empty()))
{
String description;
// one hit is enough for prefiltering
description = feature->getPeptideIdentifications()[0].getHits()[0].getMetaValue("description");
// change format of description [name] to name
description.erase(remove_if(begin(description),
end(description),
[](char c) { return c == '[' || c == ']'; }), end(description));
known_features.insert({it->first, it->second});
}
}
feature_mapping.assignedMS2 = known_features;
}
if (use_deisotoper)
{
bool make_single_charged = false;
for (auto& peakmap_it : spectra)
{
MSSpectrum& spectrum = peakmap_it;
if (spectrum.getMSLevel() == 1)
{
continue;
}
else
{
Deisotoper::deisotopeAndSingleCharge(spectrum,
fragment_tolerance,
fragment_unit_ppm,
min_charge,
max_charge,
keep_only_deisotoped,
min_isopeaks,
max_isopeaks,
make_single_charged,
annotate_charge);
}
}
}
// remove peaks from MS2 which are at a higher m/z than the precursor + 10 ppm
for (auto& peakmap_it : spectra)
{
MSSpectrum& spectrum = peakmap_it;
if (spectrum.getMSLevel() == 1)
{
continue;
}
else
{
// if peak mz higher than precursor mz set intensity to zero
double prec_mz = spectrum.getPrecursors()[0].getMZ();
double mass_diff = Math::ppmToMass(10.0, prec_mz);
for (auto& spec : spectrum)
{
if (spec.getMZ() > prec_mz + mass_diff)
{
spec.setIntensity(0);
}
}
spectrum.erase(remove_if(spectrum.begin(),
spectrum.end(),
InIntensityRange<PeakMap::PeakType>(1,
std::numeric_limits<PeakMap::PeakType::IntensityType>::max(),
true)), spectrum.end());
}
}
// potential transitions of one file
std::vector<MetaboTargetedAssay> tmp_mta;
tmp_mta = MetaboTargetedAssay::extractMetaboTargetedAssay(spectra,
feature_mapping,
consensus_spectrum_precursor_rt_tolerance,
precursor_mz_distance,
cosine_sim_threshold,
transition_threshold,
min_fragment_mz,
max_fragment_mz,
method_consensus_spectrum,
exclude_ms2_precursor,
file_counter);
// append potential transitions of one file to vector of all files
v_mta.insert(v_mta.end(), tmp_mta.begin(), tmp_mta.end());
} // end iteration over all files
// group ambiguous identification based on precursor_mz and feature retention time
// Use featureMap and use FeatureGroupingAlgorithmQT
std::unordered_map< UInt64, std::vector<MetaboTargetedAssay> > ambiguity_groups = MetaboTargetedAssay::buildAmbiguityGroup(v_mta, ar_mz_tol, ar_rt_tol, ar_mz_tol_unit_res, in.size());
// resolve identification ambiguity based on highest occurrence and highest intensity
MetaboTargetedAssay::resolveAmbiguityGroup(ambiguity_groups, total_occurrence_filter, in.size());
// merge possible transitions
std::vector<TargetedExperiment::Compound> v_cmp;
std::vector<ReactionMonitoringTransition> v_rmt_all;
for (const auto &it : ambiguity_groups)
{
for (const auto &comp_it : it.second)
{
v_cmp.push_back(comp_it.potential_cmp);
v_rmt_all.insert(v_rmt_all.end(), comp_it.potential_rmts.begin(), comp_it.potential_rmts.end());
}
}
// convert possible transitions to TargetedExperiment
TargetedExperiment t_exp;
t_exp.setCompounds(v_cmp);
t_exp.setTransitions(v_rmt_all);
// use MRMAssay methods for filtering
MRMAssay assay;
// sort by highest intensity - filter
assay.filterMinMaxTransitionsCompound(t_exp, min_transitions, max_transitions);
// sort TargetedExperiment by name (TransitionID)
t_exp.sortTransitionsByName();
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
String extension = out.substr(out.find_last_of(".")+1);
if (extension == "tsv")
{
// validate and write
OpenMS::TransitionTSVFile::convertTargetedExperimentToTSV(out.c_str(), t_exp);
}
else if (extension == "traML")
{
// validate
OpenMS::TransitionTSVFile::validateTargetedExperiment(t_exp);
// write traML
FileHandler().storeTransitions(out, t_exp, {FileTypes::TRAML}, log_type_);
}
else if (extension == "pqp")
{
//validate
OpenMS::TransitionTSVFile::validateTargetedExperiment(t_exp);
// write pqp
TransitionPQPFile pqp_out;
pqp_out.convertTargetedExperimentToPQP(out.c_str(), t_exp);
}
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPAssayGeneratorMetabo tool;
return tool.main(argc, argv);
}
/// @endcond
// File: src/topp/FileMerger.cpp (OpenMS/OpenMS)
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Marc Sturm, Chris Bielow, Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/ANALYSIS/MAPMATCHING/MapAlignmentTransformer.h>
#include <OpenMS/ANALYSIS/MAPMATCHING/TransformationDescription.h>
#include <OpenMS/ANALYSIS/TARGETED/TargetedExperiment.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/DATASTRUCTURES/StringListUtils.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/TextFile.h>
#include <OpenMS/FORMAT/FASTAFile.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <boost/regex.hpp>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FileMerger FileMerger
@brief Merges several files. Multiple output formats supported, depending on the input format.
<B>Supported input/output file type combinations:</B>
<center>
<table>
<tr>
<th ALIGN = "center"> Input file type(s) </th>
<th ALIGN = "center"> Output file type </th>
<th ALIGN = "center"> Notes </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> featureXML </td>
<td VALIGN="middle" ALIGN = "center"> featureXML </td>
<td VALIGN="middle" ALIGN = "left"> Features from multiple files are merged by simple concatenation into a single output file. Peptide and protein identifications are appended; conflicting unique IDs are updated to maintain consistency </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> consensusXML </td>
<td VALIGN="middle" ALIGN = "center"> consensusXML </td>
<td VALIGN="middle" ALIGN = "left"> See append_method parameter (append_rows or append_cols) </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> traML </td>
<td VALIGN="middle" ALIGN = "center"> traML </td>
<td VALIGN="middle" ALIGN = "left"> Targeted experiment transitions are combined </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> fasta </td>
<td VALIGN="middle" ALIGN = "center"> fasta </td>
<td VALIGN="middle" ALIGN = "left"> Protein/peptide sequences are combined; warnings for duplicates </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> mzML, mzXML, mzData </td>
<td VALIGN="middle" ALIGN = "center"> mzML </td>
<td VALIGN="middle" ALIGN = "left"> Raw MS data formats merge to mzML </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> dta, dta2d </td>
<td VALIGN="middle" ALIGN = "center"> mzML </td>
<td VALIGN="middle" ALIGN = "left"> DTA formats merge to mzML; RT handling via raw:* parameters </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center"> mgf, fid </td>
<td VALIGN="middle" ALIGN = "center"> mzML </td>
<td VALIGN="middle" ALIGN = "left"> Other raw data formats merge to mzML </td>
</tr>
</table>
</center>
@note All input files for a single merge operation must be of the same type (or compatible raw data types that all output to mzML).
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → FileMerger →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool/instrument producing mergeable files </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on merged files (e.g. @ref TOPP_CometAdapter for mzML, @ref TOPP_ProteinQuantifier for consensusXML) </td>
</tr>
</table>
</center>
Special attention should be given to the append_method for consensusXMLs. One column corresponds to one channel/label + raw file. Rows are quantified and linked features.
More details on the use cases can be found at the parameter description.
For non-consensusXML or consensusXML merging with append_rows, the meta information that is valid for the whole experiment (e.g. MS instrument and sample)
is taken from the first file only.
For spectrum-containing formats (no feature/consensusXML), the retention times for the individual scans are taken from either:
<ul>
<li>the input file meta data (e.g. mzML)
<li>from the input file names (name must contain 'rt' directly followed by a number, e.g. 'myscan_rt3892.98_MS2.dta')
<li>as a list (one RT for each file)
<li>or are auto-generated (starting at 1 with 1 second increment).
</ul>
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FileMerger.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FileMerger.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFileMerger :
public TOPPBase
{
public:
TOPPFileMerger() :
TOPPBase("FileMerger", "Merges several MS files into one file."),
rt_gap_(0.0), rt_offset_(0.0)
{
}
protected:
double rt_gap_, rt_offset_; // parameters for RT concatenation
void registerOptionsAndFlags_() override
{
StringList valid_in = ListUtils::create<String>("mzData,mzXML,mzML,dta,dta2d,mgf,featureXML,consensusXML,fid,traML,fasta");
registerInputFileList_("in", "<files>", StringList(), "Input files separated by blank");
setValidFormats_("in", valid_in);
registerStringOption_("in_type", "<type>", "", "Input file type (default: determined from file extension or content)", false);
setValidStrings_("in_type", valid_in);
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("mzML,featureXML,consensusXML,traML,fasta"));
registerFlag_("annotate_file_origin", "Store the original filename in each feature using meta value \"file_origin\" (for featureXML and consensusXML only).");
registerStringOption_("append_method", "<choice>", "append_rows", "(ConsensusXML-only) Append quantitative information about features row-wise or column-wise.\n"
"- 'append_rows' is usually used when the inputs come from the same MS run (e.g. caused by manual splitting or multiple algorithms on the same file)\n"
"- 'append_cols' when you want to combine consensusXMLs from e.g. different fractions to be summarized in ProteinQuantifier or jointly exported with MzTabExporter."
, false);
setValidStrings_("append_method", ListUtils::create<String>("append_rows,append_cols"));
registerTOPPSubsection_("rt_concat", "Options for concatenating files in the retention time (RT) dimension. The RT ranges of inputs are adjusted so they don't overlap in the merged file (traML input not supported)");
registerDoubleOption_("rt_concat:gap", "<sec>", 0.0, "The amount of gap (in seconds) to insert between the RT ranges of different input files. RT concatenation is enabled if a value > 0 is set.", false);
registerOutputFileList_("rt_concat:trafo_out", "<files>", vector<String>(), "Output of retention time transformations that were applied to the input files to produce non-overlapping RT ranges. If used, one output file per input file is required.", false);
setValidFormats_("rt_concat:trafo_out", ListUtils::create<String>("trafoXML"));
registerTOPPSubsection_("raw", "Options for raw data input/output (primarily for DTA files)");
registerFlag_("raw:rt_auto", "Assign retention times automatically (integers starting at 1)");
registerDoubleList_("raw:rt_custom", "<rts>", DoubleList(), "List of custom retention times that are assigned to the files. The number of given retention times must be equal to the number of input files.", false);
registerFlag_("raw:rt_filename", "Try to guess the retention time of a file based on the filename. This option is useful for merging DTA files, where filenames should contain the string 'rt' directly followed by a floating point number, e.g. 'my_spectrum_rt2795.15.dta'");
registerIntOption_("raw:ms_level", "<num>", 0, "If 1 or higher, this number is assigned to spectra as the MS level. This option is useful for DTA files which do not contain MS level information.", false);
}
template <class MapType>
void adjustRetentionTimes_(MapType& map, const String& trafo_out,
bool first_file)
{
map.updateRanges();
TransformationDescription trafo;
if (first_file) // no transformation necessary
{
rt_offset_ = map.getMaxRT() + rt_gap_; // overall range for all spectra
trafo.fitModel("identity");
}
else // subsequent file -> apply transformation
{
TransformationDescription::DataPoints points(2);
double rt_min = map.getMinRT(), rt_max = map.getMaxRT();
points[0] = make_pair(rt_min, rt_offset_);
rt_offset_ += rt_max - rt_min;
points[1] = make_pair(rt_max, rt_offset_);
trafo.setDataPoints(points);
trafo.fitModel("linear");
MapAlignmentTransformer::transformRetentionTimes(map, trafo, true);
rt_offset_ += rt_gap_;
}
if (!trafo_out.empty())
{
FileHandler().storeTransformations(trafo_out, trafo, {FileTypes::TRANSFORMATIONXML});
}
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
// file list
StringList file_list = getStringList_("in");
// file type
FileHandler file_handler;
FileTypes::Type force_type;
if (!getStringOption_("in_type").empty())
{
force_type = FileTypes::nameToType(getStringOption_("in_type"));
}
else
{
force_type = file_handler.getType(file_list[0]);
}
// output file names and types
String out_file = getStringOption_("out");
// append method
bool append_rows = false;
bool append_cols = false;
String append_method = getStringOption_("append_method");
if (append_method == "append_rows")
{
append_rows = true;
}
else
{
append_cols = true;
}
bool annotate_file_origin = getFlag_("annotate_file_origin");
rt_gap_ = getDoubleOption_("rt_concat:gap");
vector<String> trafo_out = getStringList_("rt_concat:trafo_out");
if (trafo_out.empty())
{
// resize now so we don't have to worry about indexing out of bounds:
trafo_out.resize(file_list.size());
}
else if (trafo_out.size() != file_list.size())
{
writeLogError_("Error: Number of transformation output files must equal the number of input files (parameters 'rt_concat:trafo_out'/'in')!");
return ILLEGAL_PARAMETERS;
}
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
if (force_type == FileTypes::FEATUREXML)
{
FeatureMap out;
FileHandler fh;
for (Size i = 0; i < file_list.size(); ++i)
{
FeatureMap map;
fh.loadFeatures(file_list[i], map, {FileTypes::FEATUREXML});
if (annotate_file_origin)
{
for (FeatureMap::iterator it = map.begin(); it != map.end(); ++it)
{
it->setMetaValue("file_origin", DataValue(file_list[i]));
}
}
if (rt_gap_ > 0.0) // concatenate in RT
{
adjustRetentionTimes_(map, trafo_out[i], i == 0);
}
out += map;
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// annotate output with data processing info
addDataProcessing_(out, getProcessingInfo_(DataProcessing::FORMAT_CONVERSION));
fh.storeFeatures(out_file, out, {FileTypes::FEATUREXML});
}
else if (force_type == FileTypes::CONSENSUSXML)
{
ConsensusMap out;
FileHandler fh;
// load the metadata from the first file
fh.loadConsensusFeatures(file_list[0], out, {FileTypes::CONSENSUSXML});
// but annotate the origins
if (append_rows) {
if (annotate_file_origin)
{
for (ConsensusFeature& cm : out)
{
cm.setMetaValue("file_origin", DataValue(file_list[0]));
}
}
// skip first file for adding
for (Size i = 1; i < file_list.size(); ++i)
{
ConsensusMap map;
fh.loadConsensusFeatures(file_list[i], map, {FileTypes::CONSENSUSXML});
if (annotate_file_origin)
{
for (ConsensusFeature& cm : map)
{
cm.setMetaValue("file_origin", DataValue(file_list[i]));
}
}
if (rt_gap_ > 0.0) // concatenate in RT
{
adjustRetentionTimes_(map, trafo_out[i], i == 0);
}
out.appendRows(map);
}
}
if (append_cols)
{
// skip first file for adding
for (Size i = 1; i < file_list.size(); ++i)
{
ConsensusMap map;
fh.loadConsensusFeatures(file_list[i], map, {FileTypes::CONSENSUSXML});
out.appendColumns(map);
}
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// annotate output with data processing info
addDataProcessing_(out, getProcessingInfo_(DataProcessing::FORMAT_CONVERSION));
fh.storeConsensusFeatures(out_file, out,{FileTypes::CONSENSUSXML});
}
else if (force_type == FileTypes::FASTA)
{
FASTAFile infile;
FASTAFile outfile;
vector <FASTAFile::FASTAEntry> entries;
vector <FASTAFile::FASTAEntry> temp_entries;
vector <FASTAFile::FASTAEntry>::iterator loopiter;
vector <FASTAFile::FASTAEntry>::iterator iter;
for (Size i = 0; i < file_list.size(); ++i)
{
infile.load(file_list[i], temp_entries);
entries.insert(entries.end(), temp_entries.begin(), temp_entries.end());
}
for (loopiter = entries.begin(); loopiter != entries.end(); ++loopiter)
{
iter = find_if(entries.begin(), loopiter, [&loopiter](const FASTAFile::FASTAEntry& entry) { return entry.headerMatches(*loopiter); });
if (iter != loopiter)
{
std::cout << "Warning: Duplicate header, Number: " << std::distance(entries.begin(), loopiter) + 1 << ", ID: " << loopiter->identifier << " is same as Number: " << std::distance(entries.begin(), iter) + 1 << ", ID: " << iter->identifier << "\n";
}
iter = find_if(entries.begin(), loopiter, [&loopiter](const FASTAFile::FASTAEntry& entry) { return entry.sequenceMatches(*loopiter); });
if (iter != loopiter && iter != entries.end())
{
std::cout << "Warning: Duplicate sequence, Number: " << std::distance(entries.begin(), loopiter) + 1 << ", ID: " << loopiter->identifier << " is same as Number: " << std::distance(entries.begin(), iter) + 1 << ", ID: " << iter->identifier << "\n";
}
}
outfile.store(out_file, entries);
}
else if (force_type == FileTypes::TRAML)
{
TargetedExperiment out;
FileHandler fh;
for (Size i = 0; i < file_list.size(); ++i)
{
TargetedExperiment map;
fh.loadTransitions(file_list[i], map, {FileTypes::TRAML});
out += map;
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// annotate output with data processing info
Software software;
software.setName("FileMerger");
software.setVersion(VersionInfo::getVersion());
out.addSoftware(software);
fh.storeTransitions(out_file, out, {FileTypes::TRAML});
}
else // raw data input (e.g. mzML)
{
// RT
bool rt_auto_number = getFlag_("raw:rt_auto");
bool rt_filename = getFlag_("raw:rt_filename");
bool rt_custom = false;
DoubleList custom_rts = getDoubleList_("raw:rt_custom");
if (!custom_rts.empty())
{
rt_custom = true;
if (custom_rts.size() != file_list.size())
{
writeLogError_("Error: Custom retention time list (parameter 'raw:rt_custom') must have as many elements as there are input files (parameter 'in')!");
return ILLEGAL_PARAMETERS;
}
}
// MS level
Int ms_level = getIntOption_("raw:ms_level");
PeakMap out;
UInt rt_auto = 0;
UInt native_id = 0;
for (Size i = 0; i < file_list.size(); ++i)
{
String filename = file_list[i];
// load file
force_type = file_handler.getType(file_list[i]);
PeakMap in;
file_handler.loadExperiment(filename, in, {force_type}, log_type_, true, true);
if (in.empty() && in.getChromatograms().empty())
{
writeLogWarn_(String("Warning: Empty file '") + filename + "'!");
continue;
}
out.reserve(out.size() + in.size());
// warn if custom RT and more than one scan in input file
if (rt_custom && in.size() > 1)
{
writeLogWarn_(String("Warning: More than one scan in file '") + filename + "'! All scans will have the same retention time!");
}
// handle special raw data options:
for (MSSpectrum& spec : in)
{
float rt_final = spec.getRT();
if (rt_auto_number)
{
rt_final = ++rt_auto;
}
else if (rt_custom)
{
rt_final = custom_rts[i];
}
else if (rt_filename)
{
static const boost::regex re(R"(rt(\d+(\.\d+)?))");
boost::smatch match;
bool found = boost::regex_search(filename, match, re);
if (found)
{
rt_final = String(match[1]).toFloat();
}
else
{
writeLogWarn_("Warning: could not extract retention time from filename '" + filename + "'");
}
}
// none of the rt methods were successful
if (rt_final < 0)
{
writeLogWarn_(String("Warning: No valid retention time for output scan '") + rt_auto + "' from file '" + filename + "'");
}
spec.setRT(rt_final);
spec.setNativeID("spectrum=" + String(native_id));
if (ms_level > 0)
{
spec.setMSLevel(ms_level);
}
++native_id;
}
// if we have only one spectrum, we can annotate it directly, for more spectra, we just name the source file leaving the spectra unannotated (to avoid a long and redundant list of sourceFiles)
if (in.size() == 1)
{
in[0].setSourceFile(in.getSourceFiles()[0]);
in.getSourceFiles().clear(); // delete source file annotated from source file (it's in the spectrum anyways)
}
if (rt_gap_ > 0.0) // concatenate in RT
{
adjustRetentionTimes_(in, trafo_out[i], i == 0);
}
// add spectra to output
for (const MSSpectrum& spec : in)
{
out.addSpectrum(spec);
}
// also add the chromatograms
      for (const MSChromatogram& chrom : in.getChromatograms())
      {
        out.addChromatogram(chrom);
      }
// copy experimental settings from first file
if (i == 0)
{
out.ExperimentalSettings::operator=(in);
}
else // otherwise append
{
        out.getSourceFiles().insert(out.getSourceFiles().end(), in.getSourceFiles().begin(), in.getSourceFiles().end()); // could be empty if spectrum was annotated above, but that's ok then
}
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
// annotate output with data processing info
addDataProcessing_(out, getProcessingInfo_(DataProcessing::FORMAT_CONVERSION));
FileHandler().storeExperiment(out_file, out,{FileTypes::MZML}, log_type_);
}
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPFileMerger tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/SpectraMerger.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow, Andreas Bertsch, Lars Nilse $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/SPECTRAMERGING/SpectraMerger.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <algorithm>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SpectraMerger SpectraMerger
@brief Allows adding up several spectra.
<center>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=2> → SpectraMerger →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
</table>
</center>
@experimental This TOPP-tool is not well tested and not all features might be properly implemented and tested!
This tool can add up several consecutive scans, increasing the S/N ratio (for MS1 and above), or merge scans which stem from similar precursors (for MS2 and above).
In either case, the number of scans is reduced.
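The idea behind block-wise merging can be pictured as averaging consecutive spectra. The following self-contained sketch assumes all spectra share one common m/z grid; this is a simplification for illustration, not the actual SpectraMerger implementation (which matches peaks internally), and the function name is invented:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Toy block merger: average consecutive blocks of up to 'block_size'
// spectra, assuming they share a common m/z grid.
std::vector<std::vector<double>> averageBlocks(
    const std::vector<std::vector<double>>& spectra, std::size_t block_size)
{
  std::vector<std::vector<double>> merged;
  for (std::size_t start = 0; start < spectra.size(); start += block_size)
  {
    const std::size_t end = std::min(start + block_size, spectra.size());
    std::vector<double> avg(spectra[start].size(), 0.0);
    for (std::size_t s = start; s < end; ++s)
    {
      for (std::size_t p = 0; p < avg.size(); ++p) avg[p] += spectra[s][p];
    }
    for (double& intensity : avg) intensity /= static_cast<double>(end - start);
    merged.push_back(avg);
  }
  return merged;
}
```

Averaging N consecutive scans suppresses uncorrelated noise by roughly a factor of sqrt(N), which is where the S/N gain comes from.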
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SpectraMerger.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SpectraMerger.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSpectraMerger :
public TOPPBase
{
public:
TOPPSpectraMerger() :
TOPPBase("SpectraMerger", "Merges spectra (each MS level separately), increasing S/N ratios.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input mzML file.");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "Output mzML file with merged spectra.");
setValidFormats_("out", ListUtils::create<String>("mzML"));
registerStringOption_("merging_method", "<method>", "average_gaussian", "Method of merging which should be used.", false);
setValidStrings_("merging_method", ListUtils::create<String>("average_gaussian,average_tophat,precursor_method,block_method"));
registerSubsection_("algorithm", "Algorithm section for merging spectra");
}
Param getSubsectionDefaults_(const String & /*section*/) const override
{
return SpectraMerger().getParameters();
}
ExitCodes main_(int, const char **) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in(getStringOption_("in"));
String out(getStringOption_("out"));
String merging_method(getStringOption_("merging_method"));
//-------------------------------------------------------------
// reading input
//-------------------------------------------------------------
FileHandler fh;
FileTypes::Type in_type = fh.getType(in);
PeakMap exp;
fh.loadExperiment(in, exp, {in_type}, log_type_);
exp.sortSpectra();
exp.updateRanges();
auto levels = exp.getMSLevels();
if (levels.empty()) throw Exception::InvalidSize(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, levels.size(), "experiment has no MS levels");
int min_ms_level = levels.front();
int max_ms_level = levels.back();
//-------------------------------------------------------------
// calculations
//-------------------------------------------------------------
SpectraMerger merger;
merger.setLogType(log_type_);
merger.setParameters(getParam_().copy("algorithm:", true));
if (merging_method == "precursor_method")
{
merger.mergeSpectraPrecursors(exp);
}
else if (merging_method == "block_method")
{
merger.mergeSpectraBlockWise(exp);
}
else if (merging_method == "average_gaussian")
{
int ms_level = merger.getParameters().getValue("average_gaussian:ms_level");
if (ms_level == 0)
{
for (int tmp_ms_level = min_ms_level; tmp_ms_level <= max_ms_level; tmp_ms_level++)
{
merger.average(exp, "gaussian", tmp_ms_level);
}
}
else
{
merger.average(exp, "gaussian", ms_level);
}
}
else if (merging_method == "average_tophat")
{
int ms_level = merger.getParameters().getValue("average_tophat:ms_level");
if (ms_level == 0)
{
for (int tmp_ms_level = min_ms_level; tmp_ms_level <= max_ms_level; tmp_ms_level++)
{
merger.average(exp, "tophat", tmp_ms_level);
}
}
else
{
merger.average(exp, "tophat", ms_level);
}
}
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
fh.storeExperiment(out, exp, {}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char ** argv)
{
TOPPSpectraMerger tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/SemanticValidator.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm, Andreas Bertsch $
// --------------------------------------------------------------------------
#include <OpenMS/config.h>
#include <OpenMS/SYSTEM/File.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/VALIDATORS/SemanticValidator.h>
#include <OpenMS/FORMAT/CVMappingFile.h>
#include <OpenMS/FORMAT/ControlledVocabulary.h>
#include <OpenMS/DATASTRUCTURES/CVMappings.h>
#include <OpenMS/APPLICATIONS/TOPPBase.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SemanticValidator SemanticValidator
@brief Semantically validates XML files against a CV-mapping file.
This tool is able to semantically validate an XML file against a CV-mapping
file. The CV-mapping file describes the validity of CV-terms for a given
tag inside the XML. The CV-mapping file conforms to the CvMapping XML
schema (found at /share/OpenMS/SCHEMAS/CvMapping.xsd or
http://www.psidev.info/sites/default/files/CvMapping.xsd).
Example files that can be semantically validated using this tool are mzML,
TraML, mzIdentML, mzData or any XML file.
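At its core, the mapping file defines which CV term accessions are permitted at which XML element path. The stripped-down sketch below illustrates that lookup with hypothetical types; it is not the Internal::SemanticValidator implementation:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// element path -> CV accessions permitted at that location
using CvMapping = std::map<std::string, std::set<std::string>>;

// Collect an error message for every term used at a path where the
// mapping does not allow it.
std::vector<std::string> checkTerms(
    const CvMapping& mapping,
    const std::vector<std::pair<std::string, std::string>>& used_terms)
{
  std::vector<std::string> errors;
  for (const auto& [path, accession] : used_terms)
  {
    auto rule = mapping.find(path);
    if (rule == mapping.end() || rule->second.count(accession) == 0)
    {
      errors.push_back("CV term '" + accession + "' not allowed at '" + path + "'");
    }
  }
  return errors;
}
```

The real validator additionally checks term value types and units, as enabled in main_() below.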
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SemanticValidator.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SemanticValidator.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPSemanticValidator :
public TOPPBase
{
public:
TOPPSemanticValidator() :
TOPPBase("SemanticValidator", "SemanticValidator for semantically validating certain XML files.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file (any xml file)");
setValidFormats_("in", ListUtils::create<String>("analysisXML,mzML,traML,mzid,mzData,xml"));
registerInputFile_("mapping_file", "<file>", "", "Mapping file which is used to semantically validate the given XML file against this mapping file (see 'share/OpenMS/MAPPING' for templates).");
setValidFormats_("mapping_file", ListUtils::create<String>("xml"));
    registerInputFileList_("cv", "<files>", ListUtils::create<String>(""), "Controlled Vocabulary files containing the CV terms (if left empty, a set of default files is used)", false);
setValidFormats_("cv", ListUtils::create<String>("obo"));
}
ExitCodes main_(int, const char**) override
{
String in_file = getStringOption_("in");
String mapping_file = getStringOption_("mapping_file");
StringList cv_list = getStringList_("cv");
CVMappings mappings;
CVMappingFile().load(mapping_file, mappings, false);
// Allow definition of the controlled vocabulary files on the commandlines.
// If none are defined, the hardcoded obo files are used
ControlledVocabulary cv;
if (!cv_list.empty())
{
for (Size i = 0; i < cv_list.size(); i++)
{
// TODO do we need to provide the name of the namespace here?
cv.loadFromOBO("", cv_list[i]);
}
}
else
{
cv.loadFromOBO("PSI-MOD", File::find("/CHEMISTRY/PSI-MOD.obo"));
cv.loadFromOBO("PATO", File::find("/CV/quality.obo"));
cv.loadFromOBO("UO", File::find("/CV/unit.obo"));
cv.loadFromOBO("brenda", File::find("/CV/brenda.obo"));
cv.loadFromOBO("GO", File::find("/CV/goslim_goa.obo"));
cv.loadFromOBO("UNIMOD", File::find("/CV/unimod.obo"));
cv.loadFromOBO("PSI-MS", File::find("/CV/psi-ms.obo"));
}
// check cv params
Internal::SemanticValidator semantic_validator(mappings, cv);
semantic_validator.setCheckTermValueTypes(true);
semantic_validator.setCheckUnits(true);
StringList errors, warnings;
bool valid = semantic_validator.validate(in_file, errors, warnings);
for (Size i = 0; i < warnings.size(); ++i)
{
cout << "Warning: " << warnings[i] << endl;
}
for (Size i = 0; i < errors.size(); ++i)
{
cout << "Error: " << errors[i] << endl;
}
if (valid && warnings.empty() && errors.empty())
{
cout << "Congratulations, the file is valid!" << endl;
return EXECUTION_OK;
}
else
{
return PARSE_ERROR;
}
}
};
int main(int argc, const char** argv)
{
TOPPSemanticValidator tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/QCShrinker.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Mathias Walzer $
// $Author: Mathias Walzer $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/CsvFile.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/FORMAT/QcMLFile.h>
#include <QByteArray>
#include <QFile>
#include <QString>
#include <QFileInfo>
//~ #include <QIODevice>
#include <fstream>
#include <vector>
#include <map>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_QCShrinker QCShrinker
@brief This application is used to remove extra verbose table attachments from a qcML file that are not needed anymore, e.g. for a final report.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → QCShrinker →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_QCMerger </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> ... </td>
</tr>
</table>
</CENTER>
Verbose or deprecated information in the given qcML file at @p in can be purged.
- @p qp_accessions A list of CV accessions that should be removed. If empty, the usual suspects will be removed.
- @p run the mzML file that defined the run under which the quality parameter (qp) for the attachment is aggregated. The file is only used to extract the run name from the file name;
- @p name if no file for the run was given (or if the target qp is contained in a set), at least a name of the target run/set containing the qp for the attachment has to be given;
Output is in qcML format (see parameter @p out) which can be viewed directly in a modern browser (chromium, firefox, safari).
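Conceptually, shrinking boils down to dropping every attachment whose CV accession is on the removal list. The sketch below shows that filtering with an invented Attachment struct and function name; the real work is done by QcMLFile::removeAllAttachments() / removeAttachment():

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Invented stand-in for a qcML table attachment.
struct Attachment
{
  std::string cv_accession;
  std::string table;
};

// Erase all attachments whose accession appears in 'to_remove'
// (the erase-remove idiom).
void removeAttachmentsByAccession(std::vector<Attachment>& attachments,
                                  const std::set<std::string>& to_remove)
{
  attachments.erase(std::remove_if(attachments.begin(), attachments.end(),
                                   [&to_remove](const Attachment& a)
                                   { return to_remove.count(a.cv_accession) > 0; }),
                    attachments.end());
}
```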
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_QCShrinker.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_QCShrinker.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPQCShrinker :
public TOPPBase
{
public:
TOPPQCShrinker() :
TOPPBase("QCShrinker", "Remove unneeded or verbose table attachments from a qcml file.",
true, {{ "Walzer M, Pernas LE, Nasso S, Bittremieux W, Nahnsen S, Kelchtermans P, Martens, L", "qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments", "Molecular & Cellular Proteomics 2014; 13(8)" , "10.1074/mcp.M113.035907"}})
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input qcml file");
setValidFormats_("in", ListUtils::create<String>("qcML"));
//~ registerFlag_("tables", "Remove all tables. (Of all runs and sets if these are not given with parameter name or run.)");
registerStringList_("qp_accessions", "<names>", StringList(), "A list of cv accessions that should be removed. If empty, the usual suspects will be removed!", false);
registerStringOption_("name", "<string>", "", "The name of the target run or set that contains the requested quality parameter.", false);
registerInputFile_("run", "<file>", "", "The file from which the name of the target run that contains the requested quality parameter is taken. This overrides the name parameter!", false);
setValidFormats_("run", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "Output extended/reduced qcML file");
setValidFormats_("out", ListUtils::create<String>("qcML"));
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parsing parameters
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
String target_run = getStringOption_("name");
String target_file = getStringOption_("run");
StringList qp_accs = getStringList_("qp_accessions");
//-------------------------------------------------------------
// reading input
//------------------------------------------------------------
if (!target_file.empty())
{
target_run = QFileInfo(QString::fromStdString(target_file)).baseName();
}
//~ !getFlag_("tables")
QcMLFile qcmlfile;
qcmlfile.load(in);
if (qp_accs.empty())
{
qp_accs.push_back("QC:0000044");
qp_accs.push_back("QC:0000047");
qp_accs.push_back("QC:0000022");
qp_accs.push_back("QC:0000038");
qp_accs.push_back("QC:0000049");
}
//TODO care for QualityParameter s
if (target_run.empty())
{
for (Size i = 0; i < qp_accs.size(); ++i)
{
qcmlfile.removeAllAttachments(qp_accs[i]);
}
}
else
{
for (Size i = 0; i < qp_accs.size(); ++i)
{
qcmlfile.removeAttachment(target_run,qp_accs[i]);
}
}
qcmlfile.store(out);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPQCShrinker tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/FeatureFinderIdentification.cpp (OpenMS/OpenMS)
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hendrik Weisser $
// $Authors: Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FEATUREFINDER/FeatureFinderIdentificationAlgorithm.h>
#include <OpenMS/FORMAT/FileHandler.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_FeatureFinderIdentification FeatureFinderIdentification
@brief Detects features in MS1 data based on peptide identifications.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → FeatureFinderIdentification →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes (optional) </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> @ref TOPP_ProteinQuantifier</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter </td>
</tr>
</table>
</CENTER>
@b Reference: @n
Weisser & Choudhary: <a href="https://doi.org/10.1021/acs.jproteome.7b00248">Targeted Feature Detection for Data-Dependent Shotgun Proteomics</a> (J. Proteome Res., 2017, PMID: 28673088).
This tool detects quantitative features in MS1 data based on information from peptide identifications (derived from MS2 spectra).
It uses algorithms for targeted data analysis from the OpenSWATH pipeline.
The aim is to detect features that enable the quantification of (ideally) all peptides in the identification input.
This is based on the following principle: When a high-confidence identification (ID) of a peptide was made based on an MS2 spectrum from a certain (precursor) position in the LC-MS map, this
indicates that the particular peptide is present at that position, so a feature for it should be detectable there.
@note It is important that only high-confidence (i.e. reliable) peptide identifications are used as input!
Targeted data analysis on the MS1 level uses OpenSWATH algorithms and follows roughly the steps outlined below.
<B>Use of inferred ("external") IDs</B>
The situation becomes more complicated when several LC-MS/MS runs from related samples of a label-free experiment are considered.
In order to quantify a larger fraction of the peptides/proteins in the samples, it is desirable to infer peptide identifications across runs.
Ideally, all peptides identified in any of the runs should be quantified in each and every run.
However, for feature detection of inferred ("external") IDs, the following problems arise:
First, retention times may be shifted between the run being quantified and the run that gave rise to the ID.
Such shifts can be corrected (see @ref TOPP_MapAlignerIdentification), but only to an extent.
Thus, the RT location of the inferred ID may not necessarily lie within the RT range of the correct feature.
Second, since the peptide in question was not directly identified in the run being quantified, it may not actually be present in detectable amounts in that sample, e.g. due to differential
regulation of the corresponding protein. There is thus a risk of introducing false-positive features.
FeatureFinderIdentification deals with these challenges by explicitly distinguishing between internal IDs (derived from the LC-MS/MS run being quantified) and external IDs (inferred from related
runs). Features derived from internal IDs give rise to a training dataset for an SVM classifier. The SVM is then used to predict which feature candidates derived from external IDs are most likely
to be correct. See steps 4 and 5 below for more details.
<B>1. Assay generation</B>
Feature detection is based on assays for identified peptides, each of which incorporates the retention time (RT), mass-to-charge ratio (m/z), and isotopic distribution (derived from the sequence)
of a peptide. Peptides with different modifications are considered different peptides. One assay will be generated for every combination of (modified) peptide sequence, charge state, and RT region
that has been identified. The RT regions arise by pooling all identifications of the same peptide, considering a window of size @p extract:rt_window around every RT location that gave rise to an
ID, and then merging overlapping windows.
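The pooling and merging of RT windows described above can be sketched as follows. This is a simplified, stand-alone version with an invented function name; the actual logic lives in FeatureFinderIdentificationAlgorithm:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Place a window of width 'rt_window' around each ID's RT and merge
// windows that overlap; returns sorted, disjoint RT regions.
std::vector<std::pair<double, double>> mergeRTWindows(
    std::vector<double> id_rts, double rt_window)
{
  std::sort(id_rts.begin(), id_rts.end());
  std::vector<std::pair<double, double>> regions;
  for (double rt : id_rts)
  {
    const double lo = rt - rt_window / 2.0;
    const double hi = rt + rt_window / 2.0;
    if (!regions.empty() && lo <= regions.back().second)
    {
      regions.back().second = std::max(regions.back().second, hi);
    }
    else
    {
      regions.emplace_back(lo, hi);
    }
  }
  return regions;
}
```

Two IDs of the same peptide at RT 100 s and 105 s with a 20 s window thus yield one region [90, 115], and hence a single assay for that RT neighborhood.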
<B>2. Ion chromatogram extraction</B>
Ion chromatograms (XICs) are extracted from the LC-MS data (parameter @p in).
One XIC per isotope in an assay is generated, with the corresponding m/z value and RT range (variable, depending on the RT region of the assay).
@see @ref TOPP_OpenSwathChromatogramExtractor
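As a toy illustration of XIC extraction (one chromatogram point per spectrum, summing intensities within an m/z tolerance), consider the following sketch. The Peak struct and function are made up for illustration; OpenSWATH's extractor is considerably more involved (ppm tolerances, RT restriction, interpolation):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Peak { double mz; double intensity; };

// Sum intensities within +/- 'tol' (absolute m/z units) of 'target_mz'
// in each spectrum; the result has one intensity per input spectrum.
std::vector<double> extractXIC(const std::vector<std::vector<Peak>>& spectra,
                               double target_mz, double tol)
{
  std::vector<double> xic;
  xic.reserve(spectra.size());
  for (const auto& spectrum : spectra)
  {
    double sum = 0.0;
    for (const Peak& p : spectrum)
    {
      if (std::fabs(p.mz - target_mz) <= tol) sum += p.intensity;
    }
    xic.push_back(sum);
  }
  return xic;
}
```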
<B>3. Feature detection</B>
Next, feature candidates - typically several per assay - are detected in the XICs and scored.
A variety of scores for different quality aspects are calculated by OpenSWATH.
@see @ref TOPP_OpenSwathAnalyzer
<B>4. Feature classification</B>
Feature candidates derived from assays with "internal" IDs are classed as "negative" (candidates without matching internal IDs), "positive" (the single best candidate per assay with matching
internal IDs), and "ambiguous" (other candidates with matching internal IDs). If "external" IDs were given as input, features based on them are initially classed as "unknown". Also in this case, a
support vector machine (SVM) is trained on the "positive" and "negative" candidates, to distinguish between the two classes based on the different OpenSWATH quality scores (plus an RT deviation
score). After parameter optimization by cross-validation, the resulting SVM is used to predict the probability of "unknown" feature candidates being positives.
<B>5. Feature filtering</B>
Feature candidates are filtered so that at most one feature per peptide and charge state remains.
For assays with internal IDs, only candidates previously classed as "positive" are kept.
For assays based solely on external IDs, the feature candidate with the highest SVM probability is selected and kept (possibly subject to the @p svm:min_prob threshold).
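Selecting the best external candidate per assay amounts to a max-probability pick with a threshold, roughly like this sketch (Candidate struct and function name invented for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Candidate { double svm_probability; int id; };

// Return the index of the best candidate if it clears 'min_prob',
// otherwise -1 (meaning: report no feature for this assay).
int selectBestCandidate(const std::vector<Candidate>& candidates, double min_prob)
{
  if (candidates.empty()) return -1;
  auto best = std::max_element(candidates.begin(), candidates.end(),
                               [](const Candidate& a, const Candidate& b)
                               { return a.svm_probability < b.svm_probability; });
  if (best->svm_probability < min_prob) return -1;
  return static_cast<int>(best - candidates.begin());
}
```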
<B>6. Elution model fitting</B>
Elution models can be fitted to the features to improve the quantification.
For robustness, one model is fitted to all isotopic mass traces of a feature in parallel.
A symmetric (Gaussian) and an asymmetric (exponential-Gaussian hybrid) model type are available.
The fitted models are checked for plausibility before they are accepted.
Finally the results (feature maps, parameter @p out) are returned.
<B>Ion Mobility Support (experimental)</B>
This tool supports two types of ion mobility data:
@b FAIMS (Field Asymmetric Ion Mobility Spectrometry):
FAIMS data is automatically detected based on compensation voltage (CV) annotations in the mzML file.
The data is split by CV and processed separately for each voltage group.
Features representing the same analyte detected at different CV values are merged by default (controlled by @p faims:merge_features).
No special preparation of the input mzML file is required.
@b Bruker @b TimsTOF (trapped ion mobility):
TimsTOF data requires special preparation of the mzML file. The ion mobility spectra must be concatenated into
single spectra per frame using msconvert with the @p --combineIonMobilitySpectra option:
@code
msconvert input.d --mzML --combineIonMobilitySpectra -o output_dir
@endcode
The resulting mzML file contains one spectrum per frame with ion mobility values stored per peak.
Ion mobility values from peptide identifications (if present in the idXML) are used for IM-aware feature detection.
The extraction window is controlled by @p extract:IM_window.
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_FeatureFinderIdentification.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_FeatureFinderIdentification.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPFeatureFinderIdentification : public TOPPBase
{
public:
// TODO
// cppcheck-suppress uninitMemberVar
TOPPFeatureFinderIdentification() :
TOPPBase("FeatureFinderIdentification", "Detects features in MS1 data based on peptide identifications.", true,
{{"Weisser H, Choudhary JS", "Targeted Feature Detection for Data-Dependent Shotgun Proteomics", "J. Proteome Res. 2017; 16, 8:2964-2974", "10.1021/acs.jproteome.7b00248"}})
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "", "Input file: LC-MS raw data");
setValidFormats_("in", {"mzML"});
registerInputFile_("id", "<file>", "", "Input file: Peptide identifications derived directly from 'in'");
setValidFormats_("id", {"idXML"});
registerInputFile_("id_ext", "<file>", "", "Input file: 'External' peptide identifications (e.g. from aligned runs)", false);
setValidFormats_("id_ext", {"idXML"});
registerOutputFile_("out", "<file>", "", "Output file: Features");
setValidFormats_("out", {"featureXML"});
registerOutputFile_("lib_out", "<file>", "", "Output file: Assay library", false);
setValidFormats_("lib_out", {"traML"});
registerOutputFile_("chrom_out", "<file>", "", "Output file: Chromatograms", false);
setValidFormats_("chrom_out", {"mzML"});
registerOutputFile_("candidates_out", "<file>", "", "Output file: Feature candidates (before filtering and model fitting)", false);
setValidFormats_("candidates_out", {"featureXML"});
registerInputFile_("candidates_in", "<file>", "",
"Input file: Feature candidates from a previous run. If set, only feature classification and elution model fitting are carried out, if enabled. Many parameters are ignored.",
false, true);
setValidFormats_("candidates_in", {"featureXML"});
Param algo_with_subsection;
Param subsection = FeatureFinderIdentificationAlgorithm().getDefaults();
subsection.remove("candidates_out");
algo_with_subsection.insert("", subsection);
registerFullParam_(algo_with_subsection);
}
ExitCodes main_(int, const char**) override
{
FeatureMap features;
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String out = getStringOption_("out");
String candidates_out = getStringOption_("candidates_out");
String candidates_in = getStringOption_("candidates_in");
FeatureFinderIdentificationAlgorithm ffid_algo;
ffid_algo.getProgressLogger().setLogType(log_type_);
ffid_algo.setParameters(getParam_().copySubset(FeatureFinderIdentificationAlgorithm().getDefaults()));
if (candidates_in.empty())
{
String in = getStringOption_("in");
String id = getStringOption_("id");
String id_ext = getStringOption_("id_ext");
String lib_out = getStringOption_("lib_out");
String chrom_out = getStringOption_("chrom_out");
//-------------------------------------------------------------
// load input
//-------------------------------------------------------------
OPENMS_LOG_INFO << "Loading input data..." << endl;
PeakMap ms_data_full;
FileHandler mzml;
mzml.getOptions().addMSLevel(1);
mzml.loadExperiment(in, ms_data_full, {FileTypes::MZML}, log_type_);
PeptideIdentificationList peptides, peptides_ext;
vector<ProteinIdentification> proteins, proteins_ext;
// "internal" IDs:
FileHandler().loadIdentifications(id, proteins, peptides, {FileTypes::IDXML});
// "external" IDs:
if (!id_ext.empty())
{
FileHandler().loadIdentifications(id_ext, proteins_ext, peptides_ext, {FileTypes::IDXML});
}
//-------------------------------------------------------------
// Run feature detection (FAIMS handling is done internally)
//-------------------------------------------------------------
FeatureFinderIdentificationAlgorithm ffid_algo_run;
ffid_algo_run.getProgressLogger().setLogType(log_type_);
ffid_algo_run.setParameters(getParam_().copySubset(FeatureFinderIdentificationAlgorithm().getDefaults()));
ffid_algo_run.setMSData(std::move(ms_data_full));
ffid_algo_run.run(peptides, proteins, peptides_ext, proteins_ext, features, FeatureMap(), in);
// write auxiliary output (library is empty for multi-FAIMS data):
if (!lib_out.empty())
{
FileHandler().storeTransitions(lib_out, ffid_algo_run.getLibrary(), {FileTypes::TRAML});
}
if (!chrom_out.empty())
{
PeakMap chrom_data = ffid_algo_run.getChromatograms();
addDataProcessing_(chrom_data, getProcessingInfo_(DataProcessing::FILTERING));
FileHandler().storeExperiment(chrom_out, chrom_data, {FileTypes::MZML});
}
addDataProcessing_(features, getProcessingInfo_(DataProcessing::QUANTITATION));
}
else
{
//-------------------------------------------------------------
// load feature candidates
//-------------------------------------------------------------
OPENMS_LOG_INFO << "Reading feature candidates from a previous run..." << endl;
FileHandler().loadFeatures(candidates_in, features, {FileTypes::FEATUREXML});
OPENMS_LOG_INFO << "Found " << features.size() << " feature candidates in total." << endl;
ffid_algo.runOnCandidates(features);
}
//-------------------------------------------------------------
// write output
//-------------------------------------------------------------
OPENMS_LOG_INFO << "Writing final results..." << endl;
FileHandler().storeFeatures(out, features, {FileTypes::FEATUREXML});
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPFeatureFinderIdentification tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/SeedListGenerator.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Hendrik Weisser $
// $Authors: Hendrik Weisser $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/FileTypes.h>
#include <OpenMS/FORMAT/SVOutStream.h>
#include <OpenMS/FEATUREFINDER/SeedListGenerator.h>
#include <map>
// TODO REMOVE
#include <OpenMS/KERNEL/ConsensusMap.h>
#include <OpenMS/SYSTEM/File.h>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_SeedListGenerator SeedListGenerator
@brief Application to generate seed lists for feature detection.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> potential predecessor tools </th>
<td VALIGN="middle" ROWSPAN=4> → SeedListGenerator →</td>
<th ALIGN = "center"> potential successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDFilter </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=3> @ref TOPP_FeatureFinderCentroided</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_IDMapper </td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureLinkerUnlabeled @n (or another feature grouping algorithm) </td>
</tr>
</table>
</CENTER>
Reference:\n
Weisser <em>et al.</em>: <a href="https://doi.org/10.1021/pr300992u">An automated pipeline for high-throughput label-free quantitative proteomics</a> (J. Proteome Res., 2013, PMID: 23391308).
In feature detection algorithms, an early step is generally to identify points of interest in the LC-MS map (so-called seeds) that may later be extended to features. If supported by the feature detection algorithm (currently only the "centroided" algorithm), user-supplied seed lists allow greater control over this process.
The SeedListGenerator can automatically create seed lists from a variety of sources. The lists are exported in featureXML format (suitable as input to FeatureFinder), but can be converted to or from text formats using the @ref TOPP_TextExporter (with the "-minimal" option, to convert to CSV) and @ref TOPP_FileConverter (to convert from CSV) tools.
Seed lists can be generated from the file types below. The seeds are created at the indicated positions (RT/MZ):
<ul>
<li>mzML: locations of MS2 precursors
<li>idXML: locations of peptide identifications
<li>featureXML: locations of unassigned peptide identifications
<li>consensusXML: locations of consensus features that do not contain sub-features from the respective map
</ul>
If the input is consensusXML, one output file per constituent map is required (in the same order as in the consensusXML); otherwise, exactly one output file is expected.
What are possible use cases for custom seed lists?
- In analyses that can take into account only features with peptide annotations, it may be useful to focus directly on certain locations in the LC-MS map - on all MS2 precursors (mzML input), or on precursors whose fragment spectra could be matched to a peptide sequence (idXML input).
- When additional information becomes available during an analysis, one might want to perform a second, targeted round of feature detection on the experimental data. For example, once a feature map is annotated with peptide identifications, it is possible to go back to the LC-MS map and look for features near unassigned peptides, potentially with a lower score threshold (featureXML input).
- Similarly, when features from different experiments are aligned and grouped, the consensus map may reveal where features were missed in the initial detection round in some experiments. The locations of these "holes" in the consensus map can be compiled into seed lists for the individual experiments (consensusXML input). (Note that the resulting seed lists use the retention time scale of the consensus map, which might be different from the original time scales of the experiments if e.g. one of the MapAligner tools was used to perform retention time correction as part of the alignment process. In this case, the RT transformations from the alignment must be applied to the LC-MS maps prior to the seed list-based feature detection runs.)
@note Currently mzIdentML (mzid) is not directly supported as an input/output format of this tool. Convert mzid files to/from idXML using @ref TOPP_IDFileConverter if necessary.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_SeedListGenerator.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_SeedListGenerator.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
namespace OpenMS
{
class TOPPSeedListGenerator :
public TOPPBase
{
public:
TOPPSeedListGenerator() :
TOPPBase("SeedListGenerator", "Generates seed lists for feature detection.")
{
}
protected:
void registerOptionsAndFlags_() override
{
registerInputFile_("in", "<file>", "",
"Input file (see below for details)");
setValidFormats_("in", ListUtils::create<String>("mzML,idXML,featureXML,consensusXML"));
registerOutputPrefix_("out_prefix", "<prefix>", String(), "Output file prefix");
setValidFormats_("out_prefix", ListUtils::create<String>("featureXML"));
addEmptyLine_();
registerFlag_("use_peptide_mass", "[idXML input only] Use the monoisotopic mass of the best peptide hit for the m/z position (default: use precursor m/z)");
}
ExitCodes main_(int, const char **) override
{
String in = getStringOption_("in");
String out_prefix = getStringOption_("out_prefix");
SeedListGenerator seed_gen;
// results (actually just one result, except for consensusXML input):
std::map<UInt64, SeedListGenerator::SeedList> seed_lists;
Size num_maps = 0;
FileTypes::Type in_type = FileHandler::getType(in);
StringList out;
out.push_back(out_prefix + "_0.featureXML"); // we manually set the name here
if (in_type == FileTypes::CONSENSUSXML)
{
ConsensusMap consensus;
FileHandler().loadConsensusFeatures(in, consensus, {FileTypes::CONSENSUSXML}, log_type_);
num_maps = consensus.getColumnHeaders().size();
const ConsensusMap::ColumnHeaders& ch = consensus.getColumnHeaders();
size_t map_count = 0;
// we have multiple out files
out.clear();
for([[maybe_unused]] const auto& header : ch)
{
out.push_back(out_prefix + "_" + String(map_count) + ".featureXML"); // we manually set the name here
++map_count;
}
if (out.size() != num_maps)
{
writeLogError_("Error: expected " + String(num_maps) +
" output filenames");
return ILLEGAL_PARAMETERS;
}
seed_gen.generateSeedLists(consensus, seed_lists);
}
else if (out.size() > 1)
{
writeLogError_("Error: expected only one output filename");
return ILLEGAL_PARAMETERS;
}
else if (in_type == FileTypes::MZML)
{
PeakMap experiment;
FileHandler().loadExperiment(in, experiment, {FileTypes::MZML}, log_type_);
seed_gen.generateSeedList(experiment, seed_lists[0]);
}
else if (in_type == FileTypes::IDXML)
{
vector<ProteinIdentification> proteins;
PeptideIdentificationList peptides;
FileHandler().loadIdentifications(in, proteins, peptides, {FileTypes::IDXML}, log_type_);
seed_gen.generateSeedList(peptides, seed_lists[0],
getFlag_("use_peptide_mass"));
}
else if (in_type == FileTypes::FEATUREXML)
{
FeatureMap features;
FileHandler().loadFeatures(in, features, {FileTypes::FEATUREXML}, log_type_);
seed_gen.generateSeedList(
features.getUnassignedPeptideIdentifications(), seed_lists[0]);
}
// output:
Size out_index = 0;
for (const auto& seed_list : seed_lists)
{
FeatureMap features;
seed_gen.convertSeedList(seed_list.second, features);
// annotate output with data processing info:
addDataProcessing_(features, getProcessingInfo_(
DataProcessing::DATA_PROCESSING));
OPENMS_LOG_INFO << "Writing " << features.size() << " seeds to " << out[out_index] << endl;
FileHandler().storeFeatures(out[out_index], features, {FileTypes::FEATUREXML}, log_type_);
++out_index;
}
return EXECUTION_OK;
}
};
}
int main(int argc, const char ** argv)
{
TOPPSeedListGenerator t;
return t.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/ExternalCalibration.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/PROCESSING/CALIBRATION/InternalCalibration.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/TransformationXMLFile.h>
#include <vector>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
//Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_ExternalCalibration ExternalCalibration
@brief Performs a mass recalibration on an MS experiment using an external calibration function.
<CENTER>
<table>
<tr>
<th ALIGN = "center"> pot. predecessor tools </th>
<td VALIGN="middle" ROWSPAN=3> → ExternalCalibration →</td>
<th ALIGN = "center"> pot. successor tools </th>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_PeakPickerHiRes </td>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=2> any tool operating on MS peak data @n (in mzML format)</td>
</tr>
<tr>
<td VALIGN="middle" ALIGN = "center" ROWSPAN=1> @ref TOPP_FeatureFinderCentroided </td>
</tr>
</table>
</CENTER>
Recalibrates an MS experiment globally using a constant, linear or quadratic calibration of the observed ppm error.
For example, offset=-5, slope=0, power=0 assumes the observed data has a -5 ppm decalibration, i.e. the observed m/z is too small and should be increased by 5 ppm. Slope and power
are coefficients for the observed m/z, i.e. y = offset + x * slope + x * x * power, where x is the observed m/z and y
is the resulting mass correction in ppm. Usually slope and power are very small (< 0.01).
If you only want a 'rough' recalibration, using the offset alone is usually sufficient.
The calibration function is applied to all scans (with the desired level, see below), i.e. time dependent recalibration cannot be modeled.
The user can select what MS levels are subjected to calibration.
Spectra with other MS levels remain unchanged.
Calibration must be done once for each mass analyzer.
Either raw or centroided data can be used as input.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_ExternalCalibration.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_ExternalCalibration.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPExternalCalibration :
public TOPPBase
{
public:
TOPPExternalCalibration() :
TOPPBase("ExternalCalibration", "Applies an external mass recalibration.")
{
}
protected:
void registerOptionsAndFlags_() override
{
// data
registerInputFile_("in", "<file>", "", "Input peak file");
setValidFormats_("in", ListUtils::create<String>("mzML"));
registerOutputFile_("out", "<file>", "", "Output file");
setValidFormats_("out", ListUtils::create<String>("mzML"));
addEmptyLine_();
registerDoubleOption_("offset", "", 0.0, "Mass offset in ppm", false);
registerDoubleOption_("slope", "", 0.0, "Slope (dependent on m/z)", false);
registerDoubleOption_("power", "", 0.0, "Power (dependent on m/z)", false);
addEmptyLine_();
registerIntList_("ms_level", "i j ...", ListUtils::create<int>("1,2,3"), "Target MS levels to apply the transformation onto. Scans with other levels remain unchanged.", false);
}
ExitCodes main_(int, const char**) override
{
//-------------------------------------------------------------
// parameter handling
//-------------------------------------------------------------
String in = getStringOption_("in");
String out = getStringOption_("out");
IntList ms_level = getIntList_("ms_level");
double offset = getDoubleOption_("offset");
double slope = getDoubleOption_("slope");
double power = getDoubleOption_("power");
//-------------------------------------------------------------
// loading input
//-------------------------------------------------------------
// Raw data
PeakMap exp;
FileHandler().loadExperiment(in, exp, {FileTypes::MZML}, log_type_);
MZTrafoModel tm;
tm.setCoefficients(offset, slope, power);
InternalCalibration ic;
ic.setLogType(log_type_);
ic.applyTransformation(exp, ms_level, tm);
//-------------------------------------------------------------
// writing output
//-------------------------------------------------------------
//annotate output with data processing info
addDataProcessing_(exp, getProcessingInfo_(DataProcessing::CALIBRATION));
FileHandler().storeExperiment(out, exp, {FileTypes::MZML}, log_type_);
return EXECUTION_OK;
}
};
int main(int argc, const char** argv)
{
TOPPExternalCalibration tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/topp/TriqlerConverter.cpp
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#include <OpenMS/APPLICATIONS/TOPPBase.h>
#include <OpenMS/FORMAT/FileHandler.h>
#include <OpenMS/FORMAT/TextFile.h>
#include <OpenMS/FORMAT/MzTabFile.h>
#include <OpenMS/FORMAT/MzTab.h>
#include <OpenMS/FORMAT/ExperimentalDesignFile.h>
#include <OpenMS/FORMAT/TriqlerFile.h>
#include <OpenMS/METADATA/ExperimentalDesign.h>
#include <OpenMS/SYSTEM/File.h>
#include <boost/regex.hpp>
using namespace OpenMS;
using namespace std;
//-------------------------------------------------------------
// Doxygen docu
//-------------------------------------------------------------
/**
@page TOPP_TriqlerConverter TriqlerConverter
@brief Converter to input for Triqler
This utility consumes an ID-mapped consensusXML file and an OpenMS experimental design in TSV format to create a CSV file which can subsequently be used as input for the Python tool Triqler [1].
[1] The, M. & Käll, L. (2019). Integrated identification and quantification error probabilities for shotgun proteomics. Molecular & Cellular Proteomics, 18 (3), 561-570.
<B>The command line parameters of this tool are:</B>
@verbinclude TOPP_TriqlerConverter.cli
<B>INI file documentation of this tool:</B>
@htmlinclude TOPP_TriqlerConverter.html
*/
// We do not want this class to show up in the docu:
/// @cond TOPPCLASSES
class TOPPTriqlerConverter final :
public TOPPBase
{
public:
TOPPTriqlerConverter() :
TOPPBase("TriqlerConverter", "Converter to input for Triqler")
{
}
protected:
// this function will be used to register the tool parameters
// it gets automatically called on tool execution
void registerOptionsAndFlags_() override
{
// Input consensusXML
registerInputFile_(param_in, "<in>", "", "Input consensusXML with peptide intensities",
true, false);
setValidFormats_(param_in, ListUtils::create<String>("consensusXML"), true);
registerInputFile_(param_in_design, "<in_design>", "", "Experimental Design file", true,
false);
setValidFormats_(param_in_design, ListUtils::create<String>("tsv"), true);
registerStringOption_(param_Triqler_condition, "<Triqler_condition>", "Triqler_Condition",
"Which column in the condition table should be used for Triqler 'Condition'", false, false);
// advanced option to overwrite MS file annotations in consensusXML
registerInputFileList_(param_reannotate_filenames, "<file(s)>", StringList(),
"Overwrite MS file names in consensusXML", false, true);
setValidFormats_(param_reannotate_filenames, ListUtils::create<String>("mzML"), true);
// Output CSV file
registerOutputFile_(param_out, "<out>", "", "Input CSV file for Triqler.", true, false);
setValidFormats_(param_out, ListUtils::create<String>("csv"));
}
// the main_ function is called after all parameters are read
ExitCodes main_(int, const char **) final
{
try
{
// Input file, must be consensusXML
const String arg_in(getStringOption_(param_in));
const FileTypes::Type in_type(FileHandler::getType(arg_in));
fatalErrorIf_(
in_type != FileTypes::CONSENSUSXML,
"Input type is not consensusXML!",
ILLEGAL_PARAMETERS);
// Tool arguments
const String arg_out = getStringOption_(param_out);
// Experimental Design file
const String arg_in_design = getStringOption_(param_in_design);
const ExperimentalDesign design = ExperimentalDesignFile::load(arg_in_design, false);
ExperimentalDesign::SampleSection sampleSection = design.getSampleSection();
ConsensusMap consensus_map;
FileHandler().loadConsensusFeatures(arg_in, consensus_map, {FileTypes::CONSENSUSXML});
StringList reannotate_filenames = getStringList_(param_reannotate_filenames);
String condition = getStringOption_(param_Triqler_condition);
TriqlerFile triqler_file;
triqler_file.storeLFQ(arg_out, consensus_map,
design,
reannotate_filenames,
condition);
return EXECUTION_OK;
}
catch (const ExitCodes &exit_code)
{
return exit_code;
}
}
static const String param_in;
static const String param_in_design;
static const String param_method;
static const String param_Triqler_condition;
static const String param_out;
static const String param_retention_time_summarization_method;
static const String param_reannotate_filenames;
private:
static void fatalErrorIf_(const bool error_condition, const String &message, const int exit_code)
{
if (error_condition)
{
OPENMS_LOG_FATAL_ERROR << "FATAL: " << message << std::endl;
throw exit_code;
}
}
};
const String TOPPTriqlerConverter::param_in = "in";
const String TOPPTriqlerConverter::param_in_design = "in_design";
const String TOPPTriqlerConverter::param_method = "method";
const String TOPPTriqlerConverter::param_Triqler_condition = "Triqler_condition";
const String TOPPTriqlerConverter::param_out = "out";
const String TOPPTriqlerConverter::param_retention_time_summarization_method = "retention_time_summarization_method";
const String TOPPTriqlerConverter::param_reannotate_filenames = "reannotate_filenames";
// the actual main function needed to create an executable
int main(int argc, const char **argv)
{
TOPPTriqlerConverter tool;
return tool.main(argc, argv);
}
/// @endcond
// --------------------------------------------------------------------------
// File: src/openms_gui/include/OpenMS/VISUAL/Plot2DCanvas.h
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg$
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
// OpenMS
#include <OpenMS/VISUAL/PlotCanvas.h>
#include <OpenMS/VISUAL/Plot1DCanvas.h>
#include <OpenMS/KERNEL/PeakIndex.h>
// QT
class QPainter;
class QMouseEvent;
class QAction;
class QMenu;
namespace OpenMS
{
/**
@brief Canvas for 2D-visualization of peak map, feature map and consensus map data
This widget displays a 2D representation of a set of peaks, features or consensus elements.
@image html Plot2DCanvas.png
The example image shows %Plot2DCanvas displaying a peak layer and a feature layer.
@htmlinclude OpenMS_Plot2DCanvas.parameters
@improvement Add RT interpolation mode for high zoom in 2D View (Hiwi)
@improvement Snap also to min intensity (Hiwi)
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI Plot2DCanvas :
public PlotCanvas
{
Q_OBJECT
public:
/// Default C'tor hidden
Plot2DCanvas() = delete;
/// Constructor
Plot2DCanvas(const Param& preferences, QWidget* parent = nullptr);
/// Destructor
~Plot2DCanvas() override;
// Docu in base class
void showCurrentLayerPreferences() override;
/// Merges the features in @p map into the features layer @p i
void mergeIntoLayer(Size i, const FeatureMapSharedPtrType& map);
/// Merges the consensus features in @p map into the features layer @p i
void mergeIntoLayer(Size i, const ConsensusMapSharedPtrType& map);
/// Merges the peptide identifications in @p peptides into the peptide layer @p i
void mergeIntoLayer(Size i, PeptideIdentificationList& peptides);
/// recalculates the dot gradient of the active layer
void recalculateCurrentLayerDotGradient();
signals:
/// Requests to show projections for the @p source_layer. Emitted after calling pickProjectionLayer().
void showProjections(const LayerDataBase* source_layer);
/// Signal emitted when the projections are to be shown/hidden (e.g. via context menu)
void toggleProjections();
/// Requests to display the spectrum with index @p index in 1D
void showSpectrumAsNew1D(int index);
/// Requests to display the chromatograms with the given @p indices in 1D
void showChromatogramsAsNew1D(std::vector<int> indices);
/// Requests to display all spectra in 3D plot
void showCurrentPeaksAs3D();
/// Requests to display this spectrum (=frame) in ion mobility plot
void showCurrentPeaksAsIonMobility(const MSSpectrum& spec);
public slots:
// Docu in base class
void activateLayer(Size layer_index) override;
// Docu in base class
void removeLayer(Size layer_index) override;
//docu in base class
void updateLayer(Size i) override;
// Docu in base class
void horizontalScrollBarChange(int value) override;
// Docu in base class
void verticalScrollBarChange(int value) override;
/// Picks an appropriate layer for projection and emits the signal showProjections(LayerDataBase*).
void pickProjectionLayer();
protected slots:
/// Reacts on changed layer parameters
void currentLayerParametersChanged_();
protected:
// Docu in base class
bool finishAdding_() override;
/// Collects fragment ion scans in the indicated RT/mz area and adds them to the menus
bool collectFragmentScansInArea_(const RangeType& range, QMenu* msn_scans, QMenu* msn_meta);
/// Draws the coordinates (or coordinate deltas) to the widget's upper left corner
void drawCoordinates_(QPainter& painter, const PeakIndex& peak);
/// Draws the coordinates (or coordinate deltas) to the widget's upper left corner
void drawDeltas_(QPainter& painter, const PeakIndex& start, const PeakIndex& end);
/** @name Reimplemented QT events */
//@{
void mousePressEvent(QMouseEvent* e) override;
void mouseReleaseEvent(QMouseEvent* e) override;
void mouseMoveEvent(QMouseEvent* e) override;
void paintEvent(QPaintEvent* e) override;
void contextMenuEvent(QContextMenuEvent* e) override;
void keyPressEvent(QKeyEvent* e) override;
void keyReleaseEvent(QKeyEvent* e) override;
void mouseDoubleClickEvent(QMouseEvent* e) override;
//@}
// Docu in base class
void updateScrollbars_() override;
// Docu in base class
void intensityModeChange_() override;
// Docu in base class
void recalculateSnapFactor_() override;
/**
@brief Returns the position on color @p gradient associated with given intensity.
Takes intensity modes into account.
*/
Int precalculatedColorIndex_(float val, const MultiGradient& gradient, double snap_factor)
{
float gradientPos = val;
switch (intensity_mode_)
{
case IM_NONE:
gradientPos = val;
break;
case IM_PERCENTAGE:
gradientPos = val * percentage_factor_;
break;
case IM_SNAP:
gradientPos = val * snap_factor;
break;
case IM_LOG:
gradientPos = std::log(val + 1);
break;
}
return gradient.precalculatedColorIndex(gradientPos);
}
/**
@brief Returns the color associated with @p val for the gradient @p gradient.
Takes intensity modes into account.
*/
QColor heightColor_(float val, const MultiGradient& gradient, double snap_factor)
{
return gradient.precalculatedColorByIndex(precalculatedColorIndex_(val, gradient, snap_factor));
}
/**
@brief For a certain dimension: computes the size a data point would need, such that the image
reaches a certain coverage
Internally, this function makes use of the members 'canvas_coverage_min_' (giving the fraction (e.g. 20%) of area which should be covered by data)
and 'pen_size_max_' (maximum allowed number of pixels per data point).
@param[in] ratio_data2pixel The current ratio of # data points vs. # pixels of image
@param[in] pen_size In/Out param: gives the initial pen size, and is increased (up to @p MAX_PEN_SIZE) to reach desired coverage given by 'canvas_coverage_min_'
@return The factor by which @p pen_size increased (gives a hint of how many data points should be merged to avoid overplotting)
*/
double adaptPenScaling_(double ratio_data2pixel, double& pen_size) const;
/// Recalculates the dot gradient of a layer
void recalculateDotGradient_(Size layer);
/// Highlights a single peak and prints coordinates to screen
void highlightPeak_(QPainter& p, const PeakIndex& peak);
/// Returns the nearest peak to position @p pos
PeakIndex findNearestPeak_(const QPoint& pos);
/// Paints a peak icon for feature and consensus feature peaks
void paintIcon_(const QPoint& pos, const QRgb& color, const String& icon, Size s, QPainter& p) const;
/// Translates the visible area by a given offset specified in fractions of current visible area
void translateVisibleArea_(double x_axis_rel, double y_axis_rel);
/**
@brief Convert chart to widget coordinates
Translates chart coordinates to widget coordinates.
@param[in] x the chart coordinate x
@param[in] y the chart coordinate y
@return A point in widget coordinates
*/
QPoint dataToWidget_(double x, double y) const
{
QPoint point;
const auto& xy = visible_area_.getAreaXY();
point.setX(int((x - xy.minX()) / xy.width() * width()));
point.setY(int((xy.maxY() - y) / xy.height() * height()));
return point;
}
QPoint dataToWidget_(const DPosition<2>& xy)
{
return dataToWidget_(xy.getX(), xy.getY());
}
//docu in base class
void translateLeft_(Qt::KeyboardModifiers m) override;
//docu in base class
void translateRight_(Qt::KeyboardModifiers m) override;
//docu in base class
void translateForward_() override;
//docu in base class
void translateBackward_() override;
/// Finishes context menu after customization to peaks, features or consensus features
void finishContextMenu_(QMenu* context_menu, QMenu* settings_menu);
friend class Painter2DBase;
friend class Painter2DChrom;
friend class Painter2DConsensus;
friend class Painter2DIdent;
friend class Painter2DFeature;
friend class Painter2DIonMobility;
friend class Painter2DPeak;
/// the nearest peak/feature to the mouse cursor
PeakIndex selected_peak_;
/// start peak/feature of measuring mode
PeakIndex measurement_start_;
/// stores the linear color gradient for non-log modes
MultiGradient linear_gradient_;
double pen_size_min_; ///< minimum number of pixels for one data point
double pen_size_max_; ///< maximum number of pixels for one data point
double canvas_coverage_min_; ///< minimum coverage of the canvas required; if lower, points are upscaled in size
};
}
// --------------------------------------------------------------------------
// File: src/openms_gui/include/OpenMS/VISUAL/LayerDataConsensus.h
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/VISUAL/LayerDataBase.h>
#include <vector>
namespace OpenMS
{
/**
@brief Class that stores the data for one layer of type ConsensusMap
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI LayerDataConsensus : public virtual LayerDataBase
{
public:
/// Default constructor
LayerDataConsensus(ConsensusMapSharedPtrType& map);
/// no Copy-ctor (should not be needed)
LayerDataConsensus(const LayerDataConsensus& ld) = delete;
/// no assignment operator (should not be needed)
LayerDataConsensus& operator=(const LayerDataConsensus& ld) = delete;
std::unique_ptr<Painter2DBase> getPainter2D() const override;
std::unique_ptr<LayerData1DBase> to1DLayer() const override
{
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
std::unique_ptr<LayerStoreData> storeVisibleData(const RangeAllType& visible_range, const DataFilters& layer_filters) const override;
std::unique_ptr<LayerStoreData> storeFullData() const override;
ProjectionData getProjection(const DIM_UNIT /*unit_x*/, const DIM_UNIT /*unit_y*/, const RangeAllType& /*area*/) const override
{ // currently only a stub
ProjectionData proj;
return proj;
}
PeakIndex findHighestDataPoint(const RangeAllType& area) const override;
PointXYType peakIndexToXY(const PeakIndex& peak, const DimMapper<2>& mapper) const override;
void updateRanges() override
{
consensus_map_->updateRanges();
}
RangeAllType getRange() const override
{
RangeAllType r;
r.assign(*getConsensusMap());
return r;
}
std::unique_ptr<LayerStatistics> getStats() const override;
bool annotate(const PeptideIdentificationList& identifications, const std::vector<ProteinIdentification>& protein_identifications) override;
/// Returns a const reference to the consensus feature data
const ConsensusMapSharedPtrType& getConsensusMap() const
{
return consensus_map_;
}
/// Returns current consensus map (mutable)
ConsensusMapSharedPtrType& getConsensusMap()
{
return consensus_map_;
}
protected:
/// consensus feature data
ConsensusMapSharedPtrType consensus_map_ = ConsensusMapSharedPtrType(new ConsensusMapType());
};
}// namespace OpenMS
// --------------------------------------------------------------------------
// File: src/openms_gui/include/OpenMS/VISUAL/SwathLibraryStats.h
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/ANALYSIS/TARGETED/TargetedExperiment.h>
#include <QWidget>
namespace Ui
{
class SwathLibraryStats;
}
namespace OpenMS
{
/// A widget displaying summary statistics of a SWATH assay library (e.g. within the SwathWizard)
class OPENMS_GUI_DLLAPI SwathLibraryStats : public QWidget
{
Q_OBJECT
public:
explicit SwathLibraryStats(QWidget *parent = nullptr);
~SwathLibraryStats();
/// updates the view with new information immediately
void update(const TargetedExperiment::SummaryStatistics& stats);
/// loads the pqp into a TargetedExperiment object while displaying a progress bar and shows the results when ready
void updateFromFile(const QString& pqp_file);
private:
Ui::SwathLibraryStats* ui_;
};
} // ns OpenMS
// this is required to allow Ui_SwathLibraryStats (auto UIC'd from .ui) to have a SwathLibraryStats member
using SwathLibraryStats = OpenMS::SwathLibraryStats;
// --------------------------------------------------------------------------
// File: src/openms_gui/include/OpenMS/VISUAL/MultiGradientSelector.h
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
//OpenMS
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/VISUAL/MultiGradient.h>
//QT
#include <QtWidgets>
class QPaintEvent;
class QMouseEvent;
class QKeyEvent;
class QContextMenuEvent;
//std lib
#include <vector>
namespace OpenMS
{
/**
  @brief A widget which allows constructing gradients of multiple colors.
@image html MultiGradientSelector.png
The above example image shows a MultiGradientSelector.
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI MultiGradientSelector :
public QWidget
{
Q_OBJECT
public:
///Constructor
MultiGradientSelector(QWidget * parent = nullptr);
///Destructor
~MultiGradientSelector() override;
///returns a const reference to the gradient
const MultiGradient & gradient() const;
///returns a mutable reference to the gradient
MultiGradient & gradient();
/// sets the interpolation mode
void setInterpolationMode(MultiGradient::InterpolationMode mode);
/// returns the interpolation mode
MultiGradient::InterpolationMode getInterpolationMode() const;
public slots:
/// sets what interpolation mode is used
void stairsInterpolation(bool state);
protected:
///@name re-implemented Qt events
//@{
void paintEvent(QPaintEvent * e) override;
void mousePressEvent(QMouseEvent * e) override;
void mouseMoveEvent(QMouseEvent * e) override;
void mouseReleaseEvent(QMouseEvent * e) override;
void mouseDoubleClickEvent(QMouseEvent * e) override;
void keyPressEvent(QKeyEvent * e) override;
void contextMenuEvent(QContextMenuEvent * e) override;
//@}
// the actual gradient
MultiGradient gradient_;
// border margin
Int margin_;
    // width of the gradient area
    Int gradient_area_width_;
// height of the lever area
Int lever_area_height_;
//position (0-100) in the vector of the selected lever
Int selected_;
//color of the selected lever
QColor selected_color_;
//stores if the left button is pressed
bool left_button_pressed_;
};
}
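The interpolation mode above controls how colors between two gradient stops are blended. A minimal sketch of linear RGB interpolation between two stops, under the assumption that stops live at positions 0-100 as in the selected-lever comment (hypothetical helper, not the actual MultiGradient code):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// One gradient stop: a position in [0, 100] plus an RGB color.
struct Stop { int pos; std::array<int, 3> rgb; };

// Linearly interpolate the color at 'pos' between two stops,
// sketching what a linear interpolation mode might compute.
std::array<int, 3> lerpColor(const Stop& a, const Stop& b, int pos)
{
  pos = std::clamp(pos, a.pos, b.pos);
  const double t = (b.pos == a.pos) ? 0.0 : double(pos - a.pos) / (b.pos - a.pos);
  std::array<int, 3> out{};
  for (int i = 0; i < 3; ++i)
  {
    // round to nearest integer channel value
    out[i] = int(a.rgb[i] + t * (b.rgb[i] - a.rgb[i]) + 0.5);
  }
  return out;
}
```

A "stairs" mode, by contrast, would simply return a.rgb for every position up to b.pos.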
// File: src/openms_gui/include/OpenMS/VISUAL/TOPPASVertex.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Johannes Veit $
// $Authors: Johannes Junker, Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// ------------- DEBUGGING ----------------
// ---- Uncomment to enable debug mode ----
//#define TOPPAS_DEBUG
// ----------------------------------------
#ifdef TOPPAS_DEBUG
#define __DEBUG_BEGIN_METHOD__ \
{ \
for (int dbg_indnt_cntr = 0; dbg_indnt_cntr < global_debug_indent_; ++dbg_indnt_cntr) \
{ \
std::cout << " "; \
} \
std::cout << "BEGIN [" << topo_nr_ << "] " << OPENMS_PRETTY_FUNCTION << std::endl; \
++global_debug_indent_; \
}
#define __DEBUG_END_METHOD__ \
{ \
--global_debug_indent_; \
if (global_debug_indent_ < 0) global_debug_indent_ = 0; \
for (int dbg_indnt_cntr = 0; dbg_indnt_cntr < global_debug_indent_; ++dbg_indnt_cntr) \
{ \
std::cout << " "; \
} \
std::cout << "END [" << topo_nr_ << "] " << OPENMS_PRETTY_FUNCTION << std::endl; \
}
#else
#define __DEBUG_BEGIN_METHOD__ {}
#define __DEBUG_END_METHOD__ {}
#endif
// ----------------------------------------
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <QPainter>
#include <QPainterPath>
#include <QtWidgets/QGraphicsSceneMouseEvent>
#include <QtWidgets/QGraphicsSceneContextMenuEvent>
#include <QtWidgets/QGraphicsItem>
#include <QtWidgets/QMenu>
#include <QtCore/QProcess>
#include <QtCore/QStringList>
namespace OpenMS
{
class TOPPASEdge;
/**
@brief The base class of the different vertex classes.
This class contains the common functionality (such as
event handling for mouse clicks and drags) and holds the common
members of all different kinds of vertices (e.g., containers
for all in and out edges, the vertex ID, the number of a
topological sort of the whole graph, etc.)
@ingroup TOPPAS_elements
*/
class OPENMS_GUI_DLLAPI TOPPASVertex :
public QObject,
public QGraphicsItem
{
Q_OBJECT
Q_INTERFACES(QGraphicsItem)
public:
/// The container for in/out edges
typedef QList<TOPPASEdge *> EdgeContainer;
/// A mutable iterator for in/out edges
typedef EdgeContainer::iterator EdgeIterator;
/// A const iterator for in/out edges
typedef EdgeContainer::const_iterator ConstEdgeIterator;
/// A class which interfaces with QStringList for holding filenames
/// Incoming filenames are checked, and an exception is thrown if they are too long
/// to avoid issues with common filesystems (due to filesystem limits).
class TOPPASFilenames
{
public:
TOPPASFilenames() = default;
TOPPASFilenames(const QStringList& filenames);
int size() const;
const QStringList& get() const;
const QString& operator[](int i) const;
      ///@name Setters; they all use check_() and can throw!
//@{
void set(const QStringList& filenames);
void set(const QString& filename, int i);
void push_back(const QString& filename);
void append(const QStringList& filenames);
//@}
QStringList getSuffixCounts() const;
private:
/*
@brief Check length of filename and throw Exception::FileNotWritable() if too long
@param[in] filename Full path to file (using relative paths will circumvent the effectiveness)
@throw Exception::FileNotWritable() if too long (>=255 chars)
*/
void check_(const QString& filename);
QStringList filenames_; ///< filenames passed from upstream node in this round
};
/// Info for one edge and round, to be passed to next node
struct VertexRoundPackage
{
TOPPASFilenames filenames; ///< filenames passed from upstream node in this round
TOPPASEdge* edge = nullptr; ///< edge that connects the upstream node to the current one
};
/// all infos to process one round for a vertex (from all incoming vertices)
/// indexing via "parameter_index" of adjacent edge (could later be param_name) -> filenames
/// Index for input and output edges is (-1) implicitly, thus we need signed type
/// warning: the index refers to either input OR output (depending on if this structure is used for input files storage or output files storage)
using RoundPackage = std::map<Int, VertexRoundPackage>;
using RoundPackageConstIt = RoundPackage::const_iterator;
using RoundPackageIt = RoundPackage::iterator;
/// all information a node needs to process all rounds
using RoundPackages = std::vector<RoundPackage>;
/// The color of a vertex during depth-first search
enum DFS_COLOR
{
DFS_WHITE,
DFS_GRAY,
DFS_BLACK
};
    /// The status of the subtree below a vertex
enum SUBSTREESTATUS
{
      TV_ALLFINISHED, ///< all downstream nodes are done (including the ones which are fed by a parallel subtree)
TV_UNFINISHED, ///< some direct downstream node is not done
TV_UNFINISHED_INBRANCH ///< a parallel subtree which merged with some downstream node A was not done (which prevented processing of the node A)
};
/// Default Constructor
TOPPASVertex();
/// Copy constructor
TOPPASVertex(const TOPPASVertex & rhs);
/// Destructor
~TOPPASVertex() override = default;
/// Assignment operator
TOPPASVertex& operator=(const TOPPASVertex & rhs);
/// Make a copy of this vertex on the heap and return a pointer to it (useful for copying nodes)
virtual std::unique_ptr<TOPPASVertex> clone() const = 0;
/// base paint method for all derived classes. should be called first in child-class paint
void paint(QPainter* painter, const QStyleOptionGraphicsItem* /*option*/, QWidget* /*widget*/, bool round_shape = true);
/// get the round package for this node from upstream
/// -- indices in 'RoundPackage' mapping are thus referring to incoming edges of this node
/// returns false on failure
bool buildRoundPackages(RoundPackages & pkg, String & error_msg);
/// check if all upstream nodes are ready to go ( 'finished_' is true)
bool isUpstreamFinished() const;
/// Returns the bounding rectangle of this item
QRectF boundingRect() const override = 0;
/// Returns a more precise shape
QPainterPath shape() const final;
/// Returns begin() iterator of outgoing edges
ConstEdgeIterator outEdgesBegin() const;
/// Returns end() iterator of outgoing edges
ConstEdgeIterator outEdgesEnd() const;
/// Returns begin() iterator of incoming edges
ConstEdgeIterator inEdgesBegin() const;
/// Returns end() iterator of incoming edges
ConstEdgeIterator inEdgesEnd() const;
/// Returns the number of incoming edges
Size incomingEdgesCount();
/// Returns the number of outgoing edges
Size outgoingEdgesCount();
/// Adds an incoming edge
void addInEdge(TOPPASEdge * edge);
/// Adds an outgoing edge
void addOutEdge(TOPPASEdge * edge);
/// Removes an incoming edge
void removeInEdge(TOPPASEdge * edge);
/// Removes an outgoing edge
void removeOutEdge(TOPPASEdge * edge);
/// Returns the DFS color of this node
DFS_COLOR getDFSColor();
/// Sets the DFS color of this node
void setDFSColor(DFS_COLOR color);
/// Checks if all tools in the subtree below this node are finished
TOPPASVertex::SUBSTREESTATUS getSubtreeStatus() const;
/// Returns whether the vertex has been marked already (during topological sort)
bool isTopoSortMarked() const;
/// (Un)marks the vertex (during topological sort)
void setTopoSortMarked(bool b);
/// Returns the topological sort number
UInt getTopoNr() const;
/// Sets the topological sort number (overridden in tool and output vertices)
virtual void setTopoNr(UInt nr);
/// Resets the status
/// @param[in] reset_all_files Not used in this implementation, but in derived classes
virtual void reset(bool reset_all_files = false);
/// Marks this node (and everything further downstream) as unreachable. Overridden behavior in mergers.
virtual void markUnreachable();
/// Returns whether this node is reachable
bool isReachable() const;
/// Returns whether this node has already been processed during the current pipeline execution
bool isFinished() const;
/// run the tool (either ToolVertex, Merger, or OutputNode)
/// @exception NotImplemented
virtual void run();
/// invert status of recycling
virtual bool invertRecylingMode();
/// get status of recycling
bool isRecyclingEnabled() const;
/// set status of recycling
void setRecycling(const bool is_enabled);
// get the name of the vertex (to be overridden by derived classes)
virtual String getName() const = 0;
/**
@brief gets filenames for a certain output parameter (from this vertex), for a certain TOPPAS round
*/
QStringList getFileNames(int param_index, int round) const;
/// get all output files for all parameters for all rounds
QStringList getFileNames() const;
// get the output structure directly
const RoundPackages & getOutputFiles() const;
/// check if all upstream nodes are finished
bool allInputsReady() const;
public slots:
/// Called by an incoming edge when it has changed
virtual void inEdgeHasChanged();
/// Called by an outgoing edge when it has changed
virtual void outEdgeHasChanged();
signals:
/// Emitted when this item is clicked
void clicked();
/// Emitted when this item is released
void released();
/// Emitted when the position of the hovering edge changes
void hoveringEdgePosChanged(const QPointF & new_pos);
/// Emitted when a new out edge is supposed to be created
void newHoveringEdge(const QPointF & pos);
/// Emitted when the mouse is released after having dragged a new edge somewhere
void finishHoveringEdge();
/// Emitted when something has changed
void somethingHasChanged();
/// Emitted when the item is dragged
void itemDragged(qreal dx, qreal dy);
/// Emitted if an INI parameter or recycling mode or whatever was edited by the user - depending on the current state of the tool an action is taken
/// each node type decides itself if this will invalidate the pipeline, depending on its internal status
void parameterChanged(const bool invalidates_running_pipeline);
protected:
/// The list of incoming edges
EdgeContainer in_edges_;
/// The list of outgoing edges
EdgeContainer out_edges_;
/// Indicates whether a new out edge is currently being created
bool edge_being_created_{false};
/// The color of the pen
QColor pen_color_{Qt::black};
/// The color of the brush
QColor brush_color_{ Qt::lightGray};
/// The DFS color of this node
DFS_COLOR dfs_color_{DFS_WHITE};
/// "marked" flag for topological sort
bool topo_sort_marked_{false};
/// The number in a topological sort of the entire graph
UInt topo_nr_;
/// Stores the current output file names for each output parameter
RoundPackages output_files_;
/// number of rounds this node will do ('Merge All' nodes will pass everything, thus do only one round)
int round_total_{-1};
/// currently finished number of rounds (TODO: do we need that?)
int round_counter_{0};
/// Stores whether this node has already been processed during the current pipeline execution
bool finished_{false};
/// Indicates whether this node is reachable (i.e. there is an input node somewhere further upstream)
bool reachable_{true};
/// shall subsequent tools be allowed to recycle the output of this node to match the number of rounds imposed by other parent nodes?
bool allow_output_recycling_{false};
#ifdef TOPPAS_DEBUG
// Indentation level for nicer debug output
static int global_debug_indent_;
#endif
///@name reimplemented Qt events
//@{
void mouseReleaseEvent(QGraphicsSceneMouseEvent * e) override;
void mousePressEvent(QGraphicsSceneMouseEvent * e) override;
void mouseDoubleClickEvent(QGraphicsSceneMouseEvent * e) override;
void mouseMoveEvent(QGraphicsSceneMouseEvent * e) override;
void contextMenuEvent(QGraphicsSceneContextMenuEvent * event) override;
//@}
/// Moves the target pos of the edge which is just being created to @p pos
virtual void moveNewEdgeTo_(const QPointF & pos);
/// Returns a three character string (i.e. 001 instead of 1) for the given @p number
String get3CharsNumber_(UInt number) const;
/// Displays the debug output @p message, if TOPPAS_DEBUG is defined
void debugOut_(const String &
#ifdef TOPPAS_DEBUG
message
#endif
) const
{
#ifdef TOPPAS_DEBUG
for (int i = 0; i < global_debug_indent_; ++i)
{
std::cout << " ";
}
std::cout << "[" << topo_nr_ << "] " << message << std::endl;
#endif
}
};
} // namespace OpenMS
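The get3CharsNumber_() helper above zero-pads round numbers (e.g. for stable output file naming across rounds). A minimal free-function sketch of the same idea, assuming the documented "001 instead of 1" behavior (not the actual OpenMS implementation):

```cpp
#include <cassert>
#include <iomanip>
#include <sstream>
#include <string>

// Zero-pad a number to at least three characters, e.g. 1 -> "001",
// mirroring what TOPPASVertex::get3CharsNumber_() is documented to do.
// Numbers with more than three digits are passed through unchanged.
std::string pad3(unsigned number)
{
  std::ostringstream oss;
  oss << std::setw(3) << std::setfill('0') << number;
  return oss.str();
}
```

Such fixed-width numbering keeps per-round files lexicographically sorted in file browsers.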
// File: src/openms_gui/include/OpenMS/VISUAL/ParamEditor.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm, Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/CONCEPT/Types.h>
#include <QtWidgets/QLineEdit>
#include <QtWidgets/QItemDelegate>
#include <QtWidgets/QTreeWidget>
class QModelIndex;
class QStyleOptionViewItem;
class QAbstractItemModel;
#include <QtCore/qcontainerfwd.h> // for QStringList
class QString;
namespace Ui
{
class ParamEditorTemplate;
}
namespace OpenMS
{
class String;
class Param;
class ParamEditor;
/**
@brief Namespace used to hide implementation details from users.
*/
namespace Internal
{
/**
@brief Custom QLineEdit which emits a signal when losing focus (such that we can commit its data)
*/
class OPENMS_GUI_DLLAPI OpenMSLineEdit
: public QLineEdit
{
Q_OBJECT
public:
OpenMSLineEdit(QWidget * w)
:QLineEdit(w)
{}
signals:
/// emitted on focusOutEvent
void lostFocus();
protected:
void focusOutEvent ( QFocusEvent * e ) override;
void focusInEvent ( QFocusEvent * e ) override;
};
/**
@brief Internal delegate class for QTreeWidget
This handles editing of items.
*/
class OPENMS_GUI_DLLAPI ParamEditorDelegate :
public QItemDelegate
{
Q_OBJECT
public:
///Constructor
ParamEditorDelegate(QObject * parent);
/// Returns the widget(combobox or QLineEdit) used to edit the item specified by index for editing. Prevents edit operations on nodes' values and types
QWidget * createEditor(QWidget * parent, const QStyleOptionViewItem & option, const QModelIndex & index) const override;
/// Sets the data to be displayed and edited by the editor for the item specified by index.
void setEditorData(QWidget * editor, const QModelIndex & index) const override;
      /// Sets the data for the specified model and item index from that supplied by the editor. If the cell's data changed (i.e. it differs from its initial value), set its background color to yellow and emit the modified signal; otherwise make it white
void setModelData(QWidget * editor, QAbstractItemModel * model, const QModelIndex & index) const override;
/// Updates the editor for the item specified by index according to the style option given.
void updateEditorGeometry(QWidget * editor, const QStyleOptionViewItem & option, const QModelIndex & index) const override;
/// true if the underlying tree has an open QLineEdit which has uncommitted data
bool hasUncommittedData() const;
signals:
/// signal for showing ParamEditor if the Model data changed
void modified(bool) const;
protected:
/// a shortcut to calling commit(), which calls setModelData(); useful for embedded editors, but not for QDialogs etc
bool eventFilter(QObject* editor, QEvent* event) override;
private slots:
///For closing any editor and updating ParamEditor
void commitAndCloseEditor_();
///if cancel in any editor is clicked, the Dialog is closed and changes are rejected
void closeEditor_();
      /// ... a bit special, because it resets uncommitted data
void commitAndCloseLineEdit_();
private:
/// Not implemented
ParamEditorDelegate();
      /// used to modify the value of output and input files (not for output and input lists)
mutable QString fileName_;
/// holds a directory name (for output directories)
mutable QString dirName_;
/// true if a QLineEdit is still open and has not committed its data yet (so storing the current param is a bad idea)
mutable bool has_uncommited_data_;
};
/// QTreeWidget that emits a signal whenever a new row is selected
class OPENMS_GUI_DLLAPI ParamTree :
public QTreeWidget
{
Q_OBJECT
public:
/// Constructor
ParamTree(QWidget * parent);
/// Overloaded edit method to activate F2 use
bool edit(const QModelIndex & index, EditTrigger trigger, QEvent * event) override;
signals:
/// Signal that is emitted when a new item is selected
void selected(const QModelIndex & index);
protected slots:
/// Reimplemented virtual slot
void selectionChanged(const QItemSelection & selected, const QItemSelection &) override;
};
}
/**
@brief A GUI for editing or viewing a Param object
It supports two display modes:
- normal mode: only the main parameters are displayed, advanced parameters are hidden.
- advanced mode: all parameters are displayed (only available when advanced parameters are provided)
@image html ParamEditor.png
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI ParamEditor :
public QWidget
{
Q_OBJECT
public:
/// Role of the entry
enum
{
NODE, ///< Section
NORMAL_ITEM, ///< Item that is always shown
ADVANCED_ITEM ///< Item that is shown only in advanced mode
};
/// constructor
ParamEditor(QWidget* parent = nullptr);
/// destructor
~ParamEditor() override;
/// load method for Param object
void load(Param& param);
/// store edited data in Param object
void store();
/// Indicates if the data changed since last save
bool isModified() const;
/// Clears all parameters
void clear();
public slots:
/// Notifies the widget that the content was changed.
/// Emits the modified(bool) signal if the state changed.
void setModified(bool is_modified);
signals:
/// item was edited
void modified(bool);
protected slots:
/// Switches between normal and advanced mode
void toggleAdvancedMode(bool advanced);
/// Shows the documentation of an item in doc_
void showDocumentation(const QModelIndex & index);
protected:
/// recursive helper method for method storeRecursive()
void storeRecursive_(QTreeWidgetItem * child, String path, std::map<String, String> & section_descriptions);
/// Pointer to the tree widget
Internal::ParamTree* tree_;
/// The data to edit
Param* param_;
/// Indicates that the data was modified since last store/load operation
bool modified_;
/// Indicates if normal mode or advanced mode is activated
bool advanced_mode_;
private:
Ui::ParamEditorTemplate* ui_;
};
} // namespace OpenMS
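storeRecursive_() above walks the QTreeWidget and collects edited values back into a flat Param structure keyed by colon-separated section paths. A self-contained sketch of that traversal on a toy tree (hypothetical Node type and "section:name" convention assumed, not the actual OpenMS code):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// A toy parameter-tree node: sections have children, leaves carry a value.
struct Node
{
  std::string name;
  std::string value;       // empty for sections
  std::vector<Node> children;
};

// Recursively flatten the tree into "section:subsection:name" -> value,
// sketching the traversal storeRecursive_() performs on the tree widget.
void flatten(const Node& n, const std::string& path, std::map<std::string, std::string>& out)
{
  const std::string full = path.empty() ? n.name : path + ":" + n.name;
  if (n.children.empty())
  {
    out[full] = n.value; // leaf: record the fully qualified key
    return;
  }
  for (const Node& c : n.children)
  {
    flatten(c, full, out);
  }
}
```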
// File: src/openms_gui/include/OpenMS/VISUAL/HistogramWidget.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
// QT
#include <QtWidgets>
#include <QPixmap>
class QPaintEvent;
class QResizeEvent;
class QMouseEvent;
//OpenMS
#include <OpenMS/MATH/STATISTICS/Histogram.h>
namespace OpenMS
{
class AxisWidget;
class String;
/**
@brief Widget which can visualize a histogram.
@image html HistogramWidget.png
It can also be used to define a left and right boundary inside the values.
It supports normal and log scaling via the context menu.
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI HistogramWidget :
public QWidget
{
Q_OBJECT
public:
/// Constructor
HistogramWidget(const Math::Histogram<> & distribution, QWidget * parent = nullptr);
/// Destructor
~HistogramWidget() override;
    /// Returns the value of the left splitter
double getLeftSplitter() const;
/// Returns the value of the upper splitter
double getRightSplitter() const;
/// Set axis legends
void setLegend(const String & legend);
public slots:
/// Shows the splitters if @p on is true. Hides them otherwise.
void showSplitters(bool on);
/// Sets the value of the right splitter
void setRightSplitter(double pos);
/// Sets the value of the left splitter
void setLeftSplitter(double pos);
/// Enables/disables log mode
void setLogMode(bool log_mode);
protected:
/// The histogram to display
Math::Histogram<> dist_;
/// Flag that indicates if splitters are shown
bool show_splitters_;
    /// Value of the left splitter
double left_splitter_;
/// Value of the right splitter
double right_splitter_;
/// The splitter that is currently dragged (0=none, 1=left, 2=right)
UInt moving_splitter_;
/// X axis
AxisWidget * bottom_axis_;
/// Margin around plot
UInt margin_;
/// Internal buffer for the double buffering
QPixmap buffer_;
/// Flag that indicates the current mode
bool log_mode_;
/// Repaints the contents to the buffer and calls update()
void invalidate_();
///@name reimplemented Qt events
//@{
void paintEvent(QPaintEvent *) override;
void mousePressEvent(QMouseEvent *) override;
void mouseReleaseEvent(QMouseEvent *) override;
void mouseMoveEvent(QMouseEvent *) override;
void resizeEvent(QResizeEvent *) override;
//@}
protected slots:
/// Context menu event
void showContextMenu(const QPoint & pos);
};
} // namespace OpenMS
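setLogMode() above toggles between linear and log-scaled bin heights. A sketch of the kind of transform involved, using log1p so empty bins stay at exactly zero (an assumption for illustration, not necessarily how Math::Histogram scales internally):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Scale histogram bin counts for display: identity in linear mode,
// log1p (log(1 + x)) in log mode so that empty bins remain at 0.
std::vector<double> scaleBins(const std::vector<double>& counts, bool log_mode)
{
  std::vector<double> out;
  out.reserve(counts.size());
  for (double c : counts)
  {
    out.push_back(log_mode ? std::log1p(c) : c);
  }
  return out;
}
```

log1p avoids the -inf that a plain log would produce for empty bins, which matters when the widget repaints the buffer.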
// File: src/openms_gui/include/OpenMS/VISUAL/TreeView.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <QTreeWidget>
#include <OpenMS/VISUAL/MISC/CommonDefs.h>
namespace OpenMS
{
/**
@brief A better QTreeWidget for TOPPView, which supports header context menu and conveniently adding/getting headers names.
*/
class TreeView :
public QTreeWidget
{
Q_OBJECT
public:
/// Constructor
TreeView(QWidget* parent = nullptr);
/// Destructor
~TreeView() override = default;
/// sets the visible headers (and the number of columns)
void setHeaders(const QStringList& headers);
/// hides columns with the given names
/// @throws Exception::InvalidParameter if a name is not matching the current column names
void hideColumns(const QStringList& header_names);
/**
@brief Obtain header names, either from all, or only the visible columns
@param[in] which With or without invisible columns?
@return List of header names
*/
QStringList getHeaderNames(const WidgetHeader which) const;
/// get the displayed name of the header in column with index @p header_column
/// @throws Exception::ElementNotFound if header at index @p header_column is not valid
QString getHeaderName(const int header_column) const;
private slots:
/// Display header context menu; allows to show/hide columns
void headerContextMenu_(const QPoint& pos);
};
}
// File: src/openms_gui/include/OpenMS/VISUAL/DIATreeTab.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <QtWidgets>
#include <OpenMS/VISUAL/DataSelectionTabs.h>
#include <OpenMS/VISUAL/LayerDataBase.h>
class QLineEdit;
class QComboBox;
class QTreeWidget;
class QTreeWidgetItem;
namespace OpenMS
{
class TreeView;
struct OSWIndexTrace;
/**
@brief Hierarchical visualization and selection of spectra.
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI DIATreeTab :
public QWidget, public DataTabBase
{
Q_OBJECT
public:
/// Constructor
DIATreeTab(QWidget* parent = nullptr);
/// Destructor
~DIATreeTab() override = default;
// docu in base class
bool hasData(const LayerDataBase* layer) override;
/// refresh the table using data from @p cl
/// @param[in] cl Layer with OSW data; cannot be const, since we might read missing protein data from source on demand
void updateEntries(LayerDataBase* cl) override;
/// remove all visible data
void clear() override;
signals:
/// emitted when a protein, peptide, feature or transition was selected
void entityClicked(const OSWIndexTrace& trace);
/// emitted when a protein, peptide, feature or transition was double-clicked
void entityDoubleClicked(const OSWIndexTrace& trace);
private:
QLineEdit* spectra_search_box_ = nullptr;
QComboBox* spectra_combo_box_ = nullptr;
TreeView* dia_treewidget_ = nullptr;
/// points to the data which is currently shown
    /// Useful to avoid useless repaintings, which would lose the open/close state of internal tree nodes and selected items
OSWData* current_data_ = nullptr;
/**
@brief convert a tree item to a pointer into an OSWData structure
@param[in] item The tree item (protein, peptide,...) that was clicked
@return The index into the current OSWData @p current_data_
**/
OSWIndexTrace prepareSignal_(QTreeWidgetItem* item);
private slots:
/// fill the search-combo-box with current column header names
void populateSearchBox_();
/// searches for rows containing a search text (from spectra_search_box_); called when text search box is used
void spectrumSearchText_();
/// emits entityClicked() for all subitems
void rowSelectionChange_(QTreeWidgetItem*, QTreeWidgetItem*);
/// emits entityClicked() for all subitems
void rowClicked_(QTreeWidgetItem*, int col);
/// emits entityDoubleClicked() for all subitems
void rowDoubleClicked_(QTreeWidgetItem*, int col);
/// searches using text box and plots the spectrum
void searchAndShow_();
};
}
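spectrumSearchText_() above filters visible tree rows by the text entered in the search box. A plain-C++ sketch of such a case-insensitive row filter (hypothetical helpers on std::string, standing in for the QTreeWidget/QString logic):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Case-insensitive substring test, as a tree-row text filter might use it.
bool containsNoCase(const std::string& haystack, const std::string& needle)
{
  auto it = std::search(haystack.begin(), haystack.end(),
                        needle.begin(), needle.end(),
                        [](char a, char b) {
                          return std::tolower((unsigned char)a) == std::tolower((unsigned char)b);
                        });
  return it != haystack.end();
}

// Keep only rows whose text matches the query (sketch of the row filtering
// that spectrumSearchText_() performs on the visible tree items).
std::vector<std::string> filterRows(const std::vector<std::string>& rows, const std::string& query)
{
  std::vector<std::string> hits;
  for (const auto& r : rows)
  {
    if (containsNoCase(r, query)) hits.push_back(r);
  }
  return hits;
}
```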
// File: src/openms_gui/include/OpenMS/VISUAL/Painter2DBase.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/DATASTRUCTURES/ConvexHull2D.h>
#include <OpenMS/VISUAL/INTERFACES/IPeptideIds.h>
#include <OpenMS/VISUAL/PainterBase.h>
#include <vector>
class QPainter;
class QPenStyle;
class QPoint;
class String;
namespace OpenMS
{
class ConsensusFeature;
class LayerDataChrom;
class LayerDataConsensus;
class LayerDataFeature;
class LayerDataIdent;
class LayerDataIonMobility;
class LayerDataPeak;
struct PeakIndex;
class Plot2DCanvas;
/**
* @brief A base class for painting all items from a data layer (as supported by class derived from here) onto a 2D Canvas
*/
class OPENMS_GUI_DLLAPI Painter2DBase : public PainterBase
{
public:
virtual ~Painter2DBase() = default;
/**
@brief Paints items using the given painter onto the canvas.
@param[in] painter The painter used for drawing
@param[in] canvas The canvas to paint onto (should expose all the details needed, like canvas size, draw mode, colors etc)
@param[in] layer_index Which layer is currently painted
*/
virtual void paint(QPainter* painter, Plot2DCanvas* canvas, int layer_index) = 0;
/**
* \brief Emphasize a certain element (e.g. feature), e.g. when mouse hovering.
* By default, nothing is highlighted. Override for subclasses if you need highlighting.
*
* \param painter The painter used for drawing
* \param canvas The canvas to paint onto (should expose all the details needed, like canvas size, draw mode, colors etc)
* \param element Which item of the current layer should be drawn?
*/
virtual void highlightElement(QPainter* painter, Plot2DCanvas* canvas, const PeakIndex element);
protected:
/**
@brief Paints a convex hull.
@param[in] painter The QPainter to paint on
@param[in] canvas The canvas (for configuration details)
@param[in] hull Reference to convex hull
@param[in] has_identifications Draw hulls in green (true) or blue color (false)
*/
static void paintConvexHull_(QPainter& painter, Plot2DCanvas* canvas, const ConvexHull2D& hull, bool has_identifications);
/**
@brief Paints convex hulls.
@param[in] painter The QPainter to paint on
@param[in] canvas The canvas (for configuration details)
@param[in] hulls Reference to convex hulls
@param[in] has_identifications Draw hulls in green (true) or blue color (false)
*/
static void paintConvexHulls_(QPainter& painter, Plot2DCanvas* canvas, const std::vector<ConvexHull2D>& hulls, bool has_identifications);
static void paintPeptideIDs_(QPainter* painter, Plot2DCanvas* canvas, const IPeptideIds::PepIds& ids, int layer_index);
};
/**
@brief Painter2D for spectra
*/
class OPENMS_GUI_DLLAPI Painter2DPeak : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DPeak(const LayerDataPeak* parent);
void paint(QPainter*, Plot2DCanvas* canvas, int layer_index) override;
protected:
void paintAllIntensities_(QPainter& painter, Plot2DCanvas* canvas, Size layer_index, double pen_width);
/**
@brief Paints maximum intensity of individual peaks.
Paints the peaks as small ellipses. The peaks are colored according to the
selected dot gradient.
@param[in] painter The QPainter to paint with.
@param[in] canvas The canvas to paint on.
@param[in] layer_index The index of the layer.
@param[in] rt_pixel_count
@param[in] mz_pixel_count
*/
void paintMaximumIntensities_(QPainter& painter, Plot2DCanvas* canvas, Size layer_index, Size rt_pixel_count, Size mz_pixel_count);
/**
    @brief Paints the locations where MS2 scans were triggered
*/
void paintPrecursorPeaks_(QPainter& painter, Plot2DCanvas* canvas);
const LayerDataPeak* layer_; ///< the data to paint
};
/**
@brief Painter2D for chromatograms
*/
class OPENMS_GUI_DLLAPI Painter2DChrom : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DChrom(const LayerDataChrom* parent);
void paint(QPainter* painter, Plot2DCanvas* canvas, int layer_index) override;
protected:
const LayerDataChrom* layer_; ///< the data to paint
};
/**
@brief Painter2D for ion mobilograms
*/
class OPENMS_GUI_DLLAPI Painter2DIonMobility : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DIonMobility(const LayerDataIonMobility* parent);
void paint(QPainter* painter, Plot2DCanvas* canvas, int layer_index) override;
protected:
const LayerDataIonMobility* layer_; ///< the data to paint
};
/**
@brief Painter2D for Features
*/
class OPENMS_GUI_DLLAPI Painter2DFeature : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DFeature(const LayerDataFeature* parent);
void paint(QPainter*, Plot2DCanvas* canvas, int layer_index) override;
void highlightElement(QPainter* painter, Plot2DCanvas* canvas, const PeakIndex element) override;
protected:
/**
@brief Paints convex hulls (one for each mass trace) of a features layer.
*/
void paintTraceConvexHulls_(QPainter* painter, Plot2DCanvas* canvas);
/**
@brief Paints the convex hulls (one for each feature) of a features layer.
*/
void paintFeatureConvexHulls_(QPainter* painter, Plot2DCanvas* canvas);
const LayerDataFeature* layer_; ///< the data to paint
};
/**
@brief Painter2D for ConsensusFeatures
*/
class OPENMS_GUI_DLLAPI Painter2DConsensus : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DConsensus(const LayerDataConsensus* parent);
void paint(QPainter*, Plot2DCanvas* canvas, int layer_index) override;
void highlightElement(QPainter* painter, Plot2DCanvas* canvas, const PeakIndex element) override;
protected:
/**
@brief Paints the consensus elements of a consensus features layer.
@param[in] painter The QPainter to paint on
@param[in] canvas The canvas (for configuration details)
@param[in] layer_index Index of the layer
*/
void paintConsensusElements_(QPainter* painter, Plot2DCanvas* canvas, Size layer_index);
/**
@brief Paints one consensus element of a consensus features layer.
@param[in] painter The QPainter to paint on
@param[in] canvas The canvas (for configuration details)
@param[in] layer_index Index of the layer
@param[in] cf Reference to the consensus feature to be painted
*/
void paintConsensusElement_(QPainter* painter, Plot2DCanvas* canvas, Size layer_index, const ConsensusFeature& cf);
/**
@brief Checks if any element of a consensus feature is currently visible.
@param[in] canvas The canvas (for configuration details)
@param[in] cf The ConsensusFeature that needs checking
@param[in] layer_index Index of the layer.
*/
bool isConsensusFeatureVisible_(const Plot2DCanvas* canvas, const ConsensusFeature& cf, Size layer_index);
const LayerDataConsensus* layer_; ///< the data to paint
};
/**
@brief Painter2D for Identifications
*/
class OPENMS_GUI_DLLAPI Painter2DIdent : public Painter2DBase
{
public:
/// C'tor which remembers the layer to paint
Painter2DIdent(const LayerDataIdent* parent);
/// Implementation of base class
void paint(QPainter*, Plot2DCanvas* canvas, int layer_index) override;
protected:
const LayerDataIdent* layer_; ///< the data to paint
};
} // namespace OpenMS
// File: src/openms_gui/include/OpenMS/VISUAL/SequenceVisualizer.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Julianus Pfeuffer $
// $Authors: Dhanmoni Nath, Julianus Pfeuffer $
// --------------------------------------------------------------------------
#ifdef QT_WEBENGINEWIDGETS_LIB
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <QWidget>
#include <QJsonObject>
#include <QJsonArray>
class QWebEngineView;
class QWebChannel;
namespace Ui
{
class SequenceVisualizer;
}
namespace OpenMS
{
class OPENMS_GUI_DLLAPI Backend : public QObject
{
Q_OBJECT
// The protein and peptide data can be accessed via SequenceVisualizer.json_data_obj inside the JS/HTML resource file
Q_PROPERTY(QJsonObject json_data_obj MEMBER m_json_data_obj_ NOTIFY dataChanged_)
signals:
void dataChanged_();
public:
QJsonObject m_json_data_obj_;
};
class OPENMS_GUI_DLLAPI SequenceVisualizer : public QWidget
{
Q_OBJECT
public:
explicit SequenceVisualizer(QWidget* parent = nullptr);
~SequenceVisualizer() override;
public slots:
// Sets protein and peptide data on m_json_data_obj_.
void setProteinPeptideDataToJsonObj(
const QString& accession_num,
const QString& pro_seq,
const QJsonArray& peptides_data);
private:
Ui::SequenceVisualizer* ui_;
Backend backend_;
QWebEngineView* view_;
QWebChannel* channel_;
};
}// namespace OpenMS
#endif
// File: src/openms_gui/include/OpenMS/VISUAL/TOPPASTreeView.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Johannes Veit $
// $Authors: Johannes Junker $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
//QT
#include <QtWidgets/QTreeWidget>
#include <QMouseEvent>
#include <QtCore/QPoint>
namespace OpenMS
{
class String;
/**
@brief Tree view implementation for the list of TOPP tools
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI TOPPASTreeView :
public QTreeWidget
{
Q_OBJECT
public:
/// Constructor
TOPPASTreeView(QWidget * parent = nullptr);
/// Destructor
~TOPPASTreeView() override;
/// Filter tree elements by name (case-insensitive; partial/substring matches are valid)
/// An empty filter shows all elements.
/// If an element in a subtree is matched, all parents up to the root are also shown.
void filter(const QString& must_match);
/// expand all subtrees, i.e. make them visible
void expandAll();
/// collapse all subtrees; only show the uppermost level
void collapseAll();
protected:
///@name Reimplemented Qt events
//@{
void mousePressEvent(QMouseEvent * e) override;
void mouseMoveEvent(QMouseEvent * e) override;
void keyPressEvent(QKeyEvent * e) override;
void leaveEvent(QEvent * e) override;
void enterEvent(QEnterEvent* e) override;
//@}
/// The drag start position
QPoint drag_start_pos_;
};
}
// File: src/openms_gui/include/OpenMS/VISUAL/MultiGradient.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
//OpenMS
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/CONCEPT/Macros.h>
#include <OpenMS/CONCEPT/Exception.h>
#include <OpenMS/DATASTRUCTURES/String.h>
//QT
#include <QtGui/QColor>
//STL
#include <map>
#include <vector>
#include <cmath>
namespace OpenMS
{
/**
@brief A gradient of multiple colors and arbitrary distances between colors.
Color positions are given as numbers in the range 0 to 100.
There is always a color associated with positions 0 and 100.
The gradient can also be stretched to a specified range, and colors can be
precalculated and cached.
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI MultiGradient
{
public:
/// Returns the default gradient for linear intensity mode
static MultiGradient getDefaultGradientLinearIntensityMode();
/// Returns the default gradient for logarithmic intensity mode
static MultiGradient getDefaultGradientLogarithmicIntensityMode();
/// Interpolation mode.
enum InterpolationMode
{
IM_LINEAR, ///< IM_LINEAR returns the linear interpolation (default).
IM_STAIRS ///< IM_STAIRS returns the color of the next lower position
};
/// Constructor
MultiGradient();
/// Copy constructor
MultiGradient(const MultiGradient & multigradient);
/// Destructor
~MultiGradient();
/// Assignment operator
MultiGradient & operator=(const MultiGradient & rhs);
/// sets or replaces the color at position @p position
void insert(double position, QColor color);
/// removes the color at position @p position
bool remove(double position);
/// returns if a value for position @p position exists
bool exists(double position);
/**
@brief returns the position of the @p index -th point
@exception Exception::IndexOverflow is thrown for a too large index
*/
UInt position(UInt index);
/**
@brief returns the color of the @p index -th point
@exception Exception::IndexOverflow is thrown for a too large index
*/
QColor color(UInt index);
/**
@brief Returns the color at @p position.
If @p position is above or below the range [0,100], the color at the highest
or lowest position, respectively, is returned.
*/
QColor interpolatedColorAt(double position) const;
/**
@brief returns the color at @p position, with the gradient stretched between @p min and @p max.
If @p position is above or below the range [min,max], the color at the highest
or lowest position, respectively, is returned.
*/
QColor interpolatedColorAt(double position, double min, double max) const;
/// activates the precalculation of values (only approximate results are given)
void activatePrecalculationMode(double min, double max, UInt steps);
/// deactivates the precalculation of values (and deletes the precalculated values)
void deactivatePrecalculationMode();
/// index of color in precalculated table by position in gradient
inline Int precalculatedColorIndex( double position ) const
{
OPENMS_PRECONDITION(pre_.size() != 0, "MultiGradient::precalculatedColorIndex(double): Precalculation mode not activated!");
OPENMS_PRECONDITION(position >= pre_min_, (String("MultiGradient::precalculatedColorIndex(double): position ") + position + " out of specified range (" + pre_min_ + "-" + (pre_min_ + pre_size_) + ")!").c_str());
Int index = (Int)((position - pre_min_) / pre_size_ * pre_steps_);
return qBound( 0, index, (Int)pre_.size() - 1 );
}
/// precalculated color by its index in the table
inline QColor precalculatedColorByIndex( Int index ) const
{
OPENMS_PRECONDITION(pre_.size() != 0, "MultiGradient::precalculatedColorByIndex(Int): Precalculation mode not activated!");
OPENMS_PRECONDITION( index >= 0, "MultiGradient::precalculatedColorByIndex(Int): negative indexes not allowed");
OPENMS_PRECONDITION( index < (Int)pre_.size(), (String("MultiGradient::precalculatedColorByIndex(Int): index ") + index + " out of specified range (0-" + pre_.size() + ")!").c_str());
return pre_[index];
}
/**
@brief Returns a precalculated color.
If @p position is outside the range specified in activatePrecalculationMode(), the behaviour depends on the debug mode:
- With debug information, a Precondition exception is thrown
- Without debug information, array boundaries are violated, which probably causes a segmentation fault.
*/
inline QColor precalculatedColorAt(double position) const
{
return precalculatedColorByIndex( precalculatedColorIndex( position ) );
}
///return the number of color points
Size size() const;
/// size of precalculated colors table
Size precalculatedSize() const
{
return pre_.size();
}
/// sets the interpolation mode (linear or stairs); the default is linear
void setInterpolationMode(InterpolationMode mode);
/// returns the interpolation mode
InterpolationMode getInterpolationMode() const;
///convert to string representation
std::string toString() const;
/**
@brief Sets the gradient by string representation.
The string representation of a gradient starts with the interpolation mode ("Linear" or "Stairs") and the separator "|".
It is followed by an arbitrary number of position-color pairs, each terminated by a semicolon.
Such a pair consists of a floating point number (0.0-100.0), followed by a comma and
a color in RGB notation ("#RRGGBB").
Examples are:
<UL>
<LI> "Linear|0,#ffff00;100,#000000"
<LI> "Stairs|0,#ffff00;11.5,#ffaa00;32,#ff0000;55,#aa00ff;78,#5500ff;100,#000000"
</UL>
*/
void fromString(const std::string & gradient);
protected:
/// Map of position and color
std::map<double, QColor> pos_col_;
/// Current interpolation mode
InterpolationMode interpolation_mode_;
/// Precalculated colors
std::vector<QColor> pre_;
/// Minimum of the precalculated color range
double pre_min_;
/// Width of the precalculated color range
double pre_size_;
/// Steps of the precalculated color range
UInt pre_steps_;
};
}
// File: src/openms_gui/include/OpenMS/VISUAL/ColorSelector.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg$
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
//QT
#include <QtWidgets/QWidget>
class QPaintEvent;
class QMouseEvent;
namespace OpenMS
{
/**
@brief A widget for selecting a color.
It represents a color (displayed as background color) and allows changing the color.
\image html ColorSelector.png
The above example image shows four ColorSelector instances on the right side.
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI ColorSelector :
public QWidget
{
Q_OBJECT
public:
/// Constructor
ColorSelector(QWidget * parent = nullptr);
/// Destructor
~ColorSelector() override;
/// Returns the selected color
const QColor & getColor();
/// Sets the selected color
void setColor(const QColor &);
/// Qt size hint
QSize sizeHint() const override;
protected:
///@name Reimplemented Qt events
//@{
void paintEvent(QPaintEvent * e) override;
void mousePressEvent(QMouseEvent * e) override;
//@}
QColor color_;
};
}
// File: src/openms_gui/include/OpenMS/VISUAL/Plot1DCanvas.h
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg$
// $Authors: Marc Sturm, Timo Sachsenberg, Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
// OpenMS
#include <OpenMS/VISUAL/LayerData1DBase.h>
#include <OpenMS/VISUAL/LayerDataChrom.h>
#include <OpenMS/VISUAL/PlotCanvas.h>
#include <OpenMS/VISUAL/Painter1DBase.h>
// QT
#include <QTextDocument>
#include <QPoint>
// STL
#include <vector>
#include <utility>
// QT
class QAction;
namespace OpenMS
{
class Annotation1DItem;
/**
* \brief Manipulates X or Y component of points in the X-Y plane, by assuming one axis (either X or Y axis) has gravity acting upon it.
*
* Example: Assume there is an X-Y plane with RT (on the X axis) and intensity (on the Y axis). Given a point on the plane, you can make it 'drop down'
* to a minimum intensity by applying gravity on the Y axis. You can also make the point 'fly up'.
*/
class Gravitator
{
public:
using AreaXYType = PlotCanvas::GenericArea::AreaXYType;
/**
* \brief C'tor to apply gravity on any axis
* \param axis Which axis
*/
Gravitator(DIM axis)
{
setGravityAxis(axis);
}
/**
* \brief Convenience c'tor, which picks the Intensity dimension from a DimMapper as gravity axis
* \param unit_mapper Pick the Intensity axis from this mapper (or throw exception). See setGravityAxis().
*/
Gravitator(const DimMapper<2>& unit_mapper)
{
setIntensityAsGravity(unit_mapper);
}
/// Which axis is pulling a point downwards (e.g. when plotting sticks)
/// Note that pulling (see gravitateDown()) will only change the value for the gravity axis.
/// E.g. with gravity on Y, a Point(X=10, Y=10), will be pulled to Point(X=10, Y=min)
/// @param[in] axis Either X, or Y
/// @throws Exception::InvalidValue if @p axis is not X or Y
void setGravityAxis(DIM axis)
{
if (axis != DIM::X && axis != DIM::Y)
{
throw Exception::InvalidValue(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION, "Not a valid axis for 1D plotting", String((int)axis));
}
gravity_axis_ = axis;
}
/**
* @brief Convenience function, which picks the Intensity dimension from a DimMapper as gravity axis
* @param[in] unit_mapper
* @throw Exception::NotImplemented if @p unit_mapper does not have an Intensity dimension
*/
void setIntensityAsGravity(const DimMapper<2>& unit_mapper)
{
  if (unit_mapper.getDim(DIM::X).getUnit() == DIM_UNIT::INT)
  {
    setGravityAxis(DIM::X);
    return;
  }
  if (unit_mapper.getDim(DIM::Y).getUnit() == DIM_UNIT::INT)
  {
    setGravityAxis(DIM::Y);
    return;
  }
  // if the 1D view has no intensity dimension, it is unclear which dimension should be gravitational
  throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/// Which axis is affected by gravity?
DIM getGravityAxis() const
{
return gravity_axis_;
}
/**
* \brief Swap gravity axis (from X to Y, or vice versa)
* \return An orthogonal gravity model
*/
Gravitator swap() const
{
auto r = *this;
r.setGravityAxis( (r.getGravityAxis() == DIM::X) ? DIM::Y : DIM::X);
return r;
}
/// Pull the point @p p to the current gravity axis, i.e. the lowest point on the Area
///
/// @param[in] p A X-Y data point
/// @param[in] area An area which contains the min/max range of X and Y axis
/// @return A X-Y data point identical to @p p, but with its gravity-axis value changed to the minimum given in @p area
QPoint gravitateMin(QPoint p, const AreaXYType& area) const
{
if (gravity_axis_ == DIM::X)
{
p.rx() = area.minX();
}
else if (gravity_axis_ == DIM::Y)
{
p.ry() = area.minY();
}
return p;
}
/// Add value of @p delta's gravity dimension to the gravity dimension of point @p p. Other dimensions remain untouched.
///
/// @param[in] p A X-Y data point
/// @param[in] delta A distance, of which we only use the gravity dimension's part.
/// @return A X-Y data point identical to @p p, but with its gravity-axis value changed by adding delta.
QPoint gravitateWith(QPoint p, const QPoint& delta) const
{
if (gravity_axis_ == DIM::X)
{
p.rx() += delta.x();
}
else if (gravity_axis_ == DIM::Y)
{
p.ry() += delta.y();
}
return p;
}
/// Same as gravitateWith()
template<UInt D>
DPosition<D> gravitateWith(DPosition<D> p, const DPosition<D>& delta) const
{
p[(int)gravity_axis_] += delta[(int)gravity_axis_];
return p;
}
/// Change the value of @p p's gravity dimension to the value of @p targets'. Other dimensions remain untouched.
///
/// @param[in] p A X-Y data point
/// @param[in] target A target value, of which we only use the gravity dimension's part.
/// @return A X-Y data point identical to @p p, but with its gravity-axis value changed by target's value.
QPoint gravitateTo(QPoint p, const QPoint& target) const
{
if (gravity_axis_ == DIM::X)
{
p.rx() = target.x();
}
else if (gravity_axis_ == DIM::Y)
{
p.ry() = target.y();
}
return p;
}
/// Same as gravitateTo()
template<UInt D>
DPosition<D> gravitateTo(DPosition<D> p, const DPosition<D>& target) const
{
p[(int)gravity_axis_] = target[(int)gravity_axis_];
return p;
}
/// Opposite of gravitateMin()
QPoint gravitateMax(QPoint p, const AreaXYType& area) const
{
if (gravity_axis_ == DIM::X)
{
p.rx() = area.maxX();
}
else if (gravity_axis_ == DIM::Y)
{
p.ry() = area.maxY();
}
return p;
}
/// Pull the point @p p to zero (0) on the current gravity axis.
///
/// @param[in] p A X-Y data point
/// @return A X-Y data point with its gravity axis set to '0'
QPoint gravitateZero(QPoint p) const
{
if (gravity_axis_ == DIM::X)
{
p.rx() = 0;
}
else if (gravity_axis_ == DIM::Y)
{
p.ry() = 0;
}
return p;
}
/// Pull the point @p p to zero (0) on the current gravity axis.
///
/// @param[in] p A X-Y data point
/// @return A X-Y data point with its gravity axis set to '0'
template<UInt D>
DPosition<D> gravitateZero(DPosition<D> p) const
{
p[(int)gravity_axis_] = 0;
return p;
}
/// Pull the point @p p to NAN on the current gravity axis.
///
/// @param[in] p A X-Y data point
/// @return A X-Y data point with its gravity axis set to NAN
template<UInt D>
DPosition<D> gravitateNAN(DPosition<D> p) const
{
p[(int)gravity_axis_] = std::numeric_limits<float>::quiet_NaN();
return p;
}
/// Get the value of the gravity dimension
///
/// @param[in] p A X-Y data point
/// @return Either the X or Y component, depending on gravity
int gravityValue(const QPoint& p) const
{
if (gravity_axis_ == DIM::X)
{
return p.x();
}
else if (gravity_axis_ == DIM::Y)
{
return p.y();
}
// never reached, but make compilers happy
return 0;
}
/// Get the value of the gravity dimension
///
/// @param[in] p A X-Y data point
/// @return Either the X or Y component, depending on gravity
template<UInt D>
int gravityValue(const DPosition<D>& p) const
{
return p[(int)gravity_axis_];
}
/// Get the difference of values in the gravity dimension
///
/// @param[in] start The start point in XY coordinates
/// @param[in] end The end point in XY coordinates
/// @return The difference of (end-start) in the X or Y component, depending on gravity
template<UInt D>
auto gravityDiff(const DPosition<D>& start, const DPosition<D>& end) const
{
return end[(int)gravity_axis_] - start[(int)gravity_axis_];
}
private:
/// Where are points in the X-Y plane projected onto when drawing lines?
DIM gravity_axis_;
};
/**
@brief Canvas for visualization of one or several spectra.
@image html Plot1DCanvas.png
The example image shows %Plot1DCanvas displaying a raw data layer and a peak data layer.
@htmlinclude OpenMS_Plot1DCanvas.parameters
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI Plot1DCanvas :
public PlotCanvas
{
Q_OBJECT
public:
/// Label modes (percentage or absolute) of x axis and y axis
enum LabelMode
{
LM_XABSOLUTE_YABSOLUTE,
LM_XPERCENT_YABSOLUTE,
LM_XABSOLUTE_YPERCENT,
LM_XPERCENT_YPERCENT
};
/// extra empty margin added on top to ensure annotations and 100% y-axis label are properly drawn
constexpr static double TOP_MARGIN{1.09};
/// Default constructor
Plot1DCanvas(const Param& preferences, const DIM gravity_axis = DIM::Y, QWidget* parent = nullptr);
/// Destructor
~Plot1DCanvas() override;
/// returns the layer data of the layer @p index
/// @throws std::bad_cast exception if the current layer is not a LayerData1DBase
const LayerData1DBase& getLayer(Size index) const;
/// returns the layer data of the layer @p index
/// @throws std::bad_cast exception if the current layer is not a LayerData1DBase
LayerData1DBase& getLayer(Size index);
/// returns the layer data of the active layer
/// @throws std::bad_cast exception if the current layer is not a LayerData1DBase
const LayerData1DBase& getCurrentLayer() const;
/// returns the layer data of the active layer
/// @throws std::bad_cast exception if the current layer is not a LayerData1DBase
LayerData1DBase& getCurrentLayer();
/// Get the dimension on which gravity is currently acting upon (usually it's the Y axis' unit)
const DimBase& getGravityDim() const;
/// Get the dimension on which gravity is currently not acting upon (the orthogonal axis; usually it's the X axis' unit)
const DimBase& getNonGravityDim() const;
/// add a chromatogram layer
/// @param[in] chrom_exp_sptr An MSExperiment with chromatograms
/// @param[in] ondisc_sptr OnDisk experiment, as fallback to read the chromatogram from, should @p chrom_exp_sptr.getChromatograms(index) be empty
/// @param[in] chrom_annotation If OSWData was loaded, pass the shared_pointer from the LayerData. Otherwise leave empty.
/// @param[in] index Index of the chromatogram to show
/// @param[in] filename For file change watcher (can be empty, if need be)
/// @param[in] basename Name of layer (usually the basename of the file)
/// @param[in] basename_extra Optional suffix of the layer name (e.g. a peptide sequence, or an index '[39]').
/// @return true on success, false if data was missing etc
/// @note: this does NOT trigger layerActivated signal for efficiency-reasons. Do it manually afterwards!
bool addChromLayer(ExperimentSharedPtrType chrom_exp_sptr,
ODExperimentSharedPtrType ondisc_sptr,
OSWDataSharedPtrType chrom_annotation,
const int index,
const String& filename,
const String& basename,
const String& basename_extra);
///Enumerate all available paint styles
enum DrawModes
{
DM_PEAKS, ///< draw data as peak
DM_CONNECTEDLINES ///< draw as connected lines
};
/// Returns the draw mode of the current layer
DrawModes getDrawMode() const;
/// Sets draw mode of the current layer
void setDrawMode(DrawModes mode);
// Docu in base class
void showCurrentLayerPreferences() override;
/// Returns whether flipped layers exist or not
bool flippedLayersExist();
/// Flips the layer with @p index up/downwards
void flipLayer(Size index);
/// Returns whether this widget is currently in mirror mode
bool mirrorModeActive() const;
/// Sets whether this widget is currently in mirror mode
void setMirrorModeActive(bool b);
/// For convenience - calls dataToWidget
void dataToWidget(const DPosition<2>& peak, QPoint& point, bool flipped = false);
/// For convenience - calls dataToWidget
void dataToWidget(const DPosition<2>& xy_point, DPosition<2>& point, bool flipped);
/// Calls PlotCanvas::dataToWidget_(), takes mirror mode into account
void dataToWidget(double x, double y, QPoint& point, bool flipped = false);
/// For convenience - calls widgetToData
PointXYType widgetToData(const QPoint& pos);
/// Calls PlotCanvas::widgetToData_(), takes mirror mode into account
PointXYType widgetToData(double x, double y);
/**
@brief converts a distance in axis values to pixel values
*/
inline void dataToWidgetDistance(double x, double y, QPoint& point)
{
dataToWidget_(x, y, point);
// subtract the 'offset'
QPoint zero;
dataToWidget_(0, 0, zero);
point -= zero;
}
/**
@brief compute distance in data coordinates (unit axis as shown) when moving @p x/y pixel in chart/widget coordinates
*/
inline PointXYType widgetToDataDistance(double x, double y)
{
PointXYType point = Plot1DCanvas::widgetToData(x, y); // call the 1D version, otherwise intensity&mirror modes will not be honored
// subtract the 'offset'
PointXYType zero = Plot1DCanvas::widgetToData(0, 0); // call the 1D version, otherwise intensity&mirror modes will not be honored
point -= zero;
return point;
}
/// overload to call the 1D version (which has min-intensity of '0')
virtual const RangeType& getDataRange() const override
{
return overall_data_range_1d_;
}
/**
* \brief Pushes a data point back into the valid data range of the current layer area. Useful for annotation items which were mouse-dragged outside the range by the user.
* \tparam T A data point, e.g. Peak1D, which may be outside the data area
* \param data_point
* \param layer_index The layer of the above data_point (to obtain the data range of the layer)
*/
template <class T>
void pushIntoDataRange(T& data_point, const int layer_index)
{ // note: if this is needed for anything other than the 1D Canvas, you need to make sure to call the correct widgetToData/ etc functions --- they work a bit different, depending on Canvas
auto xy_unit = unit_mapper_.map(data_point); // datatype to xy
pushIntoDataRange(xy_unit, layer_index);
unit_mapper_.fromXY(xy_unit, data_point); // xy to datatype
}
/**
* \brief Pushes a data point back into the valid data range of the current layer area. Useful for annotation items which were mouse-dragged outside the range by the user.
* \param xy_unit A pair (X and Y coordinate) with values in the units currently used on the axis
* \param layer_index The layer of the above data_point (to obtain the data range of the layer)
*/
//template<> // specialization does not compile when declared within the class on GCC -- even though it should; but I'm not moving it outside! :)
void pushIntoDataRange(PointXYType& xy_unit, const int layer_index)
{ // note: if this is needed for anything other than the 1D Canvas, you need to make sure to call the correct widgetToData/ etc functions --- they work a bit different, depending on Canvas
auto p_range = unit_mapper_.fromXY(xy_unit);
const auto all_range = getLayer(layer_index).getRange();
p_range.pushInto(all_range);
xy_unit = unit_mapper_.mapRange(p_range).minPosition();
}
/// Display a static text box on the top right
void setTextBox(const QString& html);
/// ----- Annotations
/// Add an annotation item for the given peak
Annotation1DItem* addPeakAnnotation(const PeakIndex& peak_index, const QString& text, const QColor& color);
/// ----- Alignment
/// Performs an alignment of the layers with @p layer_index_1 and @p layer_index_2
void performAlignment(Size layer_index_1, Size layer_index_2, const Param& param);
/// Resets alignment_
void resetAlignment();
/// Returns the number of aligned pairs of peaks
Size getAlignmentSize();
/// Returns the score of the alignment
double getAlignmentScore() const;
/// Returns aligned_peaks_indices_
std::vector<std::pair<Size, Size> > getAlignedPeaksIndices();
/// Sets current spectrum index of current layer to @p index
void activateSpectrum(Size index, bool repaint = true);
/// Sets the Qt PenStyle of the active layer
void setCurrentLayerPeakPenStyle(Qt::PenStyle ps);
/// Actual painting takes place here
void paint(QPainter* paint_device, QPaintEvent* e);
/// interesting peaks (e.g., high-intensity ones) get annotated live with their m/z values
void setDrawInterestingMZs(bool enable);
/// Return true if interesting m/z values are annotated
bool isDrawInterestingMZs() const;
// Show/hide ion ladder on top right corner (Identification view)
void setIonLadderVisible(bool show);
// Returns true if ion ladder is visible
bool isIonLadderVisible() const;
/**
* \brief Get gravity manipulation object to apply gravity to points
* \return Gravitator
*/
const Gravitator& getGravitator() const
{
return gr_;
}
signals:
/// Requests to display all spectra in 2D plot
void showCurrentPeaksAs2D();
/// Requests to display all spectra in 3D plot
void showCurrentPeaksAs3D();
/// Requests to display this spectrum (=frame) in ion mobility plot
void showCurrentPeaksAsIonMobility(const MSSpectrum& spec);
/// Requests to display all spectra as DIA
void showCurrentPeaksAsDIA(const Precursor& pc, const MSExperiment& exp);
public slots:
// Docu in base class
void activateLayer(Size layer_index) override;
// Docu in base class
void removeLayer(Size layer_index) override;
// Docu in base class
void updateLayer(Size i) override;
// Docu in base class
void horizontalScrollBarChange(int value) override;
protected slots:
/// Reacts on changed layer parameters
void currentLayerParamtersChanged_();
protected:
/**
@brief Convert chart to widget coordinates
Translates chart (unit) coordinates to widget (pixel) coordinates.
@param[in] x the chart coordinate x
@param[in] y the chart coordinate y
@param[out] point returned widget coordinates
*/
void dataToWidget_(double x, double y, QPoint& point)
{
const auto& xy = visible_area_.getAreaXY();
const auto h_px = height();
const auto w_px = width();
point.setX(int((x - xy.minX()) / xy.width() * w_px));
if (intensity_mode_ != PlotCanvas::IM_LOG)
{
point.setY(int((xy.maxY() - y) / xy.height() * h_px));
}
else // IM_LOG
{
point.setY(h_px - int(std::log10((y - xy.minY()) + 1) / std::log10(xy.height() + 1) * h_px));
}
}
void dataToWidget_(const DPosition<2>& xy, QPoint& point)
{
dataToWidget_(xy.getX(), xy.getY(), point);
}
QPoint dataToWidget_(const DPosition<2>& xy)
{
QPoint point;
dataToWidget_(xy.getX(), xy.getY(), point);
return point;
}
// Docu in base class
bool finishAdding_() override;
/// Draws the coordinates (or coordinate deltas) to the widget's upper left corner
void drawCoordinates_(QPainter& painter, const PeakIndex& peak);
/// Draws the coordinates (or coordinate deltas) to the widget's upper left corner
void drawDeltas_(QPainter& painter, const PeakIndex& start, const PeakIndex& end);
/// Draws the alignment on @p painter
void drawAlignment_(QPainter& painter);
/// internal method, called before calling parent function PlotCanvas::changeVisibleArea_
void changeVisibleArea1D_(const UnitRange& new_area, bool repaint, bool add_to_stack);
// Docu in base class
void changeVisibleArea_(VisibleArea new_area, bool repaint = true, bool add_to_stack = false) override;
/**
@brief Changes visible area interval
This method is for convenience only. It calls changeVisibleArea_(const VisibleArea&, bool, bool).
*/
void changeVisibleArea_(const AreaXYType& new_area, bool repaint = true, bool add_to_stack = false);
/**
@brief Changes visible area interval
This method is for convenience only. It calls changeVisibleArea_(const VisibleArea&, bool, bool).
*/
void changeVisibleArea_(const UnitRange& new_area, bool repaint = true, bool add_to_stack = false);
/// Draws a highlighted peak; if draw_elongation is true, the elongation line is drawn (for measuring)
void drawHighlightedPeak_(Size layer_index, const PeakIndex& peak, QPainter& painter, bool draw_elongation = false);
/**
@brief Zooms fully out and resets the zoom stack
Sets the visible area to the initial value, such that all data (for the current spec/chrom/...) is shown.
@param[in] repaint If @em true a repaint is forced. Otherwise only the new area is set.
*/
void resetZoom(bool repaint = true) override
{
zoomClear_();
PlotCanvas::changeVisibleArea_(visible_area_.cloneWith(overall_data_range_1d_), repaint, true);
}
/// Recalculates the current scale factor based on the specified layer (= 1.0 if intensity mode != IM_PERCENTAGE)
void recalculatePercentageFactor_(Size layer_index);
/**
@brief Recalculates the overall_data_range_ (by calling PlotCanvas::recalculateRanges_)
plus the overall_data_range_1d_ (which only takes into account the current spec/chrom/.. of all layers)
A small margin is added to each side of the range in order to display all data.
*/
void recalculateRanges_() override
{
PlotCanvas::recalculateRanges_(); // for: overall_data_range_
// the same thing for: overall_data_range_1d_
RangeType& layer_range_1d = overall_data_range_1d_;
layer_range_1d.clearRanges();
for (Size layer_index = 0; layer_index < getLayerCount(); ++layer_index)
{
layer_range_1d.extend(getLayer(layer_index).getRange1D());
}
// add 4% margin (2% left, 2% right) to all dimensions, except the current gravity axis's minimum (usually intensity)
layer_range_1d.scaleBy(1.04);
// set minimum intensity to 0 (avoid negative intensities and show full height of peaks in case their common minimum is large)
auto& gravity_range = getGravityDim().map(layer_range_1d);
gravity_range.setMin(0);
// make sure that each dimension is not a single point (axis widget won't like that)
// (this needs to be the last command to ensure this property holds when leaving the function!)
layer_range_1d.minSpanIfSingular(1);
}
// Docu in base class
void updateScrollbars_() override;
// Docu in base class
void intensityModeChange_() override;
/// Adjust the gravity axis (usually y-axis with intensity) according to the given range on the x-axis
/// (since the user cannot freely choose the limits of this axis in 1D View)
RangeAllType correctGravityAxisOfVisibleArea_(UnitRange area);
/** @name Reimplemented QT events */
//@{
void paintEvent(QPaintEvent* e) override;
void mousePressEvent(QMouseEvent* e) override;
void mouseReleaseEvent(QMouseEvent* e) override;
void mouseMoveEvent(QMouseEvent* e) override;
void keyPressEvent(QKeyEvent* e) override;
void contextMenuEvent(QContextMenuEvent* e) override;
//@}
// docu in base class
void zoomForward_() override;
// docu in base class
void zoom_(int x, int y, bool zoom_in) override;
// docu in base class
void translateLeft_(Qt::KeyboardModifiers m) override;
// docu in base class
void translateRight_(Qt::KeyboardModifiers m) override;
// docu in base class
void translateForward_() override;
// docu in base class
void translateBackward_() override;
// docu in base class
void paintGridLines_(QPainter& painter) override;
/// Find peak next to the given position
PeakIndex findPeakAtPosition_(QPoint);
/// Shows dialog and calls addLabelAnnotation_
void addUserLabelAnnotation_(const QPoint& screen_position);
/// Adds an annotation item at the given screen position
void addLabelAnnotation_(const QPoint& screen_position, const QString& label_text);
/// Shows dialog and calls addPeakAnnotation_
void addUserPeakAnnotation_(PeakIndex near_peak);
/// Ensure that all annotations are within data range
void ensureAnnotationsWithinDataRange_();
friend class Painter1DChrom;
friend class Painter1DPeak;
friend class Painter1DIonMobility;
/////////////////////
////// data members
/////////////////////
/// The data range (m/z, RT and intensity) of the current(!) spec/chrom for all layers
RangeType overall_data_range_1d_;
/// Draw modes (for each layer) - sticks or connected lines
std::vector<DrawModes> draw_modes_;
/// Draw style (for each layer)
std::vector<Qt::PenStyle> peak_penstyle_;
/// start point of "ruler" in pixel coordinates for measure mode
QPoint measurement_start_point_px_;
/// Indicates whether this widget is currently in mirror mode
bool mirror_mode_ = false;
/// Indicates whether annotation items are just being moved on the canvas
bool moving_annotations_ = false;
/// Indicates whether an alignment is currently visualized
bool show_alignment_ = false;
/// Layer index of the first alignment layer
Size alignment_layer_1_;
/// Layer index of the second alignment layer
Size alignment_layer_2_;
/// Stores the alignment as MZ values of pairs of aligned peaks in both spectra
std::vector<std::pair<double, double> > aligned_peaks_mz_delta_;
/// Stores the peak indices of pairs of aligned peaks in both spectra
std::vector<std::pair<Size, Size> > aligned_peaks_indices_;
/// Stores the score of the last alignment
double alignment_score_ = 0.0;
/// whether the ion ladder is displayed on the top right corner in ID view
bool ion_ladder_visible_ = true;
/// annotate interesting peaks with m/z's
bool draw_interesting_MZs_ = false;
/// The text box in the upper left corner with the current data coordinates of the cursor
QTextDocument text_box_content_;
/// handles pulling/pushing of points to the edges of the widget
Gravitator gr_;
};
} // namespace OpenMS
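The `dataToWidget_()` method above maps chart (unit) coordinates onto widget pixels, with a logarithmic branch for the `IM_LOG` intensity mode. The following Qt-free sketch reproduces that arithmetic; the `Pixel` struct, `VisibleArea` struct, and function names are illustrative stand-ins, not part of the OpenMS API.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for QPoint.
struct Pixel { int x; int y; };

enum class IntensityMode { Linear, Log };

// Stand-in for the visible area's XY range (visible_area_.getAreaXY()).
struct VisibleArea
{
  double min_x, max_x, min_y, max_y;
  double width()  const { return max_x - min_x; }
  double height() const { return max_y - min_y; }
};

// Mirrors Plot1DCanvas::dataToWidget_: x maps linearly; y maps either
// linearly (widget top = max y) or log-compressed (IM_LOG branch).
Pixel dataToWidget(double x, double y, const VisibleArea& xy,
                   int w_px, int h_px, IntensityMode mode)
{
  Pixel p;
  p.x = int((x - xy.min_x) / xy.width() * w_px);
  if (mode == IntensityMode::Linear)
  {
    p.y = int((xy.max_y - y) / xy.height() * h_px);
  }
  else // log: compress high intensities; '+1' avoids log10(0)
  {
    p.y = h_px - int(std::log10((y - xy.min_y) + 1.0)
                     / std::log10(xy.height() + 1.0) * h_px);
  }
  return p;
}
```

With an m/z range of [100, 200], intensity range [0, 1000] and a 400x300 pixel widget, the midpoint of both axes lands at pixel (200, 150) in linear mode, while in log mode the full intensity range still spans the whole widget height.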
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/PainterBase.h | .h | 2,902 | 81 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/CONCEPT/Types.h>
#include <string_view>
#include <QPainterPath>
#include <QRgb>
class QColor;
class QPainter;
class QPenStyle;
class QPoint;
namespace OpenMS
{
class String;
enum class ShapeIcon
{
DIAMOND,
SQUARE,
CIRCLE,
TRIANGLE
};
/**
* @brief An empty base class with some static convenience functions
*/
class OPENMS_GUI_DLLAPI PainterBase
{
public:
/// translates 'diamond', 'square', 'circle', 'triangle' into a ShapeIcon
/// @throws Exception::InvalidValue if the string matches none of these
static ShapeIcon toShapeIcon(const String& icon);
/// static method to draw a dashed line
static void drawDashedLine(const QPoint& from, const QPoint& to, QPainter* painter, const QColor& color);
/// draw a cross at @p position, using a certain size (= width = height) of the cross
static void drawCross(const QPoint& position, QPainter* painter, const int size = 8);
/// draw a caret '^' at @p position, using a certain size (= width) of the caret
static void drawCaret(const QPoint& position, QPainter* painter, const int size = 8);
/// draw an unfilled diamond at @p position, using a certain size (= width = height) of the diamond
static void drawDiamond(const QPoint& position, QPainter* painter, const int size = 8);
/// draws squares, circles, etc.
static void drawIcon(const QPoint& pos, const QRgb& color, const ShapeIcon icon, Size s, QPainter& p);
/// An arrow head which is open, i.e. '>'
static QPainterPath getOpenArrow(int arrow_width);
/// An arrow head which is closed, i.e. a triangle
static QPainterPath getClosedArrow(int arrow_width);
/**
* \brief
* \param painter The painter to paint with
* \param pen For setting line width and color
* \param start Start position of the line
* \param end End position of the line
* \param arrow_start An (optional) arrow head. Use 'getOpenArrow' or 'getClosedArrow' for predefined arrows
* \param arrow_end An (optional) arrow tail. Use 'getOpenArrow' or 'getClosedArrow' for predefined arrows
* \return The bounding rectangle of the line and arrows (if any)
*/
static QRectF drawLineWithArrows(QPainter* painter, const QPen& pen, const QPoint& start, const QPoint& end,
const QPainterPath& arrow_start = QPainterPath(),
const QPainterPath& arrow_end = QPainterPath());
};
} // namespace OpenMS
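`PainterBase::toShapeIcon()` turns an icon name into a `ShapeIcon` enum value and throws for unknown names. A minimal sketch of that lookup is shown below; the real OpenMS code throws `Exception::InvalidValue`, for which `std::invalid_argument` stands in here, and the table-based implementation is an assumption.

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

enum class ShapeIcon { DIAMOND, SQUARE, CIRCLE, TRIANGLE };

// Sketch of PainterBase::toShapeIcon: map the four supported names to the
// enum, throw for anything else.
ShapeIcon toShapeIcon(const std::string& icon)
{
  static const std::map<std::string, ShapeIcon> lut = {
    {"diamond",  ShapeIcon::DIAMOND},
    {"square",   ShapeIcon::SQUARE},
    {"circle",   ShapeIcon::CIRCLE},
    {"triangle", ShapeIcon::TRIANGLE},
  };
  const auto it = lut.find(icon);
  if (it == lut.end()) throw std::invalid_argument("unknown icon: " + icon);
  return it->second;
}
```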
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/TVIdentificationViewController.h | .h | 4,114 | 107 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/METADATA/SpectrumSettings.h>
#include <OpenMS/VISUAL/LayerDataBase.h>
#include <OpenMS/VISUAL/TVControllerBase.h>
#include <vector>
namespace OpenMS
{
class Annotation1DItem;
class NASequence;
class SpectraIDViewTab;
class TOPPViewBase;
/**
@brief Behavior of TOPPView in identification mode.
*/
class TVIdentificationViewController
: public TVControllerBase
{
Q_OBJECT
public:
/// Construct the behaviour with its parent
TVIdentificationViewController(TOPPViewBase* parent, SpectraIDViewTab* spec_id_view_);
public slots:
/// Behavior for showSpectrumAsNew1D
virtual void showSpectrumAsNew1D(int spectrum_index, int peptide_id_index, int peptide_hit_index);
/// Show spectrum without selecting an identification
virtual void showSpectrumAsNew1D(int index);
/// Behavior for activate1DSpectrum
virtual void activate1DSpectrum(int spectrum_index, int peptide_id_index, int peptide_hit_index);
/// select spectrum without selecting an identification
virtual void activate1DSpectrum(int index);
/// Behavior for deactivate1DSpectrum
virtual void deactivate1DSpectrum(int index);
/// Slot for behavior activation
void activateBehavior() override;
/// Slot for behavior deactivation
void deactivateBehavior() override;
void setVisibleArea1D(double l, double h);
private:
/// Adds labels for the provided precursors to the 1D spectrum
void addPrecursorLabels1D_(const std::vector<Precursor>& pcs);
/// Removes the precursor labels from the specified 1D spectrum
void removeTemporaryAnnotations_(Size spectrum_index);
/// Adds a theoretical spectrum (as configured in the preferences dialog) for the given peptide hit.
void addTheoreticalSpectrumLayer_(const PeptideHit& ph);
/// Adds peak annotations from the identification data structure
void addPeakAnnotationsFromID_(const PeptideHit& hit);
/// removes all layers with theoretical spectra generated in identification view
void removeTheoreticalSpectrumLayer_();
/// remove all graphical peak annotations
void removeGraphicalPeakAnnotations_(int spectrum_index);
/// Adds annotation (compound name, adducts, ppm error) to a peak in 1D spectra
void addPeakAnnotations_(const PeptideIdentificationList& ph);
/// Helper function for text formatting
String n_times(Size n, const String& input);
/// Helper function that turns fragment annotations into coverage Strings for visualization with the sequence
void extractCoverageStrings(std::vector<PeptideHit::PeakAnnotation> frag_annotations, String& alpha_string, String& beta_string, Size alpha_size, Size beta_size);
/// Generates HTML for showing the sequence with annotations of matched fragments
template <typename SeqType>
String generateSequenceDiagram_(const SeqType& seq, const std::vector<PeptideHit::PeakAnnotation>& annotations, const StringList& top_ions, const StringList& bottom_ions);
/// Helper function for generateSequenceDiagram_() - overload for peptides
void generateSequenceRow_(const AASequence& seq, std::vector<String>& row);
/// Helper function for generateSequenceDiagram_() - overload for oligonucleotides
void generateSequenceRow_(const NASequence& seq, std::vector<String>& row);
/// Helper function, that collapses a vector of Strings into one String
String collapseStringVector(std::vector<String> strings);
private:
SpectraIDViewTab* spec_id_view_;
/// Used to check which annotation handles have been added automatically by the identification view.
/// The Annotation1DContainer owns the AnnotationItems.
std::vector<Annotation1DItem*> temporary_annotations_;
};
}
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/AxisPainter.h | .h | 1,778 | 60 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <vector>
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <QPaintEvent>
#include <QPainter>
namespace OpenMS
{
/**
@brief Draws a coordinate axis.
It has only static methods, that's why the constructor is private.
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI AxisPainter
{
public:
/// Typedef for the grid vector
typedef std::vector<std::vector<double> > GridVector;
/// Where the axis is placed
enum Alignment
{
TOP,
BOTTOM,
LEFT,
RIGHT
};
/// Draws an axis
static void paint(QPainter * painter, QPaintEvent * e, const double & min, const double & max, const GridVector & grid,
const Int width, const Int height, const Alignment alignment, const UInt margin,
const bool show_legend, const String& legend, const bool shorten_number,
const bool is_log, const bool is_inverse_orientation);
private:
/// Constructor: only static methods
AxisPainter();
/// sets @p short_num to a shortened string representation ("123.4 k/M/G") of @p number
static void getShortenedNumber_(QString& short_num, double number);
/// Round to 8 significant digits after the decimal point (and apply log scaling)
static double scale_(double x, bool is_log);
};
}
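`AxisPainter::getShortenedNumber_()` renders large axis labels in a compact "123.4 k/M/G" form. The snippet below is a plausible standalone reimplementation; the exact thresholds and number formatting of the OpenMS version are assumptions.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Sketch of getShortenedNumber_: append a k/M/G suffix once the magnitude
// crosses 1e3/1e6/1e9, otherwise print the number as-is.
std::string shortenNumber(double number)
{
  std::ostringstream out;
  if (number >= 1e9)      { out << number / 1e9 << " G"; }
  else if (number >= 1e6) { out << number / 1e6 << " M"; }
  else if (number >= 1e3) { out << number / 1e3 << " k"; }
  else                    { out << number; }
  return out.str();
}
```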
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/Plot2DWidget.h | .h | 5,143 | 147 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
// OpenMS
#include <OpenMS/VISUAL/PlotWidget.h>
#include <OpenMS/VISUAL/Plot2DCanvas.h>
class QGroupBox;
class QLabel;
class QCheckBox;
namespace OpenMS
{
class Plot1DWidget;
/**
@brief Widget for 2D-visualization of peak map and feature map data
The widget is composed of two scroll bars, two AxisWidget and a Plot2DCanvas as central widget.
@image html Plot2DWidget.png
The example image shows %Plot2DWidget displaying a peak layer and a feature layer.
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI Plot2DWidget :
public PlotWidget
{
Q_OBJECT
public:
/// Main managed data type (experiment)
typedef LayerDataBase::ExperimentSharedPtrType ExperimentSharedPtrType;
/// Default constructor
Plot2DWidget(const Param & preferences, QWidget * parent = nullptr);
/// Destructor
~Plot2DWidget() override = default;
// docu in base class
Plot2DCanvas* canvas() const override
{
return static_cast<Plot2DCanvas*>(canvas_);
}
/// const reference to the horizontal projection
const Plot1DWidget* getProjectionOntoX() const;
/// const reference to the vertical projection
const Plot1DWidget* getProjectionOntoY() const;
/// Returns if one of the projections is visible (or both are visible)
bool projectionsVisible() const;
/// set the mapper for the canvas and the projections (the non-marginal projection
/// axis will be mapped to Intensity).
/// Also tries to guess drawing (sticks vs line) and intensity mode
void setMapper(const DimMapper<2>& mapper) override
{
canvas_->setMapper(mapper); // update canvas
// ... and projections: the projected Dim becomes intensity
projection_onto_X_->setMapper(DimMapper<2>({mapper.getDim(DIM::X).getUnit(), DIM_UNIT::INT}));
projection_onto_Y_->setMapper(DimMapper<2>({DIM_UNIT::INT, mapper.getDim(DIM::Y).getUnit()}));
// decide on default draw mode, depending on main axis unit (e.g. m/z or RT)
auto set_style = [&](const DIM_UNIT main_unit_1d, Plot1DCanvas* canvas) {
switch (main_unit_1d)
{ // this may not be optimal for every unit. Feel free to change behavior.
case DIM_UNIT::MZ:
// to show isotope distributions as sticks
canvas->setDrawMode(Plot1DCanvas::DM_PEAKS);
canvas->setIntensityMode(PlotCanvas::IM_PERCENTAGE);
break;
// all other units
default:
canvas->setDrawMode(Plot1DCanvas::DM_CONNECTEDLINES);
canvas->setIntensityMode(PlotCanvas::IM_SNAP);
break;
}
};
set_style(mapper.getDim(DIM::X).getUnit(), projection_onto_Y_->canvas());
set_style(mapper.getDim(DIM::Y).getUnit(), projection_onto_X_->canvas());
}
public slots:
// Docu in base class
void recalculateAxes_() override;
/// Shows/hides the projections
void toggleProjections();
// Docu in base class
void showGoToDialog() override;
signals:
/**
@brief Signal emitted whenever the visible area changes.
@param[in] area The new visible area.
*/
void visibleAreaChanged(DRange<2> area);
/// Requests to display the spectrum with index @p index in 1D
void showSpectrumAsNew1D(int index);
/// Requests to display the chromatograms with the given indices in 1D
void showChromatogramsAsNew1D(std::vector<int> indices);
/// Requests to display all spectra in 3D plot
void showCurrentPeaksAs3D();
/// Requests to display this spectrum (=frame) in ion mobility plot
void showCurrentPeaksAsIonMobility(const MSSpectrum& spec);
protected:
/// shows projections information
void projectionInfo_(int peaks, double intensity, double max);
/// Widget showing the projection onto the X axis
Plot1DWidget * projection_onto_X_;
/// Widget showing the projection onto the Y axis
Plot1DWidget * projection_onto_Y_;
/// Group box that shows information about the projections
QGroupBox * projection_box_;
/// Number of peaks of the projection
QLabel * projection_peaks_;
/// Intensity sum of the projection
QLabel * projection_sum_;
/// Intensity maximum of the projection
QLabel * projection_max_;
/// Checkbox that indicates that projections should be automatically updated (with a slight delay)
QCheckBox * projections_auto_;
/// Timer that triggers auto-update of projections
QTimer * projections_timer_;
private slots:
/// extracts the projections from the @p source_layer and displays them
void showProjections_(const LayerDataBase* source_layer);
/// slot that monitors the visible area changes and triggers the update of projections
void autoUpdateProjections_();
};
}
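The lambda inside `Plot2DWidget::setMapper()` picks stick drawing with percentage intensity for m/z axes (so isotope distributions show as sticks) and connected lines with snap intensity for everything else. That decision can be sketched as a free function; the enum names below are stand-ins for the OpenMS types (`DIM_UNIT`, `Plot1DCanvas::DM_*`, `PlotCanvas::IM_*`).

```cpp
#include <cassert>
#include <utility>

enum class DimUnit { MZ, RT, IM, INT };
enum class DrawMode { Peaks, ConnectedLines };          // sticks vs lines
enum class IntensityMode { Percentage, Snap };

// Mirrors the unit-dependent defaults chosen in setMapper's set_style lambda.
std::pair<DrawMode, IntensityMode> defaultStyle(DimUnit main_unit_1d)
{
  switch (main_unit_1d)
  {
    case DimUnit::MZ: // show isotope distributions as sticks
      return {DrawMode::Peaks, IntensityMode::Percentage};
    default:          // chromatograms, mobilograms etc. as connected lines
      return {DrawMode::ConnectedLines, IntensityMode::Snap};
  }
}
```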
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/Painter1DBase.h | .h | 3,080 | 101 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/KERNEL/MSSpectrum.h>
#include <OpenMS/VISUAL/PainterBase.h>
class QPainter;
class QPenStyle;
namespace OpenMS
{
class LayerData1DBase;
class LayerData1DChrom;
class LayerData1DIonMobility;
class LayerData1DPeak;
class Plot1DCanvas;
/**
* @brief A base class for painting all items from a data layer (as supported by class derived from here) onto a 1D Canvas
*/
class OPENMS_GUI_DLLAPI Painter1DBase : public PainterBase
{
public:
virtual ~Painter1DBase() = default;
/**
@brief Paints items using the given painter onto the canvas.
@param[in] painter The painter used for drawing
@param[in] canvas The canvas to paint onto (should expose all the details needed, like canvas size, draw mode, colors etc)
@param[in] layer_index Which layer is currently painted (FIXME: remove when Canvas1D::DrawMode and PenStyle are factored out)
*/
virtual void paint(QPainter* painter, Plot1DCanvas* canvas, int layer_index) = 0;
void drawAnnotations_(const LayerData1DBase* layer, QPainter& painter, Plot1DCanvas* canvas) const;
};
/**
@brief Painter1D for spectra
*/
class OPENMS_GUI_DLLAPI Painter1DPeak : public Painter1DBase
{
public:
/// C'tor which remembers the layer to paint
Painter1DPeak(const LayerData1DPeak* parent);
/// Implementation of base class
void paint(QPainter*, Plot1DCanvas* canvas, int layer_index) override;
protected:
/// annotate up to 10 interesting peaks in the range @p vbegin to @p vend with their m/z values (using deisotoping and intensity filtering)
void drawMZAtInterestingPeaks_(QPainter& painter, Plot1DCanvas* canvas, MSSpectrum::ConstIterator v_begin, MSSpectrum::ConstIterator v_end) const;
const LayerData1DPeak* layer_; ///< the data to paint
};
/**
@brief Painter1D for chromatograms
*/
class OPENMS_GUI_DLLAPI Painter1DChrom : public Painter1DBase
{
public:
/// C'tor which remembers the layer to paint
Painter1DChrom(const LayerData1DChrom* parent);
/// Implementation of base class
void paint(QPainter*, Plot1DCanvas* canvas, int layer_index) override;
protected:
const LayerData1DChrom* layer_; ///< the data to paint
};
/**
@brief Painter1D for mobilograms
*/
class OPENMS_GUI_DLLAPI Painter1DIonMobility : public Painter1DBase
{
public:
/// C'tor which remembers the layer to paint
Painter1DIonMobility(const LayerData1DIonMobility* parent);
/// Implementation of base class
void paint(QPainter*, Plot1DCanvas* canvas, int layer_index) override;
protected:
const LayerData1DIonMobility* layer_; ///< the data to paint
};
} // namespace OpenMS
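The Painter1D hierarchy above is a classic strategy pattern: one painter class per layer type (peaks, chromatograms, mobilograms), dispatched through a common virtual `paint()`. A Qt-free sketch of that structure, with illustrative names and strings standing in for actual QPainter drawing:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Strategy interface: one concrete painter per 1D layer type.
struct Painter1D
{
  virtual ~Painter1D() = default;
  virtual std::string paint() const = 0; // real code draws via QPainter
};

struct PeakPainter : Painter1D // cf. Painter1DPeak
{
  std::string paint() const override { return "sticks"; }
};

struct ChromPainter : Painter1D // cf. Painter1DChrom
{
  std::string paint() const override { return "connected lines"; }
};

// The canvas only sees the base class and paints every layer the same way.
std::vector<std::string> paintAll(const std::vector<std::unique_ptr<Painter1D>>& painters)
{
  std::vector<std::string> out;
  for (const auto& p : painters) out.push_back(p->paint());
  return out;
}
```

This keeps per-type drawing code out of the canvas: adding a new layer type means adding one painter class, not editing a central switch.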
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/EnhancedTabBarWidgetInterface.h | .h | 2,566 | 82 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Timo Sachsenberg $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <QObject>
namespace OpenMS
{
class EnhancedTabBar;
/**
@brief provides a signal mechanism (by deriving from QObject) for classes which are not allowed to have signals themselves.
This might be useful for EnhancedTabBarWidgetInterface, since that cannot derive from QObject due to the diamond inheritance problem via its parent classes (e.g. PlotWidget).
Diamond problem:

      PlotWidget
       /      \
   ETBWI    QWidget
     -!        |
            QObject
Thus, ETBWI cannot derive from QObject and needs to delegate its signaling duties to a SignalProvider.
Wrap all signals that are required in a function call and call these functions instead of emitting the signal directly.
Connect the signal to a slot by using QObject::connect() externally somewhere.
*/
class OPENMS_GUI_DLLAPI SignalProvider
: public QObject
{
Q_OBJECT
public:
void emitAboutToBeDestroyed(int id)
{
emit aboutToBeDestroyed(id);
}
signals:
void aboutToBeDestroyed(int id);
};
/**
@brief Widgets that are placed into an EnhancedTabBar must implement this interface
@ingroup Visual
*/
class OPENMS_GUI_DLLAPI EnhancedTabBarWidgetInterface
{
public:
/// C'tor; creates a new ID;
EnhancedTabBarWidgetInterface();
/// Destructor (emits SignalProvider::aboutToBeDestroyed)
virtual ~EnhancedTabBarWidgetInterface();
/// adds itself to this tabbar and upon destruction removes itself again.
/// Make sure the tabbar still exists when you call this function and this object is destroyed
void addToTabBar(EnhancedTabBar* const parent, const String& caption, const bool make_active_tab = true);
/// get the EnhancedTabBar unique window id
Int getWindowId() const;
/// the first object to be created will get this ID
static Int getFirstWindowID();
private:
Int window_id_ { -1 };
SignalProvider sp_; ///< emits the signal that the EnhancedTabBarWidgetInterface is about to be destroyed
};
} // namespace OpenMS
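The SignalProvider pattern delegates signalling to an owned helper object when the class itself cannot derive from QObject. The Qt-free sketch below uses `std::function` in place of a Qt signal/slot pair; class and member names are illustrative.

```cpp
#include <cassert>
#include <functional>

// Stand-in for the QObject-derived SignalProvider: holds a "slot" and emits
// through it on request.
class SignalProvider
{
public:
  std::function<void(int)> about_to_be_destroyed; // the "signal"
  void emitAboutToBeDestroyed(int id)
  {
    if (about_to_be_destroyed) about_to_be_destroyed(id);
  }
};

// Stand-in for EnhancedTabBarWidgetInterface: cannot be a QObject itself,
// so it owns a SignalProvider and emits on destruction.
class TabBarWidget
{
public:
  explicit TabBarWidget(int id) : window_id_(id) {}
  ~TabBarWidget() { sp_.emitAboutToBeDestroyed(window_id_); }
  SignalProvider& provider() { return sp_; }
  int getWindowId() const { return window_id_; }
private:
  int window_id_;
  SignalProvider sp_;
};
```

Callers connect to the provider rather than to the widget itself, exactly as the doc comment above suggests: wrap each required signal in a function call and connect externally.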
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/ListEditor.h | .h | 4,530 | 159 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: David Wojnar $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#ifndef Q_MOC_RUN
#include <OpenMS/DATASTRUCTURES/ListUtils.h>
#endif
#include <QDialog>
#include <QListWidget>
#include <QItemDelegate>
class QPushButton;
namespace OpenMS
{
namespace Internal
{
class ListTable;
class ListEditorDelegate;
}
/**
@brief Editor for editing int, double and string lists (including output and input file lists)
*/
class OPENMS_GUI_DLLAPI ListEditor :
public QDialog
{
Q_OBJECT
public:
//types of lists
enum Type
{
INT,
FLOAT,
STRING,
OUTPUT_FILE,
INPUT_FILE
};
///Constructor
ListEditor(QWidget * parent = nullptr, const QString& title = "");
///returns modified list
StringList getList() const;
///sets the list (and its type) that will be modified by the user
void setList(const StringList & list, ListEditor::Type type);
///set restrictions for list elements
void setListRestrictions(const String & restrictions);
///set name of type
void setTypeName(QString name);
private:
///List type
ListEditor::Type type_;
///displays the list
Internal::ListTable * listTable_;
///Delegate between view and model
Internal::ListEditorDelegate * listDelegate_;
/// button for new Row
QPushButton * newRowButton_;
///button for removing row
QPushButton * removeRowButton_;
///button clicked if modifications are accepted
QPushButton * OkButton_;
///button clicked if modifications are rejected
QPushButton * CancelButton_;
};
/**
@brief Namespace used to hide implementation details from users.
*/
namespace Internal
{
class OPENMS_GUI_DLLAPI ListTable :
public QListWidget
{
Q_OBJECT
public:
//Default Constructor
ListTable(QWidget * parent = nullptr);
//returns the internally stored list
StringList getList();
//sets new list
void setList(const StringList & list, ListEditor::Type type);
public slots:
void createNewRow();
void removeCurrentRow();
private:
///List type
ListEditor::Type type_;
//everything is internally stored as stringlist
StringList list_;
};
/**
@brief Internal delegate class
This handles editing of items.
*/
class OPENMS_GUI_DLLAPI ListEditorDelegate :
public QItemDelegate
{
Q_OBJECT
public:
///Constructor
ListEditorDelegate(QObject * parent);
/// not reimplemented
QWidget * createEditor(QWidget * parent, const QStyleOptionViewItem & option, const QModelIndex & index) const override;
/// Sets the data to be displayed and edited by the editor for the item specified by index.
void setEditorData(QWidget * editor, const QModelIndex & index) const override;
/// Sets the data for the specified model and item index from that supplied by the editor. If the cell's data changed (i.e. differs from its initial value), set its background color to yellow and emit the modified signal; otherwise make it white
void setModelData(QWidget * editor, QAbstractItemModel * model, const QModelIndex & index) const override;
/// Updates the editor for the item specified by index according to the style option given.
void updateEditorGeometry(QWidget * editor, const QStyleOptionViewItem & option, const QModelIndex & index) const override;
//sets Type of List
void setType(const ListEditor::Type type);
//sets restrictions for list elements
void setRestrictions(const String & restrictions);
///set name of type
void setTypeName(QString name);
///sets the fileName
void setFileName(QString name);
private:
/// Not implemented => private
ListEditorDelegate();
///List type
ListEditor::Type type_;
///restrictions for list elements
String restrictions_;
///type name. used to distinguish output/input from string lists
QString typeName_;
///used to set input and output values in setModelData
mutable QString file_name_;
};
}
} // namespace OpenMS
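When committing an edited cell, a delegate like ListEditorDelegate has to check the entry against the list type (INT, FLOAT, STRING, ...). The sketch below shows one way such per-type validation could look; it is an assumption for illustration, not the actual OpenMS check.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

enum class ListType { INT, FLOAT, STRING };

// Hypothetical validation: an entry is valid if the whole string parses as
// a value of the list's element type (strings are always accepted).
bool isValidEntry(const std::string& s, ListType type)
{
  if (type == ListType::STRING) return true;
  if (s.empty()) return false;
  char* end = nullptr;
  if (type == ListType::INT) { std::strtol(s.c_str(), &end, 10); }
  else                       { std::strtod(s.c_str(), &end); }
  return end != nullptr && *end == '\0'; // full string must be consumed
}
```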
| Unknown |
3D | OpenMS/OpenMS | src/openms_gui/include/OpenMS/VISUAL/LayerDataBase.h | .h | 19,006 | 463 | // Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
#include <OpenMS/DATASTRUCTURES/Param.h>
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/DATASTRUCTURES/String.h>
#include <OpenMS/PROCESSING/MISC/DataFilters.h>
#include <OpenMS/KERNEL/ConsensusMap.h>
#include <OpenMS/KERNEL/FeatureMap.h>
#include <OpenMS/KERNEL/MSExperiment.h>
#include <OpenMS/KERNEL/OnDiscMSExperiment.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/METADATA/PeptideIdentification.h>
#include <OpenMS/METADATA/ProteinIdentification.h>
#include <OpenMS/METADATA/AnnotatedMSRun.h>
#include <OpenMS/VISUAL/LogWindow.h>
#include <OpenMS/VISUAL/MISC/CommonDefs.h>
#include <OpenMS/VISUAL/MultiGradient.h>
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <memory>
#include <bitset>
#include <vector>
class QWidget;
namespace OpenMS
{
class LayerData1DBase;
class LayerStoreData;
class LayerStatistics;
class OSWData;
class Painter1DBase;
class Painter2DBase;
template <int N_DIM> class DimMapper;
struct LayerDataDefs
{
/// Result of computing a projection on X and Y axis in a 2D Canvas; see LayerDataBase::getProjection()
struct ProjectionData
{
/// C'tor
ProjectionData();
/// Move C'tor
ProjectionData(ProjectionData&&);
/// D'tor
~ProjectionData(); // needs to be implemented in cpp, since inline would require a full definition of LayerData1DBase;
std::unique_ptr<LayerData1DBase> projection_ontoX;
std::unique_ptr<LayerData1DBase> projection_ontoY;
struct Summary {
UInt number_of_datapoints {0};
Peak1D::IntensityType max_intensity {0};
double sum_intensity {0}; // double since sum could get large
} stats;
};
/** @name Type definitions */
//@{
/// Dataset types.
/// Order in the enum determines the order in which layer types are drawn.
enum DataType
{
DT_PEAK, ///< Spectrum profile or centroided data
DT_CHROMATOGRAM, ///< Chromatogram data
DT_FEATURE, ///< Feature data
DT_CONSENSUS, ///< Consensus feature data
DT_IDENT, ///< Peptide identification data
DT_UNKNOWN ///< Undefined data type indicating an error
};
/// Flags that determine which information is shown.
enum Flags
{
F_HULL, ///< Features: Overall convex hull
F_HULLS, ///< Features: Convex hulls of single mass traces
F_UNASSIGNED, ///< Features: Unassigned peptide hits
P_PRECURSORS, ///< Peaks: Mark precursor peaks of MS/MS scans
P_PROJECTIONS, ///< Peaks: Show projections
C_ELEMENTS, ///< Consensus features: Show elements
I_PEPTIDEMZ, ///< Identifications: m/z source
I_LABELS, ///< Identifications: Show labels (not sequences)
SIZE_OF_FLAGS
};
/// Label used in visualization
enum LabelType
{
L_NONE, ///< No label is displayed
L_INDEX, ///< The element number is used
L_META_LABEL, ///< The 'label' meta information is used
L_ID, ///< The best peptide hit of the first identification run is used
L_ID_ALL, ///< All peptide hits of the first identification run are used
SIZE_OF_LABEL_TYPE
};
/// Label names
static const std::string NamesOfLabelType[SIZE_OF_LABEL_TYPE];
/// Features
typedef FeatureMap FeatureMapType;
/// SharedPtr on feature map
typedef std::shared_ptr<FeatureMap> FeatureMapSharedPtrType;
/// consensus features
typedef ConsensusMap ConsensusMapType;
/// SharedPtr on consensus features
typedef std::shared_ptr<ConsensusMap> ConsensusMapSharedPtrType;
/// Main data type (experiment)
typedef AnnotatedMSRun ExperimentType;
/// SharedPtr on MSExperiment
typedef std::shared_ptr<ExperimentType> ExperimentSharedPtrType;
typedef std::shared_ptr<const ExperimentType> ConstExperimentSharedPtrType;
/// SharedPtr on On-Disc MSExperiment
typedef std::shared_ptr<OnDiscMSExperiment> ODExperimentSharedPtrType;
/// SharedPtr on OSWData
typedef std::shared_ptr<OSWData> OSWDataSharedPtrType;
};
/**
@brief Class that stores the data for one layer
The data for a layer can be peak data, feature data (feature, consensus),
chromatogram or peptide identification data.
For 2D and 3D data, the data is generally accessible through getPeakData()
while features are accessible through getFeatureMap() and getConsensusMap().
For 1D data, the current spectrum must be accessed through
getCurrentSpectrum().
Peak data is stored using a shared pointer to an MSExperiment data structure
as well as a shared pointer to a OnDiscMSExperiment data structure. Note that
the actual data may not be in memory as this is not efficient for large files
and therefore may have to be retrieved from disk on-demand.
@note The spectrum for 1D viewing retrieved through getCurrentSpectrum() is a
copy of the actual raw data and *different* from the one retrieved through
getPeakData()[index]. Any changes applied to getCurrentSpectrum() are
non-persistent and will be gone the next time the cache is updated.
Persistent changes can be applied to getPeakDataMuteable() and will be
available on the next cache update.
@note Layer is mainly used as a member variable of PlotCanvas which holds
a vector of LayerDataBase objects.
@ingroup PlotWidgets
*/
#ifdef _MSC_VER
#pragma warning(disable : 4250) // 'class1' : inherits 'class2::member' via dominance
#endif
class OPENMS_GUI_DLLAPI LayerDataBase : public LayerDataDefs
{
public:
/// Actual state of each flag
std::bitset<SIZE_OF_FLAGS> flags;
/// Default constructor (for virtual inheritance)
LayerDataBase() = delete; // <-- this is the problem. Call assignment op in 1DPeak???
/// C'tor for child classes
explicit LayerDataBase(const DataType type) : type(type) {}
/// Copy-C'tor
LayerDataBase(const LayerDataBase& ld) = default;
/// Assignment operator
LayerDataBase& operator=(const LayerDataBase& ld) = delete;
/// Move-C'tor - do not move from this class since it's a virtual base class (diamond problem) and the move c'tor may be called twice (which would lose data!)
/// Instead of painstakingly writing user-defined move c'tors which check for moving for all the direct child classes,
/// we'd rather use copy (which is the automatic fallback, and safe) and incur a small performance hit
LayerDataBase(LayerDataBase&& ld) = delete;
/// Move assignment -- deleted, by same argument as for move c'tor
LayerDataBase& operator=(LayerDataBase&& ld) = delete;
/// D'tor
virtual ~LayerDataBase() = default;
/**
* \brief Obtain a painter which can draw the layer on a 2D canvas
* \return A painter
*/
virtual std::unique_ptr<Painter2DBase> getPainter2D() const = 0;
/**
* \brief Create a shallow copy (i.e. shared experimental data using shared_ptr) of the current layer, and make it 1D (i.e. support showing a single spec/chrom etc)
* \return A new layer for 1D
*/
virtual std::unique_ptr <LayerData1DBase> to1DLayer() const = 0;
/// Returns a visitor which contains the current visible data and can write the data to disk
virtual std::unique_ptr<LayerStoreData> storeVisibleData(const RangeAllType& /*visible_range*/, const DataFilters& /*layer_filters*/) const
{
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/// Returns a visitor which contains the full data of the layer and can write the data to disk in the appropriate format (e.g. mzML)
virtual std::unique_ptr<LayerStoreData> storeFullData() const
{
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/// Calculate a projection of the current layer for the given unit and the given area.
/// E.g. the area might be restricted in RT and m/z, and then requested projection should return the XIC (@p unit == RT)
/// It is up to the implementation to decide on binning.
// todo: put this into a LayerData2DBase class, since a LayerData1DPeak should not implement this.
virtual ProjectionData getProjection(const DIM_UNIT unit_x, const DIM_UNIT unit_y, const RangeAllType& area) const = 0;
/**
* \brief Find the closest datapoint within the given range and return a proxy to that datapoint
* \param area Range to search in. Only dimensions used in the canvas are populated.
* \return A proxy (e.g. scan + peak index in an MSExperiment) which points to the data
*/
virtual PeakIndex findClosestDataPoint(const RangeAllType& area) const
{
(void)area; // allow doxygen to document the param
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/**
* \brief Find the datapoint with the highest intensity within the given range and return a proxy to that datapoint
* \param area Range to search in. Only dimensions used in the canvas are populated.
* \return A proxy (e.g. scan + peak index in an MSExperiment) which points to the data
*/
virtual PeakIndex findHighestDataPoint(const RangeAllType& area) const
{
(void)area; // allow doxygen to document the param
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/**
* \brief Convert a PeakIndex to a XY coordinate (via @p mapper).
* \param peak The Peak to convert
* \param mapper Converts the internal representation (e.g. Peak1D) to an XY coordinate
* \return XY coordinate in data units (e.g. X=m/z, Y=intensity)
*/
virtual PointXYType peakIndexToXY(const PeakIndex& peak, const DimMapper<2>& mapper) const = 0;
/**
* \brief Get name and value of all data-arrays corresponding to the given datapoint
*
* Empty (or shorter) data-arrays are skipped.
*
* \param peak_index The datapoint
* \return A string, e.g. "fwhm: 20, im: 3.3", depending on which float/string dataarrays are populated for the given datapoint
*/
virtual String getDataArrayDescription(const PeakIndex& peak_index)
{
(void)peak_index; // allow doxygen to document the param
throw Exception::NotImplemented(__FILE__, __LINE__, OPENMS_PRETTY_FUNCTION);
}
/// add peptide identifications to the layer
/// Only supported for DT_PEAK, DT_FEATURE and DT_CONSENSUS.
/// Will return false otherwise.
virtual bool annotate(const PeptideIdentificationList& identifications,
const std::vector<ProteinIdentification>& protein_identifications);
/**
@brief Update ranges of the underlying data
*/
virtual void updateRanges() = 0;
/// Returns the minimum intensity of the internal data, depending on type
float getMinIntensity() const;
/// Returns the maximum intensity of the internal data, depending on type
float getMaxIntensity() const;
using RangeAllType = RangeManager<RangeRT, RangeMZ, RangeIntensity, RangeMobility>;
/// Returns the data range of the whole layer (i.e. all scans/chroms/etc) in all known dimensions.
/// If a layer does not support the dimension (or the layer is empty) the dimension will be empty
/// If you need the data range for a 1D view (i.e. only a single spec/chrom/etc), call 'LayerDataBase1D::getRange1D()'
virtual RangeAllType getRange() const = 0;
/// Compute layer statistics (via visitor)
virtual std::unique_ptr<LayerStatistics> getStats() const = 0;
/// The name of the layer, usually the basename of the file
const String& getName() const
{
return name_;
}
/// Set the name of the layer, usually the basename of the file
void setName(const String& new_name)
{
name_ = new_name;
}
/// get the extra annotation appended to the layer's name, e.g. '[39]' for which chromatogram index is currently shown in 1D
const String& getNameSuffix() const
{
return name_suffix_;
}
/// set an extra annotation appended to the layer's name, e.g. '[39]' for which chromatogram index is currently shown in 1D
void setNameSuffix(const String& decorator)
{
name_suffix_ = decorator;
}
/// get name augmented with attributes, e.g. '*' if modified
virtual String getDecoratedName() const;
/// if this layer is visible
bool visible = true;
/// data type (peak or feature data, etc)
DataType type = DT_UNKNOWN;
/// file name of the file the data comes from (if available)
String filename;
/// Layer parameters
Param param;
/// Gradient for 2D and 3D views
MultiGradient gradient;
/// Filters to apply before painting
DataFilters filters;
/// Flag that indicates if the layer data can be modified (so far used for features only)
bool modifiable = false;
/// Flag that indicates that the layer data was modified since loading it
bool modified = false;
/// Label type
LabelType label = L_NONE;
/// Selected peptide id and hit index (-1 if none is selected)
int peptide_id_index = -1;
int peptide_hit_index = -1;
private:
/// layer name
String name_;
/// an extra annotation appended to the layer's name, e.g. '[39]' for which chromatogram index is currently shown in 1D
String name_suffix_;
};
/// A base class to annotate layers of specific types with (identification) data
///
/// @note Add new derived classes to getAnnotatorWhichSupports() to enable automatic annotation in TOPPView
class LayerAnnotatorBase
{
public:
/**
@brief C'tor with params
@param[in] supported_types Which identification data types are allowed to be opened by the user in annotate()
@param[in] file_dialog_text The header text of the file dialog shown to the user
@param[in] gui_lock Optional GUI element which will be locked (disabled) during call to 'annotateWorker_'; can be null_ptr
**/
LayerAnnotatorBase(const FileTypeList& supported_types, const String& file_dialog_text, QWidget* gui_lock);
/// Make D'tor virtual for correct destruction from pointers to base
virtual ~LayerAnnotatorBase() = default;
/// Annotates a @p layer, writing messages to @p log and showing QMessageBoxes on errors.
/// The input file is selected via a file-dialog which is opened with @p current_path as initial path.
/// The file type is checked to be one of the supported_types_ before the annotateWorker_ function is called
/// as implemented by the derived classes
bool annotateWithFileDialog(LayerDataBase& layer, LogWindow& log, const String& current_path) const;
/// Annotates a @p layer, given a filename from which to load the data.
/// The file type is checked to be one of the supported_types_ before the annotateWorker_ function is called
/// as implemented by the derived classes
bool annotateWithFilename(LayerDataBase& layer, LogWindow& log, const String& filename) const;
/// get a derived annotator class, which supports annotation of the given file type.
/// If multiple class support this type (currently not the case) an Exception::IllegalSelfOperation will be thrown
/// If NO class supports this type, the unique_ptr points to nothing (.get() == nullptr).
static std::unique_ptr<LayerAnnotatorBase> getAnnotatorWhichSupports(const FileTypes::Type& type);
/// see getAnnotatorWhichSupports(const FileTypes::Type& type). File type is queried from filename
static std::unique_ptr<LayerAnnotatorBase> getAnnotatorWhichSupports(const String& filename);
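/// Example (sketch; assumes 'layer', 'log' and 'filename' are valid):
/// @code
/// auto annotator = LayerAnnotatorBase::getAnnotatorWhichSupports(filename);
/// if (annotator) annotator->annotateWithFilename(layer, log, filename);
/// @endcode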
protected:
/// abstract virtual worker function to annotate a layer using content from the @p filename
/// returns true on success
virtual bool annotateWorker_(LayerDataBase& layer, const String& filename, LogWindow& log) const = 0;
const FileTypeList supported_types_;
const String file_dialog_text_;
QWidget* gui_lock_ = nullptr;///< optional widget which will be locked when calling annotateWorker_() in child-classes
};
/// Annotate a layer with PeptideIdentifications using Layer::annotate(pepIDs, protIDs).
/// The ID data is loaded from a file selected by the user via a file-dialog.
class LayerAnnotatorPeptideID : public LayerAnnotatorBase
{
public:
LayerAnnotatorPeptideID(QWidget* gui_lock) :
LayerAnnotatorBase(std::vector<FileTypes::Type>{FileTypes::IDXML, FileTypes::MZIDENTML},
"Select peptide identification data", gui_lock)
{
}
protected:
/// loads the ID data from @p filename and calls Layer::annotate.
/// Always returns true (unless an exception is thrown from internal sub-functions)
virtual bool annotateWorker_(LayerDataBase& layer, const String& filename, LogWindow& log) const;
};
/// Annotate a layer with AccurateMassSearch results (from an AMS-featureXML file).
/// The featuremap is loaded from a file selected by the user via a file-dialog.
class LayerAnnotatorAMS : public LayerAnnotatorBase
{
public:
LayerAnnotatorAMS(QWidget* gui_lock) :
LayerAnnotatorBase(std::vector<FileTypes::Type>{FileTypes::FEATUREXML},
"Select AccurateMassSearch's featureXML file", gui_lock)
{
}
protected:
/// loads the featuremap from @p filename and calls Layer::annotate.
/// Returns false if featureXML file was not created by AMS, and true otherwise (unless an exception is thrown from internal sub-functions)
virtual bool annotateWorker_(LayerDataBase& layer, const String& filename, LogWindow& log) const;
};
/// Annotate a chromatogram layer with ID data (from an OSW sqlite file as produced by OpenSwathWorkflow or pyProphet).
/// The OSWData is loaded from a file selected by the user via a file-dialog.
class LayerAnnotatorOSW : public LayerAnnotatorBase
{
public:
LayerAnnotatorOSW(QWidget* gui_lock) :
LayerAnnotatorBase(std::vector<FileTypes::Type>{FileTypes::OSW},
"Select OpenSwath/pyProphet output file", gui_lock)
{
}
protected:
/// loads the OSWData from @p filename and stores the data using Layer::setChromatogramAnnotation()
/// Always returns true (unless an exception is thrown from internal sub-functions)
virtual bool annotateWorker_(LayerDataBase& layer, const String& filename, LogWindow& log) const;
};
/// Print the contents to a stream.
OPENMS_GUI_DLLAPI std::ostream& operator<<(std::ostream& os, const LayerDataBase& rhs);
}// namespace OpenMS
// --------------------------------------------------------------------------
// File: src/openms_gui/include/OpenMS/VISUAL/PlotCanvas.h
// --------------------------------------------------------------------------
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Timo Sachsenberg $
// $Authors: Marc Sturm $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
//OpenMS
#include <OpenMS/CONCEPT/Types.h>
#include <OpenMS/CONCEPT/VersionInfo.h>
#include <OpenMS/DATASTRUCTURES/DefaultParamHandler.h>
#include <OpenMS/DATASTRUCTURES/DRange.h>
#include <OpenMS/KERNEL/DimMapper.h>
#include <OpenMS/VISUAL/LayerDataBase.h>
#include <OpenMS/VISUAL/MISC/CommonDefs.h>
//QT
#include <QtWidgets>
#include <QRubberBand>
class QWheelEvent;
class QKeyEvent;
class QMouseEvent;
class QFocusEvent;
class QMenu;
//STL
#include <stack>
#include <vector>
namespace OpenMS
{
class PlotWidget;
class LayerDataChrom;
class LayerDataPeak;
class LayerDataFeature;
class LayerDataConsensus;
using LayerDataBaseUPtr = std::unique_ptr<LayerDataBase>;
using LayerDataChromUPtr = std::unique_ptr<LayerDataChrom>;
using LayerDataPeakUPtr = std::unique_ptr<LayerDataPeak>;
using LayerDataFeatureUPtr = std::unique_ptr<LayerDataFeature>;
using LayerDataConsensusUPtr = std::unique_ptr<LayerDataConsensus>;
/**
A class to manage a stack of layers as shown in the layer widget in TOPPView.
The order of layers is automatically determined based on LayerDataBase::type (in short: peak data below, ID data on top).
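A minimal sketch (assumes 'layer' is a valid LayerDataBaseUPtr):
@code
LayerStack stack;
stack.addLayer(std::move(layer)); // the stack takes ownership; 'layer' becomes the current layer
stack.getCurrentLayer().setName("my data");
@endcode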
*/
class LayerStack
{
public:
/// adds a new layer and makes it the current layer
/// @param[in] new_layer Takes ownership of the layer!
void addLayer(LayerDataBaseUPtr new_layer);
const LayerDataBase& getLayer(const Size index) const;
LayerDataBase& getLayer(const Size index);
const LayerDataBase& getCurrentLayer() const;
LayerDataBase& getCurrentLayer();
/// throws Exception::IndexOverflow unless @p index is smaller than getLayerCount()
void setCurrentLayer(Size index);
Size getCurrentLayerIndex() const;
bool empty() const;
Size getLayerCount() const;
void removeLayer(Size layer_index);
void removeCurrentLayer();
protected:
std::vector<LayerDataBaseUPtr> layers_;
private:
Size current_layer_ = -1;
};
/**
@brief Base class for visualization canvas classes
This class is the base class for the spectrum data views which are used
for 1D, 2D and 3D visualization of data. In TOPPView, each PlotCanvas
is paired with an enclosing PlotWidget (see also the
getPlotWidget() function that provides a back-reference). To provide
additional spectrum views, you can derive from this class and you should
also create a subclass from PlotWidget which encloses your class
derived from PlotCanvas. A spectrum canvas can display multiple data
layers at the same time (see layers_ member variable).
The actual data to be displayed is stored as a vector of LayerDataBase
objects which hold the actual data. It also stores information about the
commonly used constants such as ActionModes or IntensityModes.
All derived classes should follow these interface conventions:
- Translate mode
- Activated by default
- Arrow keys can be used to translate without entering translate mode
- Zoom mode
- Activated using the CTRL key
- Zoom stack traversal with CTRL+/CTRL- or the mouse wheel
- Pressing the @em Backspace key resets the zoom (and stack)
- Measure mode
- Activated using the SHIFT key
@ingroup PlotWidgets
*/
class OPENMS_GUI_DLLAPI PlotCanvas : public QWidget, public DefaultParamHandler
{
Q_OBJECT
public:
/**@name Type definitions */
//@{
/// Main data type (experiment)
typedef LayerDataBase::ExperimentType ExperimentType;
/// Main managed data type (experiment)
typedef LayerDataBase::ExperimentSharedPtrType ExperimentSharedPtrType;
typedef LayerDataBase::ConstExperimentSharedPtrType ConstExperimentSharedPtrType;
typedef LayerDataBase::ODExperimentSharedPtrType ODExperimentSharedPtrType;
/// Main data type (features)
typedef LayerDataBase::FeatureMapType FeatureMapType;
/// Main managed data type (features)
typedef LayerDataBase::FeatureMapSharedPtrType FeatureMapSharedPtrType;
/// Main data type (consensus features)
typedef LayerDataBase::ConsensusMapType ConsensusMapType;
/// Main managed data type (consensus features)
typedef LayerDataBase::ConsensusMapSharedPtrType ConsensusMapSharedPtrType;
/// Spectrum type
typedef ExperimentType::SpectrumType SpectrumType;
/// Spectrum iterator type (iterates over peaks)
typedef SpectrumType::ConstIterator SpectrumConstIteratorType;
/// Peak type
typedef SpectrumType::PeakType PeakType;
/// a generic range for the most common units
using RangeType = RangeAllType;
/// The range of data shown on the X and Y axis (unit depends on runtime config)
using AreaXYType = Area<2>::AreaXYType;
/// The visible range of data on the X and Y axis as shown on the plot axes (not necessarily the range of the actual data, e.g. if there is no data to show).
using VisibleArea = Area<2>;
/// A generic range of data on X and Y axis as shown on plot axis
using GenericArea = Area<2>;
/// The number of pixels on the axis. The lower point of the area will be zero. The maxima will reflect the number of pixels
/// in either dimension. Note that the axis unit here is PIXELS (*not* seconds, m/z, or the like)
using PixelArea = Area<2>;
using UnitRange = RangeAllType;
using PointOnAxis = DimMapper<2>::Point;
/// Mouse action modes
enum ActionModes
{
AM_TRANSLATE, ///< translate
AM_ZOOM, ///< zoom
AM_MEASURE ///< measure
};
/// Display modes of intensity
enum IntensityModes
{
IM_NONE, ///< Normal mode: f(x)=x; y-axis shows the maximum of all layers, no scaling
IM_PERCENTAGE, ///< Shows intensities normalized by each layer's maximum: f(x)=x/max(x)*100
IM_SNAP, ///< Shows the maximum displayed intensity (across all layers) as if it was the overall maximum intensity
IM_LOG ///< Logarithmic version of normal mode
};
//@}
/// Default constructor
PlotCanvas(const Param& preferences, QWidget* parent = nullptr);
/// Destructor
~PlotCanvas() override;
/**
@brief Sets the spectrum widget.
Sets the enclosing spectrum widget. Call this from your
PlotWidget derived class.
@param[in] widget the spectrum widget
*/
inline void setPlotWidget(PlotWidget* widget)
{
spectrum_widget_ = widget;
}
/**
@brief Returns the spectrum widget.
Returns the enclosing spectrum widget
@return the spectrum widget
*/
inline PlotWidget* getPlotWidget() const
{
return spectrum_widget_;
}
/**
@brief Returns the action mode
Returns the current action mode of type ActionModes
@return the current action mode
*/
inline Int getActionMode() const
{
return action_mode_;
}
/**
@brief Returns the intensity mode
Returns the current intensity mode of type IntensityModes
@return the current intensity mode
*/
inline IntensityModes getIntensityMode() const
{
return intensity_mode_;
}
/**
@brief Sets the intensity mode
Sets the intensity mode and calls intensityModeChange_()
@param[in] mod the new intensity mode
@see intensityModeChange_()
*/
inline void setIntensityMode(IntensityModes mod)
{
intensity_mode_ = mod;
intensityModeChange_();
}
/**
@brief Returns if the grid is currently shown
@return @c true if the grid is visible, @c false otherwise
*/
inline bool gridLinesShown() const
{
return show_grid_;
}
/// returns the layer data with index @p index
const LayerDataBase& getLayer(Size index) const
{
return layers_.getLayer(index);
}
/// returns the layer data with index @p index
LayerDataBase& getLayer(Size index)
{
return layers_.getLayer(index);
}
/// returns the layer data of the active layer
const LayerDataBase& getCurrentLayer() const
{
return layers_.getCurrentLayer();
}
/// returns the layer data of the active layer
LayerDataBase& getCurrentLayer()
{
return layers_.getCurrentLayer();
}
/// returns the index of the active layer
inline Size getCurrentLayerIndex() const
{
return layers_.getCurrentLayerIndex();
}
/// returns a layer flag of the current layer
bool getLayerFlag(LayerDataBase::Flags f) const
{
return getLayerFlag(layers_.getCurrentLayerIndex(), f);
}
/// sets a layer flag of the current layer
void setLayerFlag(LayerDataBase::Flags f, bool value)
{
setLayerFlag(layers_.getCurrentLayerIndex(), f, value);
}
/// returns a layer flag of the layer @p layer
bool getLayerFlag(Size layer, LayerDataBase::Flags f) const
{
return layers_.getLayer(layer).flags.test(f);
}
/// sets a layer flag of the layer @p layer
void setLayerFlag(Size layer, LayerDataBase::Flags f, bool value)
{
// abort if there are no layers
if (layers_.empty())
return;
layers_.getLayer(layer).flags.set(f, value);
update_buffer_ = true;
update();
}
inline void setLabel(LayerDataBase::LabelType label)
{
// abort if there are no layers
if (layers_.empty())
return;
layers_.getCurrentLayer().label = label;
update_buffer_ = true;
update();
}
/**
@brief Returns the currently visible area. This is the authority which determines the X and Y axis' scale.
@see visible_area_
*/
const VisibleArea& getVisibleArea() const
{
return visible_area_;
}
/// Given a 2D axis coordinate, is it in the currently visible area? (useful to avoid plotting stuff outside the visible area)
/// Note: The input @p p must have unit coordinates (i.e. the result of widgetToData_), not pixel coordinates.
bool isVisible(const PointOnAxis& p) const
{
return visible_area_.getAreaXY().encloses(p);
}
/// Get the number of pixels of the current canvas (this is independent of the current visible area and zoom level).
/// It's just the size of the canvas.
PixelArea getPixelRange() const
{
int X_pixel_count = buffer_.width();
int Y_pixel_count = buffer_.height();
PixelArea area(&unit_mapper_);
area.setArea(AreaXYType(0, 0, X_pixel_count, Y_pixel_count));
return area;
}
/**
@brief Sets filters, but does not repaint (useful when setting up a new layer)
*/
virtual void initFilters(const DataFilters& filters);
/**
@brief Sets the filters applied to the data; and redraws
*/
virtual void setFilters(const DataFilters& filters);
/**
@name Dataset handling methods
@see changeVisibility
*/
//@{
/// Returns the number of layers
inline Size getLayerCount() const
{
return layers_.getLayerCount();
}
/// change the active layer (the one that is used for selecting and so on)
virtual void activateLayer(Size layer_index) = 0;
/// removes the layer with index @p layer_index
virtual void removeLayer(Size layer_index) = 0;
/// removes all layers by calling removeLayer() for all layer indices (from highest to lowest)
void removeLayers()
{
for (Size i = getLayerCount(); i > 0; --i)
{
removeLayer(i - 1);
}
visible_area_.clear(); // reset visible area
}
/// Add an already constructed layer (e.g. for projections)
bool addLayer(std::unique_ptr<LayerData1DBase> layer);
/**
@brief Add a peak data layer
@param[in] map Shared pointer to input map. Adding it takes constant time and does not double the required memory.
@param[in] od_map Shared pointer to on disk data which potentially caches some data to save memory (the map can be empty, but do not pass nullptr).
@param[in] filename This @em absolute filename is used to monitor changes in the file and reload the data
@param[in] caption The caption of the layer (shown in the layer window)
@param[in] use_noise_cutoff Add a noise filter which removes low-intensity peaks
@return If a new layer was created
*/
bool addPeakLayer(const ExperimentSharedPtrType& map,
ODExperimentSharedPtrType od_map,
const String& filename = "",
const String& caption = "",
const bool use_noise_cutoff = false);
/**
@brief Add a chrom data layer
@param[in] map Shared pointer to input map. Adding it takes constant time and does not double the required memory.
@param[in] od_map Shared pointer to on disk data which potentially caches some data to save memory (the map can be empty, but do not pass nullptr).
@param[in] filename This @em absolute filename is used to monitor changes in the file and reload the data
@param[in] caption The caption of the layer (shown in the layer window)
@return If a new layer was created
*/
bool addChromLayer(const ExperimentSharedPtrType& map, ODExperimentSharedPtrType od_map, const String& filename = "", const String& caption = "");
/**
@brief Add a feature data layer
@param[in] map Shared pointer to input map. Adding it takes constant time and does not double the required memory.
@param[in] filename This @em absolute filename is used to monitor changes in the file and reload the data
@param[in] caption The caption of the layer (shown in the layer window)
@return If a new layer was created
*/
bool addLayer(FeatureMapSharedPtrType map, const String& filename = "", const String& caption = "");
/**
@brief Add a consensus feature data layer
@param[in] map Shared pointer to input map. Adding it takes constant time and does not double the required memory.
@param[in] filename This @em absolute filename is used to monitor changes in the file and reload the data
@param[in] caption The caption of the layer (shown in the layer window)
@return If a new layer was created
*/
bool addLayer(ConsensusMapSharedPtrType map, const String& filename = "", const String& caption = "");
//@}
/**
@brief Add an identification data layer
@param[in] peptides Input list of peptides, which has to be mutable and will be empty after adding.
Swapping is used to insert the data. It can be performed in constant time and does not double
the required memory.
@param[in] filename This @em absolute filename is used to monitor changes in the file and reload the data
@param[in] caption The caption of the layer (shown in the layer window)
@return If a new layer was created
*/
bool addLayer(PeptideIdentificationList& peptides, const String& filename = "", const String& caption = "");
/// Returns the minimum intensity of the active layer
inline float getCurrentMinIntensity() const
{
return layers_.getCurrentLayer().getMinIntensity();
}
/// Returns the maximum intensity of the active layer
inline float getCurrentMaxIntensity() const
{
return layers_.getCurrentLayer().getMaxIntensity();
}
/// Returns the minimum intensity of the layer with index @p index
inline float getMinIntensity(Size index) const
{
return getLayer(index).getMinIntensity();
}
/// Returns the maximum intensity of the layer with index @p index
inline float getMaxIntensity(Size index) const
{
return getLayer(index).getMaxIntensity();
}
/// Sets the @p name of layer @p i
void setLayerName(Size i, const String& name);
/// Gets the name of layer @p i
String getLayerName(Size i);
/// Sets the parameters of the current layer
inline void setCurrentLayerParameters(const Param& param)
{
getCurrentLayer().param = param;
emit preferencesChange();
}
/**
@brief Returns the area which encloses all data points of all layers.
@see overall_data_range_
*/
virtual const RangeType& getDataRange() const;
/**
@brief Returns the first intensity scaling factor for 'snap to maximum intensity mode' (for the currently visible data range).
@see snap_factors_
*/
double getSnapFactor();
/// Returns the percentage factor
double getPercentageFactor() const;
/// Shows the preferences dialog of the active layer
virtual void showCurrentLayerPreferences() = 0;
/**
@brief Shows a dialog with the meta data
@param[in,out] modifiable indicates if the data can be modified.
@param[in] index If given, the meta data of the corresponding element (spectrum, feature, consensus feature) is shown instead of the layer meta data.
*/
virtual void showMetaData(bool modifiable = false, Int index = -1);
public slots:
/**
@brief change the visibility of a layer
@param[in] i the index of the layer
@param[in] b true if layer is supposed to be visible
*/
void changeVisibility(Size i, bool b);
/**
@brief change if the defined data filters are used
@param[in] i the index of the layer
@param[in] b true if the layer's data filters are to be applied
*/
void changeLayerFilterState(Size i, bool b);
/**
@brief Whether or not to show grid lines
Sets whether grid lines are shown or not.
@param[in] show Boolean variable deciding whether or not to show the grid lines.
*/
void showGridLines(bool show);
/**
@brief Zooms fully out and resets the zoom stack
Sets the visible area to the initial value, such that all data is shown.
@param[in] repaint If @em true a repaint is forced. Otherwise only the new area is set.
*/
virtual void resetZoom(bool repaint = true);
/**
@brief Sets the visible area.
Sets the visible area to a new value and emits visibleAreaChanged() if the area is different from the old one.
@param[in] area the new visible area
*/
void setVisibleArea(const VisibleArea& area);
/**
@brief Sets the visible area.
Sets the visible area to a new value and emits visibleAreaChanged() if the area is different from the old one.
@param[in] area the new visible area
*/
void setVisibleArea(const RangeAllType& area);
/**
@brief Sets the visible area.
Sets the visible area to a new value and emits visibleAreaChanged() if the area is different from the old one.
@param[in] area the new visible area
*/
void setVisibleArea(const AreaXYType& area);
/**
* @brief Set only the visible area for the x axis; other axes are untouched.
* @param[in] min
* @param[in] max
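* Example (sketch; assumes the X axis of this canvas is mapped to m/z):
* @code
* canvas.setVisibleAreaX(400.0, 1200.0); // show only m/z 400 to 1200
* @endcode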
*/
void setVisibleAreaX(double min, double max);
/**
* @brief Set only the visible area for the y axis; other axes are untouched.
* @param[in] min
* @param[in] max
*/
void setVisibleAreaY(double min, double max);
/**
@brief Saves the current layer data.
@param[in] visible If true, only the visible data is stored. Otherwise the whole data is stored.
*/
void saveCurrentLayer(bool visible);
/**
@brief Notifies the canvas that the horizontal scrollbar has been moved.
Reimplement this slot to react on scrollbar events.
@param[in] value The new scrollbar position
*/
virtual void horizontalScrollBarChange(int value);
/**
@brief Notifies the canvas that the vertical scrollbar has been moved.
Reimplement this slot to react on scrollbar events.
@param[in] value The new scrollbar position
*/
virtual void verticalScrollBarChange(int value);
/// Sets the additional context menu. If not 0, this menu is added to the context menu of the canvas
void setAdditionalContextMenu(QMenu * menu);
/// Updates layer @p i when the data in the corresponding file changes
virtual void updateLayer(Size i) = 0;
/**
* \brief Get the Area in pixel coordinates of the current canvas for X and Y axis.
* \return The canvas area in pixel coordinates, anchored at (0, 0)
*/
AreaXYType canvasPixelArea() const
{
return AreaXYType({0, 0}, {(float)width(), (float)height()});
}
/**
* \brief Get Mapper to translate between values for axis (X/Y) and units (m/z, RT, intensity, ...)
* \return The translation from axis to units
*/
const DimMapper<2>& getMapper() const;
/**
* \brief Set a new mapper for the canvas.
* \param mapper The new mapper for translating between units and axis
*/
void setMapper(const DimMapper<2>& mapper);
signals:
/// Signal emitted whenever the modification status of a layer changes (editing and storing)
void layerModficationChange(Size layer, bool modified);
/// Signal emitted whenever a new layer is activated within the current window
void layerActivated(QWidget * w);
/// Signal emitted whenever the zoom changed
void layerZoomChanged(QWidget * w);
/**
@brief Change of the visible area
Signal emitted whenever the visible area changes.
@param[in] area The new visible area.
*/
void visibleAreaChanged(const VisibleArea& area);
/// Emitted when the cursor position changes (for displaying e.g. in status bar)
void sendCursorStatus(const String& x_value, const String& y_value);
/// Emits a status message that should be displayed for @p time ms. If @p time is 0 the message should be displayed until the next message is emitted.
void sendStatusMessage(std::string message, OpenMS::UInt time);
/// Forces recalculation of axis ticks in the connected widget.
void recalculateAxes();
/// Triggers the update of the vertical scrollbar
void updateVScrollbar(float f_min, float disp_min, float disp_max, float f_max);
/// Triggers the update of the horizontal scrollbar
void updateHScrollbar(float f_min, float disp_min, float disp_max, float f_max);
/// Toggle axis legend visibility change
void changeLegendVisibility();
/// Emitted when the action mode changes
void actionModeChange();
/// Emitted when the layer preferences have changed
void preferencesChange();
protected slots:
/// Updates the cursor according to the current action mode
void updateCursor_();
protected:
/// Draws several lines of text to the upper right corner of the widget
void drawText_(QPainter & painter, const QStringList& text);
/// Returns the m/z value of an identification depending on the m/z source of the layer (precursor mass/theoretical peptide mass)
double getIdentificationMZ_(const Size layer_index,
const PeptideIdentification & peptide) const;
/// Method that is called when a new layer has been added
virtual bool finishAdding_() = 0;
/// remove already added layer which did not pass final checks in finishAdding_()
/// @param[in] error_message Optional error message to show as messagebox
void popIncompleteLayer_(const QString& error_message = "");
///@name reimplemented QT events
//@{
void resizeEvent(QResizeEvent * e) override;
void wheelEvent(QWheelEvent * e) override;
void keyPressEvent(QKeyEvent * e) override;
void keyReleaseEvent(QKeyEvent * e) override;
void focusOutEvent(QFocusEvent * e) override;
void leaveEvent(QEvent * e) override;
void enterEvent(QEnterEvent * e) override;
//@}
/// This method is called whenever the intensity mode changes. Reimplement if you need to react on such changes.
virtual void intensityModeChange_();
/// Call this whenever the DimMapper receives new dimensions; will update the axes and scrollbars
void dimensionsChanged_();
/**
@brief Sets the visible area
Changes the visible area, adjusts the zoom stack and notifies interested clients about the change.
If the area is outside the overall data range, the new area is pushed back into the overall range.
@param[in] new_area The new visible area.
@param[in] repaint If @em true, a complete repaint is forced.
@param[in] add_to_stack If @em true, the new area is added to the zoom_stack_.
*/
virtual void changeVisibleArea_(VisibleArea new_area, bool repaint = true, bool add_to_stack = false);
/**
@brief Recalculates the intensity scaling factor for 'snap to maximum intensity mode'.
@see snap_factors_
*/
virtual void recalculateSnapFactor_();
///@name Zoom stack methods
//@{
/// Zooms such that screen point x, y would still point to the same data point
virtual void zoom_(int x, int y, bool zoom_in);
/// Go backward in zoom history
void zoomBack_();
/// Go forward in zoom history
virtual void zoomForward_();
/// Add a visible area to the zoom stack
void zoomAdd_(const VisibleArea& area);
/// Clears the zoom stack and invalidates the current zoom position. After calling this, a valid zoom position has to be added immediately.
void zoomClear_();
//@}
///@name Translation methods, which are called when cursor buttons are pressed
//@{
/// Translation bound to the 'Left' key
virtual void translateLeft_(Qt::KeyboardModifiers m);
/// Translation bound to the 'Right' key
virtual void translateRight_(Qt::KeyboardModifiers m);
/// Translation bound to the 'Up' key
virtual void translateForward_();
/// Translation bound to the 'Down' key
virtual void translateBackward_();
//@}
/**
@brief Updates the scroll bars
Updates the scrollbars after a change of the visible area.
*/
virtual void updateScrollbars_();
/**
@brief Convert widget (pixel) to chart (unit) coordinates
Translates widget coordinates to chart coordinates.
@param[in] x the widget coordinate x
@param[in] y the widget coordinate y
@return chart coordinates
*/
inline PointXYType widgetToData_(double x, double y)
{
const auto& xy = visible_area_.getAreaXY();
return PointXYType(
xy.minX() + x / width() * xy.width(),
xy.minY() + (height() - y) / height() * xy.height()
);
}
/// Calls widgetToData_ with x and y position of @p pos
inline PointXYType widgetToData_(const QPoint& pos)
{
return widgetToData_(pos.x(), pos.y());
}
/// Helper function to paint grid lines
virtual void paintGridLines_(QPainter & painter);
/// Buffer that stores the actual peak information
QImage buffer_;
/// Mapper for X and Y axis
DimMapper<2> unit_mapper_;
/// Stores the current action mode (Pick, Zoom, Translate)
ActionModes action_mode_ = AM_TRANSLATE;
/// Stores the used intensity mode function
IntensityModes intensity_mode_ = IM_NONE;
/// Layer data
LayerStack layers_;
/**
@brief Stores the currently visible area in data units (e.g. seconds, m/z, intensity etc) and axis (X,Y) area.
This is always (and only) the data shown in the widget.
In 1D, the gravity axis may get some headroom on the y-axis, see Plot1DCanvas::correctGravityAxisOfVisibleArea_()
*/
VisibleArea visible_area_;
/**
@brief Recalculates the overall_data_range_
A small margin is added to each side of the range in order to display all data.
*/
virtual void recalculateRanges_();
/**
@brief Stores the data range (m/z, RT and intensity) of all layers
*/
RangeType overall_data_range_;
/// Stores whether or not to show a grid.
bool show_grid_ = true;
/// The zoom stack.
std::vector<VisibleArea> zoom_stack_;
/// The current position in the zoom stack
std::vector<VisibleArea>::iterator zoom_pos_ = zoom_stack_.end();
/**
@brief Updates the displayed data
The default implementation calls QWidget::update().
This method is reimplemented in the 3D view to update the OpenGL widget.
@param[in] caller_name Name of the calling function (use OPENMS_PRETTY_FUNCTION).
*/
virtual void update_(const char* caller_name);
/// Takes all actions necessary when the modification status of a layer changes (signals etc.)
void modificationStatus_(Size layer_index, bool modified);
/// Whether to recalculate the data in the buffer when repainting
bool update_buffer_ = false;
/// Back-pointer to the enclosing spectrum widget
PlotWidget* spectrum_widget_ = nullptr;
/// start position of mouse actions
QPoint last_mouse_pos_;
/**
@brief Intensity scaling factor for relative scale with multiple layers.
In this mode all layers are scaled to the same maximum.
FIXME: this factor changes, depending on the layer which is currently plotted! Ouch!
*/
double percentage_factor_ = 1.0;
/**
@brief Intensity scaling factor for 'snap to maximum intensity mode'.
In this mode the highest currently visible intensity is treated like the maximum overall intensity.
Only used in 2D mode.
*/
std::vector<double> snap_factors_;
/// Rubber band for selected area
QRubberBand rubber_band_;
/// External context menu extension
QMenu* context_add_ = nullptr;
/// Flag that determines if timing data is printed to the command line
bool show_timing_ = false;
/// selected peak
PeakIndex selected_peak_;
/// start peak of measuring mode
PeakIndex measurement_start_;
/// Data processing setter for peak maps
void addDataProcessing_(PeakMap & map, DataProcessing::ProcessingAction action) const
{
std::set<DataProcessing::ProcessingAction> actions;
actions.insert(action);
DataProcessingPtr p = std::make_shared<DataProcessing>();
//actions
p->setProcessingActions(actions);
//software
p->getSoftware().setName("PlotCanvas");
//version
p->getSoftware().setVersion(VersionInfo::getVersion());
//time
p->setCompletionTime(DateTime::now());
for (auto& spectrum : map)
{
spectrum.getDataProcessing().push_back(p);
}
}
};
}
// File: src/openms_gui/include/OpenMS/VISUAL/TOPPViewMenu.h (OpenMS/OpenMS)
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Chris Bielow $
// $Authors: Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/DATASTRUCTURES/FlagSet.h>
#include <OpenMS/KERNEL/StandardTypes.h>
#include <OpenMS/VISUAL/EnhancedWorkspace.h>
#include <OpenMS/VISUAL/LayerDataBase.h>
#include <QObject>
#include <vector>
class QAction;
class QMenu;
namespace OpenMS
{
class TOPPViewBase;
class EnhancedWorkspace;
class RecentFilesMenu;
enum class TV_STATUS
{
HAS_CANVAS,
HAS_LAYER,
HAS_MIRROR_MODE, // implies 1D View
IS_1D_VIEW,
TOPP_IDLE
};
using FS_TV = FlagSet<TV_STATUS>;
/// allow + operations on the enum, e.g. 'HAS_CANVAS + HAS_LAYER + IS_1D_VIEW'
FS_TV OPENMS_GUI_DLLAPI operator+(const TV_STATUS left, const TV_STATUS right);
using FS_LAYER = FlagSet<LayerDataBase::DataType>;
/// allow + operations on the enum, e.g. 'DT_PEAK + DT_FEATURE'
FS_LAYER OPENMS_GUI_DLLAPI operator+(const LayerDataBase::DataType left, const LayerDataBase::DataType right);
/**
@brief The file menu items for TOPPView
*/
class TOPPViewMenu
: public QObject
{
Q_OBJECT
public:
/** @brief Constructor which connects slots/signals of this class with the objects given as arguments
@param[in] parent Base class which actually shows the menu (as part of a QMainWindow)
@param[in] ws Workspace to connect some signals to
@param[in] recent_files A submenu for recent files which will be integrated as part of 'File -> Recent files'
**/
TOPPViewMenu(TOPPViewBase* const parent, EnhancedWorkspace* const ws, RecentFilesMenu* const recent_files);
/// add a menu entry at 'Windows -> [Windowname]' to allow hiding/showing a TOPPView subwindow (e.g. Log, Layers, Filters, ...)
void addWindowToggle(QAction* const window_toggle);
public slots:
/// enable/disable entries according to a given state of TOPPViewBase
void update(const FS_TV status, const LayerDataBase::DataType layer_type);
private:
struct ActionRequirement_
{
ActionRequirement_(QAction* action, const FS_TV& needs, const FS_LAYER layer_set)
: action_(action), needs_(needs), layer_set_(layer_set) {}
ActionRequirement_(QAction* action, const TV_STATUS& needs, const FS_LAYER layer_set)
: action_(action), needs_(needs), layer_set_(layer_set) {}
/// check if an ActionRequirement is fulfilled by the arguments
/// i.e. @p status is a superset of needs_ and @p layer_type is a superset of layer_set_ (or layer_set_ is empty)
/// If yes, the action is enabled; otherwise it is disabled
void enableAction(const FS_TV status, const LayerDataBase::DataType layer_type);
private:
QAction* action_;
FS_TV needs_;
FS_LAYER layer_set_;
};
/// appends an ActionRequirement_ to menu_items_ and returns the just-added action
/// Only use this for items which depend on the state of TOPPViewBase,
/// e.g. close() can only work if something is open. But open() is always allowed.
QAction* addAction_(QAction* action, const TV_STATUS req, const FS_LAYER layer_set = FS_LAYER());
/// overload for multiple requirements
QAction* addAction_(QAction* action, const FS_TV req, const FS_LAYER layer_set = FS_LAYER());
/// holds all actions which have a set of requirements, i.e. depend on the state of TOPPViewBase
std::vector<ActionRequirement_> menu_items_;
/// the windows submenu (holds all windows added via addWindowToggle())
QMenu* m_windows_;
};
} //namespace
// File: src/openms_gui/include/OpenMS/VISUAL/TOPPASInputFileListVertex.h (OpenMS/OpenMS)
// Copyright (c) 2002-present, OpenMS Inc. -- EKU Tuebingen, ETH Zurich, and FU Berlin
// SPDX-License-Identifier: BSD-3-Clause
//
// --------------------------------------------------------------------------
// $Maintainer: Johannes Veit $
// $Authors: Johannes Junker, Chris Bielow $
// --------------------------------------------------------------------------
#pragma once
// OpenMS_GUI config
#include <OpenMS/VISUAL/OpenMS_GUIConfig.h>
#include <OpenMS/VISUAL/TOPPASVertex.h>
namespace OpenMS
{
/**
@brief A vertex representing an input file list
@ingroup TOPPAS_elements
*/
class OPENMS_GUI_DLLAPI TOPPASInputFileListVertex : public TOPPASVertex
{
Q_OBJECT
public:
/// Default constructor
TOPPASInputFileListVertex() = default;
/// Constructor
TOPPASInputFileListVertex(const QStringList& files);
/// Copy constructor
TOPPASInputFileListVertex(const TOPPASInputFileListVertex& rhs) = default;
/// Destructor
~TOPPASInputFileListVertex() override = default;
/// Assignment operator
TOPPASInputFileListVertex& operator=(const TOPPASInputFileListVertex& rhs) = default;
virtual std::unique_ptr<TOPPASVertex> clone() const override;
/// returns "InputVertex"
String getName() const override;
/// Sets the list of files
void setFilenames(const QStringList & files);
/// Starts all tools below this node
void run() override;
// documented in base class
void paint(QPainter * painter, const QStyleOptionGraphicsItem * option, QWidget * widget) override;
// documented in base class
QRectF boundingRect() const override;
/// Checks if the given list of file names is valid
bool fileNamesValid();
/// Shows the dialog for editing the files
void showFilesDialog();
/// Opens the folders of the input files
void openContainingFolder();
/// Returns the key (for applying resources from a resource file)
const QString & getKey();
/// Sets the key (for applying resources from a resource file)
void setKey(const QString & key);
public slots:
/// Called by an outgoing edge when it has changed
void outEdgeHasChanged() override;
protected:
/// The key of this input node (for applying resources from a resource file)
QString key_;
/// current working dir, i.e. the last position a file was added from
QString cwd_;
///@name reimplemented Qt events
//@{
void mouseDoubleClickEvent(QGraphicsSceneMouseEvent * e) override;
//@}
};
}