keyword stringclasses 7 values | repo_name stringlengths 8 98 | file_path stringlengths 4 244 | file_extension stringclasses 29 values | file_size int64 0 84.1M | line_count int64 0 1.6M | content stringlengths 1 84.1M ⌀ | language stringclasses 14 values |
|---|---|---|---|---|---|---|---|
3D | theaidenlab/AGWG-merge | edit/apply-edits-prep-for-next-round.sh | .sh | 5,016 | 118 | #!/bin/bash
#### Description: Wrapper script that applies a list of edits to the original set of scaffolds/contigs to create a new input set, described by new cprops, new mnd and, if requested, new fasta.
#### Usage: apply-edits-prep-for-next-round.sh [ -f path_to_fasta_file ] [ -r revision_label ] [ -p true/false ] <path_to_edit_annotations> <path_to_original_cprops> <path_to_original_mnd>
#### Input: Edits encoded as 2D annotations (0-based, tab-separated, 12 column format), original input cprops and merged_nodups.txt files
#### Dependencies: edit-cprops-according-to-annotations.awk, edit-mnd-according-to-new-cprops.awk, edit-fasta-according-to-new-cprops.awk
#### Options: -r revision label; -f path to original fasta file; -p for GNU Parallel support.
#### Output: New cprops, new mnd file (optional: new fasta file), named as *.revision_label.cprops; *.revision_label.txt
#### Written by: Olga Dudchenko - olga.dudchenko@bcm.edu. Version date 12/07/2016.
USAGE="
*****************************************************
Usage: ./apply-edits-prep-for-next-round.sh [ -r suffix ] [ -f path_to_fasta_file ] [ -p true/false ] <path_to_edit_annotations> <path_to_original_cprops> <path_to_original_mnd>
ARGUMENTS:
path_to_edit_annotations Path to 2D annotation files describing regions in original scaffolds/contigs to be labeled as debris.
path_to_original_cprops Path to cprops describing original input scaffold/contig set
path_to_original_mnd Path to mnd file describing Hi-C contacts across original intput scaffold/contig set
OPTIONS:
-h Shows this help
-r revision_label Revision label to annotate edited files (default is \"revision\")
-f path_to_fasta Path to fasta file if plan to generate [not sure if needed, not working for now]
-p true/false Use GNU Parallel to speed up calculations (default is true)
*****************************************************
"
## SET DEFAULTS
use_parallel="true"
suffix="revision"
## HANDLE OPTIONS
while getopts "r:f:p:h" opt; do
case $opt in
h) echo "$USAGE"
exit 0
;;
r) echo "...-r flag was triggered, output will be labeled as *.${OPTARG}.*" >&1
suffix="${OPTARG}"
;;
f) if [ -s $OPTARG ]; then
echo "...-f flag was triggered, will dump edited fasta from $OPTARG" >&1
orig_fasta=$OPTARG
else
echo ":( Original fasta file not found. Continuing without the fasta dump" >&2
fi
;;
p) if [ $OPTARG == true ] || [ $OPTARG == false ]; then
echo "...-p flag was triggered. Running with GNU Parallel support parameter set to $OPTARG." >&1
use_parallel=$OPTARG
else
echo ":( Unrecognized value for -p flag. Running with default parameters (-p true)." >&2
fi
;;
*) echo ":( Illegal options. Exiting!" >&2
echo "$USAGE" >&2
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
## HANDLE ARGUMENTS, TODO: check formats
edits=$1
cprops=$2
mnd=$3
[ $# -eq 3 ] && [ -s ${edits} ] && [ -s ${cprops} ] && [ -s ${orig_mnd} ] || { echo >&2 ":( Not sure how to parse your input or input files not found at intended locations. Exiting!" && echo "$USAGE" >&2 && exit 1; }
## HANDLE DEPENDENCIES
if [ $use_parallel == true ]; then
type parallel >/dev/null 2>&1 || { echo >&2 ":( GNU Parallel support is set to true (default) but GNU Parallel is not in the path. Please install GNU Parallel or set -p option to false. Exiting!"; exit 1; }
fi
path_to_scripts=`cd "$( dirname $0)" && pwd`
edit_cprops_script=${path_to_scripts}"/edit-cprops-according-to-annotations.awk"
edit_mnd_script=${path_to_scripts}"/edit-mnd-according-to-new-cprops.awk"
edit_fasta_script=${path_to_scripts}"/edit-fasta-according-to-new-cprops.awk"
if [ ! -f ${edit_cprops_script} ] || [ ! -f ${edit_mnd_script} ] || [ ! -f ${edit_fasta_script} ]; then
echo >&2 ":( Relevant dependency scripts not found. Exiting!" && exit 1
fi
## MAIN
## edit cprops
echo "...applying edits to cprops file" >&1
filename=$(basename "$cprops")
extension="${filename##*.}"
filename="${filename%.*}"
awk -v label1=":::fragment_" -v label2=":::debris" -f ${edit_cprops_script} ${edits} ${cprops} > ${filename}.${suffix}.${extension}
new_cprops="${filename}.${suffix}.${extension}"
## edit mnd
echo "...applying edits to mnd file" >&1
filename=$(basename "$mnd")
extension="${filename##*.}"
filename="${filename%.*}"
if [ ${use_parallel} == "true" ]; then
parallel -a ${mnd} --pipepart --will-cite --jobs 80% --block 1G "awk -v label1=\":::fragment_\" -v label2=\":::debris\" -f ${edit_mnd_script} ${new_cprops} - " > ${filename}.${suffix}.${extension}
else
awk -v label1=":::fragment_" -v label2=":::debris" -f ${edit_mnd_script} ${new_cprops} ${mnd} > ${filename}.${suffix}.${extension}
fi
## edit fasta
if [ ! -z ${fasta} ] && [ -s ${fasta} ]; then
echo "...applying edits to input sequence file" >&1
filename=$(basename "$fasta")
extension="${filename##*.}"
filename="${filename%.*}"
awk -v label1=":::fragment_" -v label2=":::debris" -f ${edit_fasta_script} ${new_cprops} ${fasta} > ${filename}.${suffix}.${extension}
fi
| Shell |
3D | theaidenlab/AGWG-merge | edit/run-mismatch-detector.sh | .sh | 14,327 | 211 | #!/bin/bash
#### Description: Wrapper script to annotate mismatches. Consists of 3 parts:
#### 1. Analysis of local mismatch signatures [done]
#### 2?. Filtering local mismatch calls by checking for off-diagonal contact enrichments signatures [not done]
#### 3. Mismatch boundary thinning to reduce mimassembly debris [done].
#### Usage: run-mismatch-detector.sh -p <percentile> -b <bin_size> -d <depletion_region> <path-to-hic-file>
#### Dependencies: GNU Parallel; Juicebox_tools; compute-quartile.awk; precompute-depletion-score.awk; ...
#### Input: Juicebox hic file.
#### Parameters: percentile which defines sensitivity to local misassemblies (0<p<100, default 5); resolution at which to search the misassemblies (default 25kb); diagonal that limits the region for calculating the insulation score (default 100kb).
#### NOTE: Unprompted parameters: k, sensitivity to depletion signal (default k=0.5).
#### Output: "Wide" and "narrow" bed file highlighting mismatch regions [mismatch_wide.bed; mismatch_narrow.bed]. Note that these represent candidate regions of misassembly and might be subject to further filtering. Additional output generated as part of this wrapper script includes depletion_score_wide.wig, depletion_score_narrow.wig track files.
#### Written by: Olga Dudchenko - olga.dudchenko@bcm.edu. Version date 12/03/2016.
USAGE="
*****************************************************
This is a wrapper for a fragment of Hi-C misassembly detection pipeline, version date: Dec 3, 2016. This fragment concerns with generating a mismatch annotation file that will later be overlaid with scaffold boundaries to excise regions spanning misassemblies.
Usage: ./run-mismatch-detector.sh [-h] [-p percentile] [-b bin_size_aka_resolution] [-d depletion_region_size] path_to_hic_file
ARGUMENTS:
path_to_hic_file Path to Juicebox .hic file of the current assembly.
OPTIONS:
-h Shows this help
-c percentile Sets percent of the map to saturate (0<=c<=100, default is 5).
-w wide_res Sets resolution for the first-pass search of mismatches (default is 25000 bp)
-n narrow_res Sets resolution for the precise mismatch localizaton (n<w, default is 1000 bp)
-d depletion_area Sets the size of the region to aggregate the depletion score in the wide path (d >= 2*w, default is 100000 bp)
Unprompted
-p true/false Use GNU Parallel to speed up computation (default is true)
-k Sensitivity to magnitude of depletion, percent of expected (0<=k<=100, default is 50 i.e. label region as mismatch when score is 1/2 of expected)
-b NONE/VC/VC_SQRT/KR Sets which type of contact matrix balancing to use (default KR)
Uses compute-quartile.awk, precompute-depletion-score.awk [[...]] that should be in the same folder as the wrapper script.
*****************************************************
"
## Set defaults
pct=5 # default percent of map to saturate
bin_size=25000 # default bin size to do a first-pass search for mismatches
narrow_bin_size=1000 # default bin size to do a second-pass search for mismatches
dep_size=100000 # default for the size of the region to average the depletion score
## Set unprompted defaults
use_parallel=true # use GNU Parallel to speed-up calculations (default)
k=50 # sensitivity to depletion score (50% of expected is labeled as a mismatch)
norm="KR" # use an unbalanced contact matrix for analysis
## HANDLE OPTIONS
while getopts "hc:w:n:d:p:k:b:" opt; do
case $opt in
h) echo "$USAGE" >&1
exit 0
;;
c) re='^[0-9]+\.?[0-9]*$'
if [[ $OPTARG =~ $re ]] && [[ ${OPTARG%.*} -ge 0 ]] && ! [[ "$OPTARG" =~ ^0*(\.)?0*$ ]] && [[ $((${OPTARG%.*} + 1)) -le 100 ]]; then
echo ":) -c flag was triggered, starting calculations with ${OPTARG}% saturation level" >&1
pct=$OPTARG
else
echo ":( Wrong syntax for saturation threshold. Using the default value pct=${pct}" >&2
fi
;;
w) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -w flag was triggered, performing cursory search for mismatches at $OPTARG resolution" >&1
bin_size=$OPTARG
else
echo ":( Wrong syntax for bin size. Using the default value 25000" >&2
fi
;;
n) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -n flag was triggered, performing mismatch region thinning at $OPTARG resolution" >&1
narrow_bin_size=$OPTARG
else
echo ":( Wrong syntax for mismatch localization resolution. Using the default value 1000" >&2
fi
;;
d) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -d flag was triggered, depletion score will be averaged across a region bounded by $OPTARG superdiagonal" >&1
dep_size=$OPTARG
else
echo ":( Wrong syntax for mapping quality. Using the default value dep_size=100000" >&2
fi
;;
p) if [ $OPTARG == true ] || [ $OPTARG == false ]; then
echo ":) -p flag was triggered. Running with GNU Parallel support parameter set to $OPTARG." >&1
use_parallel=$OPTARG
else
echo ":( Unrecognized value for -p flag. Running with default parameters (-p true)." >&2
fi
;;
k) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]] && [[ $OPTARG -gt 0 ]] && [[ $OPTARG -lt 100 ]]; then
echo ":) -k flag was triggered, starting calculations with ${OPTARG}% depletion as mismatch threshold" >&1
k=$OPTARG
else
echo ":( Wrong syntax for mismatch threshold. Using the default value k=50" >&2
fi
;;
b) if [ $OPTARG == NONE ] || [ $OPTARG == VC ] || [ $OPTARG == VC_SQRT ] || [ $OPTARG == KR ]; then
echo ":) -b flag was triggered. Type of norm chosen for the contact matrix is $OPTARG." >&1
norm=$OPTARG
else
echo ":( Unrecognized value for -b flag. Running with default parameters (-b NONE)." >&2
fi
;;
*) echo "$USAGE" >&2
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
## check parameters for consistency
[[ ${dep_size} -le ${bin_size} ]] && echo >&2 ":( Requested depletion region size ${dep_size} and bin size ${bin_size} parameters are incompatible {${dep_size} < ${bin_size}). Exiting!" && echo >&2 "$USAGE" && exit 1
(( ${dep_size} % ${bin_size} != 0 )) && new_dep_size=$(( dep_size / bin_size * bin_size )) && echo >&2 ":| Warning: depletion region size ${dep_size} and bin size ${bin_size} parameters are incompatible (${dep_size} % ${bin_size} != 0). Changing depletion region size to ${new_dep_size}." && dep_size=${new_dep_size}
[[ ${bin_size} -le ${narrow_bin_size} ]] && echo >&2 ":( Requested mismatch localization resolution ${narrow_bin_size} and cursory search bin size ${bin_size} parameters are incompatible ($bin_size < ${narrow_bin_size}). Exiting!" && echo >&2 "$USAGE" && exit 1
## HANDLE ARGUMENTS: TODO check file format
if [ $# -lt 1 ]; then
echo ":( Required arguments not found. Please double-check your input!!" >&2
echo "$USAGE" >&2
exit 1
fi
hic_file=$1
## CHECK DEPENDENCIES
type java >/dev/null 2>&1 || { echo >&2 ":( Java is not available, please install/add to path Java to run Juicer and Juicebox. Exiting!"; exit 1; }
if [ ${use_parallel} == "true" ]; then
type parallel >/dev/null 2>&1 || { echo >&2 ":( GNU Parallel support is set to true (default) but GNU Parallel is not in the path. Please install GNU Parallel or set -p option to false. Exiting!"; exit 1; }
fi
path_to_scripts=`cd "$( dirname $0)" && pwd`
path_to_vis=$(dirname ${path_to_scripts})"/visualize"
juicebox=${path_to_vis}/"juicebox_tools.sh"
compute_centile_script=${path_to_scripts}"/compute-centile.awk"
precompute_depletion_score=${path_to_scripts}"/precompute-depletion-score.awk"
if [ ! -f ${juicebox} ] || [ ! -f ${compute_centile_script} ] || [ ! -f ${precompute_depletion_score} ] ; then
echo >&2 ":( Relevant dependency scripts not found. Exiting!" && exit 1
fi
## DUMP MATRIX FOR ANALYSIS, COMPUTE (1-PCT) QUARTILE AND COMPUTE DEPLETION SCORE ?? substitute for homebrewed dump from mnd? ## TODO: if stick with juicebox_tools ask for proper exit code on fail.
echo "...Dumping ${bin_size} resolution matrix"
bash ${juicebox} dump observed ${norm} ${hic_file} assembly assembly BP ${bin_size} h.hic_dump.${bin_size}.txt
[ $? -ne 0 ] && echo >&2 ":( Juicebox dump is empty! Perhaps something is wrong with the hic file or the requested resolution is too high. Exiting!" && exit 1
echo "...Estimating necessary saturation level for requested misassembly sensitivity"
sat_level=`awk '\$1!=\$2{print \$3}' h.hic_dump.${bin_size}.txt | sort -n -S1G --parallel=24 -s | awk -v complement=1 -v p=${pct} -f ${compute_centile_script}`
echo "...Coarse resolution saturation level = ${sat_level}"
## TODO: Check that saturation level is above a certain threshold to avoid working with extremely sparse matrices.
echo "...Analyzing near-diagonal mismatches"
cmd="awk 'BEGIN{printf \"%.4f\", $k/100*$sat_level*0.5*$dep_size/$bin_size*($dep_size/$bin_size-1)}'"
thr=`eval $cmd`
echo "variableStep chrom=assembly span=${bin_size}" > "depletion_score_wide.wig"
if [ ${use_parallel} == "true" ]; then
parallel -a h.hic_dump.${bin_size}.txt --will-cite --jobs 80% --pipepart --block 1G "awk -v sat_level=${sat_level} -v bin_size=${bin_size} -v dep_size=${dep_size} -f ${precompute_depletion_score} -" | awk -v bin_size=${bin_size} -v dep_size=${dep_size} 'BEGIN{OFS="\t"}NR==1{min=$1}$1>max{max=$1}$1<min{min=$1}{c[$1]+=$2}END{for(i=min+dep_size-2*bin_size; i<=max-dep_size+2*bin_size; i+=bin_size){print i, c[i]+=0}}' | tee -a "depletion_score_wide.wig" | awk -v thr=${thr} -v bin_size=${bin_size} '($2<thr){if(start==""){start=$1}; span+=bin_size}($2>=thr){if(start!=""){print "assembly", start, start+span}; start=""; span=0}END{if(start!=""){print "assembly", start, start+span}}' > mismatch_wide.bed
# try to address problematic contigs at the beginning of the map - requires more work
# parallel -a h.hic_dump.${bin_size}.txt --will-cite --jobs 80% --pipepart --block 1G "awk -v sat_level=${sat_level} -v bin_size=${bin_size} -v dep_size=${dep_size} -f ${precompute_depletion_score} -" | awk -v bin_size=${bin_size} -v dep_size=${dep_size} 'BEGIN{OFS="\t"}NR==1{min=$1}$1>max{max=$1}$1<min{min=$1}{c[$1]+=$2}END{for(i=0; i<=max; i+=bin_size){print i, c[i]+=0}}' | tee -a "depletion_score_wide.wig" | awk -v thr=${thr} -v bin_size=${bin_size} '($2<thr){if(start==""){start=$1}; span+=bin_size}($2>=thr){if(start!=""){print "assembly", start, start+span}; start=""; span=0}END{if(start!=""){print "assembly", start, start+span}}' > mismatch_wide.bed
else
awk -v sat_level=${sat_level} -v bin_size=${bin_size} -v dep_size=${dep_size} -f ${precompute_depletion_score} h.hic_dump.${bin_size}.txt | awk -v bin_size=${bin_size} -v dep_size=${dep_size} 'BEGIN{OFS="\t"}NR==1{min=$1}$1>max{max=$1}$1<min{min=$1}{c[$1]+=$2}END{for(i=min+dep_size-2*bin_size; i<=max-dep_size+2*bin_size; i+=bin_size){print i, c[i]+=0}}' | tee -a "depletion_score_wide.wig" | awk -v thr=${thr} -v bin_size=${bin_size} '($2<thr){if(start==""){start=$1}; span+=bin_size}($2>=thr){if(start!=""){print "assembly", start, start+span}; start=""; span=0}END{if(start!=""){print "assembly", start, start+span}}' > mismatch_wide.bed
fi
rm h.hic_dump.${bin_size}.txt
## TODO: Check output and issue a warning in case too many candidates!
echo "...Filtering mismatch calls without better alternative: Not functional yet, skipping"
echo "...Thinning mismatch region boundaries."
## DUMP HIGH-RES MATRIX FOR ANALYSIS, COMPUTE (1-PCT) QUARTILE AND COMPUTE DEPLETION SCORE
dep_size=${bin_size}
bin_size=${narrow_bin_size}
dep_size=$(( dep_size / bin_size * bin_size )) # in case any rounding issues
echo "...Dumping ${bin_size} resolution matrix"
bash ${juicebox} dump observed NONE ${hic_file} assembly assembly BP ${bin_size} h.hic_dump.${bin_size}.txt
[ $? -ne 0 ] && echo >&2 ":( Juicebox dump is empty! Perhaps something is wrong with the hic file or the requested resolution is too high. Exiting!" && exit 1
echo "...Estimating necessary saturation level for requested misassembly sensitivity"
sat_level=`awk '\$1!=\$2{print \$3}' h.hic_dump.${bin_size}.txt | sort -n -S1G --parallel=24 -s | awk -v complement=1 -v p=${pct} -f ${compute_centile_script}`
## TODO: Check that saturation level is above a certain threshold to avoid working with extremely sparse matrices. No big deal but comment lines are still in the dump..
echo "...Fine resolution saturation level = ${sat_level}"
echo "...Analyzing near-diagonal mismatches"
echo "variableStep chrom=assembly span=${bin_size}" > "depletion_score_narrow.wig"
## TODO: CHECK min+dep_size-2*bin_size or -1*bin_size. Maybe switch to n bins, not abs length
if [ ${use_parallel} == "true" ]; then
parallel -a h.hic_dump.${bin_size}.txt --will-cite --jobs 80% --pipepart --block 1G "awk -v sat_level=${sat_level} -v bin_size=${bin_size} -v dep_size=${dep_size} -f ${precompute_depletion_score} -" | awk -v bin_size=${bin_size} -v dep_size=${dep_size} 'BEGIN{OFS="\t"}NR==1{min=$1}$1>max{max=$1}$1<min{min=$1}{c[$1]+=$2}END{for(i=min+dep_size-2*bin_size; i<=max-dep_size+2*bin_size; i+=bin_size){print i, c[i]+=0}}' >> "depletion_score_narrow.wig"
else
awk -v sat_level=${sat_level} -v bin_size=${bin_size} -v dep_size=${dep_size} -f ${precompute_depletion_score} h.hic_dump.${bin_size}.txt | awk -v bin_size=${bin_size} -v dep_size=${dep_size} 'BEGIN{OFS="\t"}NR==1{min=$1}$1>max{max=$1}$1<min{min=$1}{c[$1]+=$2}END{for(i=min+dep_size-2*bin_size; i<=max-dep_size+2*bin_size; i+=bin_size){print i, c[i]+=0}}' >> "depletion_score_narrow.wig"
fi
awk -v bin_size=${bin_size} 'FILENAME==ARGV[1]{acount+=1; start[acount]=$2; end[acount]=$3; next}FNR==1{count=1;next}start[count]>=($1+bin_size){next}end[count]<=$1{for(s=start[count];s<end[count];s+=bin_size){if(cand[s]==min){print s, s+bin_size}}; count+=1}{cand[$1]=$2; if($1<=start[count]){min=$2}else if($2<min){min=$2}}END{if(count<=acount){for(s=start[count];s<end[count];s+=bin_size){if(cand[s]==min){print s, s+bin_size}}}}' "mismatch_wide.bed" "depletion_score_narrow.wig" | awk 'BEGIN{OFS="\t"}NR==1{start=$1; end=$2;next}$1==end{end=$2;next}{print "assembly", start, end; start=$1; end=$2}END{print "assembly", start, end}' > mismatch_narrow.bed
rm h.hic_dump.${bin_size}.txt
| Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-s5.sh | .sh | 448 | 10 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[2]; print str[3]; print str[7]; print str[8]}' $1 | awk '{$1=$1}1'
}
in_chrom=`extract chr-length.fasta.stats | awk 'NR==1{print $NF}'`
awk -v in_chrom=${in_chrom} '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==1{tot_len=tmp}NR==5||NR==10||NR==15{pct=in_chrom/tot_len; print "In chromosome-length scaffolds\t"pct}' <(extract chr-length-and-small-and-tiny.fasta.stats) > S5.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-s1.sh | .sh | 748 | 8 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[7]; print str[8]; print str[9]}' $1 | awk '{$1=$1}1'
}
awk '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==1{tot_seq=tmp}NR==5{print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==9{tot_attempt=tmp; print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==13{print "% of Total Attempted Base Pairs\t"tmp/tot_attempt; print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==17{print "% of Total Attempted Base Pairs\t"tmp/tot_attempt; print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}' <(extract draft.fasta.stats && extract unattempted.fasta.stats && extract input.fasta.stats && extract resolved.fasta.stats && extract unresolved-and-inconsistent.fasta.stats) > S1.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/match-map-data.sh | .sh | 3,061 | 87 | #!/bin/bash
USAGE="
*****************************
./generate-linkage-map.sh -d <Juneja/...> -c <chrom_number> <path_to_orig_cprops> <path_to_final_tiled_cprops> <path_to_final_tiled_asm>
*****************************
"
## HANDLE OPTIONS
while getopts "hd:c:" opt; do
case $opt in
h) echo "$USAGE" >&1
exit 0
;;
d) data=$OPTARG
;;
c) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]] && [[ $OPTARG -gt 0 ]]; then
chr=$OPTARG
else
echo ":( Wrong syntax for chromosome number. Exiting!" >&2 && exit 1
fi
;;
*) echo "$USAGE" >&2
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
[ -z $data ] && echo >&1 ":| Warning: Map data reference was not listed. Running against default data from Juneja et al, 2014. To request a different map type -d Juneja/..." && data=Juneja
pipeline=`cd "$( dirname $0)" && cd .. && pwd`
case "$data" in
Juneja)
datafile="$pipeline/data/Juneja_et_al_linkage_map.dat"
chr=3
;;
Timoshevskiy)
datafile="$pipeline/data/Timoshevskiy_et_al.dat"
chr=3
;;
Hickner)
datafile="$pipeline/data/Hickner_et_al_linkage_map.dat"
chr=3
;;
Aresburger)
datafile="$pipeline/data/Aresburger_et_al.dat"
chr=3
;;
Unger)
datafile="$pipeline/data/Unger_et_al.dat"
chr=3
;;
Naumenko)
datafile="$pipeline/data/Naumenko_et_al.dat"
chr=3
;;
*)
datafile=$data
[ -z $chr ] && echo "Unknown data or no chromosome number. Exiting!" >&2 && exit 1
;;
esac
## HANDLE ARGUMENTS
[ -z $1 ] || [ -z $2 ] || [ -z $3 ] && echo >&2 ":( Some input seems missing." && echo >&2 "$USAGE" && exit 1
orig_cprops=$1
current_cprops=$2
current_asm=$3
id=`basename ${current_asm} .asm`
## TODO: check compatibility
awk 'BEGIN{FS="\t"; OFS="\t"; print "chr1", "x1", "x2", "chr2", "y1", "y2", "color", "id", "X1", "X2", "Y1", "Y2"}NR>1{print $4, $5-1, $6, $4, $5-1, $6, "0,0,0", "+"$1, $5-1, $6, $5-1, $6}' ${datafile} | awk -f ${pipeline}/lift/lift-input-annotations-to-asm-annotations.awk ${orig_cprops} <(awk '{print $2}' ${orig_cprops}) - | awk -f ${pipeline}/supp/lift-asm-annotations-to-input-annotations.awk ${current_cprops} <(awk '{print $2}' ${current_cprops}) - | awk -f ${pipeline}/lift/lift-input-annotations-to-asm-annotations.awk ${current_cprops} ${current_asm} - > ${data}_vs_${id}_2D_annotations.txt
awk -v chr=${chr} 'FILENAME==ARGV[1]{len[$2]=$3; next}FNR<=chr{gsub("-",""); c=0; for(i=1;i<=NF;i++){c+=len[$i]}; print FNR"\t"c}' ${current_cprops} ${current_asm} > ${id}.chrom.tiled.sizes
awk -v chr=${chr} 'BEGIN{FS="\t"; OFS="\t"}FILENAME==ARGV[1]{shift[$1+1]=shift[$1]+$2; next}FILENAME==ARGV[2]{pos[substr($8,2)]=$2+1; next}FNR==1{print $0, "Chr-length scaffold (current study)", "Start position (current study)"; next}{if (!pos[$1]){chrscaf="NA"} else {chrscaf=1; while(chrscaf<=chr){if(pos[$1]<shift[chrscaf+1]){break};chrscaf++}; if(chrscaf==chr+1){chrscaf="NA"}}; if(chrscaf=="NA"){print $0, "NA", "NA"}else{print $0, chrscaf, pos[$1]-shift[chrscaf], pos[$1]}}' ${id}.chrom.tiled.sizes $data"_vs_"${id}"_2D_annotations.txt" ${datafile} > ${data}_map_vs_${id}.txt
| Shell |
3D | theaidenlab/AGWG-merge | supp/make-rainbow-tracks.sh | .sh | 2,001 | 47 | #!/bin/bash
## Helper script to generate rainbow tracks illustrating liftover
## Written by: OD olga.dudchenko@bcm.edu
pipeline=`cd "$( dirname $0)" && cd .. && pwd`
bin=500000;
USAGE="
*****************************
./make-rainbow-tracks.sh -b <bin> -c <ref_chrom_number> ref.cprops ref.asm target.cprops target.asm
*****************************
"
## HANDLE OPTIONS
while getopts "hc:b:" opt; do
case $opt in
h) echo "$USAGE" >&1
exit 0
;;
c) chrom=$OPTARG
;;
b) bin=$OPTARG
;;
*) echo "$USAGE" >&2
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
## HANDLE ARGUMENTS
[ -z $1 ] || [ -z $2 ] || [ -z $3 ] || [ -z $4 ] && echo >&2 ":( Some input seems missing." && echo >&2 "$USAGE" && exit 1
ref_cprops=$1
ref_asm=$2
target_cprops=$3
target_asm=$4
[ -z $chrom ] && chrom=`wc -l<${ref_asm}` && echo >&2 ":| Warning: Number of chromosomes in the reference genome is not listed. Coloring all $chrom scaffolds"
awk -v bin="$bin" -v chrom="$chrom" 'FILENAME==ARGV[1]{len[$2]=$3;next}FNR<=chrom{gsub("-","");for(i=1;i<=NF;i++){c+=len[$i]}}END{OFS="\t"; print "chr1", "x1", "x2", "chr2", "y1", "y2", "color","id","X1","X2","Y1","Y2"; s=0; while((s+bin)<=c){print "assembly", s, s+bin, "assembly", s, s+bin, "0,0,0", s/c, s, s+bin, s, s+bin; s+=bin}; print "assembly", s, c, "assembly", s, c, "0,0,0", 1, s, c, s, c}' ${ref_cprops} ${ref_asm} | awk -f ${pipeline}/supp/lift-asm-annotations-to-input-annotations-w-split.awk ${ref_cprops} <(awk 'FILENAME==ARGV[1]{include[$2]=1;next}{print; gsub("-",""); for(i=1;i<=NF;i++){delete include[$i]}}END{for(i in include){print i}}' ${ref_cprops} ${ref_asm}) - | awk -f ${pipeline}/lift/lift-input-annotations-to-asm-annotations.awk ${ref_cprops} <(awk '{print $2}' ${ref_cprops}) - | awk -f ${pipeline}/lift/lift-asm-annotations-to-input-annotations.awk ${target_cprops} <(awk '{print $2}' ${target_cprops}) - | awk -f ${pipeline}/lift/lift-input-annotations-to-asm-annotations.awk ${target_cprops} ${target_asm} -
| Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-s2.sh | .sh | 532 | 8 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[7]; print str[8]; print str[9]}' $1 | awk '{$1=$1}1'
}
awk '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==1{tot_seq=tmp}NR==5{print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==9{tot_attempt=tmp; print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==13{print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}' <(extract input.fasta.stats && extract chr-length-and-small.fasta.stats && extract chr-length.fasta.stats && extract small.fasta.stats) > S2.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-1.sh | .sh | 348 | 8 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[2]; print str[3]; print str[7]; print str[8]}' $1 | awk '{$1=$1}1'
}
awk '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==5||NR==10||NR==15{print ""}' <(extract draft.fasta.stats && extract chr-length.fasta.stats && extract small.fasta.stats && extract tiny.fasta.stats) > T1.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/fasta-count-sequenced-bases.sh | .sh | 1,380 | 41 | #!/bin/bash
USAGE="./fasta-count-sequenced-bases.sh {-c <chrom_number>} <path_to_fasta>"
# handle options
while getopts "c:h" opt; do
case $opt in
h) echo "$USAGE"
exit 0
;;
c) re='^[-0-9]+$'
if ! [[ $OPTARG =~ $re && $OPTARG -gt 0 ]] ; then
echo ":( Error: Wrong syntax for number of chromosomes: counting all sequenced bases." >&2
else
echo ":) -c flag was triggered. Counting sequenced bases in the first $OPTARG scaffolds." >&1
chrom=$OPTARG
fi
;;
*) echo "$USAGE" >&1
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
# handle arguments
fasta_file=$1
if [ -z $chrom ]
then
parallel --pipepart -a ${fasta_file} --will-cite "awk '\$0!~/>/{gsub(\"N||n\",\"\"); c+=length}END{print c}'" | awk '{c+=$0}END{printf("Total seq bases: %'"'"'d\n", c)}'
else
#echo "Individual chromosomes:"
parallel --pipepart -a ${fasta_file} -k --will-cite "awk 'BEGIN{name=\"prev\"}\$0~/>/{if(c){print name, c}; name=\$1; c=0;next}{gsub(\"N||n\",\"\"); c+=length}END{print name, c}'" | awk -v chrom=${chrom} 'NR==1{name=$1; len=$2; next}$1=="prev"{len+=$2;next}{print name, len; name=$1; len=$2}END{print name, len}' | awk -v chrom=${chrom} 'NR<=chrom{c+=$2}{d+=$2}END{printf("Total chrom seq bases: %'"'"'d (%% of of total: %'"'"'f%%)\n", c, (c/d*100))}'
#| tee >(head -n ${chrom}) '
fi
#printf("Total chrom seq bases: %'"'"'d, %% of total: %'"'"'d%%\n", c, (c/d*100))
| Shell |
3D | theaidenlab/AGWG-merge | supp/get-AaegL2.sh | .sh | 1,226 | 25 | #!/bin/bash
## This is a script to set up initial conditions for replicating the AaegL4 assembly
## Input: None (link to NCBI assembly hardcoded)
## Output: AaegL2.fasta file
## Written by: OD
## USAGE: ./get-AaegL2.sh
[ -f temp.cprops ] || [ -f temp.asm ] && echo >&2 "Please remove or rename temp.cprops and/or temp.asm files from the current folder. Exiting!" && exit 1
link="ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/004/015/GCA_000004015.2_AaegL2/GCA_000004015.2_AaegL2_genomic.fna.gz"
pipeline=`cd "$( dirname $0)" && cd .. && pwd`
curl $link -o `basename $link`
curl `dirname $link`"/"`basename $link _genomic.fna.gz`"_assembly_report.txt" -o `basename $link _genomic.fna.gz`"_assembly_report.txt"
gunzip "`basename $link`"
awk -F "\t" '$1!~/#/&&$5!~/na/{count++; print $5, count, $9}' `basename $link _genomic.fna.gz`"_assembly_report.txt" > temp.cprops
awk '{print $2}' temp.cprops > temp.asm
bash ${pipeline}/finalize/construct-fasta-from-asm.sh temp.cprops temp.asm `basename $link .gz` | awk 'FILENAME==ARGV[1]{oname[FNR]=$1;next}$0~/>/{counter++;$0=">"oname[counter]}1' temp.cprops - > AaegL2.fasta
rm `basename $link .gz` `basename $link _genomic.fna.gz`"_assembly_report.txt" temp.cprops temp.asm | Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-s3.sh | .sh | 550 | 8 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[7]; print str[8]; print str[9]}' $1 | awk '{$1=$1}1'
}
awk '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==1{tot_seq=tmp}NR==5{print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==9{tot_attempt=tmp; print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}NR==13{print "% of Total Sequenced Base Pairs\t"tmp/tot_seq}' <(extract draft.fasta.stats && extract chr-length-and-small-and-tiny.fasta.stats && extract chr-length.fasta.stats && extract small-and-tiny.fasta.stats) > S3.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/generate-table-s4.sh | .sh | 439 | 10 | #!/bin/bash
extract () {
awk '{str[NR]=$0}END{print str[1]; print str[2]; print str[3]; print str[7]; print str[8]}' $1 | awk '{$1=$1}1'
}
in_chrom=`extract chr-length.fasta.stats | awk 'NR==1{print $NF}'`
awk -v in_chrom=${in_chrom} '{tmp=$NF; $NF=""; print $0"\t"tmp}NR==1{tot_len=tmp}NR==5||NR==10||NR==15{pct=in_chrom/tot_len; print "In chromosome-length scaffolds\t"pct}' <(extract chr-length-and-small.fasta.stats) > S4.dat
| Shell |
3D | theaidenlab/AGWG-merge | supp/print-tables.sh | .sh | 334 | 16 | #!/bin/bash
pipeline=`cd "$( dirname $0)" && cd .. && pwd`
bash ${pipeline}/supp/generate-table-1.sh
bash ${pipeline}/supp/generate-table-s1.sh
bash ${pipeline}/supp/generate-table-s2.sh
bash ${pipeline}/supp/generate-table-s3.sh
bash ${pipeline}/supp/generate-table-s4.sh
bash ${pipeline}/supp/generate-table-s5.sh
| Shell |
3D | theaidenlab/AGWG-merge | split/run-asm-splitter.sh | .sh | 11,980 | 226 | #!/bin/bash
#### Description: Wrapper script to split any given assembly. Note that this split is ignorant of the chromosome number and is simply a version of a mismatch detector at a pretty large scale. To control what is going on one might want to check if the number of mismatches pre to post-polish has decreased (currently disabled).
#### Usage: run-asm-splitter.sh -j <path_to_current_hic_file> -a <path_to_scaf_annotation_file> -b <path_to_superscaf_annotation_file> -w <coarse_res_for_mismatch> -n <fine_res_for_mismatch> -d <depletion_region> <path_to_original_cprops> <path_to_original_mnd_file> <path_to_current_cprops> <path_to_current_asm_file>
#### Input: cprops and mnd for original input contigs/scaffolds; cprops and asm for current input contigs/scaffolds.
#### Optional input: Juicebox .hic file for current assembly (-j option).
#### Output: cprops and asm of the polished assembly. Additional files include the new mnd and the new .hic files.
#### Parameters: primarily those that will be passed to the mismatch detector.
#### Unprompted: -p for use of GNU Parallel; -c for percent to saturate, -k for sensitivity, -b for balancing type (mismatch detector unprompted parameters).
#### Dependencies: mismatch-detector, editor, scaffolder, polish-specific files (edit-asm-according-to-new-cprops.sh).
#### Written by: Olga Dudchenko - olga.dudchenko@bcm.edu. Version dated 01/22/2017
## Set defaults
input_size=100000
wide_bin=100000
wide_depletion=3000000
narrow_bin=1000
## Set unprompted defaults
use_parallel=true # use GNU Parallel to speed-up calculations (default)
k=55 # sensitivity to depletion score (50% of expected is labeled as a mismatch)
pct=5 # default percent of map to saturate
norm="KR" # use an unbalanced contact matrix for analysis
## HANDLE OPTIONS
while getopts "hs:j:a:b:w:n:d:k:c:b:p:" opt; do
case $opt in
h) echo "$USAGE" >&1
exit 0
;;
p) if [ $OPTARG == true ] || [ $OPTARG == false ]; then
echo ":) -p flag was triggered. Running with GNU Parallel support parameter set to $OPTARG." >&1
use_parallel=$OPTARG
else
echo ":( Unrecognized value for -p flag. Running with default parameters (-p true)." >&2
fi
;;
s) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo "...-s flag was triggered, will ignore all scaffolds shorter than $OPTARG for polishing" >&1
input_size=$OPTARG
else
echo ":( Wrong syntax for input size. Using the default value ${input_size}" >&2
fi
;;
j) if [ -s $OPTARG ]; then
echo "...-j flag was triggered, will use Juicebox map $OPTARG" >&1
current_hic=$OPTARG
else
echo ":( Juicebox file not found. Will run visualize script from scratch" >&2
fi
;;
a) if [ -s $OPTARG ]; then
echo "...-a flag was triggered, will use scaffold annotation file $OPTARG" >&1
current_scaf=$OPTARG
else
echo ":( Scaffold annotation file not found. Will run visualize script from scratch" >&2
fi
;;
b) if [ -s $OPTARG ]; then
echo "...-b flag was triggered, will use superscaffold annotation file $OPTARG" >&1
current_superscaf=$OPTARG
else
echo ":( Superscaffold annotation file not found. Will run visualize script from scratch" >&2
fi
;;
w) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -w flag was triggered, performing cursory search for mismatches at $OPTARG resolution" >&1
wide_bin=$OPTARG
else
echo ":( Wrong syntax for bin size. Using the default value 25000" >&2
fi
;;
n) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -n flag was triggered, performing mismatch region thinning at $OPTARG resolution" >&1
narrow_bin=$OPTARG
else
echo ":( Wrong syntax for mismatch localization resolution. Using the default value 1000" >&2
fi
;;
d) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]]; then
echo ":) -d flag was triggered, depletion score will be averaged across a region bounded by $OPTARG superdiagonal" >&1
wide_depletion=$OPTARG
else
echo ":( Wrong syntax for mapping quality. Using the default value dep_size=100000" >&2
fi
;;
k) re='^[0-9]+$'
if [[ $OPTARG =~ $re ]] && [[ $OPTARG -gt 0 ]] && [[ $OPTARG -lt 100 ]]; then
echo ":) -k flag was triggered, starting calculations with ${OPTARG}% depletion as mismatch threshold" >&1
k=$OPTARG
else
echo ":( Wrong syntax for mismatch threshold. Using the default value k=50" >&2
fi
;;
c) re='^[0-9]+\.?[0-9]*$'
if [[ $OPTARG =~ $re ]] && [[ ${OPTARG%.*} -ge 0 ]] && ! [[ "$OPTARG" =~ ^0*(\.)?0*$ ]] && [[ $((${OPTARG%.*} + 1)) -le 100 ]]; then
echo ":) -c flag was triggered, starting calculations with ${OPTARG}% saturation level" >&1
pct=$OPTARG
else
echo ":( Wrong syntax for saturation threshold. Using the default value pct=${pct}" >&2
fi
;;
b) if [ $OPTARG == NONE ] || [ $OPTARG == VC ] || [ $OPTARG == VC_SQRT ] || [ $OPTARG == KR ]; then
echo ":) -b flag was triggered. Type of norm chosen for the contact matrix is $OPTARG." >&1
norm=$OPTARG
else
echo ":( Unrecognized value for -b flag. Running with default parameters (-b NONE)." >&2
fi
;;
*) echo "$USAGE" >&2
exit 1
;;
esac
done
shift $(( OPTIND-1 ))
## HANDLE ARGUMENTS: TODO check file format
if [ $# -lt 4 ]; then
echo ":( Required arguments not found. Please double-check your input!!" >&2
echo "$USAGE" >&2
exit 1
fi
orig_cprops=$1
orig_mnd=$2
current_cprops=$3
current_asm=$4
if [ ! -f ${orig_cprops} ] || [ ! -f ${orig_mnd} ] || [ ! -f ${current_cprops} ] || [ ! -f ${current_asm} ] ; then
echo >&2 ":( Required files not found. Please double-check your input!!" && exit 1
fi
## CHECK DEPENDENCIES
pipeline=`cd "$( dirname $0)" && cd .. && pwd`
## PREP
id=`basename ${current_cprops} .cprops`
STEP="split"
# PREPARATORY: split asm into resolved and unresolved pieces:
awk 'NR==1' ${current_asm} > temp_resolved.asm
awk 'NR>1' ${current_asm} > temp_unresolved.asm
## MAIN FUNCTION
# 0) Check if Juicebox file for the current assembly has been passed. If not build a map. TODO: make sure that the required resolution is included. TODO: Enable at some point.
if [ -z ${current_hic} ] || [ -z ${current_scaf} ] || [ -z ${current_superscaf} ] ; then
echo "...no hic file and/or annotation files have been provided with input, building the hic map from scratch" >&1
## TODO: check w/o parallel
bash ${pipeline}/edit/edit-mnd-according-to-new-cprops.sh ${current_cprops} ${orig_mnd} > `basename ${current_cprops} .cprops`.mnd.txt
current_mnd=`basename ${current_cprops} .cprops`.mnd.txt
bash ${pipeline}/visualize/run-asm-visualizer.sh ${current_cprops} ${current_asm} ${current_mnd}
current_hic=`basename ${current_asm} .asm`.hic
current_scaf=`basename ${current_asm} .asm`_asm.scaffold_track.txt
current_superscaf=`basename ${current_asm} .asm`_asm.superscaf_track.txt
fi
# 1) Annotate mismatches in current assembly
bash ${pipeline}/edit/run-mismatch-detector.sh -p ${use_parallel} -c ${pct} -w ${wide_bin} -k ${k} -d ${wide_depletion} -n ${narrow_bin} ${current_hic}
# store intermediate mismatch stuff - not necessary
mv depletion_score_wide.wig ${id}.${STEP}.depletion_score_wide.wig
mv depletion_score_narrow.wig ${id}.${STEP}.depletion_score_narrow.wig
mv mismatch_wide.bed ${id}.${STEP}.mismatch_wide.bed
mv mismatch_narrow.bed ${id}.${STEP}.mismatch_narrow.bed
# convert bed track into 2D annotations
resolved=$(awk 'NR==2{print $3}' ${current_superscaf}) #scaled coordinates
# modified overlay-edits to choose the boundaries inside, i.e. only mismatches, and no debris
#awk -v bin_size=${narrow_bin} -f ${pipeline}/edit/overlay-edits.awk ${current_scaf} ${id}.${STEP}.mismatch_narrow.bed | awk -v r=${resolved} 'NR==1||$3<=r' > ${id}.${STEP}.suspicious_2D.txt
awk -v bin_size=${narrow_bin} -f ${pipeline}/split/overlay-edits.awk ${current_scaf} ${id}.${STEP}.mismatch_narrow.bed | awk -v r=${resolved} 'NR==1||$3<=r' > ${id}.${STEP}.suspicious_2D.txt
# split into mismatches and edits
awk 'NR==1||$8=="mismatch"' ${id}.${STEP}.suspicious_2D.txt > ${id}.${STEP}.mismatches_2D.txt
awk 'NR==1||$8=="debris"' ${id}.${STEP}.suspicious_2D.txt > ${id}.${STEP}.edits_2D.txt
# 2) Deal with mismatches: break resolved asm at joints associated with mismatches
awk -f ${pipeline}/lift/lift-asm-annotations-to-input-annotations.awk ${current_cprops} ${current_asm} ${id}.${STEP}.mismatches_2D.txt | awk 'FILENAME==ARGV[1]{cname[$1]=$2;next}FNR>1{if($2==0){type[cname[$1]]="h"}else{type[cname[$1]]="t"}}END{for(i in type){print i, type[i]}}' ${current_cprops} - | awk 'FILENAME==ARGV[1]{type[$1]=$2; if($2=="h"){type[-$1]="t"}else{type[-$1]="h"}; next}{str=""; for(i=1;i<=NF;i++){if($i in type){if (type[$i]=="h"){print str; str=$i}else{str=str" "$i; print str; str=""}}else{str=str" "$i}}; print str}' - temp_resolved.asm | sed '/^$/d' | awk '{$1=$1}1' > temp_resolved.asm.new && mv temp_resolved.asm.new temp_resolved.asm
# 3) Apply edits
# reconstruct the edits file
awk 'BEGIN{OFS="\t"; print "chr1", "x1", "x2", "chr2", "y1", "y2", "color", "id", "X1", "X2", "Y1", "Y2"}$1~/:::debris/{print $1, 0, $3, $1, 0, $3, "0,0,0", "debris", 0, $3, 0, $3}' ${current_cprops} | awk -f ${pipeline}/lift/lift-input-annotations-to-asm-annotations.awk ${current_cprops} <(awk '{print $2}' ${current_cprops}) - | awk -f ${pipeline}/lift/lift-asm-annotations-to-input-annotations.awk ${orig_cprops} <(awk '{print $2}' ${orig_cprops}) - > temp.pre_split_edits.txt
current_edits=temp.pre_split_edits.txt
bash ${pipeline}/lift/lift-edit-asm-annotations-to-original-input-annotations.sh ${orig_cprops} ${current_cprops} ${current_asm} ${id}.${STEP}.edits_2D.txt > h.edits.txt
awk 'NR==1' "h.edits.txt" > temp
{ awk 'NR>1' ${current_edits} ; awk 'NR>1' "h.edits.txt" ; } | sort -k 1,1 -k 2,2n >> temp
mv temp temp.post_split_edits.txt
split_edits=temp.post_split_edits.txt
bash ${pipeline}/edit/apply-edits-prep-for-next-round.sh -p ${use_parallel} -r ${STEP} ${split_edits} ${orig_cprops} ${orig_mnd}
mv `basename ${orig_cprops} .cprops`.${STEP}.cprops $id.${STEP}.cprops
mv `basename ${orig_mnd} .txt`.${STEP}.txt $id.${STEP}.mnd.txt
split_cprops=$id.${STEP}.cprops
split_mnd=$id.${STEP}.mnd.txt
# 4) Lift current assembly to new cprops
bash ${pipeline}/edit/edit-asm-according-to-new-cprops.sh ${split_cprops} ${current_cprops} temp_resolved.asm > new_resolved.asm
bash ${pipeline}/edit/edit-asm-according-to-new-cprops.sh ${split_cprops} ${current_cprops} temp_unresolved.asm > new_unresolved.asm
# 5) Prepare for split run: break resolved at debris, filter debris and pieces smaller than input_size
awk -v input_size=${input_size} 'function printout(str){if(c>=input_size){print substr(str,2)>"h.scaffolds.original.notation.step.0.txt"}else{print substr(str,2)>"h.dropouts.step.0.txt"}}FILENAME==ARGV[1]{len[$2]=$3; len[-$2]=$3; if($1~/:::debris/){remove[$2]=1; remove[-$2]=1}; next}{str=""; for(i=1;i<=NF;i++){if($i in remove){if(str!=""){printout(str)}; print $i > "h.dropouts.step.0.txt"; str=""; c=0}else{str=str" "$i; c+=len[$i]}}; if(str!=""){printout(str)}}' ${split_cprops} new_resolved.asm
cat new_unresolved.asm >> h.dropouts.step.0.txt
cat h.scaffolds.original.notation.step.0.txt h.dropouts.step.0.txt > `basename ${split_cprops} .cprops`.asm
split_asm=`basename ${split_cprops} .cprops`.asm
# 7) Visualize output
bash ${pipeline}/visualize/run-asm-visualizer.sh -p ${use_parallel} ${split_cprops} ${split_asm} ${split_mnd}
# 8) Cleanup
rm ${split_mnd} temp_resolved.asm temp_unresolved.asm temp.pre_split_edits.txt temp.post_split_edits.txt new_resolved.asm new_unresolved.asm temp.${id}.${STEP}.asm_mnd.txt h.scaffolds.original.notation.step.0.txt h.dropouts.step.0.txt
| Shell |
3D | AxDante/SAAMI | 2D_Image_exampe.py | .py | 278 | 11 | # Import sammi module
from saami.SAAMI import SAAMI
Data2D = SAAMI('data/Chest_X-ray_example_processed', roi=None, dataset_type='Image')
print('dataset loaded')
mask = Data2D.calculate_mask(0, threshold= 0.003)
Data2D.save_mask(0, save_path='outputs/saved_2D_sam_mask.png')
| Python |
3D | AxDante/SAAMI | 3D_volume_example.py | .py | 577 | 22 | # Import sammi module
from saami.SAAMI import SAAMI
# roi = ((70, 150), (700, 600))
SAAMIdata = SAAMI('data/MRI_example', roi=None)
# Calculates 3D mask for the first volume (idx = 0)
mask = SAAMIdata.calculate_mask(0, threshold= 0.003)
SAAMIdata.save_mask(0, save_path='outputs/saved_sam_mask.nii')
# Fine-tune 3D mask for the first volume
new_mask = SAAMIdata.finetune_3d_mask(0, neighbor_size=3)
# # Save 3D mask for the first volume
SAAMIdata.save_mask(0, save_path='outputs/saved_sam_mask_adjusted.nii')
# visualization for the first volume
SAAMIdata.visualize(0)
| Python |
3D | AxDante/SAAMI | CODE_OF_CONDUCT.md | .md | 5,226 | 129 | # Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
knight16729438@gmail.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
| Markdown |
3D | AxDante/SAAMI | saami/Dataset.py | .py | 5,013 | 152 | from torch.utils.data import Dataset
import os
import nibabel as nib
import numpy as np
import re
import cv2
class VolumeDataset(Dataset):
"""
VolumeDataset
"""
def __init__(self, input_dir, roi=None):
self.input_dir = input_dir
self.data = {}
self.folder_names = os.listdir(input_dir) # Get all folders under input_dir
image_file_list = []
for fid in range(len(self.folder_names)): # for loop iterate over all folder_name under input_dir
folder_name = self.folder_names[fid]
image_folder = os.path.join(input_dir, folder_name, "Images")
label_folder = os.path.join(input_dir, folder_name, "Labels")
image_file = os.listdir(image_folder)[0] # Currently we assume only one image file per folder, which would be the main volume for the patient
image_path = os.path.join(image_folder, image_file)
image_file_list.append(image_path)
image_data = nib.load(image_path).get_fdata()
label_data = np.zeros(image_data.shape)
label_files = os.listdir(label_folder)
combined_labels = [f for f in label_files if f.endswith("combined_seg.nii.gz")]
if len(combined_labels) > 0 :
# Use the combined file if it exists
label_data = nib.load(os.path.join(label_folder, combined_labels[0])).get_fdata()
else:
# Otherwise, combine all label files
for label_file in os.listdir(label_folder):
if label_file.endswith(".nii.gz"):
label_path = os.path.join(label_folder, label_file)
label_id_str = re.findall(r'\d+', label_file)[0]
label_id = int(label_id_str)
label_image_data = nib.load(label_path).get_fdata()
label_data += label_id * (label_image_data > 0)
image = image_data[:, :, :]
label = label_data[:, :, :]
# ROI adjustment
if roi:
image = image_data[roi[0][0]:roi[1][0], roi[0][1]:roi[1][1], :]
label = label_data[roi[0][0]:roi[1][0], roi[0][1]:roi[1][1], :]
# Check shape between image and label
assert label.shape == image.shape, "Shape mismatch between image and label"
self.data[fid] = {"image": image, "label": label}
self.image_file_list = image_file_list
def __len__(self):
"""
Return number of folders under input_dir
"""
return len(self.folder_names)
def __getitem__(self, idx):
"""
Get data_dict corresponding to idx
"""
return self.data[idx]
def update_label(self, idx, label, label_data):
"""
Update data in the dataset
"""
self.data[idx][label] = label_data
def update_data(self, idx, data):
"""
Update data in the dataset
"""
self.data[idx] = data
class ImageDataset(Dataset):
"""
ImageDataset
"""
def __init__(self,
input_dir, roi = None):
self.input_dir = input_dir
self.data = []
image_folder = os.path.join(input_dir, "Images")
label_folder = os.path.join(input_dir, "Labels")
image_file_list = os.listdir(image_folder)
label_file_list = os.listdir(label_folder)
# Filter out non-image files, may support more extensions in the future
image_file_list = [img for img in image_file_list if img.lower().endswith(('.jpg', '.png', '.jpeg'))]
label_file_list = [lbl for lbl in label_file_list if lbl.lower().endswith(('.jpg', '.png', '.jpeg'))]
image_file_list.sort()
label_file_list.sort()
assert len(image_file_list) == len(label_file_list), "Number of images and labels do not match"
# Iterate through each pair and load image/label for each pair
for img_file, lbl_file in zip(image_file_list, label_file_list):
img_path = os.path.join(image_folder, img_file)
lbl_path = os.path.join(label_folder, lbl_file)
image = cv2.imread(img_path)
label = cv2.imread(lbl_path)
# ROI adjustment
if roi:
image = image[roi[0][0]:roi[1][0], roi[0][1]:roi[1][1]]
label = label[roi[0][0]:roi[1][0], roi[0][1]:roi[1][1]]
data = {"image": image, "label": label}
self.data.append(data)
self.image_file_list = image_file_list
def __len__(self):
"""
Return the number of data items in the dataset
"""
return len(self.data)
def __getitem__(self,
idx):
"""
Get data corresponding to idx
"""
return self.data[idx]
def update_data(self, idx, data):
"""
Update data in the dataset
"""
self.data[idx] = data
| Python |
3D | AxDante/SAAMI | saami/__init__.py | .py | 0 | 0 | null | Python |
3D | AxDante/SAAMI | saami/functions.py | .py | 10,672 | 288 | import os.path
import urllib.request
import numpy as np
import cv2
from tqdm import tqdm
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry, SamPredictor
def get_sam_mask_generator(sam_checkpoint=None, sam_model_type="vit_b", device="cuda"):
if sam_checkpoint == None:
if sam_model_type == "vit_h":
model_url = 'https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth'
sam_checkpoint = 'models/sam_vit_h_4b8939.pth'
elif sam_model_type == "vit_b":
model_url = 'https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth'
sam_checkpoint = 'models/sam_vit_b_01ec64.pth'
elif sam_model_type == "vit_l":
model_url = 'https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth'
sam_checkpoint = 'models/sam_vit_l_0b3195.pth'
if not os.path.exists(sam_checkpoint):
print("SAM checkpoint does not exist, downloading the checkpoint under /models folder ...")
if not os.path.exists('models'):
os.makedirs('models')
urllib.request.urlretrieve(model_url, sam_checkpoint)
print('Loading SAM model from checkpoint: {}'.format(sam_checkpoint))
sam = sam_model_registry[sam_model_type](checkpoint=sam_checkpoint)
sam.to(device=device)
# mask_generator = SamAutomaticMaskGenerator(sam)
mask_generator = SamAutomaticMaskGenerator(
model=sam,
points_per_side=32,
pred_iou_thresh=0.86,
stability_score_thresh=0.92,
crop_n_layers=1,
crop_n_points_downscale_factor=2,
min_mask_region_area=100, # Requires open-cv to run post-processing
)
return mask_generator
def apply_threshold_label(data, threshold=0.005):
# Calculate the occurrence of each lable in the input data (2D array)
ue, counts = np.unique(data, return_counts=True)
th_val = threshold * data.shape[0] * data.shape[1]
mask = ue[counts < th_val]
data[np.isin(data, mask)] = 0
return data
def process_slice(sam_data, input_image_slice, mask_generator, axis, start_pos=0, threshold=0.0):
# Repeat dimension if input slice only has one channel (grayscale image)
if len(input_image_slice.shape) == 2:
mask = np.abs(input_image_slice) > 10
rows, cols = np.where(mask)
if not (rows.size > 0 and cols.size > 0):
print('No available pixels, skipping...')
return
top, bottom = np.min(rows), np.max(rows)
left, right = np.min(cols), np.max(cols)
image_slice = input_image_slice[top:bottom + 1, left:right + 1]
image_slice = image_slice[:, :, np.newaxis]
image_3c = np.repeat(image_slice, 3, axis=2)
image_3c = (image_3c / np.amax(image_3c) * 255).astype(np.uint8)
elif len(input_image_slice.shape) == 3:
# Assumeing the input image is RGB (and with proper intensity range)
image_3c = input_image_slice
# If image_3c width or height is larger than 1600, perform rescaling
max_dim = 1600
original_height, original_width = image_3c.shape[:2]
if original_height > max_dim or original_width > max_dim:
new_height, new_width = (
int(original_height * max_dim / max(original_height, original_width)),
int(original_width * max_dim / max(original_height, original_width)),
)
image_3c = cv2.resize(image_3c, (new_width, new_height), interpolation=cv2.INTER_AREA)
# Run SAM model
masks = (mask_generator.generate(image_3c))
masks_label = np.zeros(masks[0]['segmentation'].shape, dtype=int)
# Pefrom label post-processing
for index, mask in enumerate(masks):
masks_label[mask['segmentation']] = index + 1
masks_label = masks_label.astype(np.int8)
# If rescaling was performed, rescale the mask back to the original size
if original_height > max_dim or original_width > max_dim:
masks_label = cv2.resize(masks_label, (original_width, original_height), interpolation=cv2.INTER_NEAREST)
masks_label = masks_label.astype(np.int16)
# Apply threshold to remove small regions
if threshold > 0:
masks_label = apply_threshold_label(masks_label, threshold)
# Save the segmentation result to the SAM data
if axis == 'x':
sam_data["sam_seg"]["x"][start_pos, top:bottom + 1, left:right + 1] = masks_label
elif axis == 'y':
sam_data["sam_seg"]["y"][top:bottom + 1, start_pos, left:right + 1] = masks_label
elif axis == 'z':
sam_data["sam_seg"]["z"][top:bottom + 1, left:right + 1, start_pos] = masks_label
elif axis == '2D':
sam_data["sam_seg"]['2D'] = masks_label
return sam_data
def get_SAM_data(data_dict, mask_generator, main_axis = '2D', threshold=0.0):
image = data_dict["image"]
label = data_dict["label"]
img_shape = data_dict["image"].shape
sam_data = {}
sam_data["image"] = data_dict["image"]
sam_data["label"] = data_dict["label"]
sam_data["sam_seg"] = {}
if main_axis == '2D': # 2D case
print('Processing slice using SAM model on 2D image.')
sam_data = process_slice(sam_data, image, mask_generator, '2D', threshold=0.0)
else: # 3D case
axes = ['x', 'y', 'z'] if main_axis == 'all' else [main_axis]
if 'x' in axes:
sam_data["sam_seg"]['x'] = np.zeros(sam_data["image"].shape)
total_slices = img_shape[0]
print('Processing slice using SAM model along x axis.')
with tqdm(total=total_slices, desc="Processing slices", unit="slice") as pbar:
for i in range(total_slices):
sam_data = process_slice(sam_data, image[:, :, i], mask_generator, 'x', i)
pbar.update(1)
if 'y' in axes:
sam_data["sam_seg"]['y'] = np.zeros(sam_data["image"].shape)
total_slices = img_shape[1]
print('Processing slice using SAM model along y axis.')
with tqdm(total=total_slices, desc="Processing slices", unit="slice") as pbar:
for i in range(total_slices):
sam_data = process_slice(sam_data, image[:, :, i], mask_generator, 'y', i)
pbar.update(1)
if 'z' in axes:
sam_data["sam_seg"]['z'] = np.zeros(sam_data["image"].shape)
total_slices = img_shape[2]
print('Processing slice using SAM model along z axis.')
with tqdm(total=total_slices, desc="Processing slices", unit="slice") as pbar:
for i in range(total_slices):
sam_data = process_slice(sam_data, image[:, :, i], mask_generator, 'z', i)
pbar.update(1)
return sam_data
def check_grid3d(data, rx, ry, rz, bs):
if all(data[rx, ry, rz] == value for value in
[data[rx - bs, ry, rz], data[rx + bs, ry, rz], data[rx, ry - bs, rz], data[rx, ry + bs, rz]]):
return True, data[rx, ry, rz]
else:
return False, -1
def check_grid(arr, rx, ry, ns=2, method='point'):
if method == 'point':
if all(arr[rx, ry] == value for value in
[arr[rx - ns, ry], arr[rx + ns, ry], arr[rx, ry - ns], arr[rx, ry + ns]]):
return True
else:
return False
def check_grid(arr, rx, ry, ns=2, method='point'):
if method == 'point':
if all(arr[rx, ry] == value for value in
[arr[rx - ns, ry], arr[rx + ns, ry], arr[rx, ry - ns], arr[rx, ry + ns]]):
return True
else:
return False
def calculate_mapping(array_1, array_2, num_labels, neighbor_size=0):
if array_1.shape != array_2.shape:
raise ValueError("The input arrays should have the same shape.")
mapping = np.zeros((num_labels + 1, num_labels + 1), dtype=int)
for i in range(array_1.shape[0]):
for j in range(array_1.shape[1]):
val_1 = array_1[i, j].astype(int)
val_2 = array_2[i, j].astype(int)
try:
c1 = check_grid(array_1, i, j, ns=neighbor_size)
c2 = check_grid(array_2, i, j, ns=neighbor_size)
if c1 and c2:
mapping[val_1, val_2] += 1
else:
j = j + neighbor_size
except:
pass
return mapping
def find_largest_indices(arr):
# Flatten the array and get the indices that would sort it in descending order
si = np.argsort(arr.flatten())[::-1]
# Convert the flattened indices to 2D indices
si_2d = np.unravel_index(si, arr.shape)
# Initialize an empty list for the result
result = []
# Iterate through the sorted indices and check if the corresponding value is not 0
for i in range(len(si)):
if arr[si_2d[0][i], si_2d[1][i]] != 0:
# Add the non-zero value's indices as a tuple to the result list
result.append((si_2d[0][i], si_2d[1][i]))
return result
# Handles the propagated information from previous layer
def modify_layer(array, mapping):
# Find the majority mapping for each label in the first array
# m_array = np.full(array.shape, -1)
m_array = np.zeros(array.shape)
ilist = find_largest_indices(mapping)
alist = []
vlist = []
while len(ilist) > 0:
pval, cval = ilist.pop(0)
# if cval not in vlist:
# alist.append((pval, cval))
# vlist.append(cval)
valid = not (any(pval == t[0] for t in alist) or any(cval == t[1] for t in alist))
if valid:
alist.append((pval, cval))
# For each assignment in final assignment list
for a in alist:
m_array[array == a[1]] = a[0]
return m_array
def fine_tune_3d_masks(data_dict, main_axis='z', neighbor_size=0):
data = data_dict['sam_seg'][main_axis]
data_shape = data.shape
adj_data = data.copy().astype(int)
max_labels = np.amax(adj_data).astype(int)
center = data_shape[2] // 2
print('Using mask layer {} as center'.format(center))
# First loop: from center to 0
for rz in tqdm(range(center, 0, -1), desc="Fine-tuning layer"):
mapping = calculate_mapping(adj_data[:, :, rz], data[:, :, rz - 1], max_labels)
adj_data[:, :, rz - 1] = modify_layer(adj_data[:, :, rz - 1], mapping)
# Second loop: from center to data_shape[2]
for rz in tqdm(range(center, data_shape[2] - 1), desc="Fine-tuning layer"):
mapping = calculate_mapping(adj_data[:, :, rz], data[:, :, rz + 1], max_labels)
adj_data[:, :, rz + 1] = modify_layer(adj_data[:, :, rz + 1], mapping)
return adj_data | Python |
3D | AxDante/SAAMI | saami/SAAMI.py | .py | 3,842 | 84 | import cv2
import os
import nibabel as nib
from typing import NamedTuple
from saami.Dataset import VolumeDataset, ImageDataset
from saami.utils import save_SAM_data, convert_to_nifti
from saami.functions import get_SAM_data, fine_tune_3d_masks, get_sam_mask_generator
from saami.visualizer import visualize_volume_SAM
class DatasetInfo(NamedTuple):
dataset_class: type
main_axis: str
class SAAMI:
def __init__(self, data_dir, mask_generator=None, main_axis='', roi=None, dataset_type='Volume'):
self.data_dir = data_dir
self.roi = roi
self.mask_generator = mask_generator if mask_generator else get_sam_mask_generator(sam_model_type="vit_b")
self.main_axis = main_axis if main_axis else '2D' if dataset_type == 'Image' else 'z'
# Dataset type mapping
dataset_class_map = {
"Volume": DatasetInfo(VolumeDataset, main_axis if main_axis else 'z'),
"Image": DatasetInfo(ImageDataset, '2D'),
}
self.dataset_type = dataset_type
# Create an instance of the specified dataset class
self.dataset = dataset_class_map[dataset_type].dataset_class(data_dir, roi=self.roi)
def calculate_3d_mask(self, idx, threshold=0.0):
assert self.dataset_type == 'Volume', "calculate_3d_mask is only available for VolumeDataset"
data = self.dataset[idx]
mask_data = get_SAM_data(data, self.mask_generator, main_axis=self.main_axis, threshold=threshold)
data['sam_seg'][self.main_axis] = mask_data
self.dataset.update_data(idx, mask_data)
return mask_data
def calculate_mask(self, idx, threshold=0.0):
data = self.dataset[idx]
sam_data = get_SAM_data(data, self.mask_generator, main_axis=self.main_axis, threshold=threshold)
self.dataset.update_data(idx, sam_data)
return sam_data['sam_seg'][self.main_axis]
def get_mask(self, idx):
data = self.dataset[idx]
return data['sam_seg'][self.main_axis]
def finetune_3d_mask(self, idx, neighbor_size=3):
assert self.dataset_type == 'Volume', "finetune_3d_mask is only available for VolumeDataset"
sam_data = self.dataset[idx]
mask_data = fine_tune_3d_masks(sam_data, main_axis=self.main_axis, neighbor_size=neighbor_size)
sam_data['sam_seg'][self.main_axis] = mask_data
self.dataset.update_data(idx, sam_data)
return mask_data
def save_data(self, idx, save_path='outputs/saved_data.pkl'):
data = self.dataset[idx]
save_SAM_data(data, save_path)
def save_all_data(self, save_path=None):
for i in range(len(self.dataset)):
save_path = save_path if save_path else 'outputs/saved_data_{}.pkl'.format(i)
self.save_data(i, save_path=save_path)
def save_mask(self, idx, save_path='outputs/saved_mask.nii'):
data = self.dataset[idx]
if save_path.endswith('.nii'):
orig_file_path = self.dataset.image_file_list[idx]
assert os.path.exists(orig_file_path), "Original nifti does not exist"
orig_nifti = nib.load(orig_file_path)
convert_to_nifti(data, save_path=save_path, affine=orig_nifti.affine)
elif save_path.endswith('.pkl'):
save_SAM_data(data, save_path)
elif save_path.endswith('jpg') or save_path.endswith('png'):
cv2.imwrite(save_path, data['sam_seg'][self.main_axis])
def save_masks(self, save_path=None):
for i in range(len(self.dataset)):
save_path = save_path if save_path else 'outputs/saved_mask_{}.nii'.format(i)
self.save_mask(i, save_path=save_path)
def visualize(self, idx, save_folder="outputs/example_images"):
data = self.dataset[idx]
visualize_volume_SAM(data, save_path=save_folder, show_tkinter = True) | Python |
3D | AxDante/SAAMI | saami/utils.py | .py | 2,533 | 74 | "Utility file for saami package"
import os
import numpy as np
import nibabel as nib
import pickle
def download_progress_hook(block_num, block_size, total_size):
downloaded = block_num * block_size
progress = min(100, (downloaded / total_size) * 100)
print(f"\rDownload progress: {progress:.2f}%", end='')
def most_prevalent_labels(data):
unique, counts = np.unique(data, return_counts=True)
label_count = sorted(zip(unique, counts), key=lambda x: x[1], reverse=True)
labels = [int(label) for label, count in label_count]
occurrences = np.asarray([count for label, count in label_count])
return labels, occurrences
def random_index_with_label(data, label):
if len(data.shape) == 3:
indices = np.argwhere(data == label)
if indices.size == 0:
return (-1, -1, -1)
random_index = indices[np.random.choice(indices.shape[0])]
return tuple(random_index)
elif len(data.shape) == 2:
indices = np.argwhere(data == label)
if indices.size == 0:
return (-1, -1)
random_index = indices[np.random.choice(indices.shape[0])]
return tuple(random_index)
def convert_to_nifti(data_dict, save_path='outputs/test.nii', main_axis='z', affine=np.eye(4)):
data = data_dict['sam_seg'][main_axis]
# Convert data to nifti image
data_array = np.array(data, dtype=np.int16) # Convert the data type to int16
nifti_img = nib.Nifti1Image(data_array, affine)
# Check if the saving folder exists
folder_path = os.path.dirname(save_path)
if not os.path.exists(folder_path):
os.makedirs(folder_path)
# Save data
nib.save(nifti_img, save_path)
print('Nifti data saved to {}'.format(save_path))
def save_SAM_data(sam_data, save_path):
# Ensure that the directory for the save path exists
os.makedirs(os.path.dirname(save_path), exist_ok=True)
# Open the file in binary write mode and use pickle.dump to save the dictionary
with open(save_path, 'wb') as f:
pickle.dump(sam_data, f)
print('SAM data saved to {}'.format(save_path))
def load_SAM_data(load_path):
# Check if the file exists
if not os.path.exists(load_path):
raise FileNotFoundError('The specified file {} does not exist.'.format(load_path))
# Open the file in binary read mode and use pickle.load to load the dictionary
with open(load_path, 'rb') as f:
sam_data = pickle.load(f)
print('SAM data loaded from to {}'.format(load_path))
return sam_data | Python |
3D | AxDante/SAAMI | saami/visualizer.py | .py | 4,523 | 138 | import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
import os
import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.colors import Normalize
def visualize_volume_SAM(data_dict, show_widget=False, show_tkinter=False, save_path="", axis='z'):
print(data_dict)
images = data_dict["image"]
labels = data_dict["label"]
masks = data_dict['sam_seg'][axis]
if axis == 'x':
max_slice = images.shape[0] - 1
elif axis == 'y':
max_slice = images.shape[1] - 1
elif axis == 'z':
max_slice = images.shape[2] - 1
max_mask_value = np.amax(masks)
# Function to update the plot based on the slider value
def get_plot(fig, slice_idx):
# Clear previous image for the GUI
fig.clear()
if axis == 'x':
image = images[slice_idx, :, :]
gt_label = labels[slice_idx, :, :]
sam_label = masks[slice_idx, :, :]
elif axis == 'y':
image = images[:, slice_idx, :]
gt_label = labels[:, slice_idx, :]
sam_label = masks[:, slice_idx, :]
elif axis == 'z':
image = images[:, :, slice_idx]
gt_label = labels[:, :, slice_idx]
sam_label = masks[:, :, slice_idx]
aspect='auto'
# Set fixed color map range for the labels
vmin, vmax = 0, max_mask_value
norm = Normalize(vmin=vmin, vmax=vmax)
axes = fig.subplots(2, 3)
(ax1, ax2, ax3), (ax4, ax5, ax6) = axes
ax1.imshow(image, cmap="gray", aspect=aspect)
ax1.set_title("Original Image")
ax1.axis("off")
label_img = ax2.imshow(gt_label, cmap="jet", aspect=aspect, norm=norm)
ax2.set_title("Ground Truth Label")
ax2.axis("off")
ax3.imshow(image, cmap="gray", aspect="equal")
ax3.imshow(gt_label, cmap="jet", alpha=0.5, aspect=aspect, norm=norm)
ax3.set_title("Ground Truth Overlay")
ax3.axis("off")
ax4.imshow(image, cmap="gray", aspect=aspect)
ax4.set_title("Original Image")
ax4.axis("off")
label_img = ax5.imshow(sam_label, cmap="jet", aspect=aspect, norm=norm)
ax5.set_title("SAM-Mask Label")
ax5.axis("off")
ax6.imshow(image, cmap="gray", aspect=aspect)
ax6.imshow(sam_label, cmap="jet", alpha=0.5, aspect=aspect, norm=norm)
ax6.set_title("SAM-Mask Overlay")
ax6.axis("off")
fig.subplots_adjust(wspace=0.2, hspace=0.2)
if show_tkinter:
canvas.draw()
# Show the ipy widget (which works in notebook environemnt
if show_widget:
fig = plt.figure(figsize=(15, 15))
slider = widgets.IntSlider(min=0, max=max_slice, step=1, value=0)
widgets.interact(lambda slice_idx: get_plot(fig, slice_idx), slice_idx=slider)
# Show the tinker GUI
if show_tkinter:
window = tk.Tk()
window.title("Volume Visualization")
fig = plt.figure(figsize=(15, 15))
canvas = FigureCanvasTkAgg(fig, master=window)
canvas.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH, expand=1)
# Create a frame for the slider and button
control_frame = tk.Frame(window)
control_frame.pack(side=tk.RIGHT, fill=tk.Y)
# Slider
slider = tk.Scale(control_frame, from_=0, to=max_slice, orient=tk.VERTICAL,
command=lambda s: get_plot(fig, int(s)))
slider.pack(side=tk.TOP, pady=10)
# Save Image button
def save_image():
if not os.path.exists(save_path):
os.makedirs(save_path)
filename = os.path.join(save_path, f"Slice_{slider.get()}_visualization.jpg")
print('saving file to {}'.format(filename))
fig.savefig(filename)
save_button = tk.Button(control_frame, text="Save Image", command=save_image)
save_button.pack(side=tk.TOP, pady=10)
# Callback function to handle window close event
def on_close():
window.destroy()
plt.close(fig)
# Bind the callback function to the window close event
window.protocol("WM_DELETE_WINDOW", on_close)
get_plot(fig, 0)
window.mainloop()
# Otherwise just run through the volume and save the images
if not show_tkinter and not show_widget:
fig = plt.figure(figsize=(15, 15))
for i in range(max_slice):
get_plot(fig, i) | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/utils.py | .py | 4,810 | 178 |
import warnings
import torch
# import imageio
import math
import numpy as np
# import skvideo.io
import sys
import pdb as pdb_original
import logging
# import imageio.core.util
logging.getLogger("imageio_ffmpeg").setLevel(logging.ERROR)
class ForkedPdb(pdb_original.Pdb):
"""A Pdb subclass that may be used
from a forked multiprocessing child
"""
def interaction(self, *args, **kwargs):
_stdin = sys.stdin
try:
sys.stdin = open('/dev/stdin')
pdb_original.Pdb.interaction(self, *args, **kwargs)
finally:
sys.stdin = _stdin
# Shifts src_tf dim to dest dim
# i.e. shift_dim(x, 1, -1) would be (b, c, t, h, w) -> (b, t, h, w, c)
def shift_dim(x, src_dim=-1, dest_dim=-1, make_contiguous=True):
n_dims = len(x.shape)
if src_dim < 0:
src_dim = n_dims + src_dim
if dest_dim < 0:
dest_dim = n_dims + dest_dim
assert 0 <= src_dim < n_dims and 0 <= dest_dim < n_dims
dims = list(range(n_dims))
del dims[src_dim]
permutation = []
ctr = 0
for i in range(n_dims):
if i == dest_dim:
permutation.append(src_dim)
else:
permutation.append(dims[ctr])
ctr += 1
x = x.permute(permutation)
if make_contiguous:
x = x.contiguous()
return x
# reshapes tensor start from dim i (inclusive)
# to dim j (exclusive) to the desired shape
# e.g. if x.shape = (b, thw, c) then
# view_range(x, 1, 2, (t, h, w)) returns
# x of shape (b, t, h, w, c)
def view_range(x, i, j, shape):
shape = tuple(shape)
n_dims = len(x.shape)
if i < 0:
i = n_dims + i
if j is None:
j = n_dims
elif j < 0:
j = n_dims + j
assert 0 <= i < j <= n_dims
x_shape = x.shape
target_shape = x_shape[:i] + shape + x_shape[j:]
return x.view(target_shape)
def accuracy(output, target, topk=(1,)):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.reshape(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
def tensor_slice(x, begin, size):
assert all([b >= 0 for b in begin])
size = [l - b if s == -1 else s
for s, b, l in zip(size, begin, x.shape)]
assert all([s >= 0 for s in size])
slices = [slice(b, b + s) for b, s in zip(begin, size)]
return x[slices]
def adopt_weight(global_step, threshold=0, value=0.):
weight = 1
if global_step < threshold:
weight = value
return weight
def save_video_grid(video, fname, nrow=None, fps=6):
b, c, t, h, w = video.shape
video = video.permute(0, 2, 3, 4, 1)
video = (video.cpu().numpy() * 255).astype('uint8')
if nrow is None:
nrow = math.ceil(math.sqrt(b))
ncol = math.ceil(b / nrow)
padding = 1
video_grid = np.zeros((t, (padding + h) * nrow + padding,
(padding + w) * ncol + padding, c), dtype='uint8')
for i in range(b):
r = i // ncol
c = i % ncol
start_r = (padding + h) * r
start_c = (padding + w) * c
video_grid[:, start_r:start_r + h, start_c:start_c + w] = video[i]
video = []
for i in range(t):
video.append(video_grid[i])
imageio.mimsave(fname, video, fps=fps)
## skvideo.io.vwrite(fname, video_grid, inputdict={'-r': '5'})
#print('saved videos to', fname)
def comp_getattr(args, attr_name, default=None):
if hasattr(args, attr_name):
return getattr(args, attr_name)
else:
return default
def visualize_tensors(t, name=None, nest=0):
if name is not None:
print(name, "current nest: ", nest)
print("type: ", type(t))
if 'dict' in str(type(t)):
print(t.keys())
for k in t.keys():
if t[k] is None:
print(k, "None")
else:
if 'Tensor' in str(type(t[k])):
print(k, t[k].shape)
elif 'dict' in str(type(t[k])):
print(k, 'dict')
visualize_tensors(t[k], name, nest + 1)
elif 'list' in str(type(t[k])):
print(k, len(t[k]))
visualize_tensors(t[k], name, nest + 1)
elif 'list' in str(type(t)):
print("list length: ", len(t))
for t2 in t:
visualize_tensors(t2, name, nest + 1)
elif 'Tensor' in str(type(t)):
print(t.shape)
else:
print(t)
return ""
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/model/PatchVolume.py | .py | 31,335 | 706 |
import math
import numpy as np
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F#
from einops import rearrange
from torch.optim.optimizer import Optimizer
from AutoEncoder.utils import shift_dim, adopt_weight
from AutoEncoder.model.lpips import LPIPS
from AutoEncoder.model.codebook import Codebook
from AutoEncoder.model.MedicalNetPerceptual import MedicalNetPerceptual
from pytorch_lightning.callbacks import BaseFinetuning
from einops_exts import rearrange_many
from os.path import join
import os
import numpy as np
def silu(x):
return x*torch.sigmoid(x)
class SiLU(nn.Module):
def __init__(self):
super(SiLU, self).__init__()
def forward(self, x):
return silu(x)
def hinge_d_loss(logits_real, logits_fake):
loss_real = torch.mean(F.relu(1. - logits_real))
loss_fake = torch.mean(F.relu(1. + logits_fake))
d_loss = 0.5 * (loss_real + loss_fake)
return d_loss
def non_saturating_d_loss(logits_real, logits_fake):
loss_real = torch.mean(F.relu(1. - logits_real))
loss_fake = torch.mean(F.relu(1. + logits_fake))
d_loss = 0.5 * (loss_real + loss_fake)
return d_loss
def vanilla_d_loss(logits_real, logits_fake):
d_loss = 0.5 * (
torch.mean(torch.nn.functional.softplus(-logits_real)) +
torch.mean(torch.nn.functional.softplus(logits_fake)))
return d_loss
class Perceptual_Loss(nn.Module):
def __init__(self, is_3d: bool = True, sample_ratio: float = 0.2):
super().__init__()
self.is_3d = is_3d
self.sample_ratio = sample_ratio
if is_3d:
self.perceptual_model = MedicalNetPerceptual(net_path=os.path.dirname(os.path.abspath(__file__))+'/../../warvito_MedicalNet-models_main').eval()
else:
self.perceptual_model = LPIPS().eval()
def forward(self, input:torch.Tensor, target: torch.Tensor):
if self.is_3d:
p_loss = torch.mean(self.perceptual_model(input , target))
else:
B,C,D,H,W = input.shape
input_slices_xy = input.permute((0,2,1,3,4)).contiguous()
input_slices_xy = input_slices_xy.view(-1, C, H, W)
indices_xy = torch.randperm(input_slices_xy.shape[0])[: int(input_slices_xy.shape[0] * self.sample_ratio)].to(input.device)
input_slices_xy = torch.index_select(input_slices_xy, dim=0, index=indices_xy)
target_slices_xy = target.permute((0,2,1,3,4)).contiguous()
target_slices_xy = target_slices_xy.view(-1, C, H, W)
target_slices_xy = torch.index_select(target_slices_xy, dim=0, index=indices_xy)
input_slices_xz = input.permute((0,3,1,2,4)).contiguous()
input_slices_xz = input_slices_xz.view(-1, C, D, W)
indices_xz = torch.randperm(input_slices_xz.shape[0])[: int(input_slices_xz.shape[0] * self.sample_ratio)].to(input.device)
input_slices_xz = torch.index_select(input_slices_xz, dim=0, index=indices_xz)
target_slices_xz = target.permute((0,3,1,2,4)).contiguous()
target_slices_xz = target_slices_xz.view(-1, C, D, W)
target_slices_xz = torch.index_select(target_slices_xz, dim=0, index=indices_xz)
input_slices_yz = input.permute((0,4,1,2,3)).contiguous()
input_slices_yz = input_slices_yz.view(-1, C, D, H)
indices_yz = torch.randperm(input_slices_yz.shape[0])[: int(input_slices_yz.shape[0] * self.sample_ratio)].to(input.device)
input_slices_yz = torch.index_select(input_slices_yz, dim=0, index=indices_yz)
target_slices_yz = target.permute((0,4,1,2,3)).contiguous()
target_slices_yz = target_slices_yz.view(-1, C, D, H)
target_slices_yz = torch.index_select(target_slices_yz, dim=0, index=indices_yz)
p_loss = torch.mean(self.perceptual_model(input_slices_xy,target_slices_xy)) + torch.mean(self.perceptual_model(input_slices_xz,target_slices_xz)) + torch.mean(self.perceptual_model(input_slices_yz,target_slices_yz))
return p_loss
class patchvolumeAE(pl.LightningModule):
def __init__(self, cfg):
super().__init__()
self.automatic_optimization = False
self.cfg = cfg
self.embedding_dim = cfg.model.embedding_dim
self.n_codes = cfg.model.n_codes
self.patch_size = cfg.dataset.patch_size
self.discriminator_iter_start = cfg.model.discriminator_iter_start
self.encoder = Encoder(cfg.model.n_hiddens, cfg.model.downsample,
cfg.dataset.image_channels, cfg.model.norm_type,
cfg.model.num_groups,cfg.model.embedding_dim,
)
self.decoder = Decoder(
cfg.model.n_hiddens, cfg.model.downsample, cfg.dataset.image_channels, cfg.model.norm_type, cfg.model.num_groups,cfg.model.embedding_dim)
self.enc_out_ch = self.encoder.out_channels
self.pre_vq_conv = nn.Conv3d(cfg.model.embedding_dim, cfg.model.embedding_dim, 1, 1)
self.post_vq_conv = nn.Conv3d(cfg.model.embedding_dim, cfg.model.embedding_dim, 1, 1)
self.codebook = Codebook(cfg.model.n_codes, cfg.model.embedding_dim,
no_random_restart=cfg.model.no_random_restart, restart_thres=cfg.model.restart_thres)
self.gan_feat_weight = cfg.model.gan_feat_weight
try:
self.stage =cfg.model.stage
except:
self.stage = 1
print('stage:',self.stage)
self.volume_discriminator = NLayerDiscriminator3D(
cfg.dataset.image_channels, cfg.model.disc_channels, cfg.model.disc_layers, norm_layer=nn.BatchNorm3d)
if cfg.model.disc_loss_type == 'vanilla':
self.disc_loss = vanilla_d_loss
elif cfg.model.disc_loss_type == 'hinge':
self.disc_loss = hinge_d_loss
self.perceptual_loss = Perceptual_Loss(is_3d=cfg.model.perceptual_3d).eval()
self.volume_gan_weight = cfg.model.volume_gan_weight
self.perceptual_weight = cfg.model.perceptual_weight
self.l1_weight = cfg.model.l1_weight
print('GAN starts at:', self.cfg.model.discriminator_iter_start )
self.save_hyperparameters()
def encode(self, x, include_embeddings=False, quantize=True):
h = self.pre_vq_conv(self.encoder(x))
if quantize:
vq_output = self.codebook(h)
if include_embeddings:
return vq_output['embeddings'], vq_output['encodings']
else:
return vq_output['encodings']
return h
def patch_encode(self, x,quantize = False,patch_size = 64):
b,s1,s2,s3 = x.shape[0],x.shape[-3],x.shape[-2],x.shape[-1]
x = x.unfold(2,patch_size,patch_size).unfold(3,patch_size,patch_size).unfold(4,patch_size,patch_size)
x = rearrange(x , 'b c p1 p2 p3 d h w -> (b p1 p2 p3) c d h w')
h = self.pre_vq_conv(self.encoder(x))
if quantize == True:
vq_output = self.codebook(h)
embeddings = vq_output['embeddings']
else:
embeddings = h
embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=b)
embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=s1//patch_size, p2=s2//patch_size, p3=s3//patch_size)
return embeddings
def patch_encode_sliding(self, x, quantize = False, patch_size = 64, sliding_window = 64):
b,s1,s2,s3 = x.shape[0],x.shape[-3],x.shape[-2],x.shape[-1]
x = x.unfold(2,patch_size,patch_size).unfold(3,patch_size,patch_size).unfold(4,patch_size,patch_size)
x = rearrange(x , 'b c p1 p2 p3 d h w -> (b p1 p2 p3) c d h w')
embeddings = []
sliding_batch_size = sliding_window*b
for i in range(0, len(x), sliding_batch_size):
batch = x[i:i+sliding_batch_size]
if len(batch) < sliding_batch_size:
batch = x[i:]
h = self.pre_vq_conv(self.encoder(batch))
if quantize == True:
vq_output = self.codebook(h)
embeddings.append(vq_output['embeddings'])
else:
embeddings.append(h)
embeddings = torch.concat(embeddings,dim=0)
embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=b)
embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=s1//patch_size, p2=s2//patch_size, p3=s3//patch_size)
return embeddings
def decode(self, latent, quantize=True):
if quantize:
vq_output = self.codebook(latent)
latent = vq_output['encodings']
h = F.embedding(latent, self.codebook.embeddings)
h = self.post_vq_conv(shift_dim(h, -1, 1))
return self.decoder(h)
def decode_sliding(self, latent, quantize=False, patch_size = 256,sliding_window = 1,compress_ratio = 8):
latent_patch_size = patch_size//compress_ratio
b,c,s1,s2,s3 = latent.shape[0],latent.shape[1],latent.shape[2],latent.shape[3],latent.shape[4]
latent = latent.unfold(2,latent_patch_size,latent_patch_size).unfold(3,latent_patch_size,latent_patch_size).unfold(4,latent_patch_size,latent_patch_size)
latent = rearrange(latent , 'b c p1 p2 p3 d h w -> (b p1 p2 p3) c d h w')
sliding_batch_size = sliding_window*b
output = []
for i in range(0, len(latent), sliding_batch_size):
batch = latent[i:i+sliding_batch_size]
if len(batch) < sliding_batch_size:
batch = latent[i:]
if quantize:
vq_output = self.codebook(batch)
batch = vq_output['encodings']
h_batch = F.embedding(batch, self.codebook.embeddings)
h_batch = self.post_vq_conv(shift_dim(h_batch, -1, 1))
output.append(self.decoder(h_batch).cpu())
torch.cuda.empty_cache()
output = torch.concat(output, dim=0)
output = rearrange(output, '(b p) c d h w -> b p c d h w', b=b)
output = rearrange(output, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=s1//latent_patch_size, p2=s2//latent_patch_size, p3=s3//latent_patch_size)
return output
def forward(self, x, optimizer_idx=None, log_volume=False,val=False):
B, C, D, H, W = x.shape ##b c z x y
if self.stage == 1 and val==False:
x_input = x
else:
x_input = x.unfold(2,self.patch_size,self.patch_size).unfold(3,self.patch_size,self.patch_size).unfold(4,self.patch_size,self.patch_size)
x_input = rearrange(x_input , 'b c p1 p2 p3 d h w -> (b p1 p2 p3) c d h w')
z = self.pre_vq_conv(self.encoder(x_input))
vq_output = self.codebook(z)
embeddings = vq_output['embeddings']
if self.stage == 1 and val == False:
x_recon = self.decoder(self.post_vq_conv(embeddings))
else:
embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=B)
embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=D//self.patch_size, p2=H//self.patch_size, p3=W//self.patch_size)
x_recon = self.decoder(self.post_vq_conv(embeddings))
# elif self.stage==1 and val == True:
# embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=B)
# embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
# p1=D//self.patch_size, p2=H//self.patch_size, p3=W//self.patch_size)
# x_recon = self.decoder(self.post_vq_conv(embeddings))
# elif self.stage == 2 and val == True:
# embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=B)
# embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
# p1=D//self.patch_size, p2=H//self.patch_size, p3=W//self.patch_size)
# x_recon = self.decoder(self.post_vq_conv(embeddings))
# else:
# # if np.random.uniform(0,1)>0.7:
# # x_recon = self.decoder(self.post_vq_conv(embeddings))
# # x_recon = rearrange(x_recon, '(b p) c d h w -> b p c d h w', b=B)
# # x_recon = rearrange(x_recon, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
# # p1=D//self.patch_size, p2=H//self.patch_size, p3=W//self.patch_size)
# # else:
# embeddings = rearrange(embeddings, '(b p) c d h w -> b p c d h w', b=B)
# embeddings = rearrange(embeddings, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
# p1=D//self.patch_size, p2=H//self.patch_size, p3=W//self.patch_size)
# x_recon = self.decoder(self.post_vq_conv(embeddings))
recon_loss = F.l1_loss(x_recon, x) * self.l1_weight
if log_volume:
return x, x_recon
if optimizer_idx == 0:
perceptual_loss = self.perceptual_weight * self.perceptual_loss(x, x_recon)
if self.global_step > self.cfg.model.discriminator_iter_start and self.volume_gan_weight > 0:
logits_volume_fake , pred_volume_fake = self.volume_discriminator(x_recon)
g_volume_loss = -torch.mean(logits_volume_fake)
g_loss = self.volume_gan_weight*g_volume_loss
volume_gan_feat_loss = 0
logits_volume_real, pred_volume_real = self.volume_discriminator(x)
for i in range(len(pred_volume_fake)-1):
volume_gan_feat_loss += F.l1_loss(pred_volume_fake[i], pred_volume_real[i].detach())
gan_feat_loss = self.gan_feat_weight * volume_gan_feat_loss
aeloss = g_loss
else:
gan_feat_loss = torch.tensor(0.0, requires_grad=True)
aeloss = torch.tensor(0.0, requires_grad=True)
self.log("train/gan_feat_loss", gan_feat_loss,
logger=True, on_step=True, on_epoch=True)
self.log("train/perceptual_loss", perceptual_loss,
prog_bar=True, logger=True, on_step=True, on_epoch=True)
self.log("train/recon_loss", recon_loss, prog_bar=True,
logger=True, on_step=True, on_epoch=True)
self.log("train/aeloss", aeloss, prog_bar=True,
logger=True, on_step=True, on_epoch=True)
self.log("train/commitment_loss", vq_output['commitment_loss'],
prog_bar=True, logger=True, on_step=True, on_epoch=True)
self.log('train/perplexity', vq_output['perplexity'],
prog_bar=True, logger=True, on_step=True, on_epoch=True)
return recon_loss, x_recon, vq_output, aeloss, perceptual_loss, gan_feat_loss
if optimizer_idx == 1:
# Train discriminator
logits_volume_real , _ = self.volume_discriminator(x.detach())
logits_volume_fake , _= self.volume_discriminator(x_recon.detach())
d_volume_loss = self.disc_loss(logits_volume_real, logits_volume_fake)
discloss = self.volume_gan_weight*d_volume_loss
self.log("train/logits_volume_real", logits_volume_real.mean().detach(),
logger=True, on_step=True, on_epoch=True)
self.log("train/logits_volume_fake", logits_volume_fake.mean().detach(),
logger=True, on_step=True, on_epoch=True)
self.log("train/d_volume_loss", d_volume_loss,
logger=True, on_step=True, on_epoch=True)
self.log("train/discloss", discloss, prog_bar=True,
logger=True, on_step=True, on_epoch=True)
return discloss
perceptual_loss = self.perceptual_weight * self.perceptual_loss(x, x_recon)
return recon_loss, x_recon, vq_output, perceptual_loss
def training_step(self, batch, batch_idx):
x = batch['data']
opts = self.optimizers()
optimizer_idx = batch_idx % len(opts)
if self.global_step < self.discriminator_iter_start:
optimizer_idx = 0
opt = opts[optimizer_idx]
opt.zero_grad()
if self.stage == 1 :
if optimizer_idx == 0:
recon_loss, _, vq_output, aeloss, perceptual_loss, gan_feat_loss = self.forward(
x, optimizer_idx)
commitment_loss = vq_output['commitment_loss']
loss = recon_loss + commitment_loss + aeloss + perceptual_loss + gan_feat_loss
if optimizer_idx == 1:
discloss = self.forward(x, optimizer_idx)
loss = discloss
self.manual_backward(loss)
opt.step()
return loss
elif self.stage == 2:
if optimizer_idx == 0:
recon_loss, _, vq_output, aeloss, perceptual_loss, gan_feat_loss = self.forward(
x, optimizer_idx)
loss = recon_loss + aeloss + perceptual_loss + gan_feat_loss
if optimizer_idx == 1:
discloss = self.forward(x, optimizer_idx)
loss = discloss
self.manual_backward(loss)
opt.step()
return loss
else:
raise NotImplementedError
def validation_step(self, batch, batch_idx):
x = batch['data'] # TODO: batch['stft']
recon_loss, _, vq_output, perceptual_loss = self.forward(x,val=True)
# print(recon_loss, perceptual_loss)
self.log('val/recon_loss', recon_loss, prog_bar=True,sync_dist=True)
self.log('val/perceptual_loss', perceptual_loss, prog_bar=True,sync_dist=True)
self.log('val/perplexity', vq_output['perplexity'], prog_bar=True,sync_dist=True)
self.log('val/commitment_loss',
vq_output['commitment_loss'], prog_bar=True,sync_dist=True)
def configure_optimizers(self):
lr = self.cfg.model.lr
if self.stage == 1:
opt_ae = torch.optim.Adam(list(self.encoder.parameters()) +
list(self.decoder.parameters()) +
list(self.pre_vq_conv.parameters()) +
list(self.post_vq_conv.parameters()) +
list(self.codebook.parameters()),
lr=lr, betas=(0.5, 0.9))
opt_disc = torch.optim.Adam(list(self.volume_discriminator.parameters()),
lr=lr, betas=(0.5, 0.9))
return [opt_ae, opt_disc]
elif self.stage ==2:
opt_ae = torch.optim.Adam(list(self.decoder.parameters()) +
list(self.post_vq_conv.parameters()),
lr=lr, betas=(0.5, 0.9))
opt_disc = torch.optim.Adam(list(self.volume_discriminator.parameters()),
lr=lr, betas=(0.5, 0.9))
return [opt_ae,opt_disc]
else:
raise NotImplementedError
def log_volumes(self, batch, **kwargs):
log = dict()
x = batch['data']
x, x_rec = self(x, log_volume=True, val=(kwargs['split']=='val'))
log["inputs"] = x
log["reconstructions"] = x_rec
return log
def Normalize(in_channels, norm_type='group', num_groups=32):
assert norm_type in ['group', 'batch']
if norm_type == 'group':
# TODO Changed num_groups from 32 to 8
return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)
elif norm_type == 'batch':
return torch.nn.SyncBatchNorm(in_channels)
class AttentionBlock(nn.Module):
def __init__(self, dim, heads=4, dim_head=32,norm_type='group',num_groups=32):
super().__init__()
self.norm = Normalize(dim, norm_type=norm_type, num_groups=num_groups)
self.scale = dim_head ** -0.5
self.heads = heads
hidden_dim = dim_head * heads # 256
self.to_qkv = nn.Linear(dim, hidden_dim * 3, bias=False)
self.to_out = nn.Conv3d(hidden_dim, dim, 1)
def forward(self, x):
b, c, z, h, w = x.shape
x_norm = self.norm(x)
x_norm = rearrange(x_norm,'b c z x y -> b (z x y) c').contiguous()
qkv = self.to_qkv(x_norm).chunk(3, dim=2)
q, k, v = rearrange_many(
qkv, 'b d (h c) -> b h d c ', h=self.heads)
out = F.scaled_dot_product_attention(q, k, v, scale=self.scale, dropout_p=0.0, is_causal=False)
out = rearrange(out, 'b h (z x y) c -> b (h c) z x y ',z = z, x = h ,y = w ).contiguous()
out = self.to_out(out)
return out+x
class Encoder(nn.Module):
def __init__(self, n_hiddens, downsample, image_channel=1, norm_type='group', num_groups=32 , embedding_dim = 8):
super().__init__()
n_times_downsample = np.array([int(math.log2(d)) for d in downsample])
self.conv_blocks = nn.ModuleList()
max_ds = n_times_downsample.max()
self.embedding_dim = embedding_dim
self.conv_first = nn.Conv3d(
image_channel , n_hiddens, kernel_size=3, stride=1, padding=1
)
channels = [n_hiddens * 2 ** i for i in range(max_ds)]
channels = channels +[channels[-1]]
in_channels = channels[0]
for i in range(max_ds + 1):
block = nn.Module()
if i != 0 :
in_channels = channels[i-1]
out_channels = channels[i]
stride = tuple([2 if d > 0 else 1 for d in n_times_downsample])
if in_channels!= out_channels:
block.res1 = ResBlockXY(in_channels , out_channels,norm_type=norm_type, num_groups=num_groups )
else:
block.res1 = ResBlockX(in_channels , out_channels,norm_type=norm_type, num_groups=num_groups)
block.res2 = ResBlockX(out_channels , out_channels, norm_type=norm_type, num_groups=num_groups)
if i != max_ds:
block.down = nn.Conv3d(out_channels,out_channels,kernel_size=(4, 4, 4),stride=stride,padding=1)
else:
block.down = nn.Identity()
self.conv_blocks.append(block)
n_times_downsample -= 1
self.mid_block = nn.Module()
self.mid_block.res1 = ResBlockX(out_channels , out_channels,norm_type=norm_type, num_groups=num_groups)
self.mid_block.attn = AttentionBlock(out_channels, heads=4,norm_type=norm_type,num_groups=num_groups)
self.mid_block.res2 = ResBlockX(out_channels , out_channels,norm_type=norm_type, num_groups=num_groups)
self.final_block = nn.Sequential(
Normalize(out_channels, norm_type, num_groups=num_groups),
SiLU(),
nn.Conv3d(out_channels, self.embedding_dim, 3 , 1 ,1)
)
self.out_channels = out_channels
def forward(self, x):
h = self.conv_first(x)
for idx , block in enumerate(self.conv_blocks):
h = block.res1(h)
h = block.res2(h)
h = block.down(h)
h = self.mid_block.res1(h)
h = self.mid_block.attn(h)
h = self.mid_block.res2(h)
h = self.final_block(h)
return h
class Decoder(nn.Module):
def __init__(self, n_hiddens, upsample, image_channel, norm_type='group', num_groups=32 , embedding_dim=8 ):
super().__init__()
n_times_upsample = np.array([int(math.log2(d)) for d in upsample])
max_us = n_times_upsample.max()
channels = [n_hiddens * 2 ** i for i in range(max_us)]
channels = channels+[channels[-1]]
channels.reverse()
self.embedding_dim = embedding_dim
self.conv_first = nn.Conv3d(self.embedding_dim, channels[0],3,1,1)
self.mid_block = nn.Module()
self.mid_block.res1 = ResBlockX(channels[0] , channels[0],norm_type=norm_type, num_groups=num_groups)
self.mid_block.attn = AttentionBlock(channels[0], heads=4,norm_type=norm_type,num_groups=num_groups)
self.mid_block.res2 = ResBlockX(channels[0] , channels[0],norm_type=norm_type, num_groups=num_groups)
self.conv_blocks = nn.ModuleList()
in_channels = channels[0]
for i in range(max_us + 1):
block = nn.Module()
if i != 0:
in_channels = channels[i-1]
out_channels = channels[i]
us = tuple([2 if d > 0 else 1 for d in n_times_upsample])
if in_channels != out_channels:
block.res1 = ResBlockXY(in_channels, out_channels, norm_type=norm_type, num_groups=num_groups)
else:
block.res1 = ResBlockX(in_channels, out_channels, norm_type=norm_type, num_groups=num_groups)
block.res2 = ResBlockX(out_channels, out_channels, norm_type=norm_type, num_groups=num_groups)
if i != max_us :
block.up = Upsample(out_channels)
else:
block.up = nn.Identity(out_channels)
self.conv_blocks.append(block)
n_times_upsample -= 1
self.final_block = nn.Sequential(
Normalize(out_channels, norm_type, num_groups=num_groups),
SiLU(),
nn.Conv3d(out_channels, image_channel, 3 , 1 ,1)
)
def forward(self, x):
h = self.conv_first(x)
h = self.mid_block.res1(h)
h = self.mid_block.attn(h)
h = self.mid_block.res2(h)
for i, block in enumerate(self.conv_blocks):
h = block.res1(h)
h = block.res2(h)
h = block.up(h)
h = self.final_block(h)
return h
class ResBlockX(nn.Module):
def __init__(self, in_channels, out_channels=None, dropout=0.0, norm_type='group', num_groups=32):
super().__init__()
self.in_channels = in_channels
out_channels = in_channels if out_channels is None else out_channels
self.out_channels = out_channels
self.norm1 = Normalize(in_channels, norm_type, num_groups=num_groups)
self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1, stride=1)
self.norm2 = Normalize(in_channels, norm_type, num_groups=num_groups)
self.dropout = torch.nn.Dropout(dropout)
self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3 , padding=1, stride=1)
def forward(self, x):
h = x
h = self.norm1(h)
h = silu(h)
h = self.conv1(h)
h = self.norm2(h)
h = silu(h)
h = self.conv2(h)
return x+h
class Upsample(nn.Module):
def __init__(self, in_channels):
super().__init__()
self.conv_trans = nn.ConvTranspose3d(in_channels, in_channels, 4,
stride=2, padding=1)
def forward(self, x):
x = self.conv_trans(x)
return x
class ResBlockXY(nn.Module):
def __init__(self, in_channels, out_channels=None, dropout=0.0, norm_type='group', num_groups=32):
super().__init__()
self.in_channels = in_channels
out_channels = in_channels if out_channels is None else out_channels
self.out_channels = out_channels
self.resConv = nn.Conv3d(in_channels, out_channels, (1, 1, 1))
self.norm1 = Normalize(in_channels, norm_type, num_groups=num_groups)
self.conv1 = nn.Conv3d(in_channels , out_channels, kernel_size=3, padding=1, stride=1)
self.norm2 = Normalize(out_channels, norm_type, num_groups=num_groups)
self.dropout = torch.nn.Dropout(dropout)
self.conv2 = nn.Conv3d(out_channels , out_channels, kernel_size=3, padding=1, stride=1)
def forward(self, x):
residual = self.resConv(x)
h = self.norm1(x)
h = silu(h)
h = self.conv1(h)
h = self.norm2(h)
h = silu(h)
h = self.conv2(h)
return h+residual
class ResBlockDown(nn.Module):
def __init__(self, in_channels, out_channels, norm_layer = nn.SyncBatchNorm):
super().__init__()
self.leakyRELU = nn.LeakyReLU()
self.pool = nn.AvgPool3d((2, 2, 2))
self.conv1 = nn.Conv3d(in_channels, out_channels, 3, padding=1)
self.conv2 = nn.Conv3d(out_channels, out_channels, 3, padding=1)
self.resConv = nn.Conv3d(in_channels, out_channels, 1)
def forward(self, x):
residual = self.resConv(self.pool(x))
x = self.conv1(x)
x = self.leakyRELU(x)
x = self.pool(x)
x = self.conv2(x)
x = self.leakyRELU(x)
return (x+residual)/math.sqrt(2)
class NLayerDiscriminator3D(nn.Module):
def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.SyncBatchNorm, use_sigmoid=False, getIntermFeat=True):
super(NLayerDiscriminator3D, self).__init__()
self.getIntermFeat = getIntermFeat
self.n_layers = n_layers
kw = 4
padw = int(np.ceil((kw-1.0)/2))
sequence = [[nn.Conv3d(input_nc, ndf, kernel_size=kw,
stride=2, padding=padw), nn.LeakyReLU(0.2, True)]]
nf = ndf
for n in range(1, n_layers):
nf_prev = nf
nf = min(nf * 2, 512)
sequence += [[
nn.Conv3d(nf_prev, nf, kernel_size=kw, stride=2, padding=padw),
norm_layer(nf), nn.LeakyReLU(0.2, True)
]]
nf_prev = nf
nf = min(nf * 2, 512)
sequence += [[
nn.Conv3d(nf_prev, nf, kernel_size=kw, stride=1, padding=padw),
norm_layer(nf),
nn.LeakyReLU(0.2, True)
]]
sequence += [[nn.Conv3d(nf, 1, kernel_size=kw,
stride=1, padding=padw)]]
if use_sigmoid:
sequence += [[nn.Sigmoid()]]
if getIntermFeat:
for n in range(len(sequence)):
setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
else:
sequence_stream = []
for n in range(len(sequence)):
sequence_stream += sequence[n]
self.model = nn.Sequential(*sequence_stream)
def forward(self, input):
if self.getIntermFeat:
res = [input]
for n in range(self.n_layers+2):
model = getattr(self, 'model'+str(n))
res.append(model(res[-1]))
return res[-1], res[1:]
else:
return self.model(input), _
class AE_finetuning(BaseFinetuning):
def freeze_before_training(self, pl_module: pl.LightningModule):
pl_module.stage = 2
self.freeze(pl_module.encoder)
self.freeze(pl_module.pre_vq_conv)
self.freeze(pl_module.codebook)
# self.freeze(pl_module.volume_discriminator)
def finetune_function(self, pl_module: pl.LightningModule, epoch: int, optimizer: Optimizer) -> None:
pl_module.encoder.eval()
pl_module.codebook.eval()
pl_module.pre_vq_conv.eval()
# pl_module.volume_discriminator.eval()
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/model/lpips.py | .py | 6,483 | 182 | """Adapted from https://github.com/SongweiGe/TATS"""
"""Stripped version of https://github.com/richzhang/PerceptualSimilarity/tree/master/models"""
from collections import namedtuple
from torchvision import models
import torch.nn as nn
import torch
from tqdm import tqdm
import requests
import os
import hashlib
URL_MAP = {
"vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1"
}
CKPT_MAP = {
"vgg_lpips": "vgg.pth"
}
MD5_MAP = {
"vgg_lpips": "d507d7349b931f0638a25a48a722f98a"
}
def download(url, local_path, chunk_size=1024):
os.makedirs(os.path.split(local_path)[0], exist_ok=True)
with requests.get(url, stream=True) as r:
total_size = int(r.headers.get("content-length", 0))
with tqdm(total=total_size, unit="B", unit_scale=True) as pbar:
with open(local_path, "wb") as f:
for data in r.iter_content(chunk_size=chunk_size):
if data:
f.write(data)
pbar.update(chunk_size)
def md5_hash(path):
with open(path, "rb") as f:
content = f.read()
return hashlib.md5(content).hexdigest()
def get_ckpt_path(name, root, check=False):
assert name in URL_MAP
path = os.path.join(root, CKPT_MAP[name])
if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]):
print("Downloading {} model from {} to {}".format(
name, URL_MAP[name], path))
download(URL_MAP[name], path)
md5 = md5_hash(path)
assert md5 == MD5_MAP[name], md5
return path
class LPIPS(nn.Module):
# Learned perceptual metric
def __init__(self, use_dropout=True):
super().__init__()
self.scaling_layer = ScalingLayer()
self.chns = [64, 128, 256, 512, 512] # vg16 features
self.net = vgg16(pretrained=True, requires_grad=False)
self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
self.load_from_pretrained()
for param in self.parameters():
param.requires_grad = False
def load_from_pretrained(self, name="vgg_lpips"):
ckpt = get_ckpt_path(name, os.path.join(
os.path.dirname(os.path.abspath(__file__)), "cache"))
self.load_state_dict(torch.load(
ckpt, map_location=torch.device("cpu")), strict=False)
print("loaded pretrained LPIPS loss from {}".format(ckpt))
@classmethod
def from_pretrained(cls, name="vgg_lpips"):
if name is not "vgg_lpips":
raise NotImplementedError
model = cls()
ckpt = get_ckpt_path(name, os.path.join(
os.path.dirname(os.path.abspath(__file__)), "cache"))
model.load_state_dict(torch.load(
ckpt, map_location=torch.device("cpu")), strict=False)
return model
def forward(self, input, target):
in0_input, in1_input = (self.scaling_layer(
input), self.scaling_layer(target))
outs0, outs1 = self.net(in0_input), self.net(in1_input)
feats0, feats1, diffs = {}, {}, {}
lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4]
for kk in range(len(self.chns)):
feats0[kk], feats1[kk] = normalize_tensor(
outs0[kk]), normalize_tensor(outs1[kk])
diffs[kk] = (feats0[kk] - feats1[kk]) ** 2
res = [spatial_average(lins[kk].model(diffs[kk]), keepdim=True)
for kk in range(len(self.chns))]
val = res[0]
for l in range(1, len(self.chns)):
val += res[l]
return val
class ScalingLayer(nn.Module):
def __init__(self):
super(ScalingLayer, self).__init__()
self.register_buffer('shift', torch.Tensor(
[-.030, -.088, -.188])[None, :, None, None])
self.register_buffer('scale', torch.Tensor(
[.458, .448, .450])[None, :, None, None])
def forward(self, inp):
return (inp - self.shift) / self.scale
class NetLinLayer(nn.Module):
""" A single linear layer which does a 1x1 conv """
def __init__(self, chn_in, chn_out=1, use_dropout=False):
super(NetLinLayer, self).__init__()
layers = [nn.Dropout(), ] if (use_dropout) else []
layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1,
padding=0, bias=False), ]
self.model = nn.Sequential(*layers)
class vgg16(torch.nn.Module):
def __init__(self, requires_grad=False, pretrained=True):
super(vgg16, self).__init__()
vgg_pretrained_features = models.vgg16(pretrained=pretrained).features
self.slice1 = torch.nn.Sequential()
self.slice2 = torch.nn.Sequential()
self.slice3 = torch.nn.Sequential()
self.slice4 = torch.nn.Sequential()
self.slice5 = torch.nn.Sequential()
self.N_slices = 5
for x in range(4):
self.slice1.add_module(str(x), vgg_pretrained_features[x])
for x in range(4, 9):
self.slice2.add_module(str(x), vgg_pretrained_features[x])
for x in range(9, 16):
self.slice3.add_module(str(x), vgg_pretrained_features[x])
for x in range(16, 23):
self.slice4.add_module(str(x), vgg_pretrained_features[x])
for x in range(23, 30):
self.slice5.add_module(str(x), vgg_pretrained_features[x])
if not requires_grad:
for param in self.parameters():
param.requires_grad = False
def forward(self, X):
h = self.slice1(X)
h_relu1_2 = h
h = self.slice2(h)
h_relu2_2 = h
h = self.slice3(h)
h_relu3_3 = h
h = self.slice4(h)
h_relu4_3 = h
h = self.slice5(h)
h_relu5_3 = h
vgg_outputs = namedtuple(
"VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
out = vgg_outputs(h_relu1_2, h_relu2_2,
h_relu3_3, h_relu4_3, h_relu5_3)
return out
def normalize_tensor(x, eps=1e-10):
norm_factor = torch.sqrt(torch.sum(x**2, dim=1, keepdim=True))
return x/(norm_factor+eps)
def spatial_average(x, keepdim=True):
return x.mean([2, 3], keepdim=keepdim)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/model/__init__.py | .py | 165 | 7 | from AutoEncoder.model.codebook import Codebook
from AutoEncoder.model.lpips import LPIPS
from AutoEncoder.model.MedicalNetPerceptual import MedicalNetPerceptual
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/model/codebook.py | .py | 3,982 | 108 |
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist
from AutoEncoder.utils import shift_dim
class Codebook(nn.Module):
def __init__(self, n_codes, embedding_dim, no_random_restart=False, restart_thres=1.0):
super().__init__()
self.register_buffer('embeddings', torch.randn(n_codes, embedding_dim))
self.register_buffer('N', torch.zeros(n_codes))
self.register_buffer('z_avg', self.embeddings.data.clone())
self.n_codes = n_codes
self.embedding_dim = embedding_dim
self._need_init = True
self.no_random_restart = no_random_restart
self.restart_thres = restart_thres
def _tile(self, x):
d, ew = x.shape
if d < self.n_codes:
n_repeats = (self.n_codes + d - 1) // d
std = 0.01 / np.sqrt(ew)
x = x.repeat(n_repeats, 1)
x = x + torch.randn_like(x) * std
return x
def _init_embeddings(self, z):
# z: [b, c, t, h, w]
self._need_init = False
flat_inputs = shift_dim(z, 1, -1).flatten(end_dim=-2)
y = self._tile(flat_inputs)
d = y.shape[0]
_k_rand = y[torch.randperm(y.shape[0])][:self.n_codes]
if dist.is_initialized():
dist.broadcast(_k_rand, 0)
self.embeddings.data.copy_(_k_rand)
self.z_avg.data.copy_(_k_rand)
self.N.data.copy_(torch.ones(self.n_codes))
def forward(self, z):
# z: [b, c, t, h, w]
if self._need_init and self.training:
self._init_embeddings(z)
flat_inputs = shift_dim(z, 1, -1).flatten(end_dim=-2) # [bthw, c]
distances = (flat_inputs ** 2).sum(dim=1, keepdim=True) \
- 2 * flat_inputs @ self.embeddings.t() \
+ (self.embeddings.t() ** 2).sum(dim=0, keepdim=True) # [bthw, c]
encoding_indices = torch.argmin(distances, dim=1)
encode_onehot = F.one_hot(encoding_indices, self.n_codes).type_as(
flat_inputs) # [bthw, ncode]
encoding_indices = encoding_indices.view(
z.shape[0], *z.shape[2:]) # [b, t, h, w, ncode]
embeddings = F.embedding(
encoding_indices, self.embeddings) # [b, t, h, w, c]
embeddings = shift_dim(embeddings, -1, 1) # [b, c, t, h, w]
commitment_loss = 0.25 * F.mse_loss(z, embeddings.detach())
# EMA codebook update
if self.training:
n_total = encode_onehot.sum(dim=0)
encode_sum = flat_inputs.t() @ encode_onehot
if dist.is_initialized():
dist.all_reduce(n_total)
dist.all_reduce(encode_sum)
self.N.data.mul_(0.99).add_(n_total, alpha=0.01)
self.z_avg.data.mul_(0.99).add_(encode_sum.t(), alpha=0.01)
n = self.N.sum()
weights = (self.N + 1e-7) / (n + self.n_codes * 1e-7) * n
encode_normalized = self.z_avg / weights.unsqueeze(1)
self.embeddings.data.copy_(encode_normalized)
y = self._tile(flat_inputs)
_k_rand = y[torch.randperm(y.shape[0])][:self.n_codes]
if dist.is_initialized():
dist.broadcast(_k_rand, 0)
if not self.no_random_restart:
usage = (self.N.view(self.n_codes, 1)
>= self.restart_thres).float()
self.embeddings.data.mul_(usage).add_(_k_rand * (1 - usage))
embeddings_st = (embeddings - z).detach() + z
avg_probs = torch.mean(encode_onehot, dim=0)
perplexity = torch.exp(-torch.sum(avg_probs *
torch.log(avg_probs + 1e-7)))
return dict(embeddings=embeddings_st, encodings=encoding_indices,
commitment_loss=commitment_loss, perplexity=perplexity)
def dictionary_lookup(self, encodings):
embeddings = F.embedding(encodings, self.embeddings)
return embeddings
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | AutoEncoder/model/MedicalNetPerceptual.py | .py | 2,710 | 67 | import torch
import torch.nn as nn
import os
def mednet_norm(input):
mean = input.mean()
std = input.std()
return (input - mean) / std
def mednet_norm_feature(x , eps = 1e-7):
norm_factor = torch.sqrt(torch.sum(x**2, dim=1, keepdim=True))
return x / (norm_factor + eps)
def spatial_average(x, keepdim = True) -> torch.Tensor:
return x.mean([2, 3], keepdim=keepdim)
class MedicalNetPerceptual(nn.Module):
def __init__(
self, net_path = None, verbose = True, channel_wise = False):
super().__init__()
torch.hub._validate_not_a_forked_repo = lambda a, b, c: True
net = "medicalnet_resnet10_23datasets"
if net_path is not None:
self.model = torch.hub.load(repo_or_dir=net_path, model=net, trust_repo=True, source='local',model_dir = os.path.dirname(os.path.abspath(__file__))+'/../../warvito_MedicalNet-models_main/medicalnet' )
else:
self.model = torch.hub.load("warvito/MedicalNet-models", model=net, verbose=verbose)
self.eval()
self.channel_wise = channel_wise
for param in self.parameters():
param.requires_grad = False
def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
input = mednet_norm(input)
target = mednet_norm(target)
# Get model outputs
feats_per_ch = 0
for ch_idx in range(input.shape[1]):
input_channel = input[:, ch_idx, ...].unsqueeze(1)
target_channel = target[:, ch_idx, ...].unsqueeze(1)
if ch_idx == 0:
outs_input = self.model.forward(input_channel)
outs_target = self.model.forward(target_channel)
feats_per_ch = outs_input.shape[1]
else:
outs_input = torch.cat([outs_input, self.model.forward(input_channel)], dim=1)
outs_target = torch.cat([outs_target, self.model.forward(target_channel)], dim=1)
feats_input = mednet_norm_feature(outs_input)
feats_target = mednet_norm_feature(outs_target)
feats_diff: torch.Tensor = (feats_input - feats_target) ** 2
if self.channel_wise:
results = torch.zeros(
feats_diff.shape[0], input.shape[1], feats_diff.shape[2], feats_diff.shape[3], feats_diff.shape[4]
)
for i in range(input.shape[1]):
l_idx = i * feats_per_ch
r_idx = (i + 1) * feats_per_ch
results[:, i, ...] = feats_diff[:, l_idx : i + r_idx, ...].sum(dim=1)
else:
results = feats_diff.sum(dim=1, keepdim=True)
results = results.mean([2, 3, 4], keepdim=True)
return results | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | warvito_MedicalNet-models_main/hubconf.py | .py | 338 | 13 | # Optional list of dependencies required by the package
dependencies = ["torch", "gdown"]
from medicalnet_models.models.resnet import (
medicalnet_resnet10,
medicalnet_resnet10_23datasets,
medicalnet_resnet50,
medicalnet_resnet50_23datasets,
medicalnet_resnet101,
medicalnet_resnet152,
medicalnet_resnet200
)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | warvito_MedicalNet-models_main/medicalnet_models/models/resnet.py | .py | 11,374 | 368 | from torch import Tensor
from typing import List, Type, Union
import errno
import os
from typing import Optional
import gdown
import torch
import torch.nn as nn
def conv3x3x3(in_planes: int, out_planes: int, stride: int = 1, dilation: int = 1) -> nn.Conv3d:
"""3x3x3 convolution with padding"""
return nn.Conv3d(
in_planes,
out_planes,
kernel_size=3,
dilation=dilation,
stride=stride,
padding=dilation,
bias=False,
)
def conv1x1x1(in_planes: int, out_planes: int, stride: int = 1) -> nn.Conv3d:
"""1x1x1 convolution"""
return nn.Conv3d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class BasicBlock(nn.Module):
expansion: int = 1
def __init__(
self,
inplanes: int,
planes: int,
stride: int = 1,
downsample: Optional[nn.Module] = None,
dilation: int = 1,
) -> None:
super().__init__()
self.conv1 = conv3x3x3(inplanes, planes, stride=stride, dilation=dilation)
self.bn1 = nn.BatchNorm3d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3x3(planes, planes, dilation=dilation)
self.bn2 = nn.BatchNorm3d(planes)
self.downsample = downsample
self.stride = stride
self.dilation = dilation
def forward(self, x: Tensor) -> Tensor:
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion: int = 4
def __init__(
self,
inplanes: int,
planes: int,
stride: int = 1,
downsample: Optional[nn.Module] = None,
dilation: int = 1,
) -> None:
super().__init__()
self.conv1 = conv1x1x1(inplanes, planes)
self.bn1 = nn.BatchNorm3d(planes)
self.conv2 = conv3x3x3(
planes,
planes,
stride=stride,
dilation=dilation,
)
self.bn2 = nn.BatchNorm3d(planes)
self.conv3 = conv1x1x1(planes, planes * 4)
self.bn3 = nn.BatchNorm3d(planes * 4)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
self.dilation = dilation
def forward(self, x: Tensor) -> Tensor:
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(
self,
block: Type[Union[BasicBlock, Bottleneck]],
layers: List[int],
) -> None:
super().__init__()
self.inplanes = 64
self.layers = layers
self.conv1 = nn.Conv3d(1, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = nn.BatchNorm3d(self.inplanes)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool3d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4)
def _make_layer(
self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int, stride: int = 1, dilation: int = 1
):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1x1(
self.inplanes,
planes * block.expansion,
stride=stride,
),
nn.BatchNorm3d(planes * block.expansion),
)
layers = []
layers.append(
block(
self.inplanes,
planes,
stride=stride,
dilation=dilation,
downsample=downsample,
)
)
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes, dilation=dilation))
return nn.Sequential(*layers)
def forward(self, x: Tensor) -> Tensor:
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
return x
def download_model(url: str, filename: str, model_dir: Optional[str] = None, progress: bool = True) -> str:
if model_dir is None:
hub_dir = torch.hub.get_dir()
model_dir = os.path.join(hub_dir, "medicalnet")
try:
os.makedirs(model_dir)
except OSError as e:
if e.errno == errno.EEXIST:
# Directory already exists, ignore.
pass
else:
# Unexpected OSError, re-raise.
raise
cached_file = os.path.join(model_dir, filename)
if not os.path.exists(cached_file):
gdown.download(
url=url,
output=cached_file,
quiet=not progress,
)
return cached_file
def medicalnet_resnet10(
model_dir: Optional[str] = None,
filename: str = "resnet_10.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1lCEK_K5q90YaOtyfkGAjUCMrqcQZUYV0",
filename,
model_dir,
progress,
)
model = ResNet(BasicBlock, [1, 1, 1, 1])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet10_23datasets(
model_dir: Optional[str] = None,
filename: str = "resnet_10_23dataset.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1HLpyQ12SmzmCIFjMcNs4j3Ijyy79JYLk",
filename,
model_dir,
progress,
)
model = ResNet(BasicBlock, [1, 1, 1, 1])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet50(
model_dir: Optional[str] = None,
filename: str = "resnet_50.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1E7005_ZT_z6tuPpPNRvYkMBWzAJNMIIC",
filename,
model_dir,
progress,
)
model = ResNet(Bottleneck, [3, 4, 6, 3])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet50_23datasets(
model_dir: Optional[str] = None,
filename: str = "resnet_50_23dataset.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1qXyw9S5f-6N1gKECDfMroRnPZfARbqOP",
filename,
model_dir,
progress,
)
model = ResNet(Bottleneck, [3, 4, 6, 3])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet101(
model_dir: Optional[str] = None,
filename: str = "resnet_101.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1mMNQvhlaS-jmnbyqdniGNSD5aONIidKt",
filename,
model_dir,
progress,
)
model = ResNet(Bottleneck, [3, 4, 23, 3])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet152(
model_dir: Optional[str] = None,
filename: str = "resnet_152.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=1Lixxc9YsZZqAl3mnAh7PwT8c3sTXoinE",
filename,
model_dir,
progress,
)
model = ResNet(Bottleneck, [3, 8, 36, 3])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
def medicalnet_resnet200(
model_dir: Optional[str] = None,
filename: str = "resnet_200.pth",
progress: bool = True,
) -> ResNet:
cached_file = download_model(
"https://drive.google.com/uc?export=download&id=13BGtYw2fkvDSlx41gOZ5qTFhhrDB_zXr",
filename,
model_dir,
progress,
)
model = ResNet(Bottleneck, [3, 24, 36, 3])
# Fix checkpoints saved with DataParallel wrapper
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_state_dict = torch.load(cached_file, map_location=device)
pretrained_state_dict = pretrained_state_dict["state_dict"]
pretrained_state_dict = {k.replace("module.", ""): v for k, v in pretrained_state_dict.items()}
model.load_state_dict(pretrained_state_dict)
return model
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | ddpm/__init__.py | .py | 37 | 2 |
from ddpm.BiFlowNet import BiFlowNet | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | ddpm/utils.py | .py | 4,796 | 168 | """
Various utilities for neural networks.
"""
import math
import torch as th
import torch.nn as nn
class SiLU(nn.Module):
def forward(self, x):
return x * th.sigmoid(x)
class GroupNorm32(nn.GroupNorm):
def forward(self, x):
return super().forward(x.float()).type(x.dtype)
def conv_nd(dims, *args, **kwargs):
"""
Create a 1D, 2D, or 3D convolution module.
"""
if dims == 1:
return nn.Conv1d(*args, **kwargs)
elif dims == 2:
return nn.Conv2d(*args, **kwargs)
elif dims == 3:
return nn.Conv3d(*args, **kwargs)
raise ValueError(f"unsupported dimensions: {dims}")
def linear(*args, **kwargs):
"""
Create a linear module.
"""
return nn.Linear(*args, **kwargs)
def avg_pool_nd(dims, *args, **kwargs):
"""
Create a 1D, 2D, or 3D average pooling module.
"""
if dims == 1:
return nn.AvgPool1d(*args, **kwargs)
elif dims == 2:
return nn.AvgPool2d(*args, **kwargs)
elif dims == 3:
return nn.AvgPool3d(*args, **kwargs)
raise ValueError(f"unsupported dimensions: {dims}")
def update_ema(target_params, source_params, rate=0.99):
"""
Update target parameters to be closer to those of source parameters using
an exponential moving average.
:param target_params: the target parameter sequence.
:param source_params: the source parameter sequence.
:param rate: the EMA rate (closer to 1 means slower).
"""
for targ, src in zip(target_params, source_params):
targ.detach().mul_(rate).add_(src, alpha=1 - rate)
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module
def scale_module(module, scale):
"""
Scale the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().mul_(scale)
return module
def mean_flat(tensor):
"""
Take the mean over all non-batch dimensions.
"""
return tensor.mean(dim=list(range(1, len(tensor.shape))))
def normalization(channels):
"""
Make a standard normalization layer.
:param channels: number of input channels.
:return: an nn.Module for normalization.
"""
return GroupNorm32(8, channels)
def timestep_embedding(timesteps, dim, max_period=10000):
"""
Create sinusoidal timestep embeddings.
:param timesteps: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an [N x dim] Tensor of positional embeddings.
"""
half = dim // 2
freqs = th.exp(
-math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half
).to(device=timesteps.device)
args = timesteps[:, None].float() * freqs[None]
embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)
if dim % 2:
embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)
return embedding
def checkpoint(func, inputs, params, flag):
"""
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param params: a sequence of parameters `func` depends on but does not
explicitly take as arguments.
:param flag: if False, disable gradient checkpointing.
"""
if flag:
args = tuple(inputs) + tuple(params)
return CheckpointFunction.apply(func, len(inputs), *args)
else:
return func(*inputs)
class CheckpointFunction(th.autograd.Function):
@staticmethod
def forward(ctx, run_function, length, *args):
ctx.run_function = run_function
ctx.input_tensors = list(args[:length])
ctx.input_params = list(args[length:])
with th.no_grad():
output_tensors = ctx.run_function(*ctx.input_tensors)
return output_tensors
@staticmethod
def backward(ctx, *output_grads):
ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
with th.enable_grad():
shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
output_tensors = ctx.run_function(*shallow_copies)
input_grads = th.autograd.grad(
output_tensors,
ctx.input_tensors + ctx.input_params,
output_grads,
allow_unused=True,
)
del ctx.input_tensors
del ctx.input_params
del output_tensors
return (None, None) + input_grads
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | ddpm/BiFlowNet.py | .py | 40,364 | 1,090 |
import math
import copy
import torch
from torch import nn, einsum
import torch.nn.functional as F
from functools import partial
import torchio as tio
from torch.utils import data
from pathlib import Path
from torch.optim import Adam
from torchvision import transforms as T, utils
from torch.cuda.amp import autocast, GradScaler
from PIL import Image
import numpy as np
from tqdm import tqdm
from einops import rearrange
from einops_exts import check_shape, rearrange_many
from timm.models.vision_transformer import Attention
from timm.models.layers import to_2tuple
from torch.utils.data import Dataset, DataLoader
# helpers functions
class PatchEmbed_Voxel(nn.Module):
""" Voxel to Patch Embedding
"""
def __init__(self, voxel_size=(16,16,16,), patch_size=2, in_chans=3, embed_dim=768, bias=True):
super().__init__()
patch_size = (patch_size, patch_size, patch_size)
num_patches = (voxel_size[0] // patch_size[0]) * (voxel_size[1] // patch_size[1]) * (voxel_size[2] // patch_size[2])
self.patch_xyz = (voxel_size[0] // patch_size[0], voxel_size[1] // patch_size[1], voxel_size[2] // patch_size[2])
self.voxel_size = voxel_size
self.patch_size = patch_size
self.num_patches = num_patches
self.proj = nn.Conv3d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias)
def forward(self, x):
B, C, X, Y, Z = x.shape
x = x.float()
x = self.proj(x).flatten(2).transpose(1, 2).contiguous()
return x
class FinalLayer(nn.Module):
"""
The final layer of DiT block.
"""
def __init__(self, hidden_size, patch_size, out_channels):
super().__init__()
self.norm_final = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.linear = nn.Linear(hidden_size, patch_size * patch_size * patch_size * out_channels, bias=True)
self.adaLN_modulation = nn.Sequential(
nn.SiLU(),
nn.Linear(4*hidden_size*2, 2 * hidden_size, bias=True)
)
def forward(self, x, c):
shift, scale = self.adaLN_modulation(c).chunk(2, dim=1)
x = modulate(self.norm_final(x), shift, scale)
x = self.linear(x)
return x
class Mlp(nn.Module):
""" MLP as used in Vision Transformer, MLP-Mixer and related networks
"""
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, bias=True, drop=0., eta=None):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
bias = to_2tuple(bias)
drop_probs = to_2tuple(drop)
self.fc1 = nn.Linear(in_features, hidden_features, bias=bias[0])
self.act = act_layer()
self.drop1 = nn.Dropout(drop_probs[0])
self.fc2 = nn.Linear(hidden_features, out_features, bias=bias[1])
self.drop2 = nn.Dropout(drop_probs[1])
if eta is not None: # LayerScale Initialization (no layerscale when None)
self.gamma1 = nn.Parameter(eta * torch.ones(hidden_features), requires_grad=True)
self.gamma2 = nn.Parameter(eta * torch.ones(out_features), requires_grad=True)
else:
self.gamma1, self.gamma2 = 1.0, 1.0
def forward(self, x):
x = self.gamma1 * self.fc1(x)
x = self.act(x)
x = self.drop1(x)
x = self.gamma2 * self.fc2(x)
x = self.drop2(x)
return x
def get_3d_sincos_pos_embed(embed_dim, grid_size, cls_token=False, extra_tokens=0):
"""
grid_size: int of the grid height and width
return:
pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
"""
print('grid_size:', grid_size)
grid_x = np.arange(grid_size[0], dtype=np.float32)
grid_y = np.arange(grid_size[1], dtype=np.float32)
grid_z = np.arange(grid_size[2], dtype=np.float32)
grid = np.meshgrid(grid_x, grid_y, grid_z, indexing='ij') # here y goes first
grid = np.stack(grid, axis=0)
grid = grid.reshape([3, 1, grid_size[0], grid_size[1], grid_size[2]])
pos_embed = get_3d_sincos_pos_embed_from_grid(embed_dim, grid)
if cls_token and extra_tokens > 0:
pos_embed = np.concatenate([np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0)
return pos_embed
def get_3d_sincos_pos_embed_from_grid(embed_dim, grid):
# assert embed_dim % 3 == 0
# use half of dimensions to encode grid_h
emb_x = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[0]) # (X*Y*Z, D/3)
emb_y = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[1]) # (X*Y*Z, D/3)
emb_z = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[2]) # (X*Y*Z, D/3)
emb = np.concatenate([emb_x, emb_y, emb_z], axis=1) # (X*Y*Z, D)
return emb
def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
"""
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
"""
assert embed_dim % 2 == 0
omega = np.arange(embed_dim // 2, dtype=np.float64)
omega /= embed_dim / 2.
omega = 1. / 10000**omega # (D/2,)
pos = pos.reshape(-1) # (M,)
out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product
emb_sin = np.sin(out) # (M, D/2)
emb_cos = np.cos(out) # (M, D/2)
emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
return emb
def modulate(x, shift, scale):
return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
class DiTBlock(nn.Module):
"""
A DiT block with adaptive layer norm zero (adaLN-Zero) conditioning.
"""
def __init__(self, hidden_size, num_heads, mlp_ratio=4.0, skip=False,**block_kwargs):
super().__init__()
self.skip_linear = nn.Linear(2*hidden_size, hidden_size) if skip else None
self.norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.attn = Attention(hidden_size, num_heads=num_heads, qkv_bias=True, **block_kwargs)
self.norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
mlp_hidden_dim = int(hidden_size * mlp_ratio)
approx_gelu = lambda: nn.GELU(approximate="tanh")
self.mlp = Mlp(in_features=hidden_size, hidden_features=mlp_hidden_dim, act_layer=approx_gelu, drop=0)
self.adaLN_modulation = nn.Sequential(
nn.SiLU(),
nn.Linear(4 * hidden_size*2, 6 * hidden_size, bias=True)
)
def forward(self, x, c , skip= None):
if self.skip_linear is not None:
x = self.skip_linear(torch.cat([x,skip], dim = -1))
shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(6, dim=1)
x = x + gate_msa.unsqueeze(1) * self.attn(modulate(self.norm1(x), shift_msa, scale_msa))
x = x + gate_mlp.unsqueeze(1) * self.mlp(modulate(self.norm2(x), shift_mlp, scale_mlp))
return x
def exists(x):
return x is not None
def noop(*args, **kwargs):
pass
def is_odd(n):
return (n % 2) == 1
def default(val, d):
if exists(val):
return val
return d() if callable(d) else d
def cycle(dl):
while True:
for data in dl:
yield data
def num_to_groups(num, divisor):
groups = num // divisor
remainder = num % divisor
arr = [divisor] * groups
if remainder > 0:
arr.append(remainder)
return arr
def prob_mask_like(shape, prob, device):
if prob == 1:
return torch.ones(shape, device=device, dtype=torch.bool)
elif prob == 0:
return torch.zeros(shape, device=device, dtype=torch.bool)
else:
return torch.zeros(shape, device=device).float().uniform_(0, 1) < prob
def is_list_str(x):
if not isinstance(x, (list, tuple)):
return False
return all([type(el) == str for el in x])
# relative positional bias
class RelativePositionBias(nn.Module):
def __init__(
self,
heads=8,
num_buckets=32,
max_distance=128
):
super().__init__()
self.num_buckets = num_buckets
self.max_distance = max_distance
self.relative_attention_bias = nn.Embedding(num_buckets, heads)
@staticmethod
def _relative_position_bucket(relative_position, num_buckets=32, max_distance=128):
ret = 0
n = -relative_position
num_buckets //= 2
ret += (n < 0).long() * num_buckets
n = torch.abs(n)
max_exact = num_buckets // 2
is_small = n < max_exact
val_if_large = max_exact + (
torch.log(n.float() / max_exact) / math.log(max_distance /
max_exact) * (num_buckets - max_exact)
).long()
val_if_large = torch.min(
val_if_large, torch.full_like(val_if_large, num_buckets - 1))
ret += torch.where(is_small, n, val_if_large)
return ret
def forward(self, n, device):
q_pos = torch.arange(n, dtype=torch.long, device=device)
k_pos = torch.arange(n, dtype=torch.long, device=device)
rel_pos = rearrange(k_pos, 'j -> 1 j') - rearrange(q_pos, 'i -> i 1')
rp_bucket = self._relative_position_bucket(
rel_pos, num_buckets=self.num_buckets, max_distance=self.max_distance)
values = self.relative_attention_bias(rp_bucket)
return rearrange(values, 'i j h -> h i j')
# small helper modules
class EMA():
def __init__(self, beta):
super().__init__()
self.beta = beta
def update_model_average(self, ma_model, current_model):
for current_params, ma_params in zip(current_model.parameters(), ma_model.parameters()):
old_weight, up_weight = ma_params.data, current_params.data
ma_params.data = self.update_average(old_weight, up_weight)
def update_average(self, old, new):
if old is None:
return new
return old * self.beta + (1 - self.beta) * new
class Residual(nn.Module):
def __init__(self, fn):
super().__init__()
self.fn = fn
def forward(self, x, *args, **kwargs):
return self.fn(x, *args, **kwargs) + x
class SinusoidalPosEmb(nn.Module):
def __init__(self, dim):
super().__init__()
self.dim = dim
def forward(self, x):
device = x.device
half_dim = self.dim // 2
emb = math.log(10000) / (half_dim - 1)
emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
emb = x[:, None] * emb[None, :]
emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
return emb
def Upsample(dim):
return nn.ConvTranspose3d(dim, dim, (4, 4, 4), (2, 2, 2), (1, 1, 1))
def Downsample(dim):
return nn.Conv3d(dim, dim, (4, 4, 4), (2, 2, 2), (1, 1, 1))
class LayerNorm(nn.Module):
def __init__(self, dim, eps=1e-5):
super().__init__()
self.eps = eps
self.gamma = nn.Parameter(torch.ones(1, dim, 1, 1, 1))
def forward(self, x):
var = torch.var(x, dim=1, unbiased=False, keepdim=True)
mean = torch.mean(x, dim=1, keepdim=True)
return (x - mean) / (var + self.eps).sqrt() * self.gamma
class PreNorm(nn.Module):
def __init__(self, dim, fn):
super().__init__()
self.fn = fn
self.norm = LayerNorm(dim)
def forward(self, x, **kwargs):
x = self.norm(x)
return self.fn(x, **kwargs)
class Block(nn.Module):
def __init__(self, dim, dim_out, groups=6):
super().__init__()
self.proj = nn.Conv3d(dim, dim_out, (3, 3, 3), padding=(1, 1, 1))
self.norm = nn.GroupNorm(groups, dim_out)
self.act = nn.SiLU()
def forward(self, x, scale_shift=None):
x = self.proj(x)
x = self.norm(x)
if exists(scale_shift):
scale, shift = scale_shift
x = x * (scale + 1) + shift
return self.act(x)
class ResnetBlock(nn.Module):
def __init__(self, dim, dim_out, *, time_emb_dim=None, groups=6):
super().__init__()
self.mlp = nn.Sequential(
nn.SiLU(),
nn.Linear(time_emb_dim, dim_out * 2)
) if exists(time_emb_dim) else None
self.block1 = Block(dim, dim_out, groups=groups)
self.block2 = Block(dim_out, dim_out, groups=groups)
self.res_conv = nn.Conv3d(
dim, dim_out, 1) if dim != dim_out else nn.Identity()
def forward(self, x, time_emb=None):
scale_shift = None
if exists(self.mlp):
assert exists(time_emb), 'time emb must be passed in'
time_emb = self.mlp(time_emb)
time_emb = rearrange(time_emb, 'b c -> b c 1 1 1')
scale_shift = time_emb.chunk(2, dim=1)
h = self.block1(x, scale_shift=scale_shift)
h = self.block2(h)
return h + self.res_conv(x)
class AttentionBlock(nn.Module):
def __init__(self, dim, heads=4, dim_head=32):
super().__init__()
self.scale = dim_head ** -0.5
self.heads = heads
hidden_dim = dim_head * heads # 256
self.to_qkv = nn.Linear(dim, hidden_dim * 3, bias=False)
self.to_out = nn.Conv3d(hidden_dim, dim, 1)
def forward(self, x):
b, c, z, h, w = x.shape
x = rearrange(x,'b c z x y -> b (z x y) c').contiguous()
qkv = self.to_qkv(x).chunk(3, dim=2)
q, k, v = rearrange_many(
qkv, 'b d (h c) -> b h d c ', h=self.heads)
out = F.scaled_dot_product_attention(q, k, v, scale=self.scale, dropout_p=0.0, is_causal=False)
out = rearrange(out, 'b h (z x y) c -> b (h c) z x y ',z = z, x = h ,y = w ).contiguous()
out = self.to_out(out)
return out
class EinopsToAndFrom(nn.Module):
def __init__(self, from_einops, to_einops, fn):
super().__init__()
self.from_einops = from_einops
self.to_einops = to_einops
self.fn = fn
def forward(self, x, **kwargs):
shape = x.shape
reconstitute_kwargs = dict(
tuple(zip(self.from_einops.split(' '), shape)))
x = rearrange(x, f'{self.from_einops} -> {self.to_einops}')
x = self.fn(x, **kwargs)
x = rearrange(
x, f'{self.to_einops} -> {self.from_einops}', **reconstitute_kwargs)
return x
class BiFlowNet(nn.Module):
def __init__(
self,
dim,
learn_sigma = False,
cond_classes=None,
dim_mults=(1, 1, 2, 4, 8),
sub_volume_size = (8,8,8),
patch_size = 2,
channels=3,
attn_heads=8,#
init_dim=None,
init_kernel_size=3,
use_sparse_linear_attn=[0,0,0,1,1],
resnet_groups=24, #
DiT_num_heads = 8, #
mlp_ratio=4,
vq_size=64,
res_condition=True,
num_mid_DiT=1
):
self.cond_classes = cond_classes
self.res_condition=res_condition
super().__init__()
self.channels = channels
self.vq_size = vq_size
out_dim = 2*channels if learn_sigma else channels
self.dim = dim
init_dim = default(init_dim, dim)
assert is_odd(init_kernel_size)
init_padding = init_kernel_size // 2
self.init_conv = nn.Conv3d(channels, init_dim, (init_kernel_size, init_kernel_size,
init_kernel_size), padding=(init_padding, init_padding, init_padding))
dims = [init_dim, *map(lambda m: dim * m, dim_mults)]
in_out = list(zip(dims[:-1], dims[1:]))
self.feature_fusion = np.asarray([item[0]==item[1] for item in in_out ]).sum()
self.num_mid_DiT= num_mid_DiT
# time conditioning
time_dim = dim * 4
self.time_mlp = nn.Sequential(
SinusoidalPosEmb(dim),
nn.Linear(dim, time_dim),
nn.GELU(),
nn.Linear(time_dim, time_dim)
)
# text conditioning
if self.cond_classes is not None:
self.cond_emb = nn.Embedding(cond_classes, time_dim)
if self.res_condition is not None:
self.res_mlp =nn.Sequential(nn.Linear(3, time_dim), nn.SiLU(), nn.Linear(time_dim, time_dim))
time_dim = 2* time_dim
# layers
### miniDiT blocks
self.sub_volume_size = sub_volume_size
self.patch_size = patch_size
self.x_embedder = PatchEmbed_Voxel(sub_volume_size, patch_size, channels, dim, bias=True)
num_patches = self.x_embedder.num_patches
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim), requires_grad=False)
self.IntraPatchFlow_input = nn.ModuleList()
for i in range(self.feature_fusion):
temp = [DiTBlock(dim,
DiT_num_heads,
mlp_ratio=mlp_ratio,
)]
temp.append(FinalLayer(dim,self.patch_size,dim))
self.IntraPatchFlow_input.append(nn.ModuleList(temp))
self.IntraPatchFlow_input = nn.ModuleList(self.IntraPatchFlow_input)
self.IntraPatchFlow_mid = []
for i in range(self.num_mid_DiT):
self.IntraPatchFlow_mid.append(DiTBlock(dim,
DiT_num_heads,
mlp_ratio=mlp_ratio,
))
self.IntraPatchFlow_mid = nn.ModuleList(self.IntraPatchFlow_mid)
self.IntraPatchFlow_output = nn.ModuleList()
for i in range(self.feature_fusion):
temp = [DiTBlock(dim,
DiT_num_heads,
mlp_ratio=mlp_ratio,
skip= True
)]
temp.append(FinalLayer(dim,self.patch_size,dim))
self.IntraPatchFlow_output.append(nn.ModuleList(temp))
self.IntraPatchFlow_output = nn.ModuleList(self.IntraPatchFlow_output)
###
self.downs = nn.ModuleList([])
self.ups = nn.ModuleList([])
num_resolutions = len(in_out)
# block type
block_klass = partial(ResnetBlock, groups=resnet_groups)
block_klass_cond = partial(block_klass, time_emb_dim=time_dim)
# modules for all layers
for ind, (dim_in, dim_out) in enumerate(in_out):
is_last = ind == (num_resolutions - 1)
is_first = ind < self.feature_fusion - 1
self.downs.append(nn.ModuleList([
block_klass_cond(dim_in, dim_out),
Residual(PreNorm(dim_out, AttentionBlock(
dim_out, heads=attn_heads))) if use_sparse_linear_attn[ind] else nn.Identity(),
block_klass_cond(dim_out, dim_out),
Residual(PreNorm(dim_out, AttentionBlock(
dim_out, heads=attn_heads))) if use_sparse_linear_attn[ind] else nn.Identity(),
Downsample(dim_out) if not is_last and not is_first else nn.Identity()
]))
mid_dim = dims[-1]
self.mid_block1 = block_klass_cond(mid_dim, mid_dim)
self.mid_spatial_attn = Residual(PreNorm(mid_dim, AttentionBlock(
mid_dim, heads=attn_heads)))
self.mid_block2 = block_klass_cond(mid_dim, mid_dim)
for ind, (dim_in, dim_out) in enumerate(reversed(in_out)):
is_last = ind >= (num_resolutions - 2)
self.ups.append(nn.ModuleList([
block_klass_cond(dim_out * 2, dim_out),
Residual(PreNorm(dim_out, AttentionBlock(
dim = dim_out, heads=attn_heads))) if use_sparse_linear_attn[len(in_out)- ind -1] else nn.Identity(),
block_klass_cond(dim_out * 2, dim_in),
Residual(PreNorm(dim_in, AttentionBlock(
dim = dim_in, heads=attn_heads))) if use_sparse_linear_attn[len(in_out)- ind -1] else nn.Identity(),
Upsample(dim_in) if not is_last else nn.Identity()
]))
self.final_conv = nn.Sequential(
block_klass(dim * 2, dim),
nn.Conv3d(dim, out_dim, 1)
)
self.initialize_weights()
def initialize_weights(self):
# Initialize transformer layers:
def _basic_init(module):
if isinstance(module, nn.Linear):
torch.nn.init.xavier_uniform_(module.weight)
if module.bias is not None:
nn.init.constant_(module.bias, 0)
self.apply(_basic_init)
# Initialize (and freeze) pos_embed by sin-cos embedding:
pos_embed = get_3d_sincos_pos_embed(self.pos_embed.shape[-1], (self.sub_volume_size[0]//self.patch_size, self.sub_volume_size[1]//self.patch_size , self.sub_volume_size[2]//self.patch_size))
self.pos_embed.data.copy_(torch.Tensor(pos_embed).float().unsqueeze(0))
# Initialize patch_embed like nn.Linear (instead of nn.Conv2d):
w = self.x_embedder.proj.weight.data
nn.init.xavier_uniform_(w.view([w.shape[0], -1]))
for blocks in self.IntraPatchFlow_input:
for block in blocks:
if isinstance(block,DiTBlock):
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
else:
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
nn.init.constant_(block.linear.weight, 0)
nn.init.constant_(block.linear.bias, 0)
for block in self.IntraPatchFlow_mid:
if isinstance(block,DiTBlock):
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
else:
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
nn.init.constant_(block.linear.weight, 0)
nn.init.constant_(block.linear.bias, 0)
for blocks in self.IntraPatchFlow_output:
for block in blocks:
if isinstance(block,DiTBlock):
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
else:
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
nn.init.constant_(block.linear.weight, 0)
nn.init.constant_(block.linear.bias, 0)
def forward_with_cond_scale(
self,
*args,
cond_scale=2.,
**kwargs
):
logits = self.forward(*args, null_cond_prob=0., **kwargs)
if cond_scale == 1 or not self.has_cond:
return logits
null_logits = self.forward(*args, null_cond_prob=1., **kwargs)
return null_logits + (logits - null_logits) * cond_scale
def forward(
self,
x,
time,
y=None,
res=None,
):
assert (y is not None) == (
self.cond_classes is not None
), "must specify y if and only if the model is class-conditional"
# y = (y*0).to(torch.int)
b = x.shape[0]
ori_shape = (x.shape[2]*8,x.shape[3]*8,x.shape[4]*8)
# time_rel_pos_bias = self.time_rel_pos_bias(x.shape[2], device=x.device)
x_IntraPatch = x.clone()
# x_IntraPatch.retain_grad()
p = self.sub_volume_size[0]
x_IntraPatch = x_IntraPatch.unfold(2,p,p).unfold(3,p,p).unfold(4,p,p)
p1 , p2 , p3= x_IntraPatch.size(2) , x_IntraPatch.size(3) , x_IntraPatch.size(4)
x_IntraPatch = rearrange(x_IntraPatch , 'b c p1 p2 p3 d h w -> (b p1 p2 p3) c d h w')
x = self.init_conv(x)
r = x.clone()
t = self.time_mlp(time) if exists(self.time_mlp) else None
c = t.shape[-1]
t_DiT = t.unsqueeze(1).repeat(1,p1*p2*p3,1).view(-1,c)
if self.cond_classes:
assert y.shape == (x.shape[0],)
cond_emb = self.cond_emb(y)
cond_emb_DiT = cond_emb.unsqueeze(1).repeat(1,p1*p2*p3,1).view(-1,c)
t = t + cond_emb
t_DiT = t_DiT + cond_emb_DiT
if self.res_condition:
if len(res.shape) == 1:
res = res.unsqueeze(0)
res_condition_emb = self.res_mlp(res)
t = torch.cat((t,res_condition_emb),dim=1)
res_condition_emb_DiT = res_condition_emb.unsqueeze(1).repeat(1,p1*p2*p3,1).view(-1,c)
t_DiT = torch.cat((t_DiT,res_condition_emb_DiT),dim=1)
x_IntraPatch = self.x_embedder(x_IntraPatch)
x_IntraPatch = x_IntraPatch + self.pos_embed
h_DiT , h_Unet,h,=[],[],[]
for Block, MlpLayer in self.IntraPatchFlow_input:
x_IntraPatch = Block(x_IntraPatch,t_DiT)
h_DiT.append(x_IntraPatch)
Unet_feature = self.unpatchify_voxels(MlpLayer(x_IntraPatch,t_DiT))
Unet_feature = rearrange(Unet_feature, '(b p) c d h w -> b p c d h w', b=b)
Unet_feature = rearrange(Unet_feature, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=ori_shape[0]//self.vq_size, p2=ori_shape[1]//self.vq_size, p3=ori_shape[2]//self.vq_size)
h_Unet.append(Unet_feature)
for Block in self.IntraPatchFlow_mid:
x_IntraPatch = Block(x_IntraPatch,t_DiT)
for Block, MlpLayer in self.IntraPatchFlow_output:
x_IntraPatch = Block(x_IntraPatch,t_DiT , h_DiT.pop())
Unet_feature = self.unpatchify_voxels(MlpLayer(x_IntraPatch,t_DiT))
Unet_feature = rearrange(Unet_feature, '(b p) c d h w -> b p c d h w', b=b)
Unet_feature = rearrange(Unet_feature, 'b (p1 p2 p3) c d h w -> b c (p1 d) (p2 h) (p3 w)',
p1=ori_shape[0]//self.vq_size, p2=ori_shape[1]//self.vq_size, p3=ori_shape[2]//self.vq_size)
h_Unet.append(Unet_feature)
for idx, (block1, spatial_attn1, block2, spatial_attn2,downsample) in enumerate(self.downs):
if idx <self.feature_fusion :
x = x + h_Unet.pop(0)
x = block1(x, t)
x = spatial_attn1(x)
h.append(x)
x = block2(x, t)
x = spatial_attn2(x)
h.append(x)
x = downsample(x)
x = self.mid_block1(x, t)
x = self.mid_spatial_attn(x)
x = self.mid_block2(x, t)
for idx, (block1, spatial_attn1,block2, spatial_attn2, upsample) in enumerate(self.ups):
if len(self.ups)-idx <= 2:
x = x + h_Unet.pop(0)
x = torch.cat((x, h.pop()), dim=1)
x = block1(x, t)
x = spatial_attn1(x)
x = torch.cat((x, h.pop()), dim=1)
x = block2(x, t)
x = spatial_attn2(x)
x = upsample(x)
x = torch.cat((x, r), dim=1)
return self.final_conv(x)
def unpatchify_voxels(self, x0):
"""
input: (N, T, patch_size * patch_size * patch_size * C) (N, 64, 8*8*8*3)
voxels: (N, C, X, Y, Z) (N, 3, 32, 32, 32)
"""
c = self.dim
p = self.patch_size
x,y,z = np.asarray(self.sub_volume_size) // self.patch_size
assert x * y * z == x0.shape[1]
x0 = x0.reshape(shape=(x0.shape[0], x, y, z, p, p, p, c))
x0 = torch.einsum('nxyzpqrc->ncxpyqzr', x0)
volume = x0.reshape(shape=(x0.shape[0], c, x * p, y * p, z * p))
return volume
# gaussian diffusion trainer class
def extract(a, t, x_shape):
b, *_ = t.shape
out = a.gather(-1, t)
return out.reshape(b, *((1,) * (len(x_shape) - 1)))
def cosine_beta_schedule(timesteps, s=0.008):
"""
cosine schedule
as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
"""
steps = timesteps + 1
x = torch.linspace(0, timesteps, steps, dtype=torch.float64)
alphas_cumprod = torch.cos(
((x / timesteps) + s) / (1 + s) * torch.pi * 0.5) ** 2
alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
return torch.clip(betas, 0, 0.9999)
class GaussianDiffusion(nn.Module):
def __init__(
self,
text_use_bert_cls=False,
channels=3,
timesteps=1000,
loss_type='l1',
use_dynamic_thres=False, # from the Imagen paper
dynamic_thres_percentile=0.9,
):
super().__init__()
self.channels = channels
betas = cosine_beta_schedule(timesteps)
alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, axis=0)
alphas_cumprod_prev = F.pad(alphas_cumprod[:-1], (1, 0), value=1.)
timesteps, = betas.shape
self.num_timesteps = int(timesteps)
self.loss_type = loss_type
# register buffer helper function that casts float64 to float32
def register_buffer(name, val): return self.register_buffer(
name, val.to(torch.float32))
register_buffer('betas', betas)
register_buffer('alphas_cumprod', alphas_cumprod)
register_buffer('alphas_cumprod_prev', alphas_cumprod_prev)
# calculations for diffusion q(x_t | x_{t-1}) and others
register_buffer('sqrt_alphas_cumprod', torch.sqrt(alphas_cumprod))
register_buffer('sqrt_one_minus_alphas_cumprod',
torch.sqrt(1. - alphas_cumprod))
register_buffer('log_one_minus_alphas_cumprod',
torch.log(1. - alphas_cumprod))
register_buffer('sqrt_recip_alphas_cumprod',
torch.sqrt(1. / alphas_cumprod))
register_buffer('sqrt_recipm1_alphas_cumprod',
torch.sqrt(1. / alphas_cumprod - 1))
# calculations for posterior q(x_{t-1} | x_t, x_0)
posterior_variance = betas * \
(1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
# above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
register_buffer('posterior_variance', posterior_variance)
# below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
register_buffer('posterior_log_variance_clipped',
torch.log(posterior_variance.clamp(min=1e-20)))
register_buffer('posterior_mean_coef1', betas *
torch.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))
register_buffer('posterior_mean_coef2', (1. - alphas_cumprod_prev)
* torch.sqrt(alphas) / (1. - alphas_cumprod))
# text conditioning parameters
self.text_use_bert_cls = text_use_bert_cls
# dynamic thresholding when sampling
self.use_dynamic_thres = use_dynamic_thres
self.dynamic_thres_percentile = dynamic_thres_percentile
def q_mean_variance(self, x_start, t):
mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
log_variance = extract(
self.log_one_minus_alphas_cumprod, t, x_start.shape)
return mean, variance, log_variance
def predict_start_from_noise(self, x_t, t, noise):
return (
extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
)
def q_posterior(self, x_start, x_t, t):
posterior_mean = (
extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
)
posterior_variance = extract(self.posterior_variance, t, x_t.shape)
posterior_log_variance_clipped = extract(
self.posterior_log_variance_clipped, t, x_t.shape)
return posterior_mean, posterior_variance, posterior_log_variance_clipped
def p_mean_variance(self, denoise_fn,x, t, clip_denoised: bool, y=None, res=None,hint = None,cond_scale=1.):
if hint == None:
x_recon = self.predict_start_from_noise(
x, t=t, noise=denoise_fn(x, t, y=y,res=res))
else:
x_recon = self.predict_start_from_noise(
x, t=t, noise=denoise_fn(x, t, y=y, res=res,hint = hint))
if clip_denoised:
s = 1.
if self.use_dynamic_thres:
s = torch.quantile(
rearrange(x_recon, 'b ... -> b (...)').abs(),
self.dynamic_thres_percentile,
dim=-1
)
s.clamp_(min=1.)
s = s.view(-1, *((1,) * (x_recon.ndim - 1)))
# clip by threshold, depending on whether static or dynamic
x_recon = x_recon.clamp(-s, s) / s
model_mean, posterior_variance, posterior_log_variance = self.q_posterior(
x_start=x_recon, x_t=x, t=t)
return model_mean, posterior_variance, posterior_log_variance
@torch.inference_mode()
def p_sample(self, denoise_fn , x, t, y=None, res= None,hint = None,cond_scale=1., clip_denoised=True):
b, *_, device = *x.shape, x.device
model_mean, _, model_log_variance = self.p_mean_variance(denoise_fn,
x=x, t=t, clip_denoised=clip_denoised, y=y, res=res,hint = hint,cond_scale=cond_scale)
noise = torch.randn_like(x)
# no noise when t == 0
nonzero_mask = (1 - (t == 0).float()).reshape(b,
*((1,) * (len(x.shape) - 1)))
return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
@torch.inference_mode()
def p_sample_loop(self, denoise_fn, z, y=None, res=None,cond_scale=1.,hint = None):
device = self.betas.device
b = z.shape[0]
img = default(z, lambda: torch.randn_like(z , device= device))
for i in tqdm(reversed(range(0, self.num_timesteps)), desc='sampling loop time step', total=self.num_timesteps):
img = self.p_sample(denoise_fn, img, torch.full(
(b,), i, device=device, dtype=torch.long), y=y, res=res,cond_scale=cond_scale,hint= hint)
return img
@torch.inference_mode()
def sample(self, denoise_fn, z, y=None, res=None,cond_scale=1.,hint = None, strategy='ddpm', eta=0.0, ddim_steps= 100):
if strategy == 'ddpm':
return self.p_sample_loop(
denoise_fn, z, y, res, cond_scale, hint
)
elif strategy == 'ddim':
return self.p_sample_loop_ddim(
denoise_fn, z, y, res, cond_scale, hint, eta, ddim_steps
)
else:
raise NotImplementedError
@torch.inference_mode()
def ddim_sample(self, denoise_fn, x, x_cond, t, t_prev, y=None, hint=None, cond_scale=1., clip_denoised=False):
b, *_, device = *x.shape, x.device
# Get predicted x0
if hint is None:
x_recon = denoise_fn(torch.cat((x, x_cond), dim=1), t, y=y)
else:
x_recon = denoise_fn(torch.cat((x, x_cond), dim=1), t, y=y, hint=hint)
if clip_denoised:
s = 1.
if self.use_dynamic_thres:
s = torch.quantile(
rearrange(x_recon, 'b ... -> b (...)').abs(),
self.dynamic_thres_percentile,
dim=-1
)
s.clamp_(min=1.)
s = s.view(-1, *((1,) * (x_recon.ndim - 1)))
x_recon = x_recon.clamp(-s, s) / s
alpha_t = self.alphas_cumprod[t].view(-1, 1, 1, 1)
alpha_prev = self.alphas_cumprod[t_prev].view(-1, 1, 1, 1)
sqrt_alpha_t = alpha_t.sqrt()
sqrt_alpha_prev = alpha_prev.sqrt()
# compute direction pointing to x_t
eps = (x - sqrt_alpha_t * x_recon) / (1 - alpha_t).sqrt()
# deterministic DDIM update
x_prev = sqrt_alpha_prev * x_recon + (1 - alpha_prev).sqrt() * eps
return x_prev
@torch.inference_mode()
def p_sample_loop_ddim(self, denoise_fn, z, y=None, res=None, cond_scale=1., hint=None, eta=0.0, ddim_steps=100):
device = self.betas.device
b = z.shape[0]
img = default(z, lambda: torch.randn_like(z, device=device))
# DDIM time schedule
times = torch.linspace(0, self.num_timesteps - 1, steps=ddim_steps).long().flip(0).to(device)
for i in tqdm(range(ddim_steps), desc="DDIM Sampling", total=ddim_steps):
t = times[i]
t_prev = times[i + 1] if i < ddim_steps - 1 else torch.tensor(0, device=device)
t_batch = torch.full((b,), t, device=device, dtype=torch.long)
t_prev_batch = torch.full((b,), t_prev, device=device, dtype=torch.long)
img = self.p_sample_ddim(
denoise_fn, img, t_batch, t_prev_batch,
y=y, res=res, hint=hint, cond_scale=cond_scale,
eta=eta, clip_denoised=True
)
return img
@torch.inference_mode()
def p_sample_ddim(self, denoise_fn, x, t, t_prev, y=None, res=None, hint=None, cond_scale=1., clip_denoised=True, eta=0.0):
# Predict noise (ε_theta)
if hint is None:
noise_pred = denoise_fn(x, t, y=y, res=res)
else:
noise_pred = denoise_fn(x, t, y=y, res=res, hint=hint)
# Predict x0
x0 = self.predict_start_from_noise(x, t=t, noise=noise_pred)
if clip_denoised:
s = 1.
if self.use_dynamic_thres:
s = torch.quantile(
rearrange(x0, 'b ... -> b (...)').abs(),
self.dynamic_thres_percentile,
dim=-1
)
s.clamp_(min=1.)
s = s.view(-1, *((1,) * (x0.ndim - 1)))
x0 = x0.clamp(-s, s) / s
# DDIM formula
alpha_t = self.alphas_cumprod[t].view(-1, 1, 1, 1)
alpha_prev = self.alphas_cumprod[t_prev].view(-1, 1, 1, 1)
sigma = eta * ((1 - alpha_prev) / (1 - alpha_t) * (1 - alpha_t / alpha_prev)).sqrt()
pred_dir = (1 - alpha_prev).sqrt() * noise_pred
x_prev = alpha_prev.sqrt() * x0 + pred_dir
if eta > 0:
noise = torch.randn_like(x)
x_prev = x_prev + sigma * noise
return x_prev
@torch.inference_mode()
def interpolate(self, x1, x2, t=None, lam=0.5):
b, *_, device = *x1.shape, x1.device
t = default(t, self.num_timesteps - 1)
assert x1.shape == x2.shape
t_batched = torch.stack([torch.tensor(t, device=device)] * b)
xt1, xt2 = map(lambda x: self.q_sample(x, t=t_batched), (x1, x2))
img = (1 - lam) * xt1 + lam * xt2
for i in tqdm(reversed(range(0, t)), desc='interpolation sample time step', total=t):
img = self.p_sample(img, torch.full(
(b,), i, device=device, dtype=torch.long))
return img
def q_sample(self, x_start, t, noise=None):
noise = default(noise, lambda: torch.randn_like(x_start))
return (
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
extract(self.sqrt_one_minus_alphas_cumprod,
t, x_start.shape) * noise
)
def p_losses(self, denoise_fn,x_start, t, y=None, res=None,noise=None, hint = None,**kwargs):
b, c, f, h, w, device = *x_start.shape, x_start.device
noise = default(noise, lambda: torch.randn_like(x_start))
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
if is_list_str(y):
y = bert_embed(
tokenize(y), return_cls_repr=self.text_use_bert_cls)
y = y.to(device)
if hint == None:
x_recon = denoise_fn(x_noisy, t, y=y,res=res)
else:
x_recon = denoise_fn(x_noisy, t, y=y, res=res,hint = hint)
# time_rel_pos_bias = self.time_rel_pos_bias(x.shape[2], device=x.device)
if self.loss_type == 'l1':
loss = F.l1_loss(noise, x_recon)
elif self.loss_type == 'l2':
loss = F.mse_loss(noise, x_recon)
else:
raise NotImplementedError()
return loss | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | dataset/vqgan.py | .py | 3,077 | 77 |
import torch
from torch.utils.data.dataset import Dataset
import os
import random
import glob
import torchio as tio
import json
import random
class VQGANDataset(Dataset):
def __init__(self, root_dir=None, augmentation=False,split='train',stage = 1,patch_size = 64):
randnum = 216
self.file_names = []
self.stage = stage
print(root_dir)
if root_dir.endswith('json'):
with open(root_dir) as json_file:
dataroots = json.load(json_file)
for key,value in dataroots.items():
if type(value) == list:
for path in value:
self.file_names += (glob.glob(os.path.join(path, './*.nii.gz'), recursive=True))
else:
self.file_names += (glob.glob(os.path.join(value, './*.nii.gz'), recursive=True))
else:
self.root_dir = root_dir
self.file_names = glob.glob(os.path.join(
root_dir, './*.nii.gz'), recursive=True)
random.seed(randnum)
random.shuffle(self.file_names )
self.split = split
self.augmentation = augmentation
if split == 'train':
self.file_names = self.file_names[:-40]
elif split == 'val':
self.file_names = self.file_names[-40:]
self.augmentation = False
self.patch_sampler = tio.data.UniformSampler(patch_size)
self.patch_sampler_192 = tio.data.UniformSampler((192,192,192))
self.patch_sampler_256 = tio.data.UniformSampler((256,256,256))
self.randomflip = tio.RandomFlip( axes=(0,1),flip_probability=0.5)
print(f'With patch size {str(patch_size)}')
def __len__(self):
return len(self.file_names)
def __getitem__(self, index):
path = self.file_names[index]
whole_img = tio.ScalarImage(path)
if self.stage == 1 and self.split == 'train':
img = None
while img== None or img.data.sum() ==0:
img = next(self.patch_sampler(tio.Subject(image = whole_img)))['image']
elif self.stage ==2 and self.split == 'train':
img = whole_img
if img.shape[1]*img.shape[2]*img.shape[3] > 256*256*128:
img = next(self.patch_sampler_192(tio.Subject(image = img)))['image']
elif self.split =='val':
img = whole_img
if img.shape[1]*img.shape[2]*img.shape[3] > 256*256*256:
img = next(self.patch_sampler_256(tio.Subject(image = img)))['image']
if self.augmentation:
img = self.randomflip(img)
imageout = img.data
if self.augmentation and random.random()>0.5:
imageout = torch.rot90(imageout,dims=(1,2))
imageout = imageout * 2 - 1
imageout = imageout.transpose(1,3).transpose(2,3)
imageout = imageout.type(torch.float32)
if self.split =='val':
return {'data': imageout , 'affine' : img.affine , 'path':path}
else:
return {'data': imageout}
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | dataset/tr_generate.py | .py | 873 | 31 |
import torch
from torch.utils.data.dataset import Dataset
import os
import glob
import torchio as tio
import json
class GenerateTrData_dataset(Dataset):
def __init__(self, root_dir=None,no_norm=False):
self.no_norm = no_norm
self.root_dir = root_dir
self.all_files = glob.glob(os.path.join(root_dir, './*.nii.gz'))
self.file_num = len(self.all_files)
print(f"Total files : {self.file_num}")
print('no_norm:',self.no_norm)
def __len__(self):
return self.file_num
def __getitem__(self, index):
path = self.all_files[index]
img = tio.ScalarImage(path)
imageout = img.data
if not self.no_norm:
imageout = imageout * 2 - 1
imageout = imageout.transpose(1,3).transpose(2,3)
imageout = imageout.type(torch.float32)
return imageout, path | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | dataset/__init__.py | .py | 97 | 6 |
from dataset.vqgan import VQGANDataset
from dataset.tr_generate import GenerateTrData_dataset
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | dataset/vqgan_4x.py | .py | 2,973 | 76 |
import torch
from torch.utils.data.dataset import Dataset
import os
import random
import glob
import torchio as tio
import json
import random
class VQGANDataset_4x(Dataset):
def __init__(self, root_dir=None, augmentation=False,split='train',stage = 1,patch_size = 64):
randnum = 216
self.file_names = []
self.stage = stage
print(root_dir)
if root_dir.endswith('json'):
with open(root_dir) as json_file:
dataroots = json.load(json_file)
for key,value in dataroots.items():
if type(value) == list:
for path in value:
self.file_names += (glob.glob(os.path.join(path, './*.nii.gz'), recursive=True))
else:
self.file_names += (glob.glob(os.path.join(value, './*.nii.gz'), recursive=True))
else:
self.root_dir = root_dir
self.file_names = glob.glob(os.path.join(
root_dir, './*.nii.gz'), recursive=True)
random.seed(randnum)
random.shuffle(self.file_names )
self.split = split
self.augmentation = augmentation
if split == 'train':
self.file_names = self.file_names[:-40]
elif split == 'val':
self.file_names = self.file_names[-40:]
self.patch_sampler = tio.data.UniformSampler(patch_size)
self.patch_sampler_256 = tio.data.UniformSampler((256,256,128))#
self.randomflip = tio.RandomFlip( axes=(0,1),flip_probability=0.5)
print(f'With patch size {str(patch_size)}')
def __len__(self):
return len(self.file_names)
def __getitem__(self, index):
path = self.file_names[index]
whole_img = tio.ScalarImage(path)
if self.stage == 1 and self.split == 'train':
img = None
while img== None or img.data.sum() ==0:
img = next(self.patch_sampler(tio.Subject(image = whole_img)))['image']
elif self.stage ==2 and self.split == 'train':
img = whole_img
if img.shape[1]*img.shape[2]*img.shape[3] > 256*256*128:#
img = next(self.patch_sampler_256(tio.Subject(image = img)))['image']
elif self.split =='val':
img = whole_img
if img.shape[1]*img.shape[2]*img.shape[3] > 256*256*128:#
img = next(self.patch_sampler_256(tio.Subject(image = img)))['image']
if self.augmentation:
img = self.randomflip(img)
imageout = img.data
if self.augmentation and random.random()>0.5:
imageout = torch.rot90(imageout,dims=(1,2))
imageout = imageout * 2 - 1
imageout = imageout.transpose(1,3).transpose(2,3)
imageout = imageout.type(torch.float32)
if self.split =='val':
return {'data': imageout , 'affine' : img.affine , 'path':path}
else:
return {'data': imageout}
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | dataset/Singleres_dataset.py | .py | 1,832 | 49 |
import numpy as np
import torch
from torch.utils.data.dataset import Dataset
import os
import glob
import numpy as np
import json
import torchio as tio
class Singleres_dataset(Dataset):
def __init__(self, root_dir=None, resolution= [32,32,32], generate_latents= False):
self.all_files = []
self.resolution = resolution
self.generate_latents = generate_latents
if root_dir.endswith('json'):
with open(root_dir) as json_file:
dataroots = json.load(json_file)
for key,value in dataroots.items():
if not generate_latents:
value = value+'_latents'
file_paths = glob.glob(value+'/*.nii.gz', recursive=True)
if len(file_paths[0]) == 0:
raise FileNotFoundError(f"No .nii.gz files found in directory: {value}")
for file_path in file_paths:
self.all_files.append({key:file_path})
self.file_num = len(self.all_files)
print(f"Total files : {self.file_num}")
def __len__(self):
return self.file_num
def __getitem__(self, index):
if self.generate_latents:
file_path = list(self.all_files[index].items())[0][1]
img = tio.ScalarImage(file_path)
img_data = img.data.to(torch.float32)
imageout = img_data * 2 - 1
imageout = imageout.transpose(1,3).transpose(2,3)
return imageout, file_path
else:
cls_idx , file_path = list(self.all_files[index].items())[0][0] , list(self.all_files[index].items())[0][1]
latent = tio.ScalarImage(file_path)
latent = latent.data.to(torch.float32)
return latent, torch.tensor(int(cls_idx)), torch.tensor(self.resolution)/64.0 | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | evaluation/class_conditional_generation.py | .py | 5,060 | 116 | import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
import argparse
import os
from ddpm.BiFlowNet import GaussianDiffusion
from ddpm import BiFlowNet
from AutoEncoder.model.PatchVolume import patchvolumeAE
import torchio as tio
import numpy as np
class_res_mapping={0:[(32,32,32),(16,32,32)],1:[(32,32,32),(64,64,64)],2:[(32,32,32),(64,32,32)],3:[(24,24,24)],4:[(24,24,24)],5:[(16,32,32)],6:[(8,40,40)]}
spacing_mapping={0:[(1,1,1),(1,1,1)],1:[(1.25,1.25,1.25),(0.7,0.7,1)],2:[(1.25,1.25,1.25),(1.25,1.25,1.25)],3:[(1,1,1)],4:[(1,1,1)],5:[(1.2,1.2,2)],6:[(1,1,1)]}
name_mapping = {0:'CTHeadNeck',1:'CTChestAbdomen',2:'CTLegs',3:'MRTIBrain',4:'MRT2Brain',5:'MRAbdomen',6:'MRKnee'}
def main(args):
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
import random
import numpy as np
random.seed(args.seed)
np.random.seed(args.seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
args.output_dir = os.path.join(args.output_dir, f'seed_{args.seed}')
os.makedirs(args.output_dir, exist_ok=True)
model = BiFlowNet(
dim=args.model_dim,
dim_mults=args.dim_mults,
channels=args.volume_channels,
init_kernel_size=3,
cond_classes=args.num_classes,
learn_sigma=False,
use_sparse_linear_attn=args.use_attn,
vq_size=args.vq_size,
num_mid_DiT = args.num_dit,
patch_size = args.patch_size
).cuda()
diffusion= GaussianDiffusion(
channels=args.volume_channels,
timesteps=args.timesteps,
loss_type=args.loss_type,
).cuda()
model_ckpt = torch.load(args.model_ckpt, map_location=torch.device('cpu'))
model.load_state_dict(model_ckpt['ema'], strict = True)
model = model.cuda()
model.train()
AE = patchvolumeAE.load_from_checkpoint(args.AE_ckpt).cuda()
AE.eval()
device = torch.device("cuda")
output_dir = args.output_dir
os.makedirs(output_dir , exist_ok=True)
for key, value in class_res_mapping.items():
class_idx = key
anatomy_name = name_mapping[key]
for idx,res in enumerate(value):
with torch.no_grad():
spacing = tuple(spacing_mapping[key][idx])
affine = np.diag(spacing + (1,))
output_name = str(f'{anatomy_name}_{res[0]*8}x{res[1]*8}x{res[2]*8}.nii.gz')
z = torch.randn(1, args.volume_channels, res[0], res[1],res[2], device=device)
y = torch.tensor([class_idx], device=device)
res_emb = torch.tensor(res,device=device)/64.0
samples = diffusion.sample(
model, z, y = y, res=res_emb, strategy = args.sampling_strategy
)
samples = (((samples + 1.0) / 2.0) * (AE.codebook.embeddings.max() -
AE.codebook.embeddings.min())) + AE.codebook.embeddings.min()
if res[0]*res[1]*res[2] <= 32*32*32:
volume = AE.decode(samples, quantize=True)
else:
volume = AE.decode_sliding(samples, quantize=True)
volume_path = os.path.join(output_dir,output_name)
volume = volume.detach().squeeze(0).cpu()
volume = volume.transpose(1,3).transpose(1,2)
tio.ScalarImage(tensor = volume,affine = affine).save(volume_path)
torch.cuda.empty_cache()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--AE-ckpt", type=str, required=True,help="Path to Autoencoder Checkpoint ")
parser.add_argument("--model-ckpt", type=str, required=True, help="Path to Diffusion Model Checkpoint")
parser.add_argument("--output-dir", type=str, required=True, help="Path to save the generated images")
parser.add_argument("--timesteps", type=int, default=1000)
parser.add_argument("--model-dim", type=int, default=72)
parser.add_argument("--patch-size", type=int, default=1)
parser.add_argument("--num-dit", type=int, default=1)
parser.add_argument("--dim-mults", type=list, default=[1,1,2,4,8])
parser.add_argument("--use-attn", type=list, default=[0,0,0,1,1])
parser.add_argument("--sampling-strategy", type=str, default='ddpm',help='The sampling strategy')
parser.add_argument("--ddim_steps", type=int, default=100,help='DDIM sampling steps')
parser.add_argument("--num-classes", type=int, default=7)
parser.add_argument("--loss-type", type=str, default='l1')
parser.add_argument("--volume-channels", type=int, default=8)
parser.add_argument("--vq-size", type=int, default=64)
parser.add_argument("--seed", type=int, default=64)
args = parser.parse_args()
main(args)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | evaluation/class_conditional_generation_4x.py | .py | 5,068 | 115 | import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
import argparse
import os
from ddpm.BiFlowNet import GaussianDiffusion
from ddpm import BiFlowNet
from AutoEncoder.model.PatchVolume import patchvolumeAE
import torchio as tio
import numpy as np
class_res_mapping={0:[(64,64,64),(32,64,64)],1:[(64,64,64),(128,128,128)],2:[(64,64,64),(128,64,64)],3:[(48,48,48)],4:[(48,48,48)],5:[(32,64,64)],6:[(16,80,80)]}
spacing_mapping={0:[(1,1,1),(1,1,1)],1:[(1.25,1.25,1.25),(0.7,0.7,1)],2:[(1.25,1.25,1.25),(1.25,1.25,1.25)],3:[(1,1,1)],4:[(1,1,1)],5:[(1.2,1.2,2)],6:[(1,1,1)]}
name_mapping = {0:'CTHeadNeck',1:'CTChestAbdomen',2:'CTLegs',3:'MRTIBrain',4:'MRT2Brain',5:'MRAbdomen',6:'MRKnee'}
def main(args):
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
import random
import numpy as np
random.seed(args.seed)
np.random.seed(args.seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
args.output_dir = os.path.join(args.output_dir, f'seed_{args.seed}')
os.makedirs(args.output_dir, exist_ok=True)
model = BiFlowNet(
dim=args.model_dim,
dim_mults=args.dim_mults,
channels=args.volume_channels,
init_kernel_size=3,
cond_classes=args.num_classes,
learn_sigma=False,
use_sparse_linear_attn=args.use_attn,
vq_size=args.vq_size,
num_mid_DiT = args.num_dit,
patch_size = args.patch_size
).cuda()
diffusion= GaussianDiffusion(
channels=args.volume_channels,
timesteps=args.timesteps,
loss_type=args.loss_type,
).cuda()
model_ckpt = torch.load(args.model_ckpt, map_location=torch.device('cpu'))
model.load_state_dict(model_ckpt['ema'], strict = True)
model = model.cuda()
model.train()
AE = patchvolumeAE.load_from_checkpoint(args.AE_ckpt).cuda()
AE.eval()
device = torch.device("cuda")
output_dir = args.output_dir
os.makedirs(output_dir , exist_ok=True)
for key, value in class_res_mapping.items():
class_idx = key
anatomy_name = name_mapping[key]
for idx,res in enumerate(value):
with torch.no_grad():
spacing = tuple(spacing_mapping[key][idx])
affine = np.diag(spacing + (1,))
output_name = str(f'{anatomy_name}_{res[0]*4}x{res[1]*4}x{res[2]*4}.nii.gz')
z = torch.randn(1, args.volume_channels, res[0], res[1],res[2], device=device)
y = torch.tensor([class_idx], device=device)
res_emb = torch.tensor(res,device=device)/64.0
samples = diffusion.sample(
model, z, y = y, res=res_emb, strategy = args.sampling_strategy
)
samples = (((samples + 1.0) / 2.0) * (AE.codebook.embeddings.max() -
AE.codebook.embeddings.min())) + AE.codebook.embeddings.min()
if res[0]*res[1]*res[2] <= 64*64*64:
volume = AE.decode(samples, quantize=True)
else:
volume = AE.decode_sliding(samples, quantize=True)
volume_path = os.path.join(output_dir,output_name)
volume = volume.detach().squeeze(0).cpu()
volume = volume.transpose(1,3).transpose(1,2)
tio.ScalarImage(tensor = volume,affine = affine).save(volume_path)
torch.cuda.empty_cache()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--AE-ckpt", type=str, required=True,help="Path to Autoencoder Checkpoint ")
parser.add_argument("--model-ckpt", type=str, required=True, help="Path to Diffusion Model Checkpoint")
parser.add_argument("--output-dir", type=str, required=True, help="Path to save the generated images")
parser.add_argument("--timesteps", type=int, default=1000)
parser.add_argument("--model-dim", type=int, default=72)
parser.add_argument("--patch-size", type=int, default=2)
parser.add_argument("--num-dit", type=int, default=1)
parser.add_argument("--dim-mults", type=list, default=[1,1,2,4,8])
parser.add_argument("--use-attn", type=list, default=[0,0,0,1,1])
parser.add_argument("--sampling-strategy", type=str, default='ddpm',help='The sampling strategy')
parser.add_argument("--ddim_steps", type=int, default=100,help='DDIM sampling steps')
parser.add_argument("--num-classes", type=int, default=7)
parser.add_argument("--loss-type", type=str, default='l1')
parser.add_argument("--volume-channels", type=int, default=8)
parser.add_argument("--vq-size", type=int, default=64)
parser.add_argument("--seed", type=int, default=19)
args = parser.parse_args()
main(args) | Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | train/generate_training_latent.py | .py | 2,022 | 60 |
import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
from torch.utils.data import DataLoader
from AutoEncoder.model.PatchVolume import patchvolumeAE
from dataset.Singleres_dataset import Singleres_dataset
import torch
from os.path import join
import argparse
import torchio as tio
os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"
def generate(args):
tr_dataset = Singleres_dataset(root_dir=args.data_path,generate_latents = True)
tr_dataloader = DataLoader(tr_dataset, batch_size=args.batch_size,
shuffle=False, num_workers=args.num_workers)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
AE_ckpt = args.AE_ckpt
AE = patchvolumeAE.load_from_checkpoint(AE_ckpt)
AE = AE.to(device)
AE.eval()
for sample,paths in tr_dataloader:
sample = sample.cuda()
with torch.no_grad():
z = AE.patch_encode(sample,patch_size = 64)
output = ((z - AE.codebook.embeddings.min()) /
(AE.codebook.embeddings.max() -
AE.codebook.embeddings.min())) * 2.0 - 1.0
output = output.cpu()
for idx, path in enumerate(paths):
output_ = output[idx]
dir_name = os.path.basename(os.path.dirname(path))
latent_dir_name = dir_name + '_latents'
path = path.replace(dir_name, latent_dir_name)
os.makedirs(os.path.dirname(path), exist_ok=True)
img = tio.ScalarImage(tensor = output_ )
img.save(path)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--data-path", type=str, required=True)
parser.add_argument("--AE-ckpt", type=str, required=True)
parser.add_argument("--batch-size", type=int, default=2)
parser.add_argument("--num-workers", type=int, default=8)
args = parser.parse_args()
generate(args)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | train/train_PatchVolume.py | .py | 3,501 | 94 |
import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.data import DataLoader
from AutoEncoder.model.PatchVolume import patchvolumeAE
from train.callbacks import VolumeLogger
from dataset.vqgan_4x import VQGANDataset_4x
from dataset.vqgan import VQGANDataset
import argparse
from omegaconf import OmegaConf
os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"
def main(cfg_path: str):
cfg = OmegaConf.load(cfg_path)
pl.seed_everything(cfg.model.seed)
downsample_ratio = cfg.model.downsample[0]
if downsample_ratio == 4:
train_dataset = VQGANDataset_4x(
root_dir=cfg.dataset.root_dir,augmentation=True,split='train',stage=cfg.model.stage)
val_dataset = VQGANDataset_4x(
root_dir=cfg.dataset.root_dir,augmentation=False,split='val')
else:
train_dataset = VQGANDataset(
root_dir=cfg.dataset.root_dir,augmentation=True,split='train',stage=cfg.model.stage)
val_dataset = VQGANDataset(
root_dir=cfg.dataset.root_dir,augmentation=False,split='val')
train_dataloader = DataLoader(dataset=train_dataset, batch_size=cfg.model.batch_size,shuffle=True,
num_workers=cfg.model.num_workers)
val_dataloader = DataLoader(val_dataset, batch_size=1,
shuffle=True, num_workers=cfg.model.num_workers)
# automatically adjust learning rate
bs, lr, ngpu = cfg.model.batch_size, cfg.model.lr, cfg.model.gpus
print("Setting learning rate to {:.2e}, batch size to {}, ngpu to {}".format(lr, bs, ngpu))
model = patchvolumeAE(cfg)
callbacks = []
callbacks.append(ModelCheckpoint(monitor='val/recon_loss',
save_top_k=3, mode='min', filename='latest_checkpoint'))
callbacks.append(ModelCheckpoint(every_n_train_steps=3000,
save_top_k=-1, filename='{epoch}-{step}-{train/recon_loss:.2f}'))
callbacks.append(ModelCheckpoint(every_n_train_steps=10000, save_top_k=-1,
filename='{epoch}-{step}-10000-{train/recon_loss:.2f}'))
callbacks.append(VolumeLogger(
batch_frequency=1500, max_volumes=4, clamp=True))
logger = TensorBoardLogger(cfg.model.default_root_dir, name="my_model")
trainer = pl.Trainer(
accelerator='gpu',
devices=cfg.model.gpus,
default_root_dir=cfg.model.default_root_dir,
strategy='ddp_find_unused_parameters_true',
callbacks=callbacks,
max_steps=cfg.model.max_steps,
max_epochs=cfg.model.max_epochs,
precision=cfg.model.precision,
check_val_every_n_epoch=2,
num_sanity_val_steps = 2,
logger=logger
)
if cfg.model.resume_from_checkpoint and os.path.exists(cfg.model.resume_from_checkpoint):
print('will start from the recent ckpt %s' % cfg.model.resume_from_checkpoint)
trainer.fit(model, train_dataloader, val_dataloader,ckpt_path=cfg.model.resume_from_checkpoint)
else:
trainer.fit(model, train_dataloader, val_dataloader)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--config", type=str, required=True)
args = parser.parse_args()
main(args.config)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | train/callbacks.py | .py | 3,178 | 91 |
import os
import numpy as np
from PIL import Image
import torch
import torchvision
from pytorch_lightning.callbacks import Callback
from pytorch_lightning.utilities.rank_zero import rank_zero_only
import torchio as tio
class VolumeLogger(Callback):
def __init__(self, batch_frequency, max_volumes, clamp=True, increase_log_steps=True):
super().__init__()
self.batch_freq = batch_frequency
self.max_volumes = max_volumes
self.log_steps = [
2 ** n for n in range(int(np.log2(self.batch_freq)) + 1)]
if not increase_log_steps:
self.log_steps = [self.batch_freq]
self.clamp = clamp
@rank_zero_only
def log_local(self, save_dir, split, volumes,
global_step, current_epoch, batch_idx):
root = os.path.join(save_dir, "volumes", split)
for k in volumes:
volumes[k] = (volumes[k] + 1.0)/2.0
for idx,volume in enumerate(volumes[k]):
volume = volume.transpose(1,3).transpose(1,2)
filename = "{}_gs-{:06}_e-{:06}_b-{:06}-{}.nii.gz".format(
k,
global_step,
current_epoch,
batch_idx,
idx)
path = os.path.join(root, filename)
os.makedirs(os.path.split(path)[0], exist_ok=True)
tio.ScalarImage(tensor = volume).save(path)
def log_vid(self, pl_module, batch, batch_idx, split="train"):
if (self.check_frequency(batch_idx) and # batch_idx % self.batch_freq == 0
hasattr(pl_module, "log_volumes") and
callable(pl_module.log_volumes) and
self.max_volumes > 0):
# print(batch_idx, self.batch_freq, self.check_frequency(batch_idx))
logger = type(pl_module.logger)
is_train = pl_module.training
if is_train:
pl_module.eval()
with torch.no_grad():
volumes = pl_module.log_volumes(
batch, split=split, batch_idx=batch_idx)
for k in volumes:
N = min(volumes[k].shape[0], self.max_volumes)
volumes[k] = volumes[k][:N]
if isinstance(volumes[k], torch.Tensor):
volumes[k] = volumes[k].detach().cpu()
self.log_local(pl_module.logger.save_dir, split, volumes,
pl_module.global_step, pl_module.current_epoch, batch_idx)
if is_train:
pl_module.train()
def check_frequency(self, batch_idx):
if (batch_idx % self.batch_freq) == 0 or (batch_idx in self.log_steps):
try:
self.log_steps.pop(0)
except IndexError:
pass
return True
return False
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx ):
self.log_vid(pl_module, batch, batch_idx, split="train")
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
self.log_vid(pl_module, batch, batch_idx, split="val")
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | train/train_BiFlowNet_SingleRes.py | .py | 13,901 | 362 |
import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import numpy as np
from collections import OrderedDict
from glob import glob
from time import time
import argparse
import logging
import os
from ddpm.BiFlowNet import GaussianDiffusion
from ddpm.BiFlowNet import BiFlowNet
from AutoEncoder.model.PatchVolume import patchvolumeAE
import torchio as tio
import copy
from torch.cuda.amp import autocast, GradScaler
import random
from torch.optim.lr_scheduler import StepLR
from dataset.Singleres_dataset import Singleres_dataset
from torch.utils.data import DataLoader
#################################################################################
# Training Helper Functions #
#################################################################################
@torch.no_grad()
def update_ema(ema_model, model, decay=0.9999):
"""
Step the EMA model towards the current model.
"""
ema_params = OrderedDict(ema_model.named_parameters())
model_params = OrderedDict(model.named_parameters())
for name, param in model_params.items():
# TODO: Consider applying only to params that require_grad to avoid small numerical changes of pos_embed
ema_params[name].mul_(decay).add_(param.data, alpha=1 - decay)
def requires_grad(model, flag=True):
"""
Set requires_grad flag for all parameters in a model.
"""
for p in model.parameters():
p.requires_grad = flag
def cleanup():
"""
End DDP training.
"""
dist.destroy_process_group()
def create_logger(logging_dir):
"""
Create a logger that writes to a log file and stdout.
"""
if dist.get_rank() == 0: # real logger
logging.basicConfig(
level=logging.INFO,
format='[\033[34m%(asctime)s\033[0m] %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
handlers=[logging.StreamHandler(), logging.FileHandler(f"{logging_dir}/log.txt")]
)
logger = logging.getLogger(__name__)
else: # dummy logger (does nothing)
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
return logger
def _ddp_dict(_dict):
new_dict = {}
for k in _dict:
new_dict['module.' + k] = _dict[k]
return new_dict
#################################################################################
# Training Loop #
#################################################################################
def get_optimizer_size_in_bytes(optimizer):
"""Calculates the size of the optimizer state in bytes."""
size_in_bytes = 0
for state in optimizer.state.values():
for k, v in state.items():
if isinstance(v, torch.Tensor):
size_in_bytes += v.numel() * v.element_size()
return size_in_bytes
def format_size(bytes_size):
"""Formats the size in bytes into a human-readable string."""
if bytes_size < 1024:
return f"{bytes_size} B"
elif bytes_size < 1024 ** 2:
return f"{bytes_size / 1024:.2f} KB"
elif bytes_size < 1024 ** 3:
return f"{bytes_size / 1024 ** 2:.2f} MB"
else:
return f"{bytes_size / 1024 ** 3:.2f} GB"
class EMA():
def __init__(self, beta):
super().__init__()
self.beta = beta
def update_model_average(self, ma_model, current_model):
for current_params, ma_params in zip(current_model.parameters(), ma_model.parameters()):
old_weight, up_weight = ma_params.data, current_params.data
ma_params.data = self.update_average(old_weight, up_weight)
def update_average(self, old, new):
if old is None:
return new
return old * self.beta + (1 - self.beta) * new
def main(args):
"""
Trains a new DiT model.
"""
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'
assert torch.cuda.is_available(), "Training currently requires at least one GPU."
start_epoch = 0
train_steps = 0
log_steps = 0
running_loss = 0
# Setup DDP:
dist.init_process_group("nccl")
print(dist.get_world_size())
rank = dist.get_rank()
device = rank % torch.cuda.device_count()
seed = args.global_seed * dist.get_world_size() + rank
torch.manual_seed(seed)
torch.cuda.set_device(device)
print(f"Starting rank={rank}, seed={seed}, world_size={dist.get_world_size()}.")
# Setup an experiment folder:
if rank == 0:
os.makedirs(args.results_dir, exist_ok=True) # Make results folder (holds all experiment subfolders)
if args.ckpt == None:
experiment_index = len(glob(f"{args.results_dir}/*"))
model_string_name = args.model
experiment_dir = f"{args.results_dir}/{experiment_index:03d}-{model_string_name}" # Create an experiment folder
else:
experiment_dir = os.path.dirname(os.path.dirname(args.ckpt))
checkpoint_dir = f"{experiment_dir}/checkpoints" # Stores saved model checkpoints
samples_dir = f"{experiment_dir}/samples"
os.makedirs(checkpoint_dir, exist_ok=True)
os.makedirs(samples_dir, exist_ok= True)
logger = create_logger(experiment_dir)
logger.info(f"Experiment directory created at {experiment_dir}")
else:
logger = create_logger(None)
# Create model:
model = BiFlowNet(
dim=args.model_dim,
dim_mults=args.dim_mults,
channels=args.volume_channels,
init_kernel_size=3,
cond_classes=args.num_classes,
learn_sigma=False,
use_sparse_linear_attn=args.use_attn,
vq_size=args.vq_size,
num_mid_DiT = args.num_dit,
patch_size = args.patch_size
).cuda()
diffusion= GaussianDiffusion(
channels=args.volume_channels,
timesteps=args.timesteps,
loss_type=args.loss_type,
).cuda()
ema = EMA(0.995)
ema_model = copy.deepcopy(model)
updata_ema_every = 10
step_start_ema = 2000
model = DDP(model.to(device), device_ids=[rank])
amp = args.enable_amp
scaler = GradScaler(enabled=amp)
if args.AE_ckpt:
AE = patchvolumeAE.load_from_checkpoint(args.AE_ckpt).cuda()
AE.eval()
else:
raise NotImplementedError()
logger.info(f"Model Parameters: {sum(p.numel() for p in model.parameters()):,}")
# Setup optimizer (we used default Adam betas=(0.9, 0.999) and a constant learning rate of 1e-4 in our paper):
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
print(args.epochs)
if args.ckpt:
checkpoint = torch.load(args.ckpt, map_location=torch.device('cpu'))
model.load_state_dict(_ddp_dict(checkpoint['model']), strict= True)
ema_model.load_state_dict(checkpoint['ema'], strict=True)
scaler.load_state_dict(checkpoint['scaler'])
opt.load_state_dict(checkpoint['opt'])
if 'epoch' in checkpoint:
start_epoch = checkpoint['epoch']
del checkpoint
logger.info(f'Using checkpoint: {args.ckpt}')
# Setup data:
dataset = Singleres_dataset(args.data_path, resolution=args.resolution)
loader = DataLoader(
dataset=dataset,
batch_size = args.batch_size,
num_workers=args.num_workers,
shuffle=True,
)
if args.ckpt:
train_steps = int(os.path.basename(args.ckpt).split('.')[0])
logger.info(f'Inital state: step = {train_steps}, epoch = {start_epoch}')
model.train()
ema_model.eval()
log_steps = 0
running_loss = 0
start_time = time()
logger.info(f"Training for {args.epochs} epochs...")
for epoch in range(start_epoch,args.epochs):
logger.info(f"Beginning epoch {epoch}...")
for z,y,res in loader:
b = z.shape[0]
z = z.to(device)
y = y.to(device)
res = res.to(device)
with autocast(enabled=amp):
t = torch.randint(0, diffusion.num_timesteps, (b,), device=device)
loss = diffusion.p_losses(model, z,t,y=y,res=res)
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
opt.zero_grad()
running_loss += loss.item()
log_steps += 1
train_steps += 1
if train_steps % updata_ema_every == 0:
if train_steps < step_start_ema:
ema_model.load_state_dict(model.module.state_dict(),strict= True)
else:
ema.update_model_average(ema_model,model)
if train_steps % args.log_every == 0:
torch.cuda.synchronize()
end_time = time()
steps_per_sec = log_steps / (end_time - start_time)
# Reduce loss history over all processes:
avg_loss = torch.tensor(running_loss / log_steps, device=device)
dist.all_reduce(avg_loss, op=dist.ReduceOp.SUM)
avg_loss = avg_loss.item() / dist.get_world_size()
logger.info(f"(step={train_steps:07d}) Train Loss: {avg_loss:.4f}, Train Steps/Sec: {steps_per_sec:.2f}, LR : {opt.state_dict()['param_groups'][0]['lr']:.6f}")
# Reset monitoring variables:
running_loss = 0
log_steps = 0
start_time = time()
# Save Diffusion checkpoint:
# train_steps = args.ckpt_every #
if train_steps % args.ckpt_every == 0 and train_steps > 0:
if rank == 0:
checkpoint = {
"model": model.module.state_dict(),
"ema": ema_model.state_dict(),
"scaler": scaler.state_dict(),
"opt":opt.state_dict(),
"args": args,
"epoch":epoch
}
checkpoint_path = f"{checkpoint_dir}/{train_steps:07d}.pt"
torch.save(checkpoint, checkpoint_path)
logger.info(f"Saved checkpoint to {checkpoint_path}")
if len(os.listdir(checkpoint_dir))>6:
os.remove(f"{checkpoint_dir}/{train_steps-6*args.ckpt_every:07d}.pt")
with torch.no_grad():
milestone = train_steps // args.ckpt_every
cls_num = np.random.choice(list(range(0,args.num_classes)))
# cls_num = 1 #
volume_size = args.resolution
z = torch.randn(1, args.volume_channels, volume_size[0], volume_size[1],volume_size[2], device=device)
y = torch.tensor([cls_num], device=device)
res = torch.tensor(volume_size,device=device)/64.0
samples = diffusion.p_sample_loop(
ema_model, z, y = y,res=res
)
samples = (((samples + 1.0) / 2.0) * (AE.codebook.embeddings.max() -
AE.codebook.embeddings.min())) + AE.codebook.embeddings.min()
torch.cuda.empty_cache()
volume = AE.decode(samples, quantize=True)
volume_path = os.path.join(samples_dir,str(f'{milestone}_{str(cls_num)}.nii.gz'))
volume = volume.detach().squeeze(0).cpu()
volume = volume.transpose(1,3).transpose(1,2)
tio.ScalarImage(tensor = volume).save(volume_path)
dist.barrier()
torch.cuda.empty_cache()
# scheduler.step()
model.eval() # important! This disables randomized embedding dropout
logger.info("Done!")
cleanup()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--data-path", type=str, required=True)
parser.add_argument("--results-dir", type=str, required=True)
parser.add_argument("--loss-type", type=str, default='l1')
parser.add_argument("--volume-channels", type=int, default=8)
parser.add_argument("--timesteps", type=int, default=1000)
parser.add_argument("--model-dim", type=int, default=72)
parser.add_argument("--dim-mults", nargs='+', type=int, default=[1,1,2,4,8])
parser.add_argument("--use-attn", nargs='+', type=int, default=[0,0,0,1,1])
parser.add_argument("--patch-size", type=int, default=1)
parser.add_argument("--num-dit", type=int, default=1)
parser.add_argument("--enable_amp", type=bool, default=True)
parser.add_argument("--model", type=str,default="BiFlowNet")
parser.add_argument("--AE-ckpt", type=str, required=True)
parser.add_argument("--num-classes", type=int, default=7)
parser.add_argument("--epochs", type=int, default=1000)
parser.add_argument("--global-seed", type=int, default=0)
parser.add_argument("--num-workers", type=int, default=8)
parser.add_argument("--batch-size", type=int, default=16)
parser.add_argument('--resolution', nargs='+', type=int, default=[32, 32, 32])
parser.add_argument("--log-every", type=int, default=50)
parser.add_argument("--ckpt-every", type=int, default=500)
parser.add_argument("--ckpt", type=str, default=None)
parser.add_argument("--vq-size", type=int, default=64)
args = parser.parse_args()
main(args)
| Python |
3D | ShanghaiTech-IMPACT/3D-MedDiffusion | train/train_PatchVolume_stage2.py | .py | 3,385 | 91 |
import sys
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(current_dir, ".."))
sys.path.append(project_root)
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.data import DataLoader
from AutoEncoder.model.PatchVolume import patchvolumeAE,AE_finetuning
from train.callbacks import VolumeLogger
from dataset.vqgan_4x import VQGANDataset_4x
from dataset.vqgan import VQGANDataset
import argparse
from omegaconf import OmegaConf
import torch
os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"
def main(cfg_path: str):
cfg = OmegaConf.load(cfg_path)
pl.seed_everything(cfg.model.seed)
downsample_ratio = cfg.model.downsample[0]
if downsample_ratio == 4:
train_dataset = VQGANDataset_4x(
root_dir=cfg.dataset.root_dir,augmentation=True,split='train',stage=cfg.model.stage)
val_dataset = VQGANDataset_4x(
root_dir=cfg.dataset.root_dir,augmentation=False,split='val')
else:
train_dataset = VQGANDataset(
root_dir=cfg.dataset.root_dir,augmentation=True,split='train',stage=cfg.model.stage)
val_dataset = VQGANDataset(
root_dir=cfg.dataset.root_dir,augmentation=False,split='val')
train_dataloader = DataLoader(dataset=train_dataset, batch_size=cfg.model.batch_size,shuffle=True,
num_workers=cfg.model.num_workers)
val_dataloader = DataLoader(val_dataset, batch_size=1,
shuffle=True, num_workers=cfg.model.num_workers)
# automatically adjust learning rate
bs, lr, ngpu = cfg.model.batch_size, cfg.model.lr, cfg.model.gpus
print("Setting learning rate to {:.2e}, batch size to {}, ngpu to {}".format(lr, bs, ngpu))
callbacks = []
callbacks.append(ModelCheckpoint(monitor='val/recon_loss',
save_top_k=3, mode='min', filename='latest_checkpoint'))
callbacks.append(ModelCheckpoint(every_n_train_steps=3000,
save_top_k=-1, filename='{epoch}-{step}-{train/recon_loss:.2f}'))
callbacks.append(ModelCheckpoint(every_n_train_steps=10000, save_top_k=-1,
filename='{epoch}-{step}-10000-{train/recon_loss:.2f}'))
callbacks.append(VolumeLogger(
batch_frequency=1500, max_volumes=4, clamp=True))
callbacks.append(AE_finetuning())
logger = TensorBoardLogger(cfg.model.default_root_dir, name="my_model")
trainer = pl.Trainer(
accelerator='gpu',
devices=cfg.model.gpus,
default_root_dir=cfg.model.default_root_dir,
strategy='ddp_find_unused_parameters_true',
callbacks=callbacks,
max_steps=cfg.model.max_steps,
max_epochs=cfg.model.max_epochs,
precision=cfg.model.precision,
check_val_every_n_epoch=1,
num_sanity_val_steps = 2,
logger=logger
)
ckpt = torch.load(cfg.model.resume_from_checkpoint)
model = patchvolumeAE(cfg)
model.load_state_dict(ckpt['state_dict'])
model.cfg = cfg
trainer.fit(model, train_dataloader, val_dataloader)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--config", type=str, required=True)
args = parser.parse_args()
main(args.config)
| Python |
3D | Demon-Kervin/3D-reconstruction | main_boundaryrecon.m | .m | 345 | 10 | % Example:
I = imread('E:\1PHD\Paper\1\proe network\94.bmp');
phi = ac_SDF_2D('rectangle', size(I), 10) ;
smooth_weight = 3; image_weight = 1e-3;
delta_t = 2; n_iters = 100; show_result = 1;
phi = ac_ChanVese_model(double(I), phi, smooth_weight, image_weight, ...
delta_t, n_iters, show_result);
% axis on
% grid on
% gird minor | MATLAB |
3D | brainglobe/brainreg-napari | brainreg_napari/sample_data.py | .py | 1,155 | 43 | import zipfile
from typing import List
import numpy as np
import pooch
from napari.types import LayerData
from skimage.io import imread
# git SHA for version of sample data to download
data_commit_sha = "72b73c52f19cee2173467ecdca60747a60e5fb95"
POOCH_REGISTRY = pooch.create(
path=pooch.os_cache("brainreg_napari"),
base_url=(
"https://gin.g-node.org/cellfinder/data/"
f"raw/{data_commit_sha}/brainreg/"
),
registry={
"test_brain.zip": "7bcfbc45bb40358cd8811e5264ca0a2367976db90bcefdcd67adf533e0162b5f" # noqa: E501
},
)
def load_test_brain() -> List[LayerData]:
"""
Load test brain data.
"""
data = []
brain_zip = POOCH_REGISTRY.fetch("test_brain.zip")
with zipfile.ZipFile(brain_zip, mode="r") as archive:
for i in range(270):
with archive.open(
f"test_brain/image_{str(i).zfill(4)}.tif"
) as tif:
data.append(imread(tif))
data = np.stack(data, axis=0)
meta = {"voxel_size": [50, 40, 40], "data_orientation": "psl"}
return [
(data, {"name": "Sample brain", "metadata": meta}, "image"),
]
| Python |
3D | brainglobe/brainreg-napari | brainreg_napari/__init__.py | .py | 131 | 7 | import warnings
warnings.warn(
"brainreg-napari is deprecated, please switch to brainreg[napari].",
DeprecationWarning,
)
| Python |
3D | brainglobe/brainreg-napari | brainreg_napari/util.py | .py | 2,569 | 89 | import logging
from dataclasses import dataclass
import bg_space as bg
import numpy as np
import skimage.transform
from bg_atlasapi import BrainGlobeAtlas
from brainglobe_utils.general.system import get_num_processes
from tqdm import tqdm
def initialise_brainreg(atlas_key, data_orientation_key, voxel_sizes):
scaling_rounding_decimals = 5
n_free_cpus = 2
atlas = BrainGlobeAtlas(atlas_key)
source_space = bg.AnatomicalSpace(data_orientation_key)
scaling = []
for idx, axis in enumerate(atlas.space.axes_order):
scaling.append(
round(
float(voxel_sizes[idx])
/ atlas.resolution[
atlas.space.axes_order.index(source_space.axes_order[idx])
],
scaling_rounding_decimals,
)
)
n_processes = get_num_processes(min_free_cpu_cores=n_free_cpus)
load_parallel = n_processes > 1
logging.info("Loading raw image data")
return (
n_free_cpus,
n_processes,
atlas,
scaling,
load_parallel,
)
def downsample_and_save_brain(img_layer, scaling):
first_frame_shape = skimage.transform.rescale(
img_layer.data[0], scaling[1:2], anti_aliasing=True
).shape
preallocated_array = np.empty(
(img_layer.data.shape[0], first_frame_shape[0], first_frame_shape[1])
)
print("downsampling data in x, y")
for i, img in tqdm(enumerate(img_layer.data)):
down_xy = skimage.transform.rescale(
img, scaling[1:2], anti_aliasing=True
)
preallocated_array[i] = down_xy
first_ds_frame_shape = skimage.transform.rescale(
preallocated_array[:, :, 0], [scaling[0], 1], anti_aliasing=True
).shape
downsampled_array = np.empty(
(first_ds_frame_shape[0], first_frame_shape[0], first_frame_shape[1])
)
print("downsampling data in z")
for i, img in tqdm(enumerate(preallocated_array.T)):
down_xyz = skimage.transform.rescale(
img, [1, scaling[0]], anti_aliasing=True
)
downsampled_array[:, :, i] = down_xyz.T
return downsampled_array
@dataclass
class NiftyregArgs:
"""
Class for niftyreg arguments.
"""
affine_n_steps: int
affine_use_n_steps: int
freeform_n_steps: int
freeform_use_n_steps: int
bending_energy_weight: float
grid_spacing: float
smoothing_sigma_reference: float
smoothing_sigma_floating: float
histogram_n_bins_floating: float
histogram_n_bins_reference: float
debug: bool
| Python |
3D | brainglobe/brainreg-napari | brainreg_napari/register.py | .py | 21,784 | 584 | import json
import logging
import pathlib
from collections import namedtuple
from enum import Enum
from typing import Dict, List, Tuple
import bg_space as bg
import brainreg as program_for_log
import napari
import numpy as np
from bg_atlasapi import BrainGlobeAtlas
from brainglobe_napari_io.cellfinder.reader_dir import load_registration
from brainreg.backend.niftyreg.run import run_niftyreg
from brainreg.paths import Paths
from brainreg.utils.boundaries import boundaries
from brainreg.utils.misc import log_metadata
from brainreg.utils.volume import calculate_volumes
from brainreg_segment.atlas.utils import get_available_atlases
from fancylog import fancylog
from magicgui import magicgui
from napari._qt.qthreading import thread_worker
from napari.types import LayerDataTuple
from napari.utils.notifications import show_info
from brainreg_napari.util import (
NiftyregArgs,
downsample_and_save_brain,
initialise_brainreg,
)
PRE_PROCESSING_ARGS = None
def add_registered_image_layers(
viewer: napari.Viewer, *, registration_directory: pathlib.Path
) -> Tuple[napari.layers.Image, napari.layers.Labels]:
"""
Read in saved registration data and add as layers to the
napari viewer.
Returns
-------
boundaries :
Registered boundaries.
labels :
Registered brain regions.
"""
layers: List[LayerDataTuple] = []
meta_file = (registration_directory / "brainreg.json").resolve()
if meta_file.exists():
with open(meta_file) as json_file:
metadata = json.load(json_file)
layers = load_registration(layers, registration_directory, metadata)
else:
raise FileNotFoundError(
f"'brainreg.json' file not found in {registration_directory}"
)
boundaries = viewer.add_layer(napari.layers.Layer.create(*layers[0]))
labels = viewer.add_layer(napari.layers.Layer.create(*layers[1]))
return boundaries, labels
def get_layer_labels(widget):
return [layer._name for layer in widget.viewer.value.layers]
def get_additional_images_downsample(widget) -> Dict[str, str]:
"""
For any selected layers loaded from a file, get a mapping from
layer name -> layer file path.
"""
images = {}
for layer in widget.viewer.value.layers.selection:
if layer._source.path is not None:
images[layer._name] = str(layer._source.path)
return images
def get_atlas_dropdown():
atlas_dict = {}
for i, k in enumerate(get_available_atlases().keys()):
atlas_dict.setdefault(k, k)
atlas_keys = Enum("atlas_key", atlas_dict)
return atlas_keys
def get_brain_geometry_dropdown():
geometry_dict = {
"Full brain": "full",
"Right hemisphere": "hemisphere_r",
"Left hemisphere": "hemisphere_l",
}
return Enum("geometry_keys", geometry_dict)
def brainreg_register():
DEFAULT_PARAMETERS = dict(
z_pixel_um=5,
y_pixel_um=2,
x_pixel_um=2,
data_orientation="psl",
brain_geometry=get_brain_geometry_dropdown(),
save_original_orientation=False,
atlas_key=get_atlas_dropdown(),
registration_output_folder=pathlib.Path.home(),
affine_n_steps=6,
affine_use_n_steps=5,
freeform_n_steps=6,
freeform_use_n_steps=4,
bending_energy_weight=0.95,
grid_spacing=10,
smoothing_sigma_reference=1,
smoothing_sigma_floating=1,
histogram_n_bins_floating=128,
histogram_n_bins_reference=128,
debug=False,
)
@magicgui(
call_button=True,
img_layer=dict(
label="Image layer",
),
atlas_key=dict(
label="Atlas",
),
z_pixel_um=dict(
value=DEFAULT_PARAMETERS["z_pixel_um"],
label="Voxel size (z)",
step=0.1,
),
y_pixel_um=dict(
value=DEFAULT_PARAMETERS["y_pixel_um"],
label="Voxel size (y)",
step=0.1,
),
x_pixel_um=dict(
value=DEFAULT_PARAMETERS["x_pixel_um"],
label="Voxel size (x)",
step=0.1,
),
data_orientation=dict(
value=DEFAULT_PARAMETERS["data_orientation"],
label="Data orientation",
),
brain_geometry=dict(
label="Brain geometry",
),
registration_output_folder=dict(
value=DEFAULT_PARAMETERS["registration_output_folder"],
mode="d",
label="Output directory",
),
save_original_orientation=dict(
value=DEFAULT_PARAMETERS["save_original_orientation"],
label="Save original orientation",
),
affine_n_steps=dict(
value=DEFAULT_PARAMETERS["affine_n_steps"], label="affine_n_steps"
),
affine_use_n_steps=dict(
value=DEFAULT_PARAMETERS["affine_use_n_steps"],
label="affine_use_n_steps",
),
freeform_n_steps=dict(
value=DEFAULT_PARAMETERS["freeform_n_steps"],
label="freeform_n_steps",
),
freeform_use_n_steps=dict(
value=DEFAULT_PARAMETERS["freeform_use_n_steps"],
label="freeform_use_n_steps",
),
bending_energy_weight=dict(
value=DEFAULT_PARAMETERS["bending_energy_weight"],
label="bending_energy_weight",
),
grid_spacing=dict(
value=DEFAULT_PARAMETERS["grid_spacing"], label="grid_spacing"
),
smoothing_sigma_reference=dict(
value=DEFAULT_PARAMETERS["smoothing_sigma_reference"],
label="smoothing_sigma_reference",
),
smoothing_sigma_floating=dict(
value=DEFAULT_PARAMETERS["smoothing_sigma_floating"],
label="smoothing_sigma_floating",
),
histogram_n_bins_floating=dict(
value=DEFAULT_PARAMETERS["histogram_n_bins_floating"],
label="histogram_n_bins_floating",
),
histogram_n_bins_reference=dict(
value=DEFAULT_PARAMETERS["histogram_n_bins_reference"],
label="histogram_n_bins_reference",
),
debug=dict(
value=DEFAULT_PARAMETERS["debug"],
label="Debug mode",
),
reset_button=dict(widget_type="PushButton", text="Reset defaults"),
check_orientation_button=dict(
widget_type="PushButton", text="Check orientation"
),
)
def widget(
viewer: napari.Viewer,
img_layer: napari.layers.Image,
atlas_key: get_atlas_dropdown(),
data_orientation: str,
brain_geometry: get_brain_geometry_dropdown(),
z_pixel_um: float,
x_pixel_um: float,
y_pixel_um: float,
registration_output_folder: pathlib.Path,
save_original_orientation: bool,
affine_n_steps: int,
affine_use_n_steps: int,
freeform_n_steps: int,
freeform_use_n_steps: int,
bending_energy_weight: float,
grid_spacing: int,
smoothing_sigma_reference: int,
smoothing_sigma_floating: float,
histogram_n_bins_floating: float,
histogram_n_bins_reference: float,
debug: bool,
reset_button,
check_orientation_button,
block: bool = False,
):
"""
Parameters
----------
img_layer : napari.layers.Image
Image layer to be registered
atlas_key : str
Atlas to use for registration
data_orientation: str
Three characters describing the data orientation, e.g. "psl".
See docs for more details.
brain_geometry: str
To allow brain sub-volumes to be processed. Choose whether your
data is a whole brain or a single hemisphere.
z_pixel_um : float
Size of your voxels in the axial dimension
y_pixel_um : float
Size of your voxels in the y direction (top to bottom)
x_pixel_um : float
Size of your voxels in the xdirection (left to right)
registration_output_folder: pathlib.Path
Where to save the registration output
affine_n_steps: int,
save_original_orientation: bool
Option to save annotations with the same orientation as the input
data. Use this if you plan to map
segmented objects outside brainglobe/tools.
affine_n_steps: int
Registration starts with further downsampled versions of the
original data to optimize the global fit of the result and
prevent "getting stuck" in local minima of the similarity
function. This parameter determines how many downsampling steps
are being performed, with each step halving the data size along
each dimension.
affine_use_n_steps: int
Determines how many of the downsampling steps defined by
affine-_n_steps will have their registration computed. The
combination affine_n_steps=3, affine_use_n_steps=2 will e.g.
calculate 3 downsampled steps, each of which is half the size
of the previous one but only perform the registration on the
2 smallest resampling steps, skipping the full resolution data.
Can be used to save time if running the full resolution doesn't
result in noticeable improvements.
freeform_n_steps: int
Registration starts with further downsampled versions of the
original data to optimize the global fit of the result and prevent
"getting stuck" in local minima of the similarity function. This
parameter determines how many downsampling steps are being
performed, with each step halving the data size along each
dimension.
freeform_use_n_steps: int
Determines how many of the downsampling steps defined by
freeform_n_steps will have their registration computed. The
combination freeform_n_steps=3, freeform_use_n_steps=2 will e.g.
calculate 3 downsampled steps, each of which is half the size of
the previous one but only perform the registration on the 2
smallest resampling steps, skipping the full resolution data.
Can be used to save time if running the full resolution doesn't
result in noticeable improvements
bending_energy_weight: float
Sets the bending energy, which is the coefficient of the penalty
term, preventing the freeform registration from over-fitting.
The range is between 0 and 1 (exclusive) with higher values
leading to more restriction of the registration.
grid_spacing: int
Sets the control point grid spacing in x, y & z.
Smaller grid spacing allows for more local deformations
but increases the risk of over-fitting.
smoothing_sigma_reference: int
Adds a Gaussian smoothing to the reference image (the one being
registered), with the sigma defined by the number. Positive
values are interpreted as real values in mm, negative values
are interpreted as distance in voxels.
smoothing_sigma_floating: float
Adds a Gaussian smoothing to the floating image (the one being
registered), with the sigma defined by the number. Positive
values are interpreted as real values in mm, negative values
are interpreted as distance in voxels.
histogram_n_bins_floating: float
Number of bins used for the generation of the histograms used
for the calculation of Normalized Mutual Information on the
floating image
histogram_n_bins_reference: float
Number of bins used for the generation of the histograms used
for the calculation of Normalized Mutual Information on the
reference image
debug: bool
Activate debug mode (save intermediate steps).
check_orientation_button:
Interactively check the input orientation by comparing the average
projection along each axis. The top row of displayed images are
the projections of the reference atlas. The bottom row are the
projections of the aligned input data. If the two rows are
similarly oriented, the orientation is correct. If not, change
the orientation and try again.
reset_button:
Reset parameters to default
block : bool
If `True`, registration will block execution when called. By
default this is `False` to avoid blocking the napari GUI, but
is set to `True` in the tests.
"""
def load_registration_as_layers() -> None:
"""
Load the saved registration data into napari layers.
"""
viewer = getattr(widget, "viewer").value
registration_directory = pathlib.Path(
getattr(widget, "registration_output_folder").value
)
add_registered_image_layers(
viewer, registration_directory=registration_directory
)
def get_gui_logging_args():
args_dict = {}
args_dict.setdefault("image_paths", img_layer.source.path)
args_dict.setdefault("backend", "niftyreg")
voxel_sizes = []
for name in ["z_pixel_um", "y_pixel_um", "x_pixel_um"]:
voxel_sizes.append(str(getattr(widget, name).value))
args_dict.setdefault("voxel_sizes", voxel_sizes)
for name, value in DEFAULT_PARAMETERS.items():
if "pixel" not in name:
if name == "atlas_key":
args_dict.setdefault(
"atlas", str(getattr(widget, name).value.value)
)
if name == "data_orientation":
args_dict.setdefault(
"orientation", str(getattr(widget, name).value)
)
args_dict.setdefault(
name, str(getattr(widget, name).value)
)
return (
namedtuple("namespace", args_dict.keys())(*args_dict.values()),
args_dict,
)
@thread_worker
def run():
paths = Paths(pathlib.Path(registration_output_folder))
niftyreg_args = NiftyregArgs(
affine_n_steps,
affine_use_n_steps,
freeform_n_steps,
freeform_use_n_steps,
bending_energy_weight,
-grid_spacing,
-smoothing_sigma_reference,
-smoothing_sigma_floating,
histogram_n_bins_floating,
histogram_n_bins_reference,
debug=False,
)
args_namedtuple, args_dict = get_gui_logging_args()
log_metadata(paths.metadata_path, args_dict)
fancylog.start_logging(
str(paths.registration_output_folder),
program_for_log,
variables=args_namedtuple,
verbose=niftyreg_args.debug,
log_header="BRAINREG LOG",
multiprocessing_aware=False,
)
voxel_sizes = z_pixel_um, x_pixel_um, y_pixel_um
(
n_free_cpus,
n_processes,
atlas,
scaling,
load_parallel,
) = initialise_brainreg(
atlas_key.value, data_orientation, voxel_sizes
)
additional_images_downsample = get_additional_images_downsample(
widget
)
logging.info(f"Registering {img_layer._name}")
target_brain = downsample_and_save_brain(img_layer, scaling)
target_brain = bg.map_stack_to(
data_orientation, atlas.metadata["orientation"], target_brain
)
sort_input_file = False
run_niftyreg(
registration_output_folder,
paths,
atlas,
target_brain,
n_processes,
additional_images_downsample,
data_orientation,
atlas.metadata["orientation"],
niftyreg_args,
PRE_PROCESSING_ARGS,
scaling,
load_parallel,
sort_input_file,
n_free_cpus,
save_original_orientation=save_original_orientation,
brain_geometry=brain_geometry.value,
)
logging.info("Calculating volumes of each brain area")
calculate_volumes(
atlas,
paths.registered_atlas,
paths.registered_hemispheres,
paths.volume_csv_path,
# for all brainglobe atlases
left_hemisphere_value=1,
right_hemisphere_value=2,
)
logging.info("Generating boundary image")
boundaries(paths.registered_atlas, paths.boundaries_file_path)
logging.info(
f"brainreg completed. Results can be found here: "
f"{paths.registration_output_folder}"
)
worker = run()
if not block:
worker.returned.connect(load_registration_as_layers)
worker.start()
if block:
worker.await_workers()
load_registration_as_layers()
@widget.reset_button.changed.connect
def restore_defaults(event=None):
for name, value in DEFAULT_PARAMETERS.items():
if name not in ["atlas_key", "brain_geometry"]:
getattr(widget, name).value = value
@widget.check_orientation_button.changed.connect
def check_orientation(event=None):
"""
Function used to check that the input orientation is correct.
To do so it transforms the input data into the requested atlas
orientation, compute the average projection and displays it alongside
the atlas. It is then easier for the user to identify which dimension
should be swapped and avoid running the pipeline on wrongly aligned
data.
"""
if getattr(widget, "img_layer").value is None:
show_info("Raw data must be loaded before checking orientation.")
return widget
# Get viewer object
viewer = getattr(widget, "viewer").value
brain_geometry = getattr(widget, "brain_geometry").value
# Remove previous average projection layer if needed
ind_pop = []
for i, layer in enumerate(viewer.layers):
if layer.name in [
"Ref. proj. 0",
"Ref. proj. 1",
"Ref. proj. 2",
"Input proj. 0",
"Input proj. 1",
"Input proj. 2",
]:
ind_pop.append(i)
else:
layer.visible = False
for index in ind_pop[::-1]:
del viewer.layers[index]
# Load atlas and gather data
atlas = BrainGlobeAtlas(widget.atlas_key.value.name)
if brain_geometry.value == "hemisphere_l":
atlas.reference[
atlas.hemispheres == atlas.left_hemisphere_value
] = 0
elif brain_geometry.value == "hemisphere_r":
atlas.reference[
atlas.hemispheres == atlas.right_hemisphere_value
] = 0
input_orientation = getattr(widget, "data_orientation").value
data = getattr(widget, "img_layer").value.data
# Transform data to atlas orientation from user input
data_remapped = bg.map_stack_to(
input_orientation, atlas.orientation, data
)
# Compute average projection of atlas and remapped data
u_proj = []
u_proja = []
s = []
for i in range(3):
u_proj.append(np.mean(data_remapped, axis=i))
u_proja.append(np.mean(atlas.reference, axis=i))
s.append(u_proja[-1].shape[0])
s = np.max(s)
# Display all projections with somewhat consistent scaling
viewer.add_image(u_proja[0], name="Ref. proj. 0")
viewer.add_image(
u_proja[1], translate=[0, u_proja[0].shape[1]], name="Ref. proj. 1"
)
viewer.add_image(
u_proja[2],
translate=[0, u_proja[0].shape[1] + u_proja[1].shape[1]],
name="Ref. proj. 2",
)
s1 = u_proja[0].shape[0] / u_proj[0].shape[0]
s2 = u_proja[0].shape[1] / u_proj[0].shape[1]
viewer.add_image(
u_proj[0], translate=[s, 0], name="Input proj. 0", scale=[s1, s2]
)
s1 = u_proja[1].shape[0] / u_proj[1].shape[0]
s2 = u_proja[1].shape[1] / u_proj[1].shape[1]
viewer.add_image(
u_proj[1],
translate=[s, u_proja[0].shape[1]],
name="Input proj. 1",
scale=[s1, s2],
)
s1 = u_proja[2].shape[0] / u_proj[2].shape[0]
s2 = u_proja[2].shape[1] / u_proj[2].shape[1]
viewer.add_image(
u_proj[2],
translate=[s, u_proja[0].shape[1] + u_proja[1].shape[1]],
name="Input proj. 2",
scale=[s1, s2],
)
return widget
| Python |
3D | brainglobe/brainreg-napari | brainreg_napari/tests/__init__.py | .py | 0 | 0 | null | Python |
3D | brainglobe/brainreg-napari | brainreg_napari/tests/test_brainreg_napari.py | .py | 5,104 | 162 | import napari
import pytest
from bg_atlasapi import BrainGlobeAtlas
from brainreg_napari.register import (
add_registered_image_layers,
brainreg_register,
)
def test_add_detect_widget(make_napari_viewer):
"""
Smoke test to check that adding detection widget works
"""
viewer = make_napari_viewer()
widget = brainreg_register()
viewer.window.add_dock_widget(widget)
def test_napari_sample_data(make_napari_viewer):
"""
Check that loading the sample data via napari works.
"""
viewer = make_napari_viewer()
assert len(viewer.layers) == 0
viewer.open_sample("brainreg-napari", "sample")
assert len(viewer.layers) == 1
new_layer = viewer.layers[0]
assert isinstance(new_layer, napari.layers.Image)
assert new_layer.data.shape == (270, 193, 271)
def test_workflow(make_napari_viewer, tmp_path):
"""
Test a full workflow using brainreg-napari.
"""
viewer = make_napari_viewer()
# Load sample data
added_layers = viewer.open_sample("brainreg-napari", "sample")
brain_layer = added_layers[0]
# Load widget
_, widget = viewer.window.add_plugin_dock_widget(
plugin_name="brainreg-napari", widget_name="Atlas Registration"
)
# Set active layer and output folder
widget.img_layer.value = brain_layer
widget.registration_output_folder.value = tmp_path
# Set widget settings from brain data metadata
widget.data_orientation.value = brain_layer.metadata["data_orientation"]
for i, dim in enumerate(["z", "y", "x"]):
pixel_widget = getattr(widget, f"{dim}_pixel_um")
pixel_widget.value = brain_layer.metadata["voxel_size"][i]
assert len(viewer.layers) == 1
# Run registration
widget(block=True)
# Check that layers have been added
assert len(viewer.layers) == 3
# Check layers have expected type/name
labels = viewer.layers[1]
assert isinstance(labels, napari.layers.Labels)
assert labels.name == "example_mouse_100um"
for key in ["orientation", "atlas"]:
# There are lots of other keys in the metadata, but just check
# for a couple here.
assert (
key in labels.metadata
), f"Missing key '{key}' from labels metadata"
boundaries = viewer.layers[2]
assert isinstance(boundaries, napari.layers.Image)
assert boundaries.name == "Boundaries"
def test_orientation_check(
make_napari_viewer, tmp_path, atlas_choice="allen_mouse_50um"
):
"""
Test that the check orientation function works
"""
viewer = make_napari_viewer()
# Load widget
_, widget = viewer.window.add_plugin_dock_widget(
plugin_name="brainreg-napari", widget_name="Atlas Registration"
)
# Set a specific atlas
# Should be a better way of doing this
for choice in widget.atlas_key.choices:
if choice.name == atlas_choice:
widget.atlas_key.value = choice
break
assert widget.atlas_key.value.name == atlas_choice
widget.check_orientation_button.clicked()
assert len(viewer.layers) == 0
# Load sample data
added_layers = viewer.open_sample("brainreg-napari", "sample")
brain_layer = added_layers[0]
assert len(viewer.layers) == 1
atlas = BrainGlobeAtlas(atlas_choice)
# click check orientation button and check output
run_and_check_orientation_check(widget, viewer, brain_layer, atlas)
# run again and check previous output was deleted properly
run_and_check_orientation_check(widget, viewer, brain_layer, atlas)
def run_and_check_orientation_check(widget, viewer, brain_layer, atlas):
widget.check_orientation_button.clicked()
check_orientation_output(viewer, brain_layer, atlas)
def check_orientation_output(viewer, brain_layer, atlas):
# 1 for the loaded data, and three each for the
# views of two images (data & atlas)
assert len(viewer.layers) == 7
assert brain_layer.visible is False
for i in range(1, 7):
layer = viewer.layers[i]
assert isinstance(layer, napari.layers.Image)
assert layer.visible is True
assert layer.ndim == 2
assert viewer.layers[4].data.shape == (
brain_layer.data.shape[1],
brain_layer.data.shape[2],
)
assert viewer.layers[5].data.shape == (
brain_layer.data.shape[0],
brain_layer.data.shape[2],
)
assert viewer.layers[6].data.shape == (
brain_layer.data.shape[0],
brain_layer.data.shape[1],
)
assert viewer.layers[1].data.shape == (atlas.shape[1], atlas.shape[2])
assert viewer.layers[2].data.shape == (atlas.shape[0], atlas.shape[2])
assert viewer.layers[3].data.shape == (atlas.shape[0], atlas.shape[1])
def test_add_layers_errors(tmp_path, make_napari_viewer):
"""
Check that an error is raised if registration metadata isn't present when
trying to add registered images to a napari viewer.
"""
viewer = make_napari_viewer()
with pytest.raises(FileNotFoundError):
add_registered_image_layers(viewer, registration_directory=tmp_path)
| Python |
3D | brainglobe/brainreg-napari | examples/load_sample_data.py | .py | 1,217 | 40 | """
Load and show sample data
=========================
This example:
- loads some sample data
- adds the data to a napari viewer
- loads the brainreg-napari registration plugin
- opens the napari viewer
"""
import napari
import numpy as np
from napari.layers import Layer
from brainreg_napari.sample_data import load_test_brain
viewer = napari.Viewer()
layer_data = load_test_brain()[0]
# Set sensible contrast limits for viewing the brain
layer_data[1]["contrast_limits"] = np.percentile(layer_data[0], [0, 99.5])
# Add data to napari
viewer.add_layer(Layer.create(*layer_data))
# Open plugin
_, brainreg_widget = viewer.window.add_plugin_dock_widget(
plugin_name="brainreg-napari", widget_name="Atlas Registration"
)
# Set brainreg-napari plugin settings from sample data metadata
metadata = layer_data[1]["metadata"]
brainreg_widget.data_orientation.value = metadata["data_orientation"]
for i, dim in enumerate(["z", "y", "x"]):
pixel_widget = getattr(brainreg_widget, f"{dim}_pixel_um")
pixel_widget.value = metadata["voxel_size"][i]
if __name__ == "__main__":
# The napari event loop needs to be run under here to allow the window
# to be spawned from a Python script
napari.run()
| Python |
3D | SZUHvern/D-UNet | data_load.py | .py | 8,790 | 256 | import nibabel as nib
import numpy as np
import os
import h5py
import random
import cv2
import copy
from matplotlib import pyplot as plt
from PIL import Image
import time
def nii_to_h5(path_nii,path_save,ratio=0.8):
data = []
label = []
ori = []
list_site = os.listdir(path_nii)
list_data = []
ori_min = 10000
ori_max = 0
for dir_num, dir_site in enumerate(list_site):
if dir_site[-3:] == 'csv':
continue
list_patients = os.listdir(path_nii+'/'+dir_site)
for dir_patients in list_patients:
for t0n in ['/t01/', '/t02/']:
try:
location = path_nii+'/' + dir_site + '/' + dir_patients + t0n
location_all = os.listdir(location)
for i in range(len(location_all)):
location_all[i] = location+location_all[i]
list_data.append(location_all)
except:
continue
random.shuffle(list_data)
for num, data_dir in enumerate(list_data):
for i, deface in enumerate(data_dir):
if deface.find('deface') != -1:
ori = nib.load(deface)
ori = ori.get_fdata()
ori = np.array(ori)
ori = ori.transpose((2, 1, 0))
if ori_max < ori.max():
ori_max = ori.max()
if ori_min > ori.min():
ori_min = ori.min()
del list_data[num][i]
break
label_merge = np.zeros_like(ori)
for i, dir_data in enumerate(list_data[num]):
img = nib.load(dir_data)
img = np.array(img.get_fdata())
img = img.transpose((2, 1, 0))
label_merge = label_merge + img
print(str(num)+'/'+str(len(list_data)),'max=',str(ori.max()),'min=',str(ori.min()))
if num == 0 or num == int(ratio * len(list_data)):
data = copy.deepcopy(ori)
label = copy.deepcopy(label_merge)
else:
data = np.concatenate((data, ori), axis=0)
label = np.concatenate((label, label_merge), axis=0)
if num == int(ratio * len(list_data))-1:
print('saving train set...')
data = np.array(data, dtype=float)
label = np.array(label, dtype=bool)
#'''
file = h5py.File(path_save + '/train_' + str(ratio), 'w')
file.create_dataset('data', data=data)
file.create_dataset('label', data=label)
file.close()
data = []
label = []
print('Finished!')
elif num == len(list_data)-1:
print('saving test set...')
data = np.array(data, dtype=float)
label = np.array(label, dtype=bool)
file = h5py.File(path_save + '/test_' + str(ratio), 'w')
file.create_dataset('data', data=data)
file.create_dataset('label', data=label)
file.close()
print('Finished!')
return ori_max, ori_min
#'''
def data_adjust(max, min, h5_path, ratio=0.8):
file = h5py.File(h5_path + '/test_' + str(ratio))
data = file['data']
label = file['label']
data = data - min
data = data / max
data = data*255
file_adjust = h5py.File(h5_path + '/detection/test', 'w')
file_adjust.create_dataset('data', data=data)
file_adjust.create_dataset('label', data=label)
file.close()
file_adjust.close()
file = h5py.File(h5_path + '/train_' + str(ratio))
data = file['data']
label = file['label']
data = data - min
data = data / max
data = data*255
file_adjust = h5py.File(h5_path + '/detection/train', 'w')
file_adjust.create_dataset('data', data=data)
file_adjust.create_dataset('label', data=label)
file.close()
file_adjust.close()
def load_h5(path_h5, shuffle=False, size=None, test_programme=None, only=False):
h5 = h5py.File(path_h5)
data = h5['data'][:]
label = h5['label'][:]
if test_programme is not None:
data = data[:test_programme]
label = label[:test_programme]
data_only = []
label_only = []
if only is True:
for i in range(len(data)):
if label[i].max() == 1:
data_only.append(data[i])
label_only.append(label[i])
del data, label
data = data_only
label = label_only
data = np.uint8(np.multiply(data, 2.55))
label = np.uint8(np.multiply(label, 255))
if size is not None:
data_resize = []
label_resize = []
for i in range(len(data)):
data_resize_single = Image.fromarray(data[i]).crop((10, 40, 190, 220))
data_resize_single = data_resize_single.resize(size, Image.ANTIALIAS)
data_resize_single = np.asarray(data_resize_single)
label_resize_single = Image.fromarray(label[i]).crop((10, 40, 190, 220))
label_resize_single = label_resize_single.resize(size, Image.ANTIALIAS)
label_resize_single = np.asarray(label_resize_single)
data_resize.append(data_resize_single)
label_resize.append(label_resize_single)
data = np.array(data_resize, dtype=float)
label = np.array(label_resize, dtype=int)
data = data - data.min()
data = data / data.max()
label = label - label.min()
label = label / label.max()
if shuffle is True:
orders = []
data_output = np.zeros_like(data)
label_output = np.zeros_like(label)
for i in range(len(data)):
orders.append(i)
random.shuffle(orders)
for i, order in enumerate(orders):
data_output[i] = data[order]
label_output[i] = label[order]
else:
data_output = data
label_output = label
# for i in range(500):
# plt.subplot(1,2,1)
# plt.imshow(data_output[i],cmap='gray')
# plt.subplot(1,2,2)
# plt.imshow(label_output[i],cmap='gray')
# plt.pause(0.1)
# print(data_output[i].max(),data_output[i].min(),label_output[i].max(),label_output[i].min())
return data_output, label_output
def data_toxn(data, z):
data_xn = np.zeros((data.shape[0], data.shape[1], data.shape[2], z))
for patient in range(int(len(data) / 189)):
for i in range(189):
for j in range(z):
if i + j - z // 2 >= 0 and i + j - z // 2 < 189:
data_xn[patient * 189 + i, :, :, j] = data[patient * 189 + i + j - z // 2]
print(i, i + j - z // 2)
else:
data_xn[patient * 189 + i, :, :, j] = np.zeros_like(data[0])
return data_xn
if __name__ == "__main__":
start = time.time()
path_nii = './ATLAS_R1.1'
path_save = './h5'
ratio = 0.8
img_size = [192, 192]
ori_max, ori_min = nii_to_h5(path_nii, path_save, ratio=ratio)
data_adjust(ori_max, ori_min, path_save)
print('using :{}'.format(time.time()-start))
print('loading training-data...')
time_start = time.time()
original, label = load_h5(path_save + 'train_' + str(ratio), size=(img_size[1], img_size[0]),
test_programme = None)
file = h5py.File(path_save+'/train', 'w')
original = data_toxn(original, 4)
file.create_dataset('data', data=original)
original = original.transpose((0, 3, 1, 2))
original = np.expand_dims(original, axis=-1)
file.create_dataset('data_lstm', data=original)
del original
label_change = data_toxn(label, 4)
file.create_dataset('label_change', data=label_change)
del label_change
label = np.expand_dims(label, axis=-1)
file.create_dataset('label', data=label)
del label
file.close()
print('training_data done!, using:', str(time.time() - time_start) + 's\n\nloading validation-data...')
time_start = time.time()
original_val, label_val = load_h5(path_save + 'test_' + str(ratio), size=(img_size[1], img_size[0]))
file = h5py.File(path_save+'/train', 'w')
original_val = data_toxn(original_val, 4)
file.create_dataset('data_val', data=original_val)
original_val = original_val.transpose((0, 3, 1, 2))
original_val = np.expand_dims(original_val, axis=-1)
file.create_dataset('data_val_lstm', data=original_val)
del original_val
label_val_change = data_toxn(label_val, 4)
file.create_dataset('label_val_change', data=label_val_change)
del label_val_change
label_val = np.expand_dims(label_val, axis=-1)
file.create_dataset('label_val', data=label_val)
del label_val
file.close()
print('validation_data done!, using:', str(time.time() - time_start) + 's\n\n') | Python |
3D | SZUHvern/D-UNet | model.py | .py | 17,534 | 519 | from __future__ import print_function
from keras.models import Model, load_model
from keras.optimizers import Adam, SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import *
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.utils import multi_gpu_model
import math
import h5py
from keras import backend as K
import tensorflow as tf
import math
from matplotlib import pyplot as plt
import time
import random
import copy
import os
def expand(x):
x = K.expand_dims(x, axis=-1)
return x
def squeeze(x):
x = K.squeeze(x, axis=-1)
return x
def BN_block(filter_num, input):
x = Conv2D(filter_num, 3, padding='same', kernel_initializer='he_normal')(input)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x = Conv2D(filter_num, 3, padding='same', kernel_initializer='he_normal')(x1)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def BN_block3d(filter_num, input):
x = Conv3D(filter_num, 3, padding='same', kernel_initializer='he_normal')(input)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x = Conv3D(filter_num, 3, padding='same', kernel_initializer='he_normal')(x1)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def D_Add(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = Add()([x, input2d])
return x
def D_concat(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = Concatenate()([x, input2d])
x = Conv2D(filter_num, 1, padding='same', activation='relu', kernel_initializer='he_normal')(x)
return x
def D_SE_concat(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = squeeze_excite_block(x)
input2d = squeeze_excite_block(input2d)
x = Concatenate()([x, input2d])
x = Conv2D(filter_num, 1, padding='same', activation='relu', kernel_initializer='he_normal')(x)
return x
def D_Add_SE(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = Add()([x, input2d])
x = squeeze_excite_block(x)
return x
def D_SE_Add(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = squeeze_excite_block(x)
input2d = squeeze_excite_block(input2d)
x = Add()([x, input2d])
return x
def D_concat_SE(filter_num, input3d, input2d):
x = Conv3D(1, 1, padding='same', kernel_initializer='he_normal')(input3d)
x = Lambda(squeeze)(x)
x = Conv2D(filter_num, 3, padding='same', activation='relu', kernel_initializer='he_normal')(x)
x = Concatenate()([x, input2d])
x = squeeze_excite_block(x)
x = Conv2D(filter_num, 1, padding='same', activation='relu', kernel_initializer='he_normal')(x)
return x
def squeeze_excite_block(input, ratio=16):
''' Create a squeeze-excite block
Args:
input: input tensor
filters: number of output filters
k: width factor
Returns: a keras tensor
'''
init = input
channel_axis = 1 if K.image_data_format() == "channels_first" else -1
filters = init._keras_shape[channel_axis]
se_shape = (1, 1, filters)
se = GlobalAveragePooling2D()(init)
se = Reshape(se_shape)(se)
se = Dense(filters // ratio, activation='relu', kernel_initializer='he_normal', use_bias=False)(se)
se = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(se)
if K.image_data_format() == 'channels_first':
se = Permute((3, 1, 2))(se)
x = multiply([init, se])
return x
def D_Unet():
inputs = Input(shape=(192, 192, 4))
input3d = Lambda(expand)(inputs)
conv3d1 = BN_block3d(32, input3d)
pool3d1 = MaxPooling3D(pool_size=2)(conv3d1)
conv3d2 = BN_block3d(64, pool3d1)
pool3d2 = MaxPooling3D(pool_size=2)(conv3d2)
conv3d3 = BN_block3d(128, pool3d2)
conv1 = BN_block(32, inputs)
#conv1 = D_Add(32, conv3d1, conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = BN_block(64, pool1)
conv2 = D_SE_Add(64, conv3d2, conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = BN_block(128, pool2)
conv3 = D_SE_Add(128, conv3d3, conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = BN_block(256, pool3)
conv4 = Dropout(0.3)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = BN_block(512, pool4)
conv5 = Dropout(0.3)(conv5)
up6 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv5))
merge6 = Concatenate()([conv4, up6])
conv6 = BN_block(256, merge6)
up7 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv6))
merge7 = Concatenate()([conv3, up7])
conv7 = BN_block(128, merge7)
up8 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv7))
merge8 = Concatenate()([conv2, up8])
conv8 = BN_block(64, merge8)
up9 = Conv2D(32, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv8))
merge9 = Concatenate()([conv1, up9])
conv9 = BN_block(32, merge9)
conv10 = Conv2D(1, 1, activation='sigmoid')(conv9) # conv10作为输出
model = Model(input=inputs, output=conv10)
return model
def Unet():
inputs = Input(shape=(192, 192, 1))
conv1 = BN_block(32, inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = BN_block(64, pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = BN_block(128, pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = BN_block(256, pool3)
drop4 = Dropout(0.3)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = BN_block(512, pool4)
drop5 = Dropout(0.3)(conv5)
up6 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(drop5))
merge6 = Concatenate()([drop4, up6])
conv6 = BN_block(256, merge6)
up7 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv6))
merge7 = Concatenate()([conv3, up7])
conv7 = BN_block(128, merge7)
up8 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv7))
merge8 = Concatenate()([conv2, up8])
conv8 = BN_block(64, merge8)
up9 = Conv2D(32, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv8))
merge9 = Concatenate()([conv1, up9])
conv9 = BN_block(32, merge9)
conv10 = Conv2D(1, 1, activation='sigmoid')(conv9) # conv10作为输出
model = Model(input=inputs, output=conv10)
return model
def origin_block(filter_num, input):
x = Conv2D(filter_num, 3, padding='same', kernel_initializer='he_normal')(input)
x1 = Activation('relu')(x)
x = Conv2D(filter_num, 3, padding='same', kernel_initializer='he_normal')(x1)
x = Activation('relu')(x)
return x
def Unet_origin():
inputs = Input(shape=(192, 192, 1))
conv1 = origin_block(64, inputs)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = origin_block(128, pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = origin_block(256, pool2)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = origin_block(512, pool3)
drop4 = Dropout(0.3)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = origin_block(1024, pool4)
drop5 = Dropout(0.3)(conv5)
up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(drop5))
merge6 = Concatenate()([drop4, up6])
conv6 = origin_block(512, merge6)
up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv6))
merge7 = Concatenate()([conv3, up7])
conv7 = origin_block(256, merge7)
up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv7))
merge8 = Concatenate()([conv2, up8])
conv8 = origin_block(128, merge8)
up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
UpSampling2D(size=(2, 2))(conv8))
merge9 = Concatenate()([conv1, up9])
conv9 = origin_block(64, merge9)
conv10 = Conv2D(1, 1, activation='sigmoid')(conv9) # conv10作为输出
model = Model(input=inputs, output=conv10)
return model
def Unet3d():
inputs = Input(shape=(192, 192, 4))
input3d = Lambda(expand)(inputs)
conv1 = BN_block3d(32, input3d)
pool1 = MaxPooling3D(pool_size=(2, 2, 1))(conv1)
conv2 = BN_block3d(64, pool1)
pool2 = MaxPooling3D(pool_size=(2, 2, 1))(conv2)
conv3 = BN_block3d(128, pool2)
pool3 = MaxPooling3D(pool_size=(2, 2, 1))(conv3)
conv4 = BN_block3d(256, pool3)
drop4 = Dropout(0.3)(conv4)
pool4 = MaxPooling3D(pool_size=(2, 2, 1))(drop4)
conv5 = BN_block3d(512, pool4)
drop5 = Dropout(0.3)(conv5)
up6 = Conv3D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal', name='6')(
UpSampling3D(size=(2, 2, 1))(drop5))
merge6 = Concatenate()([drop4, up6])
conv6 = BN_block3d(256, merge6)
up7 = Conv3D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal', name='8')(
UpSampling3D(size=(2, 2, 1))(conv6))
merge7 = Concatenate()([conv3, up7])
conv7 = BN_block3d(128, merge7)
up8 = Conv3D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal', name='10')(
UpSampling3D(size=(2, 2, 1))(conv7))
merge8 = Concatenate()([conv2, up8])
conv8 = BN_block3d(64, merge8)
up9 = Conv3D(32, 2, activation='relu', padding='same', kernel_initializer='he_normal', name='12')(
UpSampling3D(size=(2, 2, 1))(conv8))
merge9 = Concatenate()([conv1, up9])
conv9 = BN_block3d(32, merge9)
conv10 = Conv3D(1, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
conv10 = Lambda(squeeze)(conv10)
# '''
# conv11 = Lambda(squeeze)(conv10)
conv11 = BN_block(32, conv10)
conv11 = Conv2D(1, 1, activation='sigmoid')(conv11)
# '''
model = Model(input=inputs, output=conv11)
return model
def SegNet(nClasses=1, input_height=192, input_width=192):
img_input = Input(shape=(input_height, input_width, 1))
kernel_size = 3
# encoder
x = Conv2D(64, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(img_input)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# 128x128
x = Conv2D(128, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# 64x64
x = Conv2D(256, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# 32x32
x = Conv2D(512, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# 16x16
x = Conv2D(512, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# 8x8
# decoder
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(512, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(512, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(256, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(128, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(64, (kernel_size, kernel_size), padding='same',
kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x = Conv2D(nClasses, (1, 1), padding='valid',
kernel_initializer='he_normal')(x)
x = Activation('sigmoid')(x)
model = Model(img_input, x, name='SegNet')
return model
def conv_bn_block(x, filter):
x = Conv3D(filter, 3, padding='same', kernel_initializer='he_normal')(x)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x = Conv3D(filter, 3, padding='same', kernel_initializer='he_normal')(x1)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Concatenate()([x, x1])
return x
def fcn_8s(num_classes, input_shape, lr_init, lr_decay, vgg_weight_path=None):
img_input = Input(input_shape)
# Block 1
x = Conv2D(64, (3, 3), padding='same', name='block1_conv1')(img_input)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(64, (3, 3), padding='same', name='block1_conv2')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D()(x)
# Block 2
x = Conv2D(128, (3, 3), padding='same', name='block2_conv1')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(128, (3, 3), padding='same', name='block2_conv2')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D()(x)
# Block 3
x = Conv2D(256, (3, 3), padding='same', name='block3_conv1')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(256, (3, 3), padding='same', name='block3_conv2')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(256, (3, 3), padding='same', name='block3_conv3')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
block_3_out = MaxPooling2D()(x)
# Block 4
x = Conv2D(512, (3, 3), padding='same', name='block4_conv1')(block_3_out)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(512, (3, 3), padding='same', name='block4_conv2')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(512, (3, 3), padding='same', name='block4_conv3')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
block_4_out = MaxPooling2D()(x)
# Block 5
x = Conv2D(512, (3, 3), padding='same', name='block5_conv1')(block_4_out)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(512, (3, 3), padding='same', name='block5_conv2')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(512, (3, 3), padding='same', name='block5_conv3')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D()(x)
# Load pretrained weights.
if vgg_weight_path is not None:
vgg16 = Model(img_input, x)
vgg16.load_weights(vgg_weight_path, by_name=True)
# Convolutinalized fully connected layer.
x = Conv2D(4096, (7, 7), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(4096, (1, 1), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# Classifying layers.
x = Conv2D(num_classes, (1, 1), strides=(1, 1), activation='linear')(x)
x = BatchNormalization()(x)
block_3_out = Conv2D(num_classes, (1, 1), strides=(1, 1), activation='linear')(block_3_out)
block_3_out = BatchNormalization()(block_3_out)
block_4_out = Conv2D(num_classes, (1, 1), strides=(1, 1), activation='linear')(block_4_out)
block_4_out = BatchNormalization()(block_4_out)
x = Lambda(lambda x: tf.image.resize_images(x, (x.shape[1] * 2, x.shape[2] * 2)))(x)
x = Add()([x, block_4_out])
x = Activation('relu')(x)
x = Lambda(lambda x: tf.image.resize_images(x, (x.shape[1] * 2, x.shape[2] * 2)))(x)
x = Add()([x, block_3_out])
x = Activation('relu')(x)
x = Lambda(lambda x: tf.image.resize_images(x, (x.shape[1] * 8, x.shape[2] * 8)))(x)
x = Activation('softmax')(x)
model = Model(img_input, x)
return model
| Python |
3D | SZUHvern/D-UNet | Stroke_segment.py | .py | 6,124 | 149 | import numpy as np
import os
from model import *
from Statistics import *
if __name__ == "__main__":
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
path_h5_save = './h5/'
output_path = './model/'
dataset_name = '0.8'
load_weight = ''
mode = 'train' # use 'train' or 'detect'
img_size = [192, 192]
batch_size = 36
lr = 1e-4
gpu_used = 2
model = D_Unet()
h5_name = 'DUnet'
output_path += h5_name+'/'
if not os.path.exists(output_path):
os.makedirs(output_path)
model.summary()
model = multi_gpu_model(model, gpus=gpu_used)
model.compile(optimizer=SGD(lr=lr), loss=EML, metrics=[dice_coef])
if load_weight != '':
print('loading:', load_weight)
model.load_weights(load_weight, by_name=True)
else:
print('no loading weight!')
if mode == 'train':
h5 = h5py.File('/home/siat/data/train')
original = h5['data']
label = h5['label']
# label = h5['label_change']
h5 = h5py.File('/home/siat/data/test')
original_val = h5['data_val']
label_val = h5['label_val']
# label_val = h5['label_val_change']
num_train_steps = math.floor(len(original) / batch_size)
num_val_steps = math.floor(len(original_val) / batch_size)
print('training data:' + str(len(original)) + ' validation data:' + str(len(original_val)))
# print('using:', str(time.time() - time_start) + 's\n')
time_start = time.time()
data_gen_args = dict(width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.2, rotation_range=20,
horizontal_flip=True, featurewise_center=True, featurewise_std_normalization=True)
data_gen_args_validation = dict(featurewise_center=True, featurewise_std_normalization=True)
#data_gen_args = dict()
#data_gen_args_validation = dict()
train_datagen = ImageDataGenerator(**data_gen_args)
train_datagen_label = ImageDataGenerator(**data_gen_args)
validation_datagen = ImageDataGenerator(**data_gen_args_validation)
validation_datagen_label = ImageDataGenerator(**data_gen_args_validation)
image_generator = train_datagen.flow(original, batch_size=batch_size, seed=1)
mask_generator = train_datagen_label.flow(label, batch_size=batch_size, seed=1)
image_generator_val = validation_datagen.flow(original_val, batch_size=batch_size, seed=1)
mask_generator_val = validation_datagen_label.flow(label_val, batch_size=batch_size, seed=1)
train_generator = zip(image_generator, mask_generator)
validation_generator = zip(image_generator_val, mask_generator_val)
checkpointer = ModelCheckpoint(output_path + h5_name + '-{epoch:02d}-{val_dice_coef:.2f}.hdf5', verbose=2, save_best_only=False, period=10)
History=model.fit_generator(train_generator, epochs=150, steps_per_epoch=num_train_steps,
shuffle=True, callbacks=[checkpointer], validation_data=validation_generator, validation_steps=num_val_steps, verbose=2)
elif mode == 'detect':
print('loading testing-data...')
h5 = h5py.File('./h5/x4/test')
original = h5['data_val']
label = h5['label_val']
# label_val_change = h5['label_val_change']
print('load data done!')
model.compile(optimizer=Adam(lr=lr), loss=dice_coef_loss, metrics=[TP, TN, FP, FN, dice_coef])
dice_list = []
recall_list = []
precision_list = []
tp = 0
fp = 0
fn = 0
for i in range(len(label) // 189):
start = i * 189
result = model.evaluate(original[start:start + 189], label[start:start + 189], verbose=2)
dice_per = (2 * result[1] / (2 * result[1] + result[3] + result[4]))
recall_per = result[1] / (result[1] + result[4])
precision_per = result[1] / (float(result[1]) + float(result[3]))
dice_list.append(dice_per)
recall_list.append(recall_per)
if np.isnan(precision_per):
precision_per = 0
precision_list.append(precision_per)
tp = tp + result[1]
fp = fp + result[3]
fn = fn + result[4]
dice_all = 2 * tp / (2 * tp + fp + fn)
dice_list = sorted(dice_list)
dice_mean = np.mean(dice_list)
dice_std = np.std(dice_list)
print('dice_media: ' + str(
(dice_list[int(dice_list.__len__() / 2)] + dice_list[int(dice_list.__len__() / 2 - 1)]) / 2) +
' dice_all: ' + str(dice_all) + '\n'
'dice_mean: ' + str(np.mean(dice_list)) + ' dice_std:' + str(
np.std(dice_list, ddof=1)) + '\n'
'recall_mean: ' + str(np.mean(recall_list)) + ' recall_std:' + str(
np.std(recall_list, ddof=1)) + '\n'
'precision_mean: ' + str(np.mean(precision_list)) + ' precision_std:' + str(
np.std(precision_list, ddof=1)) + '\n')
#np.save('/root/桌面/paper材料/box/' + h5_name, dice_list)
# plt.boxplot(dice_list)
# plt.show()
#'''
tim = time.time()
predict = model.predict(original, verbose=1, batch_size=batch_size)
print('predict patients: '+str(len(predict)/189)+' using: '+str(time.time()-tim)+'s')
predict[predict >= 0.5] = 1
predict[predict < 0.5] = 0
for i in range(len(predict)):
if predict[i, :, :, 0].max() == 1 or label[i, :, :, 0].max() == 1:
plt.subplot(1, 3, 1)
plt.imshow(original[i, :, :, 0], cmap='gray')
plt.subplot(1, 3, 2)
plt.imshow(label[i, :, :, 0], cmap='gray')
plt.subplot(1, 3, 3)
plt.imshow(predict[i, :, :, 0], cmap='gray')
plt.title(i)
plt.pause(0.1)
if i % 20==0:
plt.close()
plt.close()
plt.close()
| Python |
3D | SZUHvern/D-UNet | Statistics.py | .py | 3,539 | 108 | from keras import backend as K
import numpy as np
def TP(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
true_positives = K.sum(K.round(K.clip(y_true_f * y_pred_f, 0, 1)))
return true_positives
def FP(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
y_pred_f01 = K.round(K.clip(y_pred_f, 0, 1))
tp_f01 = K.round(K.clip(y_true_f * y_pred_f, 0, 1))
false_positives = K.sum(K.round(K.clip(y_pred_f01 - tp_f01, 0, 1)))
return false_positives
def TN(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
y_pred_f01 = K.round(K.clip(y_pred_f, 0, 1))
all_one = K.ones_like(y_pred_f01)
y_pred_f_1 = -1 * (y_pred_f01 - all_one)
y_true_f_1 = -1 * (y_true_f - all_one)
true_negatives = K.sum(K.round(K.clip(y_true_f_1 + y_pred_f_1, 0, 1)))
return true_negatives
def FN(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
# y_pred_f01 = keras.round(keras.clip(y_pred_f, 0, 1))
tp_f01 = K.round(K.clip(y_true_f * y_pred_f, 0, 1))
false_negatives = K.sum(K.round(K.clip(y_true_f - tp_f01, 0, 1)))
return false_negatives
def recall(y_true, y_pred):
tp = TP(y_true, y_pred)
fn = FN(y_true, y_pred)
return tp / (tp + fn)
def precision(y_true, y_pred):
tp = TP(y_true, y_pred)
fp = FP(y_true, y_pred)
return tp / (tp + fp)
def patch_whole_dice(truth, predict):
dice = []
count_dice = 0
for i in range(len(truth)):
true_positive = truth[i] > 0
predict_positive = predict[i] > 0
match = np.equal(true_positive, predict_positive)
match_count = np.count_nonzero(match)
P1 = np.count_nonzero(predict[i])
T1 = np.count_nonzero(truth[i])
full_back = np.zeros(truth[i].shape)
non_back = np.invert(np.equal(truth[i], full_back))
TP = np.logical_and(match, non_back)
TP_count = np.count_nonzero(TP)
# print("m:", match_count, " P:", P1, " T:", T1, " TP:", TP_count)
if (P1 + T1) == 0:
dice.append(0)
else:
dice.append(2 * TP_count / (P1 + T1))
if P1 != 0 or T1 != 0:
count_dice += 1
if count_dice == 0:
count_dice = 1e6
return dice # , count_dice
# return dice
def patch_whole_dice2(truth, predict):
y_true_f = np.reshape(truth, (1, -1))
y_pred_f = np.reshape(predict, (1, -1))
intersection = np.sum(y_true_f * y_pred_f)
return (2. * intersection + 1) / (np.sum(y_true_f * y_true_f) + np.sum(y_pred_f * y_pred_f) + 1)
def dice_coef(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f * y_true_f) + K.sum(y_pred_f * y_pred_f) + smooth)
def EML(y_true, y_pred):
gamma = 1.1
alpha = 0.48
smooth = 1.
y_true = K.flatten(y_true)
y_pred = K.flatten(y_pred)
intersection = K.sum(y_true*y_pred)
dice_loss = (2.*intersection + smooth)/(K.sum(y_true*y_true)+K.sum(y_pred * y_pred)+smooth)
y_pred = K.clip(y_pred, K.epsilon())
pt_1 = tf.where(tf.equal(y_true, 1),y_pred,tf.ones_like(y_pred))
pt_0 = tf.where(tf.equal(y_true, 0),y_pred,tf.zeros_like(y_pred))
focal_loss = -K.mean(alpha*K.pow(1. -pt_1, gamma)*K.log(pt_1),axis=-1)\
-K.mean(1-alpha)*K.pow(pt_0,gamma)*K.log(1. -pt_0),axis=-1)
return focal_loss - K.log(dice_loss)
| Python |
3D | overengineer/TR-FDTD | bitirme5.m | .m | 1,485 | 66 | clear all
close all
%TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
nx = 249;
ny = 249;
delta = 1.2e-2; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12; %0.95/(c*sqrt(dx^-2+dy^-2));
%f0 = 2e9; %2GHz
tw = 16*dt;
t0 = 200*dt;
srcx = round(nx/2);
srcy = round(ny/2);
eps_r = 4.58;
sigma = 0.52; %S/m
ksi = (dt * sigma) / ( 2 * eps0 * eps_r );
%calculation parameters
n_iter = 500;
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
receivers = zeros(23,n_iter);
%iteration
for n=1:1:n_iter
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) - (dt/(mu0*dy))*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) + (dt/(mu0*dx))*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = ((1-ksi)/(1+ksi))*Ez(2:nx-1,2:ny-1) + ((1/(1+ksi))*(dt/(eps0*eps_r)))*((1/dx)*Hyx(2:nx-1,2:ny-1) - (1/dy)*Hxy(2:nx-1,2:ny-1));
%Gaussian Source
f(n)= (-2*(n*dt-t0)*dt/(tw^2))*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy) = Ez(srcx,srcy) + f(n);
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
%n = n + 1;
for i=1:1:23
receivers(i,n) = Ez(i*10,srcy);
end
pcolor(Ez')
colorbar
shading interp
title(n)
drawnow
end
plot(f)
save('bitirme5.mat','receivers');
| MATLAB |
3D | overengineer/TR-FDTD | maxwell3d.py | .py | 2,255 | 104 | import torch
from math import sin, exp, sqrt, pi
def get_device():
try:
device = torch.device('cuda')
assert device
except:
device = torch.device('cpu')
return torch.device('cpu') #device
def zeros(m,n,k):
return torch.zeros((m,n,k), device=get_device(), dtype=torch.double)
def diff(a,dim):
if dim == 0:
return a[1:,:,:] - a[:-1,:,:]
elif dim == 1:
return a[:,1:,:] - a[:,:-1,:]
elif dim == 2:
return a[:,:,1:] - a[:,:,:-1]
#physical constants
c = 2.998e8
eta0 = 120*pi
mu0 = pi*4e-7
eps0 = 1e-9/(36*pi)
#environment parameters
width = 1
height = 1
depth = 1
f0 = 1e9
tw = 1e-8/pi
t0 = 4*tw
#discretization parameters
dx = 0.01
dy = 0.01
dz = 0.01
nx = width / dx
ny = height / dy
nz = depth / dz
dt = 0.95 / (c*sqrt(dx**-2+dy**-2+dz**-2))
#calculation parameters
n_iter = 100
#initalization
Hx = zeros(nx,ny,nz)
Hy = zeros(nx,ny,nz)
Hz = zeros(nx,ny,nz)
Ex = zeros(nx,ny,nz)
Ey = zeros(nx,ny,nz)
Ez = zeros(nx,ny,nz)
#iteration
i = 0
for n in range(n_iter):
#derivatives
Hxy = diff(Hx,1);
Hxz = diff(Hx,2);
Hzx = diff(Hz,0);
Hzy = diff(Hz,1);
Hyx = diff(Hy,0);
Hyz = diff(Hy,2);
#Maxwell Equations
Ex[:,1:-1,1:-1] += (dt/(eps0*dy))*Hzy[:,:-1,1:-1] - (dt/(eps0*dz))*Hyz[:,1:-1,:-1]
Ey[1:-1,:,1:-1] += (dt/(eps0*dz))*Hxz[1:-1,:,:-1] - (dt/(eps0*dx))*Hzx[:-1,:,1:-1]
Ez[1:-1,1:-1,:] += (dt/(eps0*dx))*Hyx[:-1,1:-1,:] - (dt/(eps0*dy))*Hxy[1:-1,:-1,:]
#Gaussian Source
Ez[int(nx/2),int(ny/2),int(nz/2)] += sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)**2/(tw**2))/dy
#derivatives
Exy = diff(Ex,1);
Exz = diff(Ex,2);############################!!!!!!!!!!!
Ezx = diff(Ez,0);
Ezy = diff(Ez,1);
Eyx = diff(Ey,0);
Eyz = diff(Ey,2);
#Maxwell Equations
Hx[:,:-1,:-1] += -(dt/(mu0*dy))*Ezy[:,:,:-1] + (dt/(mu0*dz))*Eyz[:,:-1,:]
Hy[:-1,:,:-1] += -(dt/(mu0*dz))*Exz[:-1,:,:] + (dt/(mu0*dx))*Ezx[:,:,:-1]
Hz[:-1,:-1,:] += -(dt/(mu0*dx))*Eyx[:,:-1,:] + (dt/(mu0*dy))*Exy[:-1,:,:]
####################### I AM HERE #########################
"""
#display
if not i%5:
pcolor(Ez[:,int(ny/2),:])
drawnow
"""
i += 1
print(i)
| Python |
3D | overengineer/TR-FDTD | TR2D.m | .m | 1,295 | 60 | clear all
close all
load 'bitirme5.mat'
%TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
nx = 249;
ny = 249;
delta = 1.2e-2; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12; %0.95/(c*sqrt(dx^-2+dy^-2));
%f0 = 2e9; %2GHz
tw = 16*dt;
t0 = 200*dt;
srcx = round(nx/2);
srcy = round(ny/2);
eps_r = 1;
sigma = 0;
ksi = (dt * sigma) / ( 2 * eps0* eps_r );
%calculation parameters
n_iter = 500;
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
%iteration
for n=n_iter:-1:1
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) + (dt/(mu0*dy))*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) - (dt/(mu0*dx))*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = ((1-ksi)/(1+ksi))*Ez(2:nx-1,2:ny-1) - ((1/(1+ksi))*(dt/(eps0*eps_r)))*((1/dx)*Hyx(2:nx-1,2:ny-1) - (1/dy)*Hxy(2:nx-1,2:ny-1));
for i=1:1:23
Ez(i*10,srcy) = Ez(i*10,srcy) + receivers(i,n);
end
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
pcolor(Ez')
shading interp
colorbar
title(n)
drawnow
end
| MATLAB |
3D | overengineer/TR-FDTD | diel_tumor_lin.m | .m | 4,148 | 160 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % cm
height = 0.5;
length = 0.5; % cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
%tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
%spatial discretization
adipose = 1; %5;
tumor = 10;
sigma = 0;
epsr = tumor;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
%material
mw = 4; % width
mh = 4; % height
ml = 4; % length
%mx = nx / 4-10-mw/2;
mx = nx / 2-10-mw/2;
my = ny / 2-10-mh/2;
mz = nz / 2-ml/2;
al = 0;
eps = ones(nx,ny,nz) * eps0 ;
sigma = zeros(nx,ny,nz);%*f0 * 1e-9 * 0.5 - 0.5;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose;
sigma(i,j,k) = 0;
end
if (i>mx && i<(mw+mx) && j>my && j<(mh+my) && k>mz && k<(ml+mz))
eps(i,j,k) = eps0 * tumor;
sigma(i,j,k) = 0;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
tw=16*dt;
t0=3*tw;
n_iter = 250;
%receivers
nrec = round(nx / 3)-1;
recdx = round(nx / nrec);
recy = srcy-15;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
%f(n) = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
f(n) = -2*(n*dt-t0)/tw*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f(n);
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recdx * k, recy, recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,srcz);
pcolor(slice');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
trec = rec;
save('withtumor.mat','trec','nrec','n_iter','recy','recdx','recz')
| MATLAB |
3D | overengineer/TR-FDTD | bitirme1.m | .m | 1,243 | 53 | %TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
width = 1;
height = 1;
tw = 1e-9/pi;%?
t0 = 4*tw;
%discretization parameters
dx = 0.005;
dy = 0.005;
nx = width/dx;
ny = height/dy;
dt = 0.95/(c*sqrt(dx^-2+dy^-2));
%calculation parameters
n_iter = 1000;
c1 = dt/(mu0*dy);
c2 = dt/(mu0*dx);
c3 = dt/(eps0*dx);
c4 = dt/(eps0*dy);
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
%iteration
for n=1:1:n_iter
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) - c1*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) + c2*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = Ez(2:nx-1,2:ny-1) + c3*Hyx(2:nx-1,2:ny-1) - c4*Hxy(2:nx-1,2:ny-1);
%Gaussian Source
f(n)= exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(round(nx/2),round(ny/2)) = Ez(round(nx/2),round(ny/2)) + f(n);
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
%n = n + 1;
pcolor(Ez);
shading interp;
drawnow
end
| MATLAB |
3D | overengineer/TR-FDTD | Material3D.m | .m | 2,882 | 104 | %physical constants
clear all;
close all;
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 1;
height = 1;
length = 1;
%spatial discretization
dx = 0.02;
dy = dx;
dz = dx;
nx = width/dx;
ny = height/dy;
nz = length/dz;
%source
f0 =1e9;
tw = 1e-8/pi;
t0 = 4*tw;
srcx = round(nx / 2);
srcy = round(nz / 2);
srcz = round(3 * ny / 4);
%material
adipose = 10;
tumor = 60;
mx = 3 * ny / 8;
my = 0;
mz = 0;
mw = nx / 4; % width
mh = ny / 4; % height
ml = nz / 4; % length
al = ny / 2;
eps = ones(nx,ny,nz) * eps0;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose ;
end
if (i>mx && i<(mw+mx) && j>my && j<(mh+my) && k>mz && k<(ml+mz))
eps(i,j,k) = eps0 * tumor;
end
end
end
end
%temporal discretization
dt = 0.95/(c*sqrt(dx^-2+dy^-2+dz^-2));
n_iter = 5000;
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
Ex(:,2:end-1,2:end-1) = Ex(:,2:end-1,2:nz-1) + (dt/(eps(:,2:end-1,2:nz-1)*dy)).*Hzy(:,1:end-1,2:end-1) - (dt/(eps(:,2:end-1,2:nz-1)*dz)).*Hyz(:,2:ny-1,1:end-1);
Ey(2:end-1,:,2:end-1) = Ey(2:end-1,:,2:end-1) + (dt/(eps(2:end-1,:,2:end-1)*dz)).*Hxz(2:end-1,:,1:end-1) - (dt/(eps(2:end-1,:,2:end-1)*dx)).*Hzx(1:end-1,:,2:end-1);
Ez(2:end-1,2:end-1,:) = Ez(2:end-1,2:end-1,:) + (dt/(eps(2:end-1,2:end-1,:)*dx)).*Hyx(1:end-1,2:end-1,:) - (dt/(eps(2:end-1,2:end-1,:)*dy)).*Hxy(2:end-1,1:end-1,:);
%gaussian source
f = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f;
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
%display
if (mod(i,10)==0)
slice(:,:)=Ez(nx/2,:,:);
pcolor(slice);
colorbar;
drawnow
end
i = i+1
end
| MATLAB |
3D | overengineer/TR-FDTD | Maxwell3D.m | .m | 2,511 | 83 | %physical constants
clear all;
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
width = 1;
height = 1;
depth = 1;
f0 =1e9;
tw = 1e-8/pi;%?
t0 = 4*tw;
%discretization parameters
dx = 0.01;
dy = dx;
dz = dx;
nx = width/dx;
ny = height/dy;
nz = depth/dz;
dt = 0.95/(c*sqrt(dx^-2+dy^-2+dz^-2));
%calculation parameters
n_iter = 2000;
%initalization
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0
for n=1:1:n_iter
%derivatives
%H
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%Maxwell Equations
% boyutlar e? uzunlukta olmas? için türev al?nmayan boyutu bir eksik
% indeksliyoruz. E vektörlerinin türev boyutunu k?saltmadan al?p di?rev
% vektörün türev boyutunda bir ileriden ba?lat?yoruz.
%E
Ex(:,2:end-1,2:end-1) = Ex(:,2:end-1,2:end-1) + (dt/(eps0*dy))*Hzy(:,1:end-1,2:end-1) - (dt/(eps0*dz))*Hyz(:,2:ny-1,1:end-1); %OK
Ey(2:end-1,:,2:end-1) = Ey(2:end-1,:,2:end-1) + (dt/(eps0*dz))*Hxz(2:end-1,:,1:end-1) - (dt/(eps0*dx))*Hzx(1:end-1,:,2:end-1); %OK
Ez(2:end-1,2:end-1,:) = Ez(2:end-1,2:end-1,:) + (dt/(eps0*dx))*Hyx(1:end-1,2:end-1,:) - (dt/(eps0*dy))*Hxy(2:end-1,1:end-1,:); %OK
%Gaussian Source
f(n)= sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(round(nx/2),round(ny/2),round(nz/2)) = Ez(round(nx/2),round(ny/2),round(nz/2)) + f(n);
Ezn(n)=Ez(round(nx/2),round(ny/2)-4,round(nz/2));
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%H
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:); %OK
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1); %OK
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:); %OK
%display
if (mod(i,5)==0)
aa(:,:)= Ez(:,round(ny/2),:);
a = log(aa.*(aa>0));
a(1,1)= -60;
a(1,2)=0;
a(end,1)=-60;
a(end,end)= -60;
a(1,end)=-60;
pcolor(a);colorbar
end
%shading interp;
drawnow
i = i+1
end
| MATLAB |
3D | overengineer/TR-FDTD | bitirme3.m | .m | 1,297 | 53 | clear all
close all
%TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
nx = 249;
ny = 249;
delta = 1.2e-2; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12;%0.95/(c*sqrt(dx^-2+dy^-2));
f0 = 2e9; % GHz
tw = 16*dt;
t0 = 200*dt;
eps_r = 2.9;
sigma = 0.2; %S/m
ksi = (dt * sigma) / ( 2 * eps0* eps_r );
%calculation parameters
n_iter = 5000;
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
%iteration
for n=1:1:n_iter
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) - (dt/(mu0*dy))*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) + (dt/(mu0*dx))*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = ((1-ksi)/(1+ksi))*Ez(2:nx-1,2:ny-1) + ((1/(1+ksi))*(dt/(eps0*eps_r)))*((1/dx)*Hyx(2:nx-1,2:ny-1) - (1/dy)*Hxy(2:nx-1,2:ny-1));
%Gaussian Source
f(n)= sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(round(nx/2),round(ny/2)) = Ez(round(nx/2),round(ny/2)) + f(n);
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
%n = n + 1;
pcolor(Ez);
shading interp;
drawnow
end
| MATLAB |
3D | overengineer/TR-FDTD | kaynakTR_lossy.m | .m | 0 | 0 | null | MATLAB |
3D | overengineer/TR-FDTD | makale3d.m | .m | 3,918 | 152 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
nx = 249;
ny = 249;
delta = 1.2e-2; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12; %0.95/(c*sqrt(dx^-2+dy^-2));
f0 = 2e9; %2GHz
tw = 16*dt;
t0 = 200*dt;
%spatial discretization
adipose = 4.58;
tumor = 60;
sigma = 0.52;
epsr = tumor;
w = 2 * pi * f0;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 10;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round(ny / 2);
srcz = round(3 * nz / 4);
%material
mx = 3 * nx / 8;
my = ny / 8;
mz = nz / 8;
mw = nx / 4; % width
mh = ny / 4; % height
ml = nz / 4; % length
al = ny / 2;
eps = ones(nx,ny,nz) * eps0 * adipose;
sigma = ones(nx,ny,nz) * f0 * 1e-9 * 0.5 - 0.5;;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose ;
sigma(i,j,k) = f0 * 1e-9 * 0.5 - 0.5;
end
if (i>mx && i<(mw+mx) && j>my && j<(mh+my) && k>mz && k<(ml+mz))
eps(i,j,k) = eps0 * tumor;
sigma(i,j,k) = f0 * 1e-9 - 0.5;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
n_iter = 10000;
%receivers
nrec = round(nx / 3)-1;
recdy = round(ny / nrec);
recx = srcx;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
f = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f;
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recx,recdy * k ,recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,round(nz/2));
pcolor(slice');
colorbar;
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
save('conductive-receivers.mat','rec','nrec','n_iter','recx','recdy','recz')
| MATLAB |
3D | overengineer/TR-FDTD | TR.m | .m | 3,739 | 139 | %physical constants
clear all;
close all;
load 'withtumor.mat';
load 'withouttumor.mat';
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % 30cm
height = 0.5;
length = 0.5; % 1cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
t0 = 4*tw;
%spatial discretization
adipose = 5;
tumor = 10;
sigma = 5;
epsr = tumor;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dx;
dz = dx;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
% material
eps = ones(nx,ny,nz) * eps0; %* adipose;
sigma = zeros(nx,ny,nz);% * f0 * 1e-9 * 0.5 - 0.5;
%temporal discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
rec1 = trec - rec;
tau = 100e-12;
[foo,tp] = max(abs(rec1),[],2);
for k=1:1:nrec
recn(k,:) = exp(-((dt*((1:1:n_iter)-tp(k)))/tau).^2) .* rec1(k,:);
end
% hold on
% plot(rec(15,:))
% plot(exp(-((dt*((1:1:n_iter)-tp(15)))/tau).^2))
% draw now
%
% while 1
% end
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) - c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) - c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) - c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%TR sources
for k=1:nrec
Ez(recdx * k, recy, recz) = Ez(recdx * k, recy, recz) + recn(k, n_iter-n+1);
end
%Ez(recx, recdy , recz)
%rec(1,n_iter-n)
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) + (dt/(mu0*dy))*Ezy(:,:,1:end-1) - (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) + (dt/(mu0*dz))*Exz(1:end-1,:,:) - (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) + (dt/(mu0*dx))*Eyx(:,1:end-1,:) - (dt/(mu0*dy))*Exy(1:end-1,:,:);
%display
if (mod(n,10)==0)
slice(:,:)=Ez(60:100,round(ny/2)-20:round(ny/2)+20,srcz);
pcolor(slice.');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
R(n) = varimax_norm(Ez(60:100,round(ny/2)-20:round(ny/2)+20,srcz));
end
figure;plot(R)
function R = varimax_norm(Ez)
R = sum(sum(sum(Ez.^2)))^2 / sum(sum(sum(Ez.^4)));
end
| MATLAB |
3D | overengineer/TR-FDTD | Conductivity.m | .m | 3,879 | 150 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.05; % cm
height = 0.05;
length = 0.002; % cm
%source parameters
f0 = 2e9; % GHz
tw = 1e-8/pi;
t0 = 4*tw;
%spatial discretization
adipose = 10;
tumor = 60;
sigma = 1;
epsr = tumor;
w = 2 * pi * f0;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 10;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round(ny / 2);
srcz = round(3 * nz / 4);
%material
mx = 3 * nx / 8;
my = ny / 8;
mz = nz / 8;
mw = nx / 4; % width
mh = ny / 4; % height
ml = nz / 4; % length
al = ny / 2;
eps = ones(nx,ny,nz) * eps0 * adipose;
sigma = ones(nx,ny,nz) * f0 * 1e-9 * 0.5 - 0.5;;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose ;
sigma(i,j,k) = f0 * 1e-9 * 0.5 - 0.5;
end
if (i>mx && i<(mw+mx) && j>my && j<(mh+my) && k>mz && k<(ml+mz))
eps(i,j,k) = eps0 * tumor;
sigma(i,j,k) = f0 * 1e-9 - 0.5;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
n_iter = 10000;
%receivers
nrec = round(nx / 3)-1;
recdy = round(ny / nrec);
recx = srcx;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
f = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f;
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recx,recdy * k ,recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,round(nz/2));
pcolor(slice');
colorbar;
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
save('conductive-receivers.mat','rec','nrec','n_iter','recx','recdy','recz')
| MATLAB |
3D | overengineer/TR-FDTD | TR_lin.m | .m | 3,830 | 141 | %physical constants
clear all;
close all;
load 'withtumor.mat';
load 'withouttumor.mat';
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % 30cm
height = 0.5;
length = 0.5; % 1cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
t0 = 4*tw;
%spatial discretization
adipose = 5;
tumor = 10;
sigma = 0;
epsr = tumor;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dx;
dz = dx;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
% material
eps = ones(nx,ny,nz) * eps0; %* adipose;
sigma = zeros(nx,ny,nz);% * f0 * 1e-9 * 0.5 - 0.5;
%temporal discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
rec1 = trec - rec;
tau = 20e-12;
[foo,tp] = max(abs(rec1),[],2);
for k=1:1:nrec
recn(k,:) = exp(-((dt*((1:1:n_iter)-tp(k)))/tau).^2) .* rec1(k,:);
end
% hold on
% plot(rec(15,:))
% plot(exp(-((dt*((1:1:n_iter)-tp(15)))/tau).^2))
% draw now
%
% while 1
% end
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) - c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) - c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) - c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%TR sources
for k=1:nrec
Ez(recdx * k, recy, recz) = Ez(recdx * k, recy, recz) + recn(k, n_iter-n+1);
end
%Ez(recx, recdy , recz)
%rec(1,n_iter-n)
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) + (dt/(mu0*dy))*Ezy(:,:,1:end-1) - (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) + (dt/(mu0*dz))*Exz(1:end-1,:,:) - (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) + (dt/(mu0*dx))*Eyx(:,1:end-1,:) - (dt/(mu0*dy))*Exy(1:end-1,:,:);
%display
if 1 %n>120 && n<160)
%slice(:,:)=Ez(30:60,round(ny/2)-20:round(ny/2)+3,srcz);
slice(:,:)=Ez(35:55,35:55,srcz);
pcolor(slice.');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
%R(n) = varimax_norm(Ez(30:56,round(ny/2)-20:round(ny/2)+3,srcz));
R(n) = varimax_norm(Ez(35:55,35:55,srcz));
end
figure;plot(R)
function R = varimax_norm(Ez)
R = sum(sum(sum(Ez.^2)))^2 / sum(sum(sum(Ez.^4)));
end
| MATLAB |
3D | overengineer/TR-FDTD | WithoutTumor.m | .m | 3,851 | 147 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % cm
height = 0.5;
length = 0.5; % cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
t0 = 4*tw;
%spatial discretization
adipose = 1; %5;
sigma = 5;
epsr = 10;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
%material
al = 0;
eps = ones(nx,ny,nz) * eps0 ;
sigma = zeros(nx,ny,nz);%*f0 * 1e-9 * 0.5 - 0.5;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose;
sigma(i,j,k) = f0 * 1e-9 * 0.5 - 0.5;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
tw=16*dt;
t0=3*tw;
n_iter = 250;
%receivers
nrec = round(nx / 3)-1;
recdx = round(nx / nrec);
recy = srcy-20;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
%f(n) = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
f(n) = -2*(n*dt-t0)/tw*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f(n);
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recdx * k, recy, recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,srcz);
pcolor(slice');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
save('withouttumor.mat','rec','nrec','n_iter','recy','recdx','recz')
| MATLAB |
3D | overengineer/TR-FDTD | bitirme2.m | .m | 1,302 | 53 | clear all
close all
%TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
nx = 249;
ny = 249;
delta = 0.012; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12;%0.95/(c*sqrt(dx^-2+dy^-2));
f0 = 2e9; % GHz
tw = 16*dt;
t0 = 200*dt;
eps_r = 1;%2.9;
sigma = 0;%0.2; %S/m
ksi = (dt * sigma) / ( 2 * eps0* eps_r );
%calculation parameters
n_iter = 5000;
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
%iteration
for n=1:1:n_iter
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) - (dt/(mu0*dy))*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) + (dt/(mu0*dx))*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = ((1-ksi)/(1+ksi))*Ez(2:nx-1,2:ny-1) + ((1/(1+ksi))*(dt/(eps0*eps_r)))*((1/dx)*Hyx(2:nx-1,2:ny-1) - (1/dy)*Hxy(2:nx-1,2:ny-1));
%Gaussian Source
f(n)= sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(round(nx/2),round(ny/2)) = Ez(round(nx/2),round(ny/2)) + f(n);
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
%n = n + 1;
pcolor(Ez);
shading interp;
drawnow
end
| MATLAB |
3D | overengineer/TR-FDTD | hytrf.m | .m | 0 | 0 | null | MATLAB |
3D | overengineer/TR-FDTD | diel_no_tumor_lin.m | .m | 3,831 | 147 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % cm
height = 0.5;
length = 0.5; % cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
t0 = 4*tw;
%spatial discretization
adipose = 1; %5;
sigma = 0;
epsr = 10;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
%material
al = 0;
eps = ones(nx,ny,nz) * eps0 ;
sigma = zeros(nx,ny,nz);%*f0 * 1e-9 * 0.5 - 0.5;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose;
sigma(i,j,k) = 0;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
tw=16*dt;
t0=3*tw;
n_iter = 250;
%receivers
nrec = round(nx / 3)-1;
recdx = round(nx / nrec);
recy = srcy-15;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
%f(n) = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
f(n) = -2*(n*dt-t0)/tw*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f(n);
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recdx * k, recy, recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,srcz);
pcolor(slice');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
save('withouttumor.mat','rec','nrec','n_iter','recy','recdx','recz')
| MATLAB |
3D | overengineer/TR-FDTD | WithTumor.m | .m | 4,179 | 160 | %physical constants
clear all;
close all;
c0 = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%box dimensions
width = 0.5; % cm
height = 0.5;
length = 0.5; % cm
%source parameters
f0 = 1e9; % GHz
band = 2e9;
%tw = sqrt(-log(0.1)/(pi*band)^2);%1e-8/pi;
%spatial discretization
adipose = 1; %5;
tumor = 10;
sigma = 5;
epsr = tumor;
w = 2 * pi * band;
k = (w/c0)*sqrt(epsr-1j*sigma/(w*eps0));
beta = real(k);
c = w / beta;
lambda = c/f0;
dxmax = lambda / 20;
dx = dxmax;
dy = dxmax;
dz = dxmax;
nx = round(width/dx);
ny = round(height/dy);
nz = round(length/dz);
%source position
srcx = round(nx / 2);
srcy = round( 3 * ny / 4);
srcz = round(nz / 2);
%material
mw = 4; % width
mh = 4; % height
ml = 4; % length
%mx = nx / 4-10-mw/2;
mx = nx / 2-10-mw/2;
my = ny / 2-mh/2;
mz = nz / 2-ml/2;
al = 0;
eps = ones(nx,ny,nz) * eps0 ;
sigma = zeros(nx,ny,nz);%*f0 * 1e-9 * 0.5 - 0.5;
for i=1:1:nx
for j=1:1:ny
for k=1:1:nz
% adipose tissue is located under z < al
if (k<al)
eps(i,j,k) = eps0 * adipose;
sigma(i,j,k) = f0 * 1e-9 * 0.5 - 0.5;
end
if (i>mx && i<(mw+mx) && j>my && j<(mh+my) && k>mz && k<(ml+mz))
eps(i,j,k) = eps0 * tumor;
sigma(i,j,k) = f0 * 1e-9 - 0.5;
end
end
end
end
%time discretization
dt = 0.99/(c0*sqrt(dx^-2+dy^-2+dz^-2));
tw=16*dt;
t0=3*tw;
n_iter = 250;
%receivers
nrec = round(nx / 3)-1;
recdx = round(nx / nrec);
recy = srcy-20;
recz = srcz;
rec = zeros(nrec,n_iter);
%EM field dimensions
Hx = zeros(nx,ny,nz);
Hy = zeros(nx,ny,nz);
Hz = zeros(nx,ny,nz);
Ex = zeros(nx,ny,nz);
Ey = zeros(nx,ny,nz);
Ez = zeros(nx,ny,nz);
%iteration
i = 0;
for n=1:1:n_iter
%magnetic field derivatives
Hxy = diff(Hx,1,2);
Hxz = diff(Hx,1,3);
Hzx = diff(Hz,1,1);
Hzy = diff(Hz,1,2);
Hyx = diff(Hy,1,1);
Hyz = diff(Hy,1,3);
%electric field maxwell equations
epsi = eps(:,2:end-1,2:nz-1);
ksi = (dt * sigma(:,2:end-1,2:nz-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ex(:,2:end-1,2:end-1) = c1.*Ex(:,2:end-1,2:nz-1) + c2.*((1/dy)*Hzy(:,1:end-1,2:end-1) - (1/dz)*Hyz(:,2:ny-1,1:end-1));
epsi = eps(2:end-1,:,2:end-1);
ksi = (dt * sigma(2:end-1,:,2:end-1)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ey(2:end-1,:,2:end-1) = c1.*Ey(2:end-1,:,2:end-1) + c2.*((1/dz)*Hxz(2:end-1,:,1:end-1) - (1/dx)*Hzx(1:end-1,:,2:end-1));
epsi = eps(2:end-1,2:end-1,:);
ksi = (dt * sigma(2:end-1,2:end-1,:)) ./ ( 2 * epsi );
c2 = (1./(1+ksi)).*(dt./epsi);
c1 = (1-ksi)./(1+ksi);
Ez(2:end-1,2:end-1,:) = c1.*Ez(2:end-1,2:end-1,:) + c2.*((1/dx)*Hyx(1:end-1,2:end-1,:) - (1/dy)*Hxy(2:end-1,1:end-1,:));
%gaussian source
%f(n) = sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
f(n) = -2*(n*dt-t0)/tw*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(srcx,srcy,srcz) = Ez(srcx,srcy,srcz) + f(n);
%Ezn(n)=Ez(srcx,srcy,srcz);
%electric field derivatives
Exy = diff(Ex,1,2);
Exz = diff(Ex,1,3);
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Eyx = diff(Ey,1,1);
Eyz = diff(Ey,1,3);
%magnetic field maxwell equations
Hx(:,1:end-1,1:end-1) = Hx(:,1:end-1,1:end-1) - (dt/(mu0*dy))*Ezy(:,:,1:end-1) + (dt/(mu0*dz))*Eyz(:,1:end-1,:);
Hy(1:end-1,:,1:end-1) = Hy(1:end-1,:,1:end-1) - (dt/(mu0*dz))*Exz(1:end-1,:,:) + (dt/(mu0*dx))*Ezx(:,:,1:end-1);
Hz(1:end-1,1:end-1,:) = Hz(1:end-1,1:end-1,:) - (dt/(mu0*dx))*Eyx(:,1:end-1,:) + (dt/(mu0*dy))*Exy(1:end-1,:,:);
for k=1:1:nrec
rec(k,n) = Ez(recdx * k, recy, recz);
end
%display
if (mod(i,5)==0)
slice(:,:)=Ez(:,:,srcz);
pcolor(slice');
colorbar;
shading interp
drawnow
end
i = i+1;
disp(i)
end
close all
hold on
for k=1:1:nrec
plot(rec(k,:))
end
trec = rec;
save('withtumor.mat','trec','nrec','n_iter','recy','recdx','recz')
| MATLAB |
3D | overengineer/TR-FDTD | bitirme4.m | .m | 1,300 | 53 | clear all
close all
%TMz Polarization
%physical constants
c = 2.998e8;
eta0 = 120*pi;
mu0 = pi*4e-7;
eps0 = 1e-9/(36*pi);
%environment parameters
nx = 249;
ny = 249;
delta = 1.2e-2; %1.2cm
dx = delta;
dy = delta;
dt = 20e-12; %0.95/(c*sqrt(dx^-2+dy^-2));
f0 = 2e9; %2GHz
tw = 16*dt;
t0 = 200*dt;
eps_r = 4.58;
sigma = 0.52; %S/m
ksi = (dt * sigma) / ( 2 * eps0* eps_r );
%calculation parameters
n_iter = 5000;
%initalization
Hx = zeros(nx,ny);
Hy = zeros(nx,ny);
Ez = zeros(nx,ny);
%iteration
for n=1:1:n_iter
%Maxwell Equations (TMz)
Ezx = diff(Ez,1,1);
Ezy = diff(Ez,1,2);
Hx(2:nx-1,2:ny) = Hx(2:nx-1,2:ny) - (dt/(mu0*dy))*Ezy(2:nx-1,:);
Hy(2:nx,2:ny-1) = Hy(2:nx,2:ny-1) + (dt/(mu0*dx))*Ezx(:,2:ny-1);
Hxy = diff(Hx,1,2);
Hyx = diff(Hy,1,1);
Ez(2:nx-1,2:ny-1) = ((1-ksi)/(1+ksi))*Ez(2:nx-1,2:ny-1) + ((1/(1+ksi))*(dt/(eps0*eps_r)))*((1/dx)*Hyx(2:nx-1,2:ny-1) - (1/dy)*Hxy(2:nx-1,2:ny-1));
%Gaussian Source
f(n)= sin(2*pi*f0*n*dt)*exp(-(n*dt-t0)^2/(tw^2))/dy;
Ez(round(nx/2),round(ny/2)) = Ez(round(nx/2),round(ny/2)) + f(n);
%Neuman Condition
Ez(:,2) = -Ez(:,1);
Ez(2,:) = -Ez(1,:);
Ez(:,ny-1) = -Ez(:,ny);
Ez(nx-1,:) = -Ez(nx,:);
%display
%n = n + 1;
pcolor(Ez);
shading interp;
drawnow
end
| MATLAB |
3D | overengineer/TR-FDTD | Eztr_lossy.m | .m | 0 | 0 | null | MATLAB |
3D | jianlin-cheng/TransFun | training.py | .py | 10,866 | 290 | import math
import os
import numpy as np
import torch.optim as optim
from torchsummary import summary
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
import Constants
import params
from Dataset.Dataset import load_dataset
from models.gnn import GCN
import argparse
import torch
import time
from torch_geometric.loader import DataLoader
from preprocessing.utils import pickle_save, pickle_load, save_ckp, load_ckp, class_distribution_counter, \
draw_architecture, compute_roc
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
parser = argparse.ArgumentParser()
parser.add_argument('--no-cuda', action='store_true', default=False, help='Disables CUDA training.')
parser.add_argument('--fastmode', action='store_true', default=False, help='Validate during training pass.')
parser.add_argument('--seed', type=int, default=42, help='Random seed.')
parser.add_argument('--epochs', type=int, default=70, help='Number of epochs to train.')
parser.add_argument('--lr', type=float, default=0.001, help='Initial learning rate.')
parser.add_argument('--train_batch', type=int, default=10, help='Training batch size.')
parser.add_argument('--valid_batch', type=int, default=10, help='Validation batch size.')
parser.add_argument('--seq', type=float, default=0.9, help='Sequence Identity (Sequence Identity).')
parser.add_argument("--ont", default='molecular_function', type=str, help='Ontology under consideration')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
if args.cuda:
device = 'cuda'
kwargs = {
'seq_id': args.seq,
'ont': args.ont,
'session': 'train'
}
if args.ont == 'molecular_function':
ont_kwargs = params.mol_kwargs
elif args.ont == 'cellular_component':
ont_kwargs = params.cc_kwargs
elif args.ont == 'biological_process':
ont_kwargs = params.bio_kwargs
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.cuda:
torch.cuda.manual_seed(args.seed)
num_class = len(pickle_load(Constants.ROOT + 'go_terms')[f'GO-terms-{args.ont}'])
def create_class_weights(cnter):
class_weight_path = Constants.ROOT + "{}/{}/class_weights".format(kwargs['seq_id'], kwargs['ont'])
if os.path.exists(class_weight_path + ".pickle"):
print("Loading class weights")
class_weights = pickle_load(class_weight_path)
else:
print("Generating class weights")
go_terms = pickle_load(Constants.ROOT + "/go_terms")
terms = go_terms['GO-terms-{}'.format(args.ont)]
class_weights = [cnter[i] for i in terms]
total = sum(class_weights)
_max = max(class_weights)
class_weights = torch.tensor([total / i for i in class_weights], dtype=torch.float).to(device)
return class_weights
class_weights = create_class_weights(class_distribution_counter(**kwargs))
dataset = load_dataset(root=Constants.ROOT, **kwargs)
labels = pickle_load(Constants.ROOT + "{}_labels".format(args.ont))
edge_types = list(params.edge_types)
train_dataloader = DataLoader(dataset,
batch_size=args.train_batch,
drop_last=True,
exclude_keys=edge_types,
shuffle=True)
kwargs['session'] = 'validation'
val_dataset = load_dataset(root=Constants.ROOT, **kwargs)
valid_dataloader = DataLoader(val_dataset,
batch_size=args.valid_batch,
drop_last=False,
shuffle=False,
exclude_keys=edge_types)
print('========================================')
print(f'# training proteins: {len(dataset)}')
print(f'# validation proteins: {len(val_dataset)}')
print(f'# Number of classes: {num_class}')
# print(f'# Max class weights: {torch.max(class_weights)}')
# print(f'# Min class weights: {torch.min(class_weights)}')
print('========================================')
current_epoch = 1
min_val_loss = np.Inf
inpu = next(iter(train_dataloader))
model = GCN(**ont_kwargs)
model.to(device)
optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=ont_kwargs['wd'])
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, args.epochs)
criterion = torch.nn.BCELoss(reduction='none')
labels = pickle_load(Constants.ROOT + "{}_labels".format(args.ont))
def train(start_epoch, min_val_loss, model, optimizer, criterion, data_loader):
min_val_loss = min_val_loss
for epoch in range(start_epoch, args.epochs):
print(" ---------- Epoch {} ----------".format(epoch))
# initialize variables to monitor training and validation loss
epoch_loss, epoch_precision, epoch_recall, epoch_accuracy, epoch_f1 = 0.0, 0.0, 0.0, 0.0, 0.0
val_loss, val_precision, val_recall, val_accuracy, val_f1 = 0.0, 0.0, 0.0, 0.0, 0.0
t = time.time()
with torch.autograd.set_detect_anomaly(True):
lr_scheduler.step()
###################
# train the model #
###################
model.train()
for pos, data in enumerate(data_loader['train']):
labs = []
for la in data['atoms'].protein:
labs.append(torch.tensor(labels[la], dtype=torch.float32).view(1, -1))
labs = torch.cat(labs, dim=0)
cnts = torch.sum(labs, dim=0)
total = torch.sum(cnts)
optimizer.zero_grad()
output = model(data.to(device))
loss = criterion(output, labs.to(device))
loss = (loss * class_weights).mean()
pom = output.detach().cpu().numpy()
bins = np.arange(0.0, 1.1, 0.1)
digitized = np.digitize(pom, bins) - 1
counts = np.bincount(digitized.flatten(), minlength=len(bins))
# Display the counts for each range
for i, count in enumerate(counts):
lower_bound = round(bins[i], 1)
upper_bound = round(bins[i+1], 1) if i < len(bins) - 1 else 1.0
print(f"Range {lower_bound:.1f} - {upper_bound:.1f}: {count} elements")
exit()
loss.backward()
optimizer.step()
epoch_loss += loss.data.item()
out_cpu_5 = output.cpu() > 0.5
epoch_accuracy += accuracy_score(y_true=labs, y_pred=out_cpu_5)
epoch_precision += precision_score(y_true=labs, y_pred=out_cpu_5, average="samples")
epoch_recall += recall_score(y_true=labs, y_pred=out_cpu_5, average="samples")
epoch_f1 += f1_score(y_true=labs, y_pred=out_cpu_5, average="samples")
epoch_accuracy = epoch_accuracy / len(loaders['train'])
epoch_precision = epoch_precision / len(loaders['train'])
epoch_recall = epoch_recall / len(loaders['train'])
epoch_f1 = epoch_f1 / len(loaders['train'])
###################
# Validate the model #
###################
with torch.no_grad():
model.eval()
for data in data_loader['valid']:
labs = []
for la in data['atoms'].protein:
labs.append(torch.tensor(labels[la], dtype=torch.float32).view(1, -1))
labs = torch.cat(labs)
output = model(data.to(device))
_val_loss = criterion(output, labs.to(device))
_val_loss = (_val_loss * class_weights).mean()
val_loss += _val_loss.data.item()
val_accuracy += accuracy_score(labs, output.cpu() > 0.5)
val_precision += precision_score(labs, output.cpu() > 0.5, average="samples")
val_recall += recall_score(labs, output.cpu() > 0.5, average="samples")
val_f1 += f1_score(labs, output.cpu() > 0.5, average="samples")
val_loss = val_loss / len(loaders['valid'])
val_accuracy = val_accuracy / len(loaders['valid'])
val_precision = val_precision / len(loaders['valid'])
val_recall = val_recall / len(loaders['valid'])
val_f1 = val_f1 / len(loaders['valid'])
print('Epoch: {:04d}'.format(epoch),
'train_loss: {:.4f}'.format(epoch_loss),
'train_acc: {:.4f}'.format(epoch_accuracy),
'precision: {:.4f}'.format(epoch_precision),
'recall: {:.4f}'.format(epoch_recall),
'f1: {:.4f}'.format(epoch_f1),
'val_acc: {:.4f}'.format(val_accuracy),
'val_loss: {:.4f}'.format(val_loss),
'val_precision: {:.4f}'.format(val_precision),
'val_recall: {:.4f}'.format(val_recall),
'val_f1: {:.4f}'.format(val_f1),
'time: {:.4f}s'.format(time.time() - t))
checkpoint = {
'epoch': epoch,
'valid_loss_min': val_loss,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
}
# save checkpoint
# save_ckp(checkpoint, False, ckp_pth,
# ckp_dir + "best_model.pt")
#
# if val_loss <= min_val_loss:
# print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'. \
# format(min_val_loss, val_loss))
#
# # save checkpoint as best model
# save_ckp(checkpoint, True, ckp_pth,
# ckp_dir + "best_model.pt")
# min_val_loss = val_loss
return model
loaders = {
'train': train_dataloader,
'valid': valid_dataloader
}
ckp_dir = Constants.ROOT + 'checkpoints/model_checkpoint/{}/'.format("cur")
ckp_pth = ckp_dir + "current_checkpoint.pt"
#ckp_pth = ""
ckp_dir = "/home/fbqc9/PycharmProjects/TFUNClone/TransFun/data/"
ckp_pth = ckp_dir + "molecular_function.pt"
print(ckp_pth)
if os.path.exists(ckp_pth):
print("Loading model checkpoint @ {}".format(ckp_pth))
model, optimizer, current_epoch, min_val_loss = load_ckp(ckp_pth, model, optimizer, device="cuda:0")
else:
if not os.path.exists(ckp_dir):
os.makedirs(ckp_dir)
print("Training model on epoch {}, with minimum validation loss as {}".format(current_epoch, min_val_loss))
config = {
"learning_rate": args.lr,
"epochs": current_epoch,
"batch_size": args.train_batch,
"valid_size": args.valid_batch,
"weight_decay": ont_kwargs['wd']
}
trained_model = train(current_epoch, min_val_loss,
model=model, optimizer=optimizer,
criterion=criterion, data_loader=loaders)
| Python |
3D | jianlin-cheng/TransFun | predict.py | .py | 8,348 | 227 | import argparse
import os
import networkx as nx
import obonet
import torch
from Bio import SeqIO
from torch import optim
from torch_geometric.loader import DataLoader
import Constants
import params
from Dataset.Dataset import load_dataset
from models.gnn import GCN
from preprocessing.utils import load_ckp, get_sequence_from_pdb, create_seqrecord, get_proteins_from_fasta, \
generate_bulk_embedding, pickle_load, fasta_to_dictionary
parser = argparse.ArgumentParser(description=" Predict protein functions with TransFun ", epilog=" Thank you !!!")
parser.add_argument('--data-path', type=str, default="data", help="Path to data files")
parser.add_argument('--ontology', type=str, default="cellular_component", help="Path to data files")
parser.add_argument('--no-cuda', default=False, help='Disables CUDA training.')
parser.add_argument('--batch-size', default=10, help='Batch size.')
parser.add_argument('--input-type', choices=['fasta', 'pdb'], default="fasta",
help='Input Data: fasta file or PDB files')
parser.add_argument('--fasta-path', default="sequence.fasta", help='Path to Fasta')
parser.add_argument('--pdb-path', default="alphafold", help='Path to directory of PDBs')
parser.add_argument('--cut-off', type=float, default=0.0, help="Cut of to report function")
parser.add_argument('--output', type=str, default="output", help="File to save output")
# parser.add_argument('--add-ancestors', default=False, help="Add ancestor terms to prediction")
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
if args.cuda:
device = 'cuda'
else:
device = 'cpu'
if args.ontology == 'molecular_function':
ont_kwargs = params.mol_kwargs
elif args.ontology == 'cellular_component':
ont_kwargs = params.cc_kwargs
elif args.ontology == 'biological_process':
ont_kwargs = params.bio_kwargs
ont_kwargs['device'] = device
FUNC_DICT = {
'cellular_component': 'GO:0005575',
'molecular_function': 'GO:0003674',
'biological_process': 'GO:0008150'
}
print("Predicting proteins")
def create_fasta(proteins):
fasta = []
for protein in proteins:
alpha_fold_seq = get_sequence_from_pdb("{}/{}/{}.pdb.gz".format(args.data_path, args.pdb_path, protein), "A")
fasta.append(create_seqrecord(id=protein, seq=alpha_fold_seq))
SeqIO.write(fasta, "{}/sequence.fasta".format(args.data_path), "fasta")
args.fasta_path = "{}/sequence.fasta".format(args.data_path)
def write_to_file(data, output):
with open('{}'.format(output), 'w') as fp:
for protein, go_terms in data.items():
for go_term, score in go_terms.items():
fp.write('%s %s %s\n' % (protein, go_term, score))
def generate_embeddings(fasta_path):
def merge_pts(keys, fasta):
embeddings = [0, 32, 33]
for protein in keys:
fasta_dic = fasta_to_dictionary(fasta)
tmp = []
for level in range(keys[protein]):
os_path = "{}/esm/{}_{}.pt".format(args.data_path, protein, level)
tmp.append(torch.load(os_path))
data = {'representations': {}, 'mean_representations': {}}
for index in tmp:
for rep in embeddings:
assert torch.equal(index['mean_representations'][rep], torch.mean(index['representations'][rep], dim=0))
if rep in data['representations']:
data['representations'][rep] = torch.cat((data['representations'][rep], index['representations'][rep]))
else:
data['representations'][rep] = index['representations'][rep]
for emb in embeddings:
assert len(fasta_dic[protein][3]) == data['representations'][emb].shape[0]
for rep in embeddings:
data['mean_representations'][rep] = torch.mean(data['representations'][rep], dim=0)
# print("saving {}".format(protein))
torch.save(data, "{}/esm/{}.pt".format(args.data_path, protein))
def crop_fasta(record):
splits = []
keys = {}
main_id = record.id
keys[main_id] = int(len(record.seq) / 1021) + 1
for pos in range(int(len(record.seq) / 1021) + 1):
id = "{}_{}".format(main_id, pos)
seq = str(record.seq[pos * 1021:(pos * 1021) + 1021])
splits.append(create_seqrecord(id=id, name=id, description="", seq=seq))
return splits, keys
keys = {}
sequences = []
input_seq_iterator = SeqIO.parse(fasta_path, "fasta")
for record in input_seq_iterator:
if len(record.seq) > 1021:
_seqs, _keys = crop_fasta(record)
sequences.extend(_seqs)
keys.update(_keys)
else:
sequences.append(record)
cropped_fasta = "{}/sequence_cropped.fasta".format(args.data_path)
SeqIO.write(sequences, cropped_fasta, "fasta")
generate_bulk_embedding("./preprocessing/extract.py", "{}".format(cropped_fasta),
"{}/esm".format(args.data_path))
# merge
if len(keys) > 0:
print("Found {} protein with length > 1021".format(len(keys)))
merge_pts(keys, fasta_path)
if args.input_type == 'fasta':
if not args.fasta_path is None:
proteins = set(get_proteins_from_fasta("{}/{}".format(args.data_path, args.fasta_path)))
pdbs = set([i.split(".")[0] for i in os.listdir("{}/{}".format(args.data_path, args.pdb_path))])
proteins = list(pdbs.intersection(proteins))
elif args.input_type == 'pdb':
if not args.pdb_path is None:
pdb_path = "{}/{}".format(args.data_path, args.pdb_path)
if os.path.exists(pdb_path):
proteins = os.listdir(pdb_path)
proteins = [protein.split('.')[0] for protein in proteins if protein.endswith(".pdb.gz")]
if len(proteins) == 0:
print(print("No proteins found.".format(pdb_path)))
exit()
create_fasta(proteins)
else:
print("PDB directory not found -- {}".format(pdb_path))
exit()
if len(proteins) > 0:
print("Predicting for {} proteins".format(len(proteins)))
print("Generating Embeddings from {}".format(args.fasta_path))
os.makedirs("{}/esm".format(args.data_path), exist_ok=True)
generate_embeddings(args.fasta_path)
kwargs = {
'seq_id': Constants.Final_thresholds[args.ontology],
'ont': args.ontology,
'session': 'selected',
'prot_ids': proteins,
'pdb_path': "{}/{}".format(args.data_path, args.pdb_path)
}
dataset = load_dataset(root=args.data_path, **kwargs)
test_dataloader = DataLoader(dataset,
batch_size=args.batch_size,
drop_last=False,
shuffle=False)
# model
model = GCN(**ont_kwargs)
model.to(device)
optimizer = optim.Adam(model.parameters())
ckp_pth = "{}/{}.pt".format(args.data_path, args.ontology)
print(ckp_pth)
# load the saved checkpoint
if os.path.exists(ckp_pth):
model, optimizer, current_epoch, min_val_loss = load_ckp(ckp_pth, model, optimizer, device)
else:
print("Model not found. Skipping...")
exit()
model.eval()
scores = []
proteins = []
for data in test_dataloader:
with torch.no_grad():
proteins.extend(data['atoms'].protein)
scores.extend(model(data.to(device)).tolist())
assert len(proteins) == len(scores)
goterms = pickle_load('{}/go_terms'.format(args.data_path))[f'GO-terms-{args.ontology}']
go_graph = obonet.read_obo(open("{}/go-basic.obo".format(args.data_path), 'r'))
go_set = nx.ancestors(go_graph, FUNC_DICT[args.ontology])
results = {}
for protein, score in zip(proteins, scores):
protein_scores = {}
for go_term, _score in zip(goterms, score):
if _score > args.cut_off:
protein_scores[go_term] = max(protein_scores.get(go_term, 0), _score)
for go_term, max_score in list(protein_scores.items()):
descendants = nx.descendants(go_graph, go_term).intersection(go_set)
for descendant in descendants:
protein_scores[descendant] = max(protein_scores.get(descendant, 0), max_score)
results[protein] = protein_scores
print("Writing output to {}".format(args.output))
write_to_file(results, "{}/{}".format(args.data_path, args.output))
| Python |
3D | jianlin-cheng/TransFun | parser.py | .py | 1,369 | 29 | import warnings
import argparse
import os
import torch
warnings.filterwarnings("ignore", category=UserWarning)
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
parser = argparse.ArgumentParser()
parser.add_argument('--no-cuda', action='store_true', default=False, help='Disables CUDA training.')
parser.add_argument('--fastmode', action='store_true', default=False, help='Validate during training pass.')
parser.add_argument('--seed', type=int, default=42, help='Random seed.')
parser.add_argument('--epochs', type=int, default=50, help='Number of epochs to train.')
parser.add_argument('--lr', type=float, default=0.001, help='Initial learning rate.')
parser.add_argument('--weight_decay', type=float, default=1e-16, help='Weight decay (L2 loss on parameters).')
parser.add_argument('--train_batch', type=int, default=32, help='Training batch size.')
parser.add_argument('--valid_batch', type=int, default=32, help='Validation batch size.')
parser.add_argument('--dropout', type=float, default=0., help='Dropout rate (1 - keep probability).')
parser.add_argument('--seq', type=float, default=0.5, help='Sequence Identity (Sequence Identity).')
parser.add_argument("--ont", default='biological_process', type=str, help='Ontology under consideration')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
def get_parser():
return args
| Python |
3D | jianlin-cheng/TransFun | params.py | .py | 918 | 40 | bio_kwargs = {
'hidden': 16,
'input_features_size': 1280,
'num_classes': 3774,
'edge_type': 'cbrt',
'edge_features': 0,
'egnn_layers': 12,
'layers': 1,
'device': 'cuda',
'wd': 5e-4
}
mol_kwargs = {
'hidden': 16,
'input_features_size': 1280,
'num_classes': 600,
'edge_type': 'cbrt',
'edge_features': 0,
'egnn_layers': 12,
'layers': 1,
'device': 'cuda',
'wd': 0.001
}
cc_kwargs = {
'hidden': 16,
'input_features_size': 1280,
'num_classes': 547,
'edge_type': 'cbrt',
'edge_features': 0,
'egnn_layers': 12,
'layers': 1,
'device': 'cuda',
'wd': 0.001 #5e-4
}
edge_types = set(['sqrt', 'cbrt', 'dist_3', 'dist_4', 'dist_6', 'dist_10', 'dist_12',
'molecular_function', 'biological_process', 'cellular_component',
'all', 'names', 'sequence_letters', 'ptr', 'sequence_features'])
| Python |
3D | jianlin-cheng/TransFun | Constants.py | .py | 1,930 | 66 | residues = {
"A": 1, "C": 2, "D": 3, "E": 4, "F": 5, "G": 6, "H": 7, "I": 8, "K": 9, "L": 10, "M": 11,
"N": 12, "P": 13, "Q": 14, "R": 15, "S": 16, "T": 17, "V": 18, "W": 19, "Y": 20
}
INVALID_ACIDS = {"U", "O", "B", "Z", "J", "X", "*"}
amino_acids = {
"ALA": "A", "ARG": "R", "ASN": "N", "ASP": "D", "CYS": "C", "GLN": "Q", "GLU": "E",
"GLY": "G", "HIS": "H", "ILE": "I", "LEU": "L", "LYS": "K", "MET": "M", "PHE": "F",
"PRO": "P", "PYL": "O", "SER": "S", "SEC": "U", "THR": "T", "TRP": "W", "TYR": "Y",
"VAL": "V", "ASX": "B", "GLX": "Z", "XAA": "X", "XLE": "J"
}
root_terms = {"GO:0008150", "GO:0003674", "GO:0005575"}
exp_evidence_codes = {"EXP", "IDA", "IPI", "IMP", "IGI", "IEP", "TAS", "IC"}
exp_evidence_codes = set([
"EXP", "IDA", "IPI", "IMP", "IGI", "IEP", "TAS", "IC",
"HTP", "HDA", "HMP", "HGI", "HEP"])
ROOT = "/home/fbqc9/PycharmProjects/TransFunData/data/"
# ROOT = "D:/Workspace/python-3/transfunData/data_bp/"
# ROOT = "/data_bp/pycharm/TransFunData/data_bp/"
# CAFA4 Targets
CAFA_TARGETS = {"287", "3702", "4577", "6239", "7227", "7955", "9606", "9823", "10090", "10116", "44689", "83333",
"99287", "226900", "243273", "284812", "559292"}
NAMESPACES = {
"cc": "cellular_component",
"mf": "molecular_function",
"bp": "biological_process"
}
FUNC_DICT = {
'cc': 'GO:0005575',
'mf': 'GO:0003674',
'bp': 'GO:0008150'}
BENCH_DICT = {
'cc': "CCO",
'mf': 'MFO',
'bp': 'BPO'
}
NAMES = {
"cc": "Cellular Component",
"mf": "Molecular Function",
"bp": "Biological Process"
}
TEST_GROUPS = ["LK_bpo", "LK_mfo", "LK_cco", "NK_bpo", "NK_mfo", "NK_cco"]
Final_thresholds = {
"cellular_component": 0.50,
"molecular_function": 0.90,
"biological_process": 0.50
}
TFun_Plus_thresholds = {
"cellular_component": (0.13, 0.87),
"molecular_function": (0.36, 0.61),
"biological_process": (0.38, 0.62)
}
| Python |
3D | jianlin-cheng/TransFun | net_utils.py | .py | 2,480 | 76 | import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, BatchNorm, global_add_pool, global_mean_pool, global_max_pool
class GCN(nn.Module):
def __init__(self, input_features, out_channels, relu=True):
super(GCN, self).__init__()
self.conv = GCNConv(input_features, out_channels)
self.relu = nn.LeakyReLU(0.1, inplace=True) if relu else None
def forward(self, x):
edge_index = x[1]
x = self.conv(x[0], edge_index)
if self.relu is not None:
x = self.relu(x)
return (x, edge_index)
class GCN_BatchNorm(nn.Module):
def __init__(self, in_channels, out_channels, relu=True):
super(GCN_BatchNorm, self).__init__()
self.conv = GCNConv(in_channels, out_channels, bias=False)
self.bn = BatchNorm(out_channels, momentum=0.1)
self.relu = nn.LeakyReLU(0.1, inplace=True) if relu else None
def forward(self, x):
edge_index = x[1]
x = self.conv(x[0], edge_index)
if self.relu is not None:
x = self.relu(x)
x = self.bn(x)
return x
class FC(nn.Module):
def __init__(self, in_features, out_features, relu=True, bnorm=True):
super(FC, self).__init__()
_bias = False if bnorm else True
self.fc = nn.Linear(in_features, out_features, bias=_bias)
self.relu = nn.ReLU(inplace=True) if relu else None
#self.bn = BatchNorm(out_features, momentum=0.1) if bnorm else None
self.bn = nn.BatchNorm1d(out_features, momentum=0.1) if bnorm else None
def forward(self, x):
x = self.fc(x)
if self.bn is not None:
x = self.bn(x)
if self.relu is not None:
x = self.relu(x)
return x
class BNormRelu(nn.Module):
def __init__(self, in_features, relu=True, bnorm=True):
super(BNormRelu, self).__init__()
self.relu = nn.ReLU(inplace=True) if relu else None
self.bn = BatchNorm(in_features, momentum=0.1) if bnorm else None
# self.bn = nn.BatchNorm1d(out_features, momentum=0.1) if bnorm else None
def forward(self, x):
if self.bn is not None:
x = self.bn(x)
if self.relu is not None:
x = self.relu(x)
return x
def get_pool(pool_type='max'):
if pool_type == 'mean':
return global_mean_pool
elif pool_type == 'add':
return global_add_pool
elif pool_type == 'max':
return global_max_pool
| Python |
3D | jianlin-cheng/TransFun | tools/jackhmmer.py | .py | 7,515 | 202 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library to run Jackhmmer from Python."""
from concurrent import futures
import glob
import os
import subprocess
from typing import Any, Callable, Mapping, Optional, Sequence
from urllib import request
from absl import logging
# Internal import (7716).
from tools import utils
class Jackhmmer:
"""Python wrapper of the Jackhmmer binary."""
def __init__(self,
*,
binary_path: str,
database_path: str,
n_cpu: int = 16,
n_iter: int = 1,
e_value: float = 0.0001,
z_value: Optional[int] = None,
get_tblout: bool = False,
filter_f1: float = 0.0005,
filter_f2: float = 0.00005,
filter_f3: float = 0.0000005,
incdom_e: Optional[float] = None,
dom_e: Optional[float] = None,
num_streamed_chunks: Optional[int] = None,
streaming_callback: Optional[Callable[[int], None]] = None):
"""Initializes the Python Jackhmmer wrapper.
Args:
binary_path: The path to the jackhmmer executable.
database_path: The path to the jackhmmer database (FASTA format).
n_cpu: The number of CPUs to give Jackhmmer.
n_iter: The number of Jackhmmer iterations.
e_value: The E-value, see Jackhmmer docs for more details.
z_value: The Z-value, see Jackhmmer docs for more details.
get_tblout: Whether to save tblout string.
filter_f1: MSV and biased composition pre-filter, set to >1.0 to turn off.
filter_f2: Viterbi pre-filter, set to >1.0 to turn off.
filter_f3: Forward pre-filter, set to >1.0 to turn off.
incdom_e: Domain e-value criteria for inclusion of domains in MSA/next
round.
dom_e: Domain e-value criteria for inclusion in tblout.
num_streamed_chunks: Number of database chunks to stream over.
streaming_callback: Callback function run after each chunk iteration with
the iteration number as argument.
"""
self.binary_path = binary_path
self.database_path = database_path
self.num_streamed_chunks = num_streamed_chunks
if not os.path.exists(self.database_path) and num_streamed_chunks is None:
logging.error('Could not find Jackhmmer database %s', database_path)
raise ValueError(f'Could not find Jackhmmer database {database_path}')
self.n_cpu = n_cpu
self.n_iter = n_iter
self.e_value = e_value
self.z_value = z_value
self.filter_f1 = filter_f1
self.filter_f2 = filter_f2
self.filter_f3 = filter_f3
self.incdom_e = incdom_e
self.dom_e = dom_e
self.get_tblout = get_tblout
self.streaming_callback = streaming_callback
def _query_chunk(self, input_fasta_path: str, database_path: str, base_path: str) -> Mapping[str, Any]:
"""Queries the database chunk using Jackhmmer."""
with utils.tmpdir_manager(base_dir=base_path) as query_tmp_dir:
sto_path = os.path.join(query_tmp_dir, 'output.sto')
# The F1/F2/F3 are the expected proportion to pass each of the filtering
# stages (which get progressively more expensive), reducing these
# speeds up the pipeline at the expensive of sensitivity. They are
# currently set very low to make querying Mgnify run in a reasonable
# amount of time.
cmd_flags = [
# Don't pollute stdout with Jackhmmer output.
'-o', '/dev/null',
'-A', sto_path,
'--noali',
'--F1', str(self.filter_f1),
'--F2', str(self.filter_f2),
'--F3', str(self.filter_f3),
'--incE', str(self.e_value),
# Report only sequences with E-values <= x in per-sequence output.
'-E', str(self.e_value),
'--cpu', str(self.n_cpu),
'-N', str(self.n_iter)
]
if self.get_tblout:
tblout_path = os.path.join(query_tmp_dir, 'tblout.txt')
cmd_flags.extend(['--tblout', tblout_path])
if self.z_value:
cmd_flags.extend(['-Z', str(self.z_value)])
if self.dom_e is not None:
cmd_flags.extend(['--domE', str(self.dom_e)])
if self.incdom_e is not None:
cmd_flags.extend(['--incdomE', str(self.incdom_e)])
cmd = [self.binary_path] + cmd_flags + [input_fasta_path,
database_path]
logging.info('Launching subprocess "%s"', ' '.join(cmd))
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with utils.timing(
f'Jackhmmer ({os.path.basename(database_path)}) query'):
_, stderr = process.communicate()
retcode = process.wait()
if retcode:
raise RuntimeError(
'Jackhmmer failed\nstderr:\n%s\n' % stderr.decode('utf-8'))
# Get e-values for each target name
tbl = ''
if self.get_tblout:
with open(tblout_path) as f:
tbl = f.read()
with open(sto_path) as f:
sto = f.read()
raw_output = dict(
sto=sto,
tbl=tbl,
stderr=stderr,
n_iter=self.n_iter,
e_value=self.e_value)
return raw_output
def query(self, input_fasta_path: str, base_path: str) -> Sequence[Mapping[str, Any]]:
"""Queries the database using Jackhmmer."""
if self.num_streamed_chunks is None:
return [self._query_chunk(input_fasta_path, self.database_path, base_path=base_path)]
db_basename = os.path.basename(self.database_path)
db_remote_chunk = lambda db_idx: f'{self.database_path}.{db_idx}'
db_local_chunk = lambda db_idx: f'/tmp/ramdisk/{db_basename}.{db_idx}'
# Remove existing files to prevent OOM
for f in glob.glob(db_local_chunk('[0-9]*')):
try:
os.remove(f)
except OSError:
print(f'OSError while deleting {f}')
# Download the (i+1)-th chunk while Jackhmmer is running on the i-th chunk
with futures.ThreadPoolExecutor(max_workers=2) as executor:
chunked_output = []
for i in range(1, self.num_streamed_chunks + 1):
# Copy the chunk locally
if i == 1:
future = executor.submit(
request.urlretrieve, db_remote_chunk(i), db_local_chunk(i))
if i < self.num_streamed_chunks:
next_future = executor.submit(
request.urlretrieve, db_remote_chunk(i+1), db_local_chunk(i+1))
# Run Jackhmmer with the chunk
future.result()
chunked_output.append(
self._query_chunk(input_fasta_path, db_local_chunk(i)))
# Remove the local copy of the chunk
os.remove(db_local_chunk(i))
# Do not set next_future for the last chunk so that this works even for
# databases with only 1 chunk.
if i < self.num_streamed_chunks:
future = next_future
if self.streaming_callback:
self.streaming_callback(i)
return chunked_output
| Python |
3D | jianlin-cheng/TransFun | tools/residue_constants.py | .py | 34,990 | 900 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constants used in AlphaFold."""
import collections
import functools
import os
from typing import List, Mapping, Tuple
import numpy as np
# Internal import (35fd).
# Distance from one CA to next CA [trans configuration: omega = 180].
import tree
ca_ca = 3.80209737096
# Format: The list for each AA type contains chi1, chi2, chi3, chi4 in
# this order (or a relevant subset from chi1 onwards). ALA and GLY don't have
# chi angles so their chi angle lists are empty.
chi_angles_atoms = {
'ALA': [],
# Chi5 in arginine is always 0 +- 5 degrees, so ignore it.
'ARG': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD'],
['CB', 'CG', 'CD', 'NE'], ['CG', 'CD', 'NE', 'CZ']],
'ASN': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'OD1']],
'ASP': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'OD1']],
'CYS': [['N', 'CA', 'CB', 'SG']],
'GLN': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD'],
['CB', 'CG', 'CD', 'OE1']],
'GLU': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD'],
['CB', 'CG', 'CD', 'OE1']],
'GLY': [],
'HIS': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'ND1']],
'ILE': [['N', 'CA', 'CB', 'CG1'], ['CA', 'CB', 'CG1', 'CD1']],
'LEU': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD1']],
'LYS': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD'],
['CB', 'CG', 'CD', 'CE'], ['CG', 'CD', 'CE', 'NZ']],
'MET': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'SD'],
['CB', 'CG', 'SD', 'CE']],
'PHE': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD1']],
'PRO': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD']],
'SER': [['N', 'CA', 'CB', 'OG']],
'THR': [['N', 'CA', 'CB', 'OG1']],
'TRP': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD1']],
'TYR': [['N', 'CA', 'CB', 'CG'], ['CA', 'CB', 'CG', 'CD1']],
'VAL': [['N', 'CA', 'CB', 'CG1']],
}
# If chi angles given in fixed-length array, this matrix determines how to mask
# them for each AA type. The order is as per restype_order (see below).
chi_angles_mask = [
[0.0, 0.0, 0.0, 0.0], # ALA
[1.0, 1.0, 1.0, 1.0], # ARG
[1.0, 1.0, 0.0, 0.0], # ASN
[1.0, 1.0, 0.0, 0.0], # ASP
[1.0, 0.0, 0.0, 0.0], # CYS
[1.0, 1.0, 1.0, 0.0], # GLN
[1.0, 1.0, 1.0, 0.0], # GLU
[0.0, 0.0, 0.0, 0.0], # GLY
[1.0, 1.0, 0.0, 0.0], # HIS
[1.0, 1.0, 0.0, 0.0], # ILE
[1.0, 1.0, 0.0, 0.0], # LEU
[1.0, 1.0, 1.0, 1.0], # LYS
[1.0, 1.0, 1.0, 0.0], # MET
[1.0, 1.0, 0.0, 0.0], # PHE
[1.0, 1.0, 0.0, 0.0], # PRO
[1.0, 0.0, 0.0, 0.0], # SER
[1.0, 0.0, 0.0, 0.0], # THR
[1.0, 1.0, 0.0, 0.0], # TRP
[1.0, 1.0, 0.0, 0.0], # TYR
[1.0, 0.0, 0.0, 0.0], # VAL
]
# The following chi angles are pi periodic: they can be rotated by a multiple
# of pi without affecting the structure.
chi_pi_periodic = [
[0.0, 0.0, 0.0, 0.0], # ALA
[0.0, 0.0, 0.0, 0.0], # ARG
[0.0, 0.0, 0.0, 0.0], # ASN
[0.0, 1.0, 0.0, 0.0], # ASP
[0.0, 0.0, 0.0, 0.0], # CYS
[0.0, 0.0, 0.0, 0.0], # GLN
[0.0, 0.0, 1.0, 0.0], # GLU
[0.0, 0.0, 0.0, 0.0], # GLY
[0.0, 0.0, 0.0, 0.0], # HIS
[0.0, 0.0, 0.0, 0.0], # ILE
[0.0, 0.0, 0.0, 0.0], # LEU
[0.0, 0.0, 0.0, 0.0], # LYS
[0.0, 0.0, 0.0, 0.0], # MET
[0.0, 1.0, 0.0, 0.0], # PHE
[0.0, 0.0, 0.0, 0.0], # PRO
[0.0, 0.0, 0.0, 0.0], # SER
[0.0, 0.0, 0.0, 0.0], # THR
[0.0, 0.0, 0.0, 0.0], # TRP
[0.0, 1.0, 0.0, 0.0], # TYR
[0.0, 0.0, 0.0, 0.0], # VAL
[0.0, 0.0, 0.0, 0.0], # UNK
]
# Atoms positions relative to the 8 rigid groups, defined by the pre-omega, phi,
# psi and chi angles:
# 0: 'backbone group',
# 1: 'pre-omega-group', (empty)
# 2: 'phi-group', (currently empty, because it defines only hydrogens)
# 3: 'psi-group',
# 4,5,6,7: 'chi1,2,3,4-group'
# The atom positions are relative to the axis-end-atom of the corresponding
# rotation axis. The x-axis is in direction of the rotation axis, and the y-axis
# is defined such that the dihedral-angle-definiting atom (the last entry in
# chi_angles_atoms above) is in the xy-plane (with a positive y-coordinate).
# format: [atomname, group_idx, rel_position]
rigid_group_atom_positions = {
'ALA': [
['N', 0, (-0.525, 1.363, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, -0.000, -0.000)],
['CB', 0, (-0.529, -0.774, -1.205)],
['O', 3, (0.627, 1.062, 0.000)],
],
'ARG': [
['N', 0, (-0.524, 1.362, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, -0.000, -0.000)],
['CB', 0, (-0.524, -0.778, -1.209)],
['O', 3, (0.626, 1.062, 0.000)],
['CG', 4, (0.616, 1.390, -0.000)],
['CD', 5, (0.564, 1.414, 0.000)],
['NE', 6, (0.539, 1.357, -0.000)],
['NH1', 7, (0.206, 2.301, 0.000)],
['NH2', 7, (2.078, 0.978, -0.000)],
['CZ', 7, (0.758, 1.093, -0.000)],
],
'ASN': [
['N', 0, (-0.536, 1.357, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, -0.000, -0.000)],
['CB', 0, (-0.531, -0.787, -1.200)],
['O', 3, (0.625, 1.062, 0.000)],
['CG', 4, (0.584, 1.399, 0.000)],
['ND2', 5, (0.593, -1.188, 0.001)],
['OD1', 5, (0.633, 1.059, 0.000)],
],
'ASP': [
['N', 0, (-0.525, 1.362, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.527, 0.000, -0.000)],
['CB', 0, (-0.526, -0.778, -1.208)],
['O', 3, (0.626, 1.062, -0.000)],
['CG', 4, (0.593, 1.398, -0.000)],
['OD1', 5, (0.610, 1.091, 0.000)],
['OD2', 5, (0.592, -1.101, -0.003)],
],
'CYS': [
['N', 0, (-0.522, 1.362, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.524, 0.000, 0.000)],
['CB', 0, (-0.519, -0.773, -1.212)],
['O', 3, (0.625, 1.062, -0.000)],
['SG', 4, (0.728, 1.653, 0.000)],
],
'GLN': [
['N', 0, (-0.526, 1.361, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, 0.000, 0.000)],
['CB', 0, (-0.525, -0.779, -1.207)],
['O', 3, (0.626, 1.062, -0.000)],
['CG', 4, (0.615, 1.393, 0.000)],
['CD', 5, (0.587, 1.399, -0.000)],
['NE2', 6, (0.593, -1.189, -0.001)],
['OE1', 6, (0.634, 1.060, 0.000)],
],
'GLU': [
['N', 0, (-0.528, 1.361, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, -0.000, -0.000)],
['CB', 0, (-0.526, -0.781, -1.207)],
['O', 3, (0.626, 1.062, 0.000)],
['CG', 4, (0.615, 1.392, 0.000)],
['CD', 5, (0.600, 1.397, 0.000)],
['OE1', 6, (0.607, 1.095, -0.000)],
['OE2', 6, (0.589, -1.104, -0.001)],
],
'GLY': [
['N', 0, (-0.572, 1.337, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.517, -0.000, -0.000)],
['O', 3, (0.626, 1.062, -0.000)],
],
'HIS': [
['N', 0, (-0.527, 1.360, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, 0.000, 0.000)],
['CB', 0, (-0.525, -0.778, -1.208)],
['O', 3, (0.625, 1.063, 0.000)],
['CG', 4, (0.600, 1.370, -0.000)],
['CD2', 5, (0.889, -1.021, 0.003)],
['ND1', 5, (0.744, 1.160, -0.000)],
['CE1', 5, (2.030, 0.851, 0.002)],
['NE2', 5, (2.145, -0.466, 0.004)],
],
'ILE': [
['N', 0, (-0.493, 1.373, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.527, -0.000, -0.000)],
['CB', 0, (-0.536, -0.793, -1.213)],
['O', 3, (0.627, 1.062, -0.000)],
['CG1', 4, (0.534, 1.437, -0.000)],
['CG2', 4, (0.540, -0.785, -1.199)],
['CD1', 5, (0.619, 1.391, 0.000)],
],
'LEU': [
['N', 0, (-0.520, 1.363, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, -0.000, -0.000)],
['CB', 0, (-0.522, -0.773, -1.214)],
['O', 3, (0.625, 1.063, -0.000)],
['CG', 4, (0.678, 1.371, 0.000)],
['CD1', 5, (0.530, 1.430, -0.000)],
['CD2', 5, (0.535, -0.774, 1.200)],
],
'LYS': [
['N', 0, (-0.526, 1.362, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, 0.000, 0.000)],
['CB', 0, (-0.524, -0.778, -1.208)],
['O', 3, (0.626, 1.062, -0.000)],
['CG', 4, (0.619, 1.390, 0.000)],
['CD', 5, (0.559, 1.417, 0.000)],
['CE', 6, (0.560, 1.416, 0.000)],
['NZ', 7, (0.554, 1.387, 0.000)],
],
'MET': [
['N', 0, (-0.521, 1.364, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, 0.000, 0.000)],
['CB', 0, (-0.523, -0.776, -1.210)],
['O', 3, (0.625, 1.062, -0.000)],
['CG', 4, (0.613, 1.391, -0.000)],
['SD', 5, (0.703, 1.695, 0.000)],
['CE', 6, (0.320, 1.786, -0.000)],
],
'PHE': [
['N', 0, (-0.518, 1.363, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.524, 0.000, -0.000)],
['CB', 0, (-0.525, -0.776, -1.212)],
['O', 3, (0.626, 1.062, -0.000)],
['CG', 4, (0.607, 1.377, 0.000)],
['CD1', 5, (0.709, 1.195, -0.000)],
['CD2', 5, (0.706, -1.196, 0.000)],
['CE1', 5, (2.102, 1.198, -0.000)],
['CE2', 5, (2.098, -1.201, -0.000)],
['CZ', 5, (2.794, -0.003, -0.001)],
],
'PRO': [
['N', 0, (-0.566, 1.351, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.527, -0.000, 0.000)],
['CB', 0, (-0.546, -0.611, -1.293)],
['O', 3, (0.621, 1.066, 0.000)],
['CG', 4, (0.382, 1.445, 0.0)],
# ['CD', 5, (0.427, 1.440, 0.0)],
['CD', 5, (0.477, 1.424, 0.0)], # manually made angle 2 degrees larger
],
'SER': [
['N', 0, (-0.529, 1.360, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, -0.000, -0.000)],
['CB', 0, (-0.518, -0.777, -1.211)],
['O', 3, (0.626, 1.062, -0.000)],
['OG', 4, (0.503, 1.325, 0.000)],
],
'THR': [
['N', 0, (-0.517, 1.364, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.526, 0.000, -0.000)],
['CB', 0, (-0.516, -0.793, -1.215)],
['O', 3, (0.626, 1.062, 0.000)],
['CG2', 4, (0.550, -0.718, -1.228)],
['OG1', 4, (0.472, 1.353, 0.000)],
],
'TRP': [
['N', 0, (-0.521, 1.363, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.525, -0.000, 0.000)],
['CB', 0, (-0.523, -0.776, -1.212)],
['O', 3, (0.627, 1.062, 0.000)],
['CG', 4, (0.609, 1.370, -0.000)],
['CD1', 5, (0.824, 1.091, 0.000)],
['CD2', 5, (0.854, -1.148, -0.005)],
['CE2', 5, (2.186, -0.678, -0.007)],
['CE3', 5, (0.622, -2.530, -0.007)],
['NE1', 5, (2.140, 0.690, -0.004)],
['CH2', 5, (3.028, -2.890, -0.013)],
['CZ2', 5, (3.283, -1.543, -0.011)],
['CZ3', 5, (1.715, -3.389, -0.011)],
],
'TYR': [
['N', 0, (-0.522, 1.362, 0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.524, -0.000, -0.000)],
['CB', 0, (-0.522, -0.776, -1.213)],
['O', 3, (0.627, 1.062, -0.000)],
['CG', 4, (0.607, 1.382, -0.000)],
['CD1', 5, (0.716, 1.195, -0.000)],
['CD2', 5, (0.713, -1.194, -0.001)],
['CE1', 5, (2.107, 1.200, -0.002)],
['CE2', 5, (2.104, -1.201, -0.003)],
['OH', 5, (4.168, -0.002, -0.005)],
['CZ', 5, (2.791, -0.001, -0.003)],
],
'VAL': [
['N', 0, (-0.494, 1.373, -0.000)],
['CA', 0, (0.000, 0.000, 0.000)],
['C', 0, (1.527, -0.000, -0.000)],
['CB', 0, (-0.533, -0.795, -1.213)],
['O', 3, (0.627, 1.062, -0.000)],
['CG1', 4, (0.540, 1.429, -0.000)],
['CG2', 4, (0.533, -0.776, 1.203)],
],
}
# A list of atoms (excluding hydrogen) for each AA type. PDB naming convention.
residue_atoms = {
'ALA': ['C', 'CA', 'CB', 'N', 'O'],
'ARG': ['C', 'CA', 'CB', 'CG', 'CD', 'CZ', 'N', 'NE', 'O', 'NH1', 'NH2'],
'ASP': ['C', 'CA', 'CB', 'CG', 'N', 'O', 'OD1', 'OD2'],
'ASN': ['C', 'CA', 'CB', 'CG', 'N', 'ND2', 'O', 'OD1'],
'CYS': ['C', 'CA', 'CB', 'N', 'O', 'SG'],
'GLU': ['C', 'CA', 'CB', 'CG', 'CD', 'N', 'O', 'OE1', 'OE2'],
'GLN': ['C', 'CA', 'CB', 'CG', 'CD', 'N', 'NE2', 'O', 'OE1'],
'GLY': ['C', 'CA', 'N', 'O'],
'HIS': ['C', 'CA', 'CB', 'CG', 'CD2', 'CE1', 'N', 'ND1', 'NE2', 'O'],
'ILE': ['C', 'CA', 'CB', 'CG1', 'CG2', 'CD1', 'N', 'O'],
'LEU': ['C', 'CA', 'CB', 'CG', 'CD1', 'CD2', 'N', 'O'],
'LYS': ['C', 'CA', 'CB', 'CG', 'CD', 'CE', 'N', 'NZ', 'O'],
'MET': ['C', 'CA', 'CB', 'CG', 'CE', 'N', 'O', 'SD'],
'PHE': ['C', 'CA', 'CB', 'CG', 'CD1', 'CD2', 'CE1', 'CE2', 'CZ', 'N', 'O'],
'PRO': ['C', 'CA', 'CB', 'CG', 'CD', 'N', 'O'],
'SER': ['C', 'CA', 'CB', 'N', 'O', 'OG'],
'THR': ['C', 'CA', 'CB', 'CG2', 'N', 'O', 'OG1'],
'TRP': ['C', 'CA', 'CB', 'CG', 'CD1', 'CD2', 'CE2', 'CE3', 'CZ2', 'CZ3',
'CH2', 'N', 'NE1', 'O'],
'TYR': ['C', 'CA', 'CB', 'CG', 'CD1', 'CD2', 'CE1', 'CE2', 'CZ', 'N', 'O',
'OH'],
'VAL': ['C', 'CA', 'CB', 'CG1', 'CG2', 'N', 'O']
}
# Naming swaps for ambiguous atom names.
# Due to symmetries in the amino acids the naming of atoms is ambiguous in
# 4 of the 20 amino acids.
# (The LDDT paper lists 7 amino acids as ambiguous, but the naming ambiguities
# in LEU, VAL and ARG can be resolved by using the 3d constellations of
# the 'ambiguous' atoms and their neighbours)
residue_atom_renaming_swaps = {
'ASP': {'OD1': 'OD2'},
'GLU': {'OE1': 'OE2'},
'PHE': {'CD1': 'CD2', 'CE1': 'CE2'},
'TYR': {'CD1': 'CD2', 'CE1': 'CE2'},
}
# Van der Waals radii [Angstroem] of the atoms (from Wikipedia)
van_der_waals_radius = {
'C': 1.7,
'N': 1.55,
'O': 1.52,
'S': 1.8,
}
Bond = collections.namedtuple(
'Bond', ['atom1_name', 'atom2_name', 'length', 'stddev'])
BondAngle = collections.namedtuple(
'BondAngle',
['atom1_name', 'atom2_name', 'atom3name', 'angle_rad', 'stddev'])
@functools.lru_cache(maxsize=None)
def load_stereo_chemical_props() -> Tuple[Mapping[str, List[Bond]],
Mapping[str, List[Bond]],
Mapping[str, List[BondAngle]]]:
"""Load stereo_chemical_props.txt into a nice structure.
Load literature values for bond lengths and bond angles and translate
bond angles into the length of the opposite edge of the triangle
("residue_virtual_bonds").
Returns:
residue_bonds: Dict that maps resname -> list of Bond tuples.
residue_virtual_bonds: Dict that maps resname -> list of Bond tuples.
residue_bond_angles: Dict that maps resname -> list of BondAngle tuples.
"""
stereo_chemical_props_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)), 'stereo_chemical_props.txt'
)
with open(stereo_chemical_props_path, 'rt') as f:
stereo_chemical_props = f.read()
lines_iter = iter(stereo_chemical_props.splitlines())
# Load bond lengths.
residue_bonds = {}
next(lines_iter) # Skip header line.
for line in lines_iter:
if line.strip() == '-':
break
bond, resname, length, stddev = line.split()
atom1, atom2 = bond.split('-')
if resname not in residue_bonds:
residue_bonds[resname] = []
residue_bonds[resname].append(
Bond(atom1, atom2, float(length), float(stddev)))
residue_bonds['UNK'] = []
# Load bond angles.
residue_bond_angles = {}
next(lines_iter) # Skip empty line.
next(lines_iter) # Skip header line.
for line in lines_iter:
if line.strip() == '-':
break
bond, resname, angle_degree, stddev_degree = line.split()
atom1, atom2, atom3 = bond.split('-')
if resname not in residue_bond_angles:
residue_bond_angles[resname] = []
residue_bond_angles[resname].append(
BondAngle(atom1, atom2, atom3,
float(angle_degree) / 180. * np.pi,
float(stddev_degree) / 180. * np.pi))
residue_bond_angles['UNK'] = []
def make_bond_key(atom1_name, atom2_name):
"""Unique key to lookup bonds."""
return '-'.join(sorted([atom1_name, atom2_name]))
# Translate bond angles into distances ("virtual bonds").
residue_virtual_bonds = {}
for resname, bond_angles in residue_bond_angles.items():
# Create a fast lookup dict for bond lengths.
bond_cache = {}
for b in residue_bonds[resname]:
bond_cache[make_bond_key(b.atom1_name, b.atom2_name)] = b
residue_virtual_bonds[resname] = []
for ba in bond_angles:
bond1 = bond_cache[make_bond_key(ba.atom1_name, ba.atom2_name)]
bond2 = bond_cache[make_bond_key(ba.atom2_name, ba.atom3name)]
# Compute distance between atom1 and atom3 using the law of cosines
# c^2 = a^2 + b^2 - 2ab*cos(gamma).
gamma = ba.angle_rad
length = np.sqrt(bond1.length**2 + bond2.length**2
- 2 * bond1.length * bond2.length * np.cos(gamma))
# Propagation of uncertainty assuming uncorrelated errors.
dl_outer = 0.5 / length
dl_dgamma = (2 * bond1.length * bond2.length * np.sin(gamma)) * dl_outer
dl_db1 = (2 * bond1.length - 2 * bond2.length * np.cos(gamma)) * dl_outer
dl_db2 = (2 * bond2.length - 2 * bond1.length * np.cos(gamma)) * dl_outer
stddev = np.sqrt((dl_dgamma * ba.stddev)**2 +
(dl_db1 * bond1.stddev)**2 +
(dl_db2 * bond2.stddev)**2)
residue_virtual_bonds[resname].append(
Bond(ba.atom1_name, ba.atom3name, length, stddev))
return (residue_bonds,
residue_virtual_bonds,
residue_bond_angles)
# Between-residue bond lengths for general bonds (first element) and for Proline
# (second element).
between_res_bond_length_c_n = [1.329, 1.341]
between_res_bond_length_stddev_c_n = [0.014, 0.016]
# Between-residue cos_angles.
between_res_cos_angles_c_n_ca = [-0.5203, 0.0353] # degrees: 121.352 +- 2.315
between_res_cos_angles_ca_c_n = [-0.4473, 0.0311] # degrees: 116.568 +- 1.995
# This mapping is used when we need to store atom data_bp in a format that requires
# fixed atom data_bp size for every residue (e.g. a numpy array).
atom_types = [
'N', 'CA', 'C', 'CB', 'O', 'CG', 'CG1', 'CG2', 'OG', 'OG1', 'SG', 'CD',
'CD1', 'CD2', 'ND1', 'ND2', 'OD1', 'OD2', 'SD', 'CE', 'CE1', 'CE2', 'CE3',
'NE', 'NE1', 'NE2', 'OE1', 'OE2', 'CH2', 'NH1', 'NH2', 'OH', 'CZ', 'CZ2',
'CZ3', 'NZ', 'OXT'
]
atom_order = {atom_type: i for i, atom_type in enumerate(atom_types)}
atom_type_num = len(atom_types) # := 37.
# A compact atom encoding with 14 columns
# pylint: disable=line-too-long
# pylint: disable=bad-whitespace
restype_name_to_atom14_names = {
'ALA': ['N', 'CA', 'C', 'O', 'CB', '', '', '', '', '', '', '', '', ''],
'ARG': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD', 'NE', 'CZ', 'NH1', 'NH2', '', '', ''],
'ASN': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'OD1', 'ND2', '', '', '', '', '', ''],
'ASP': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'OD1', 'OD2', '', '', '', '', '', ''],
'CYS': ['N', 'CA', 'C', 'O', 'CB', 'SG', '', '', '', '', '', '', '', ''],
'GLN': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD', 'OE1', 'NE2', '', '', '', '', ''],
'GLU': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD', 'OE1', 'OE2', '', '', '', '', ''],
'GLY': ['N', 'CA', 'C', 'O', '', '', '', '', '', '', '', '', '', ''],
'HIS': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'ND1', 'CD2', 'CE1', 'NE2', '', '', '', ''],
'ILE': ['N', 'CA', 'C', 'O', 'CB', 'CG1', 'CG2', 'CD1', '', '', '', '', '', ''],
'LEU': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD1', 'CD2', '', '', '', '', '', ''],
'LYS': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD', 'CE', 'NZ', '', '', '', '', ''],
'MET': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'SD', 'CE', '', '', '', '', '', ''],
'PHE': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD1', 'CD2', 'CE1', 'CE2', 'CZ', '', '', ''],
'PRO': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD', '', '', '', '', '', '', ''],
'SER': ['N', 'CA', 'C', 'O', 'CB', 'OG', '', '', '', '', '', '', '', ''],
'THR': ['N', 'CA', 'C', 'O', 'CB', 'OG1', 'CG2', '', '', '', '', '', '', ''],
'TRP': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD1', 'CD2', 'NE1', 'CE2', 'CE3', 'CZ2', 'CZ3', 'CH2'],
'TYR': ['N', 'CA', 'C', 'O', 'CB', 'CG', 'CD1', 'CD2', 'CE1', 'CE2', 'CZ', 'OH', '', ''],
'VAL': ['N', 'CA', 'C', 'O', 'CB', 'CG1', 'CG2', '', '', '', '', '', '', ''],
'UNK': ['', '', '', '', '', '', '', '', '', '', '', '', '', ''],
}
# pylint: enable=line-too-long
# pylint: enable=bad-whitespace
# This is the standard residue order when coding AA type as a number.
# Reproduce it by taking 3-letter AA codes and sorting them alphabetically.
restypes = [
'A', 'R', 'N', 'D', 'C', 'Q', 'E', 'G', 'H', 'I', 'L', 'K', 'M', 'F', 'P',
'S', 'T', 'W', 'Y', 'V'
]
restype_order = {restype: i for i, restype in enumerate(restypes)}
restype_num = len(restypes) # := 20.
unk_restype_index = restype_num # Catch-all index for unknown restypes.
restypes_with_x = restypes + ['X']
restype_order_with_x = {restype: i for i, restype in enumerate(restypes_with_x)}
def sequence_to_onehot(
sequence: str,
mapping: Mapping[str, int],
map_unknown_to_x: bool = False) -> np.ndarray:
"""Maps the given sequence into a one-hot encoded matrix.
Args:
sequence: An amino acid sequence.
mapping: A dictionary mapping amino acids to integers.
map_unknown_to_x: If True, any amino acid that is not in the mapping will be
mapped to the unknown amino acid 'X'. If the mapping doesn't contain
amino acid 'X', an error will be thrown. If False, any amino acid not in
the mapping will throw an error.
Returns:
A numpy array of shape (seq_len, num_unique_aas) with one-hot encoding of
the sequence.
Raises:
ValueError: If the mapping doesn't contain values from 0 to
num_unique_aas - 1 without any gaps.
"""
num_entries = max(mapping.values()) + 1
if sorted(set(mapping.values())) != list(range(num_entries)):
raise ValueError('The mapping must have values from 0 to num_unique_aas-1 '
'without any gaps. Got: %s' % sorted(mapping.values()))
one_hot_arr = np.zeros((len(sequence), num_entries), dtype=np.int32)
for aa_index, aa_type in enumerate(sequence):
if map_unknown_to_x:
if aa_type.isalpha() and aa_type.isupper():
aa_id = mapping.get(aa_type, mapping['X'])
else:
raise ValueError(f'Invalid character in the sequence: {aa_type}')
else:
aa_id = mapping[aa_type]
one_hot_arr[aa_index, aa_id] = 1
return one_hot_arr
restype_1to3 = {
'A': 'ALA',
'R': 'ARG',
'N': 'ASN',
'D': 'ASP',
'C': 'CYS',
'Q': 'GLN',
'E': 'GLU',
'G': 'GLY',
'H': 'HIS',
'I': 'ILE',
'L': 'LEU',
'K': 'LYS',
'M': 'MET',
'F': 'PHE',
'P': 'PRO',
'S': 'SER',
'T': 'THR',
'W': 'TRP',
'Y': 'TYR',
'V': 'VAL',
}
# NB: restype_3to1 differs from Bio.PDB.protein_letters_3to1 by being a simple
# 1-to-1 mapping of 3 letter names to one letter names. The latter contains
# many more, and less common, three letter names as keys and maps many of these
# to the same one letter name (including 'X' and 'U' which we don't use here).
restype_3to1 = {v: k for k, v in restype_1to3.items()}
# Define a restype name for all unknown residues.
unk_restype = 'UNK'
resnames = [restype_1to3[r] for r in restypes] + [unk_restype]
resname_to_idx = {resname: i for i, resname in enumerate(resnames)}
# The mapping here uses hhblits convention, so that B is mapped to D, J and O
# are mapped to X, U is mapped to C, and Z is mapped to E. Other than that the
# remaining 20 amino acids are kept in alphabetical order.
# There are 2 non-amino acid codes, X (representing any amino acid) and
# "-" representing a missing amino acid in an alignment. The id for these
# codes is put at the end (20 and 21) so that they can easily be ignored if
# desired.
HHBLITS_AA_TO_ID = {
'A': 0,
'B': 2,
'C': 1,
'D': 2,
'E': 3,
'F': 4,
'G': 5,
'H': 6,
'I': 7,
'J': 20,
'K': 8,
'L': 9,
'M': 10,
'N': 11,
'O': 20,
'P': 12,
'Q': 13,
'R': 14,
'S': 15,
'T': 16,
'U': 1,
'V': 17,
'W': 18,
'X': 20,
'Y': 19,
'Z': 3,
'-': 21,
}
# Partial inversion of HHBLITS_AA_TO_ID.
ID_TO_HHBLITS_AA = {
0: 'A',
1: 'C', # Also U.
2: 'D', # Also B.
3: 'E', # Also Z.
4: 'F',
5: 'G',
6: 'H',
7: 'I',
8: 'K',
9: 'L',
10: 'M',
11: 'N',
12: 'P',
13: 'Q',
14: 'R',
15: 'S',
16: 'T',
17: 'V',
18: 'W',
19: 'Y',
20: 'X', # Includes J and O.
21: '-',
}
restypes_with_x_and_gap = restypes + ['X', '-']
MAP_HHBLITS_AATYPE_TO_OUR_AATYPE = tuple(
restypes_with_x_and_gap.index(ID_TO_HHBLITS_AA[i])
for i in range(len(restypes_with_x_and_gap)))
def _make_standard_atom_mask() -> np.ndarray:
"""Returns [num_res_types, num_atom_types] mask array."""
# +1 to account for unknown (all 0s).
mask = np.zeros([restype_num + 1, atom_type_num], dtype=np.int32)
for restype, restype_letter in enumerate(restypes):
restype_name = restype_1to3[restype_letter]
atom_names = residue_atoms[restype_name]
for atom_name in atom_names:
atom_type = atom_order[atom_name]
mask[restype, atom_type] = 1
return mask
STANDARD_ATOM_MASK = _make_standard_atom_mask()
# A one hot representation for the first and second atoms defining the axis
# of rotation for each chi-angle in each residue.
def chi_angle_atom(atom_index: int) -> np.ndarray:
"""Define chi-angle rigid groups via one-hot representations."""
chi_angles_index = {}
one_hots = []
for k, v in chi_angles_atoms.items():
indices = [atom_types.index(s[atom_index]) for s in v]
indices.extend([-1]*(4-len(indices)))
chi_angles_index[k] = indices
for r in restypes:
res3 = restype_1to3[r]
one_hot = np.eye(atom_type_num)[chi_angles_index[res3]]
one_hots.append(one_hot)
one_hots.append(np.zeros([4, atom_type_num])) # Add zeros for residue `X`.
one_hot = np.stack(one_hots, axis=0)
one_hot = np.transpose(one_hot, [0, 2, 1])
return one_hot
chi_atom_1_one_hot = chi_angle_atom(1)
chi_atom_2_one_hot = chi_angle_atom(2)
# An array like chi_angles_atoms but using indices rather than names.
chi_angles_atom_indices = [chi_angles_atoms[restype_1to3[r]] for r in restypes]
chi_angles_atom_indices = tree.map_structure(
lambda atom_name: atom_order[atom_name], chi_angles_atom_indices)
chi_angles_atom_indices = np.array([
chi_atoms + ([[0, 0, 0, 0]] * (4 - len(chi_atoms)))
for chi_atoms in chi_angles_atom_indices])
# Mapping from (res_name, atom_name) pairs to the atom's chi group index
# and atom index within that group.
chi_groups_for_atom = collections.defaultdict(list)
for res_name, chi_angle_atoms_for_res in chi_angles_atoms.items():
for chi_group_i, chi_group in enumerate(chi_angle_atoms_for_res):
for atom_i, atom in enumerate(chi_group):
chi_groups_for_atom[(res_name, atom)].append((chi_group_i, atom_i))
chi_groups_for_atom = dict(chi_groups_for_atom)
def _make_rigid_transformation_4x4(ex, ey, translation):
"""Create a rigid 4x4 transformation matrix from two axes and transl."""
# Normalize ex.
ex_normalized = ex / np.linalg.norm(ex)
# make ey perpendicular to ex
ey_normalized = ey - np.dot(ey, ex_normalized) * ex_normalized
ey_normalized /= np.linalg.norm(ey_normalized)
# compute ez as cross product
eznorm = np.cross(ex_normalized, ey_normalized)
m = np.stack([ex_normalized, ey_normalized, eznorm, translation]).transpose()
m = np.concatenate([m, [[0., 0., 0., 1.]]], axis=0)
return m
# create an array with (restype, atomtype) --> rigid_group_idx
# and an array with (restype, atomtype, coord) for the atom positions
# and compute affine transformation matrices (4,4) from one rigid group to the
# previous group
restype_atom37_to_rigid_group = np.zeros([21, 37], dtype=np.int)
restype_atom37_mask = np.zeros([21, 37], dtype=np.float32)
restype_atom37_rigid_group_positions = np.zeros([21, 37, 3], dtype=np.float32)
restype_atom14_to_rigid_group = np.zeros([21, 14], dtype=np.int)
restype_atom14_mask = np.zeros([21, 14], dtype=np.float32)
restype_atom14_rigid_group_positions = np.zeros([21, 14, 3], dtype=np.float32)
restype_rigid_group_default_frame = np.zeros([21, 8, 4, 4], dtype=np.float32)
def _make_rigid_group_constants():
"""Fill the arrays above."""
for restype, restype_letter in enumerate(restypes):
resname = restype_1to3[restype_letter]
for atomname, group_idx, atom_position in rigid_group_atom_positions[
resname]:
atomtype = atom_order[atomname]
restype_atom37_to_rigid_group[restype, atomtype] = group_idx
restype_atom37_mask[restype, atomtype] = 1
restype_atom37_rigid_group_positions[restype, atomtype, :] = atom_position
atom14idx = restype_name_to_atom14_names[resname].index(atomname)
restype_atom14_to_rigid_group[restype, atom14idx] = group_idx
restype_atom14_mask[restype, atom14idx] = 1
restype_atom14_rigid_group_positions[restype,
atom14idx, :] = atom_position
for restype, restype_letter in enumerate(restypes):
resname = restype_1to3[restype_letter]
atom_positions = {name: np.array(pos) for name, _, pos
in rigid_group_atom_positions[resname]}
# backbone to backbone is the identity transform
restype_rigid_group_default_frame[restype, 0, :, :] = np.eye(4)
# pre-omega-frame to backbone (currently dummy identity matrix)
restype_rigid_group_default_frame[restype, 1, :, :] = np.eye(4)
# phi-frame to backbone
mat = _make_rigid_transformation_4x4(
ex=atom_positions['N'] - atom_positions['CA'],
ey=np.array([1., 0., 0.]),
translation=atom_positions['N'])
restype_rigid_group_default_frame[restype, 2, :, :] = mat
# psi-frame to backbone
mat = _make_rigid_transformation_4x4(
ex=atom_positions['C'] - atom_positions['CA'],
ey=atom_positions['CA'] - atom_positions['N'],
translation=atom_positions['C'])
restype_rigid_group_default_frame[restype, 3, :, :] = mat
# chi1-frame to backbone
if chi_angles_mask[restype][0]:
base_atom_names = chi_angles_atoms[resname][0]
base_atom_positions = [atom_positions[name] for name in base_atom_names]
mat = _make_rigid_transformation_4x4(
ex=base_atom_positions[2] - base_atom_positions[1],
ey=base_atom_positions[0] - base_atom_positions[1],
translation=base_atom_positions[2])
restype_rigid_group_default_frame[restype, 4, :, :] = mat
# chi2-frame to chi1-frame
# chi3-frame to chi2-frame
# chi4-frame to chi3-frame
# luckily all rotation axes for the next frame start at (0,0,0) of the
# previous frame
for chi_idx in range(1, 4):
if chi_angles_mask[restype][chi_idx]:
axis_end_atom_name = chi_angles_atoms[resname][chi_idx][2]
axis_end_atom_position = atom_positions[axis_end_atom_name]
mat = _make_rigid_transformation_4x4(
ex=axis_end_atom_position,
ey=np.array([-1., 0., 0.]),
translation=axis_end_atom_position)
restype_rigid_group_default_frame[restype, 4 + chi_idx, :, :] = mat
_make_rigid_group_constants()
def make_atom14_dists_bounds(overlap_tolerance=1.5,
bond_length_tolerance_factor=15):
"""compute upper and lower bounds for bonds to assess violations."""
restype_atom14_bond_lower_bound = np.zeros([21, 14, 14], np.float32)
restype_atom14_bond_upper_bound = np.zeros([21, 14, 14], np.float32)
restype_atom14_bond_stddev = np.zeros([21, 14, 14], np.float32)
residue_bonds, residue_virtual_bonds, _ = load_stereo_chemical_props()
for restype, restype_letter in enumerate(restypes):
resname = restype_1to3[restype_letter]
atom_list = restype_name_to_atom14_names[resname]
# create lower and upper bounds for clashes
for atom1_idx, atom1_name in enumerate(atom_list):
if not atom1_name:
continue
atom1_radius = van_der_waals_radius[atom1_name[0]]
for atom2_idx, atom2_name in enumerate(atom_list):
if (not atom2_name) or atom1_idx == atom2_idx:
continue
atom2_radius = van_der_waals_radius[atom2_name[0]]
lower = atom1_radius + atom2_radius - overlap_tolerance
upper = 1e10
restype_atom14_bond_lower_bound[restype, atom1_idx, atom2_idx] = lower
restype_atom14_bond_lower_bound[restype, atom2_idx, atom1_idx] = lower
restype_atom14_bond_upper_bound[restype, atom1_idx, atom2_idx] = upper
restype_atom14_bond_upper_bound[restype, atom2_idx, atom1_idx] = upper
# overwrite lower and upper bounds for bonds and angles
for b in residue_bonds[resname] + residue_virtual_bonds[resname]:
atom1_idx = atom_list.index(b.atom1_name)
atom2_idx = atom_list.index(b.atom2_name)
lower = b.length - bond_length_tolerance_factor * b.stddev
upper = b.length + bond_length_tolerance_factor * b.stddev
restype_atom14_bond_lower_bound[restype, atom1_idx, atom2_idx] = lower
restype_atom14_bond_lower_bound[restype, atom2_idx, atom1_idx] = lower
restype_atom14_bond_upper_bound[restype, atom1_idx, atom2_idx] = upper
restype_atom14_bond_upper_bound[restype, atom2_idx, atom1_idx] = upper
restype_atom14_bond_stddev[restype, atom1_idx, atom2_idx] = b.stddev
restype_atom14_bond_stddev[restype, atom2_idx, atom1_idx] = b.stddev
return {'lower_bound': restype_atom14_bond_lower_bound, # shape (21,14,14)
'upper_bound': restype_atom14_bond_upper_bound, # shape (21,14,14)
'stddev': restype_atom14_bond_stddev, # shape (21,14,14)
}
| Python |
3D | jianlin-cheng/TransFun | tools/hhblits.py | .py | 5,522 | 155 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library to run HHblits from Python."""
import glob
import os
import subprocess
from typing import Any, List, Mapping, Optional, Sequence
from absl import logging
# Internal import (7716).
from tools import utils
_HHBLITS_DEFAULT_P = 20
_HHBLITS_DEFAULT_Z = 500
class HHBlits:
"""Python wrapper of the HHblits binary."""
def __init__(self,
*,
binary_path: str,
databases: Sequence[str],
n_cpu: int = 4,
n_iter: int = 3,
e_value: float = 0.001,
maxseq: int = 1_000_000,
realign_max: int = 100_000,
maxfilt: int = 100_000,
min_prefilter_hits: int = 1000,
all_seqs: bool = False,
alt: Optional[int] = None,
p: int = _HHBLITS_DEFAULT_P,
z: int = _HHBLITS_DEFAULT_Z):
"""Initializes the Python HHblits wrapper.
Args:
binary_path: The path to the HHblits executable.
databases: A sequence of HHblits database paths. This should be the
common prefix for the database files (i.e. up to but not including
_hhm.ffindex etc.)
n_cpu: The number of CPUs to give HHblits.
n_iter: The number of HHblits iterations.
e_value: The E-value, see HHblits docs for more details.
maxseq: The maximum number of rows in an input alignment. Note that this
parameter is only supported in HHBlits version 3.1 and higher.
realign_max: Max number of HMM-HMM hits to realign. HHblits default: 500.
maxfilt: Max number of hits allowed to pass the 2nd prefilter.
HHblits default: 20000.
min_prefilter_hits: Min number of hits to pass prefilter.
HHblits default: 100.
all_seqs: Return all sequences in the MSA / Do not filter the result MSA.
HHblits default: False.
alt: Show up to this many alternative alignments.
p: Minimum Prob for a hit to be included in the output hhr file.
HHblits default: 20.
z: Hard cap on number of hits reported in the hhr file.
HHblits default: 500. NB: The relevant HHblits flag is -Z not -z.
Raises:
RuntimeError: If HHblits binary not found within the path.
"""
self.binary_path = binary_path
self.databases = databases
for database_path in self.databases:
if not glob.glob(database_path + '_*'):
logging.error('Could not find HHBlits database %s', database_path)
raise ValueError(f'Could not find HHBlits database {database_path}')
self.n_cpu = n_cpu
self.n_iter = n_iter
self.e_value = e_value
self.maxseq = maxseq
self.realign_max = realign_max
self.maxfilt = maxfilt
self.min_prefilter_hits = min_prefilter_hits
self.all_seqs = all_seqs
self.alt = alt
self.p = p
self.z = z
def query(self, input_fasta_path: str, base_path: str) -> List[Mapping[str, Any]]:
"""Queries the database using HHblits."""
with utils.tmpdir_manager(base_dir=base_path) as query_tmp_dir:
a3m_path = os.path.join(query_tmp_dir, 'output.a3m')
db_cmd = []
for db_path in self.databases:
db_cmd.append('-d')
db_cmd.append(db_path)
cmd = [
self.binary_path,
'-i', input_fasta_path,
'-cpu', str(self.n_cpu),
'-oa3m', a3m_path,
'-o', '/dev/null',
'-n', str(self.n_iter),
'-e', str(self.e_value),
'-maxseq', str(self.maxseq),
'-realign_max', str(self.realign_max),
'-maxfilt', str(self.maxfilt),
'-min_prefilter_hits', str(self.min_prefilter_hits)]
if self.all_seqs:
cmd += ['-all']
if self.alt:
cmd += ['-alt', str(self.alt)]
if self.p != _HHBLITS_DEFAULT_P:
cmd += ['-p', str(self.p)]
if self.z != _HHBLITS_DEFAULT_Z:
cmd += ['-Z', str(self.z)]
cmd += db_cmd
logging.info('Launching subprocess "%s"', ' '.join(cmd))
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with utils.timing('HHblits query'):
stdout, stderr = process.communicate()
retcode = process.wait()
if retcode:
# Logs have a 15k character limit, so log HHblits error line by line.
logging.error('HHblits failed. HHblits stderr begin:')
for error_line in stderr.decode('utf-8').splitlines():
if error_line.strip():
logging.error(error_line.strip())
logging.error('HHblits stderr end')
raise RuntimeError('HHblits failed\nstdout:\n%s\n\nstderr:\n%s\n' % (
stdout.decode('utf-8'), stderr[:500_000].decode('utf-8')))
with open(a3m_path) as f:
a3m = f.read()
raw_output = dict(
a3m=a3m,
output=stdout,
stderr=stderr,
n_iter=self.n_iter,
e_value=self.e_value)
return [raw_output]
| Python |
3D | jianlin-cheng/TransFun | tools/utils.py | .py | 1,226 | 41 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Common utilities for data_bp pipeline tools."""
import contextlib
import shutil
import tempfile
import time
from typing import Optional
from absl import logging
@contextlib.contextmanager
def tmpdir_manager(base_dir: Optional[str] = None):
"""Context manager that deletes a temporary directory on exit."""
tmpdir = tempfile.mkdtemp(dir=base_dir)
try:
yield tmpdir
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
@contextlib.contextmanager
def timing(msg: str):
logging.info('Started %s', msg)
tic = time.time()
yield
toc = time.time()
logging.info('Finished %s in %.3f seconds', msg, toc - tic)
| Python |
3D | jianlin-cheng/TransFun | tools/parsers.py | .py | 21,339 | 608 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions for parsing various file formats."""
import collections
import dataclasses
import itertools
import re
import string
from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Set
DeletionMatrix = Sequence[Sequence[int]]
@dataclasses.dataclass(frozen=True)
class Msa:
"""Class representing a parsed MSA file."""
sequences: Sequence[str]
deletion_matrix: DeletionMatrix
descriptions: Sequence[str]
def __post_init__(self):
if not (len(self.sequences) ==
len(self.deletion_matrix) ==
len(self.descriptions)):
raise ValueError(
'All fields for an MSA must have the same length. '
f'Got {len(self.sequences)} sequences, '
f'{len(self.deletion_matrix)} rows in the deletion matrix and '
f'{len(self.descriptions)} descriptions.')
def __len__(self):
return len(self.sequences)
def truncate(self, max_seqs: int):
return Msa(sequences=self.sequences[:max_seqs],
deletion_matrix=self.deletion_matrix[:max_seqs],
descriptions=self.descriptions[:max_seqs])
@dataclasses.dataclass(frozen=True)
class TemplateHit:
"""Class representing a template hit."""
index: int
name: str
aligned_cols: int
sum_probs: Optional[float]
query: str
hit_sequence: str
indices_query: List[int]
indices_hit: List[int]
def parse_fasta(fasta_string: str) -> Tuple[Sequence[str], Sequence[str]]:
"""Parses FASTA string and returns list of strings with amino-acid sequences.
Arguments:
fasta_string: The string contents of a FASTA file.
Returns:
A tuple of two lists:
* A list of sequences.
* A list of sequence descriptions taken from the comment lines. In the
same order as the sequences.
"""
sequences = []
descriptions = []
index = -1
for line in fasta_string.splitlines():
line = line.strip()
if line.startswith('>'):
index += 1
descriptions.append(line[1:]) # Remove the '>' at the beginning.
sequences.append('')
continue
elif not line:
continue # Skip blank lines.
sequences[index] += line
return sequences, descriptions
def parse_stockholm(stockholm_string: str) -> Msa:
"""Parses sequences and deletion matrix from stockholm format alignment.
Args:
stockholm_string: The string contents of a stockholm file. The first
sequence in the file should be the query sequence.
Returns:
A tuple of:
* A list of sequences that have been aligned to the query. These
might contain duplicates.
* The deletion matrix for the alignment as a list of lists. The element
at `deletion_matrix[i][j]` is the number of residues deleted from
the aligned sequence i at residue position j.
* The names of the targets matched, including the jackhmmer subsequence
suffix.
"""
name_to_sequence = collections.OrderedDict()
for line in stockholm_string.splitlines():
line = line.strip()
if not line or line.startswith(('#', '//')):
continue
name, sequence = line.split()
if name not in name_to_sequence:
name_to_sequence[name] = ''
name_to_sequence[name] += sequence
msa = []
deletion_matrix = []
query = ''
keep_columns = []
for seq_index, sequence in enumerate(name_to_sequence.values()):
if seq_index == 0:
# Gather the columns with gaps from the query
query = sequence
keep_columns = [i for i, res in enumerate(query) if res != '-']
# Remove the columns with gaps in the query from all sequences.
aligned_sequence = ''.join([sequence[c] for c in keep_columns])
msa.append(aligned_sequence)
# Count the number of deletions w.r.t. query.
deletion_vec = []
deletion_count = 0
for seq_res, query_res in zip(sequence, query):
if seq_res != '-' or query_res != '-':
if query_res == '-':
deletion_count += 1
else:
deletion_vec.append(deletion_count)
deletion_count = 0
deletion_matrix.append(deletion_vec)
return Msa(sequences=msa,
deletion_matrix=deletion_matrix,
descriptions=list(name_to_sequence.keys()))
def parse_a3m(a3m_string: str) -> Msa:
"""Parses sequences and deletion matrix from a3m format alignment.
Args:
a3m_string: The string contents of a a3m file. The first sequence in the
file should be the query sequence.
Returns:
A tuple of:
* A list of sequences that have been aligned to the query. These
might contain duplicates.
* The deletion matrix for the alignment as a list of lists. The element
at `deletion_matrix[i][j]` is the number of residues deleted from
the aligned sequence i at residue position j.
* A list of descriptions, one per sequence, from the a3m file.
"""
sequences, descriptions = parse_fasta(a3m_string)
deletion_matrix = []
for msa_sequence in sequences:
deletion_vec = []
deletion_count = 0
for j in msa_sequence:
if j.islower():
deletion_count += 1
else:
deletion_vec.append(deletion_count)
deletion_count = 0
deletion_matrix.append(deletion_vec)
# Make the MSA matrix out of aligned (deletion-free) sequences.
deletion_table = str.maketrans('', '', string.ascii_lowercase)
aligned_sequences = [s.translate(deletion_table) for s in sequences]
return Msa(sequences=aligned_sequences,
deletion_matrix=deletion_matrix,
descriptions=descriptions)
def _convert_sto_seq_to_a3m(
query_non_gaps: Sequence[bool], sto_seq: str) -> Iterable[str]:
for is_query_res_non_gap, sequence_res in zip(query_non_gaps, sto_seq):
if is_query_res_non_gap:
yield sequence_res
elif sequence_res != '-':
yield sequence_res.lower()
def convert_stockholm_to_a3m(stockholm_format: str,
max_sequences: Optional[int] = None,
remove_first_row_gaps: bool = True) -> str:
"""Converts MSA in Stockholm format to the A3M format."""
descriptions = {}
sequences = {}
reached_max_sequences = False
for line in stockholm_format.splitlines():
reached_max_sequences = max_sequences and len(sequences) >= max_sequences
if line.strip() and not line.startswith(('#', '//')):
# Ignore blank lines, markup and end symbols - remainder are alignment
# sequence parts.
seqname, aligned_seq = line.split(maxsplit=1)
if seqname not in sequences:
if reached_max_sequences:
continue
sequences[seqname] = ''
sequences[seqname] += aligned_seq
for line in stockholm_format.splitlines():
if line[:4] == '#=GS':
# Description row - example format is:
# #=GS UniRef90_Q9H5Z4/4-78 DE [subseq from] cDNA: FLJ22755 ...
columns = line.split(maxsplit=3)
seqname, feature = columns[1:3]
value = columns[3] if len(columns) == 4 else ''
if feature != 'DE':
continue
if reached_max_sequences and seqname not in sequences:
continue
descriptions[seqname] = value
if len(descriptions) == len(sequences):
break
# Convert sto format to a3m line by line
a3m_sequences = {}
if remove_first_row_gaps:
# query_sequence is assumed to be the first sequence
query_sequence = next(iter(sequences.values()))
query_non_gaps = [res != '-' for res in query_sequence]
for seqname, sto_sequence in sequences.items():
# Dots are optional in a3m format and are commonly removed.
out_sequence = sto_sequence.replace('.', '')
if remove_first_row_gaps:
out_sequence = ''.join(
_convert_sto_seq_to_a3m(query_non_gaps, out_sequence))
a3m_sequences[seqname] = out_sequence
fasta_chunks = (f">{k} {descriptions.get(k, '')}\n{a3m_sequences[k]}"
for k in a3m_sequences)
return '\n'.join(fasta_chunks) + '\n' # Include terminating newline.
def _keep_line(line: str, seqnames: Set[str]) -> bool:
"""Function to decide which lines to keep."""
if not line.strip():
return True
if line.strip() == '//': # End tag
return True
if line.startswith('# STOCKHOLM'): # Start tag
return True
if line.startswith('#=GC RF'): # Reference Annotation Line
return True
if line[:4] == '#=GS': # Description lines - keep if sequence in list.
_, seqname, _ = line.split(maxsplit=2)
return seqname in seqnames
elif line.startswith('#'): # Other markup - filter out
return False
else: # Alignment data_bp - keep if sequence in list.
seqname = line.partition(' ')[0]
return seqname in seqnames
def truncate_stockholm_msa(stockholm_msa: str, max_sequences: int) -> str:
"""Truncates a stockholm file to a maximum number of sequences."""
seqnames = set()
filtered_lines = []
for line in stockholm_msa.splitlines():
if line.strip() and not line.startswith(('#', '//')):
# Ignore blank lines, markup and end symbols - remainder are alignment
# sequence parts.
seqname = line.partition(' ')[0]
seqnames.add(seqname)
if len(seqnames) >= max_sequences:
break
for line in stockholm_msa.splitlines():
if _keep_line(line, seqnames):
filtered_lines.append(line)
return '\n'.join(filtered_lines) + '\n'
def remove_empty_columns_from_stockholm_msa(stockholm_msa: str) -> str:
"""Removes empty columns (dashes-only) from a Stockholm MSA."""
processed_lines = {}
unprocessed_lines = {}
for i, line in enumerate(stockholm_msa.splitlines()):
if line.startswith('#=GC RF'):
reference_annotation_i = i
reference_annotation_line = line
# Reached the end of this chunk of the alignment. Process chunk.
_, _, first_alignment = line.rpartition(' ')
mask = []
for j in range(len(first_alignment)):
for _, unprocessed_line in unprocessed_lines.items():
prefix, _, alignment = unprocessed_line.rpartition(' ')
if alignment[j] != '-':
mask.append(True)
break
else: # Every row contained a hyphen - empty column.
mask.append(False)
# Add reference annotation for processing with mask.
unprocessed_lines[reference_annotation_i] = reference_annotation_line
if not any(mask): # All columns were empty. Output empty lines for chunk.
for line_index in unprocessed_lines:
processed_lines[line_index] = ''
else:
for line_index, unprocessed_line in unprocessed_lines.items():
prefix, _, alignment = unprocessed_line.rpartition(' ')
masked_alignment = ''.join(itertools.compress(alignment, mask))
processed_lines[line_index] = f'{prefix} {masked_alignment}'
# Clear raw_alignments.
unprocessed_lines = {}
elif line.strip() and not line.startswith(('#', '//')):
unprocessed_lines[i] = line
else:
processed_lines[i] = line
return '\n'.join((processed_lines[i] for i in range(len(processed_lines))))
def deduplicate_stockholm_msa(stockholm_msa: str) -> str:
"""Remove duplicate sequences (ignoring insertions wrt query)."""
sequence_dict = collections.defaultdict(str)
# First we must extract all sequences from the MSA.
for line in stockholm_msa.splitlines():
# Only consider the alignments - ignore reference annotation, empty lines,
# descriptions or markup.
if line.strip() and not line.startswith(('#', '//')):
line = line.strip()
seqname, alignment = line.split()
sequence_dict[seqname] += alignment
seen_sequences = set()
seqnames = set()
# First alignment is the query.
query_align = next(iter(sequence_dict.values()))
mask = [c != '-' for c in query_align] # Mask is False for insertions.
for seqname, alignment in sequence_dict.items():
# Apply mask to remove all insertions from the string.
masked_alignment = ''.join(itertools.compress(alignment, mask))
if masked_alignment in seen_sequences:
continue
else:
seen_sequences.add(masked_alignment)
seqnames.add(seqname)
filtered_lines = []
for line in stockholm_msa.splitlines():
if _keep_line(line, seqnames):
filtered_lines.append(line)
return '\n'.join(filtered_lines) + '\n'
def _get_hhr_line_regex_groups(
regex_pattern: str, line: str) -> Sequence[Optional[str]]:
match = re.match(regex_pattern, line)
if match is None:
raise RuntimeError(f'Could not parse query line {line}')
return match.groups()
def _update_hhr_residue_indices_list(
sequence: str, start_index: int, indices_list: List[int]):
"""Computes the relative indices for each residue with respect to the original sequence."""
counter = start_index
for symbol in sequence:
if symbol == '-':
indices_list.append(-1)
else:
indices_list.append(counter)
counter += 1
def _parse_hhr_hit(detailed_lines: Sequence[str]) -> TemplateHit:
"""Parses the detailed HMM HMM comparison section for a single Hit.
This works on .hhr files generated from both HHBlits and HHSearch.
Args:
detailed_lines: A list of lines from a single comparison section between 2
sequences (which each have their own HMM's)
Returns:
A dictionary with the information from that detailed comparison section
Raises:
RuntimeError: If a certain line cannot be processed
"""
# Parse first 2 lines.
number_of_hit = int(detailed_lines[0].split()[-1])
name_hit = detailed_lines[1][1:]
# Parse the summary line.
pattern = (
'Probab=(.*)[\t ]*E-value=(.*)[\t ]*Score=(.*)[\t ]*Aligned_cols=(.*)[\t'
' ]*Identities=(.*)%[\t ]*Similarity=(.*)[\t ]*Sum_probs=(.*)[\t '
']*Template_Neff=(.*)')
match = re.match(pattern, detailed_lines[2])
if match is None:
raise RuntimeError(
'Could not parse section: %s. Expected this: \n%s to contain summary.' %
(detailed_lines, detailed_lines[2]))
(_, _, _, aligned_cols, _, _, sum_probs, _) = [float(x)
for x in match.groups()]
# The next section reads the detailed comparisons. These are in a 'human
# readable' format which has a fixed length. The strategy employed is to
# assume that each block starts with the query sequence line, and to parse
# that with a regexp in order to deduce the fixed length used for that block.
query = ''
hit_sequence = ''
indices_query = []
indices_hit = []
length_block = None
for line in detailed_lines[3:]:
# Parse the query sequence line
if (line.startswith('Q ') and not line.startswith('Q ss_dssp') and
not line.startswith('Q ss_pred') and
not line.startswith('Q Consensus')):
# Thus the first 17 characters must be 'Q <query_name> ', and we can parse
# everything after that.
# start sequence end total_sequence_length
patt = r'[\t ]*([0-9]*) ([A-Z-]*)[\t ]*([0-9]*) \([0-9]*\)'
groups = _get_hhr_line_regex_groups(patt, line[17:])
# Get the length of the parsed block using the start and finish indices,
# and ensure it is the same as the actual block length.
start = int(groups[0]) - 1 # Make index zero based.
delta_query = groups[1]
end = int(groups[2])
num_insertions = len([x for x in delta_query if x == '-'])
length_block = end - start + num_insertions
assert length_block == len(delta_query)
# Update the query sequence and indices list.
query += delta_query
_update_hhr_residue_indices_list(delta_query, start, indices_query)
elif line.startswith('T '):
# Parse the hit sequence.
if (not line.startswith('T ss_dssp') and
not line.startswith('T ss_pred') and
not line.startswith('T Consensus')):
# Thus the first 17 characters must be 'T <hit_name> ', and we can
# parse everything after that.
# start sequence end total_sequence_length
patt = r'[\t ]*([0-9]*) ([A-Z-]*)[\t ]*[0-9]* \([0-9]*\)'
groups = _get_hhr_line_regex_groups(patt, line[17:])
start = int(groups[0]) - 1 # Make index zero based.
delta_hit_sequence = groups[1]
assert length_block == len(delta_hit_sequence)
# Update the hit sequence and indices list.
hit_sequence += delta_hit_sequence
_update_hhr_residue_indices_list(delta_hit_sequence, start, indices_hit)
return TemplateHit(
index=number_of_hit,
name=name_hit,
aligned_cols=int(aligned_cols),
sum_probs=sum_probs,
query=query,
hit_sequence=hit_sequence,
indices_query=indices_query,
indices_hit=indices_hit,
)
def parse_hhr(hhr_string: str) -> Sequence[TemplateHit]:
"""Parses the content of an entire HHR file."""
lines = hhr_string.splitlines()
# Each .hhr file starts with a results table, then has a sequence of hit
# "paragraphs", each paragraph starting with a line 'No <hit number>'. We
# iterate through each paragraph to parse each hit.
block_starts = [i for i, line in enumerate(lines) if line.startswith('No ')]
hits = []
if block_starts:
block_starts.append(len(lines)) # Add the end of the final block.
for i in range(len(block_starts) - 1):
hits.append(_parse_hhr_hit(lines[block_starts[i]:block_starts[i + 1]]))
return hits
def parse_e_values_from_tblout(tblout: str) -> Dict[str, float]:
"""Parse target to e-value mapping parsed from Jackhmmer tblout string."""
e_values = {'query': 0}
lines = [line for line in tblout.splitlines() if line[0] != '#']
# As per http://eddylab.org/software/hmmer/Userguide.pdf fields are
# space-delimited. Relevant fields are (1) target name: and
# (5) E-value (full sequence) (numbering from 1).
for line in lines:
fields = line.split()
e_value = fields[4]
target_name = fields[0]
e_values[target_name] = float(e_value)
return e_values
def _get_indices(sequence: str, start: int) -> List[int]:
"""Returns indices for non-gap/insert residues starting at the given index."""
indices = []
counter = start
for symbol in sequence:
# Skip gaps but add a placeholder so that the alignment is preserved.
if symbol == '-':
indices.append(-1)
# Skip deleted residues, but increase the counter.
elif symbol.islower():
counter += 1
# Normal aligned residue. Increase the counter and append to indices.
else:
indices.append(counter)
counter += 1
return indices
@dataclasses.dataclass(frozen=True)
class HitMetadata:
pdb_id: str
chain: str
start: int
end: int
length: int
text: str
def _parse_hmmsearch_description(description: str) -> HitMetadata:
"""Parses the hmmsearch A3M sequence description line."""
# Example 1: >4pqx_A/2-217 [subseq from] mol:protein length:217 Free text
# Example 2: >5g3r_A/1-55 [subseq from] mol:protein length:352
match = re.match(
r'^>?([a-z0-9]+)_(\w+)/([0-9]+)-([0-9]+).*protein length:([0-9]+) *(.*)$',
description.strip())
if not match:
raise ValueError(f'Could not parse description: "{description}".')
return HitMetadata(
pdb_id=match[1],
chain=match[2],
start=int(match[3]),
end=int(match[4]),
length=int(match[5]),
text=match[6])
def parse_hmmsearch_a3m(query_sequence: str,
a3m_string: str,
skip_first: bool = True) -> Sequence[TemplateHit]:
"""Parses an a3m string produced by hmmsearch.
Args:
query_sequence: The query sequence.
a3m_string: The a3m string produced by hmmsearch.
skip_first: Whether to skip the first sequence in the a3m string.
Returns:
A sequence of `TemplateHit` results.
"""
# Zip the descriptions and MSAs together, skip the first query sequence.
parsed_a3m = list(zip(*parse_fasta(a3m_string)))
if skip_first:
parsed_a3m = parsed_a3m[1:]
indices_query = _get_indices(query_sequence, start=0)
hits = []
for i, (hit_sequence, hit_description) in enumerate(parsed_a3m, start=1):
if 'mol:protein' not in hit_description:
continue # Skip non-protein chains.
metadata = _parse_hmmsearch_description(hit_description)
# Aligned columns are only the match states.
aligned_cols = sum([r.isupper() and r != '-' for r in hit_sequence])
indices_hit = _get_indices(hit_sequence, start=metadata.start - 1)
hit = TemplateHit(
index=i,
name=f'{metadata.pdb_id}_{metadata.chain}',
aligned_cols=aligned_cols,
sum_probs=None,
query=query_sequence,
hit_sequence=hit_sequence.upper(),
indices_query=indices_query,
indices_hit=indices_hit,
)
hits.append(hit)
return hits
| Python |
3D | jianlin-cheng/TransFun | tools/msa_identifiers.py | .py | 3,110 | 93 | # Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities for extracting identifiers from MSA sequence descriptions."""
import dataclasses
import re
from typing import Optional
# Sequences coming from UniProtKB database come in the
# `db|UniqueIdentifier|EntryName` format, e.g. `tr|A0A146SKV9|A0A146SKV9_FUNHE`
# or `sp|P0C2L1|A3X1_LOXLA` (for TREMBL/Swiss-Prot respectively).
_UNIPROT_PATTERN = re.compile(
r"""
^
# UniProtKB/TrEMBL or UniProtKB/Swiss-Prot
(?:tr|sp)
\|
# A primary accession number of the UniProtKB entry.
(?P<AccessionIdentifier>[A-Za-z0-9]{6,10})
# Occasionally there is a _0 or _1 isoform suffix, which we ignore.
(?:_\d)?
\|
# TREMBL repeats the accession ID here. Swiss-Prot has a mnemonic
# protein ID code.
(?:[A-Za-z0-9]+)
_
# A mnemonic species identification code.
(?P<SpeciesIdentifier>([A-Za-z0-9]){1,5})
# Small BFD uses a final value after an underscore, which we ignore.
(?:_\d+)?
$
""",
re.VERBOSE)
@dataclasses.dataclass(frozen=True)
class Identifiers:
uniprot_accession_id: str = ''
species_id: str = ''
def _parse_sequence_identifier(msa_sequence_identifier: str) -> Identifiers:
"""Gets accession id and species from an msa sequence identifier.
The sequence identifier has the format specified by
_UNIPROT_TREMBL_ENTRY_NAME_PATTERN or _UNIPROT_SWISSPROT_ENTRY_NAME_PATTERN.
An example of a sequence identifier: `tr|A0A146SKV9|A0A146SKV9_FUNHE`
Args:
msa_sequence_identifier: a sequence identifier.
Returns:
An `Identifiers` instance with a uniprot_accession_id and species_id. These
can be empty in the case where no identifier was found.
"""
matches = re.search(_UNIPROT_PATTERN, msa_sequence_identifier.strip())
if matches:
return Identifiers(
uniprot_accession_id=matches.group('AccessionIdentifier'),
species_id=matches.group('SpeciesIdentifier'))
return Identifiers()
def _extract_sequence_identifier(description: str) -> Optional[str]:
"""Extracts sequence identifier from description. Returns None if no match."""
split_description = description.split()
if split_description:
return split_description[0].partition('/')[0]
else:
return None
def get_identifiers(description: str) -> Identifiers:
"""Computes extra MSA features from the description."""
sequence_identifier = _extract_sequence_identifier(description)
if sequence_identifier is None:
return Identifiers()
else:
return _parse_sequence_identifier(sequence_identifier)
| Python |
3D | jianlin-cheng/TransFun | Sampler/ImbalancedDatasetSampler.py | .py | 3,323 | 90 | from typing import Callable
import torch
import Constants
from preprocessing.utils import class_distribution_counter, pickle_load
class ImbalancedDatasetSampler(torch.utils.data.sampler.Sampler):
"""Samples elements randomly from a given list of indices for imbalanced dataset
Arguments:
indices: a list of indices
num_samples: number of samples to draw
callback_get_label: a callback-like function which takes two arguments - dataset and index
"""
def __init__(
self,
dataset,
labels: list = None,
indices: list = None,
num_samples: int = None,
callback_get_label: Callable = None,
device: str = 'cpu',
**kwargs
):
# if indices is not provided, all elements in the dataset will be considered
self.indices = list(range(len(dataset))) if indices is None else indices
# define custom callback
self.callback_get_label = callback_get_label
# if num_samples is not provided, draw `len(indices)` samples in each iteration
self.num_samples = len(self.indices) if num_samples is None else num_samples
# distribution of classes in the dataset
# df["label"] = self._get_labels(dataset) if labels is None else labels
label_to_count = class_distribution_counter(**kwargs)
go_terms = pickle_load(Constants.ROOT + "/go_terms")
terms = go_terms['GO-terms-{}'.format(kwargs['ont'])]
class_weights = [label_to_count[i] for i in terms]
total = sum(class_weights)
self.weights = torch.tensor([1.0 / label_to_count[i] for i in terms],
dtype=torch.float).to(device)
# def _get_labels(self, dataset):
# if self.callback_get_label:
# return self.callback_get_label(dataset)
# elif isinstance(dataset, torch.utils.data_bp.TensorDataset):
# return dataset.tensors[1]
# elif isinstance(dataset, torchvision.datasets.MNIST):
# return dataset.train_labels.tolist()
# elif isinstance(dataset, torchvision.datasets.ImageFolder):
# return [x[1] for x in dataset.imgs]
# elif isinstance(dataset, torchvision.datasets.DatasetFolder):
# return dataset.samples[:][1]
# elif isinstance(dataset, torch.utils.data_bp.Subset):
# return dataset.dataset.imgs[:][1]
# elif isinstance(dataset, torch.utils.data_bp.Dataset):
# return dataset.get_labels()
# else:
# raise NotImplementedError
def __iter__(self):
return (self.indices[i] for i in torch.multinomial(self.weights, self.num_samples, replacement=True))
def __len__(self):
return self.num_samples
#
# from torch_geometric.loader import DataLoader
# from Dataset.Dataset import load_dataset
#
# kwargs = {
# 'seq_id': 0.95,
# 'ont': 'cellular_component',
# 'session': 'train'
# }
#
# dataset = load_dataset(root=Constants.ROOT, **kwargs)
# train_dataloader = DataLoader(dataset,
# batch_size=30,
# drop_last=False,
# sampler=ImbalancedDatasetSampler(dataset, **kwargs))
#
#
# for i in train_dataloader:
# print(i) | Python |
3D | jianlin-cheng/TransFun | Sampler/Smote.py | .py | 2,953 | 86 | import torch
from random import randint
import random
class SMOTE(object):
def __init__(self, distance='euclidian', dims=512, k=5):
super(SMOTE, self).__init__()
self.newindex = 0
self.k = k
self.dims = dims
self.distance_measure = distance
def populate(self, N, i, nnarray, min_samples, k):
while N:
nn = randint(0, k - 2)
diff = min_samples[nnarray[nn]] - min_samples[i]
gap = random.uniform(0, 1)
self.synthetic_arr[self.newindex, :] = min_samples[i] + gap * diff
self.newindex += 1
N -= 1
def k_neighbors(self, euclid_distance, k):
nearest_idx = torch.zeros((euclid_distance.shape[0], euclid_distance.shape[0]), dtype=torch.int64)
idxs = torch.argsort(euclid_distance, dim=1)
nearest_idx[:, :] = idxs
return nearest_idx[:, 1:k]
def find_k(self, X, k):
euclid_distance = torch.zeros((X.shape[0], X.shape[0]), dtype=torch.float32)
for i in range(len(X)):
dif = (X - X[i]) ** 2
dist = torch.sqrt(dif.sum(axis=1))
euclid_distance[i] = dist
return self.k_neighbors(euclid_distance, k)
def generate(self, min_samples, N, k):
"""
Returns (N/100) * n_minority_samples synthetic minority samples.
Parameters
----------
min_samples : Numpy_array-like, shape = [n_minority_samples, n_features]
Holds the minority samples
N : percetange of new synthetic samples:
n_synthetic_samples = N/100 * n_minority_samples. Can be < 100.
k : int. Number of nearest neighbours.
Returns
-------
S : Synthetic samples. array,
shape = [(N/100) * n_minority_samples, n_features].
"""
T = min_samples.shape[0]
self.synthetic_arr = torch.zeros(int(N / 100) * T, self.dims)
N = int(N / 100)
if self.distance_measure == 'euclidian':
indices = self.find_k(min_samples, k)
for i in range(indices.shape[0]):
self.populate(N, i, indices[i], min_samples, k)
self.newindex = 0
return self.synthetic_arr
def fit_generate(self, X, y):
# get occurence of each class
occ = torch.eye(int(y.max() + 1), int(y.max() + 1))[y].sum(axis=0)
# get the dominant class
dominant_class = torch.argmax(occ)
# get occurence of the dominant class
n_occ = int(occ[dominant_class].item())
for i in range(len(occ)):
if i != dominant_class:
# calculate the amount of synthetic data_bp to generate
N = (n_occ - occ[i]) * 100 / occ[i]
candidates = X[y == i]
xs = self.generate(candidates, N, self.k)
X = torch.cat((X, xs))
ys = torch.ones(xs.shape[0]) * i
y = torch.cat((y, ys))
return X, y | Python |
3D | jianlin-cheng/TransFun | prep/utils.py | .py | 3,395 | 116 | from collections import deque, Counter
import warnings
import pandas as pd
import numpy as np
import math
BIOLOGICAL_PROCESS = 'GO:0008150'
MOLECULAR_FUNCTION = 'GO:0003674'
CELLULAR_COMPONENT = 'GO:0005575'
FUNC_DICT = {
'cc': CELLULAR_COMPONENT,
'mf': MOLECULAR_FUNCTION,
'bp': BIOLOGICAL_PROCESS
}
NAMESPACES = {
'cc': 'cellular_component',
'mf': 'molecular_function',
'bp': 'biological_process'
}
EXP_CODES = {'EXP', 'IDA', 'IPI', 'IMP', 'IGI', 'IEP', 'TAS', 'IC', 'HTP', 'HDA', 'HMP', 'HGI', 'HEP'}
def read_fasta(filename):
seqs = list()
info = list()
seq = ''
inf = ''
with open(filename, 'r') as f:
for line in f:
line = line.strip()
if line.startswith('>'):
if seq != '':
seqs.append(seq)
info.append(inf)
seq = ''
inf = line[1:]
else:
seq += line
seqs.append(seq)
info.append(inf)
return info, seqs
class DataGenerator(object):
def __init__(self, batch_size, is_sparse=False):
self.batch_size = batch_size
self.is_sparse = is_sparse
def fit(self, inputs, targets=None):
self.start = 0
self.inputs = inputs
self.targets = targets
if isinstance(self.inputs, tuple) or isinstance(self.inputs, list):
self.size = self.inputs[0].shape[0]
else:
self.size = self.inputs.shape[0]
self.has_targets = targets is not None
def __next__(self):
return self.next()
def reset(self):
self.start = 0
def next(self):
if self.start < self.size:
batch_index = np.arange(
self.start, min(self.size, self.start + self.batch_size))
if isinstance(self.inputs, tuple) or isinstance(self.inputs, list):
res_inputs = []
for inp in self.inputs:
if self.is_sparse:
res_inputs.append(
inp[batch_index, :].toarray())
else:
res_inputs.append(inp[batch_index, :])
else:
if self.is_sparse:
res_inputs = self.inputs[batch_index, :].toarray()
else:
res_inputs = self.inputs[batch_index, :]
self.start += self.batch_size
if self.has_targets:
if self.is_sparse:
labels = self.targets[batch_index, :].toarray()
else:
#
# x = pd.read_csv("/data/pycharm/TransFun/nrPDB-GO_2019.06.18_annot.tsv", sep='\t', skiprows=12)
# go_terms = set()
# mf = x['GO-terms (cellular_component)'].to_list()
# for i in mf:
# if isinstance(i, str):
# go_terms.update(i.split(','))
# print(len(go_terms))
#
# xx = set(pickle_load(Constants.ROOT + "cellular_component/train_stats"))
# print(len(xx))
#
# print(len(xx.intersection(go_terms)))
# bp = x['GO-terms (biological_process)']
# for i in mf:
# go_terms.update(i.split(','))
# cc = x['GO-terms (cellular_component)']
# for i in mf:
# go_terms.update(i.split(','))
labels = self.targets[batch_index, :]
return res_inputs, labels
return res_inputs
else:
self.reset()
return self.next()
| Python |
3D | jianlin-cheng/TransFun | models/gnn.py | .py | 5,528 | 134 | import itertools
import torch
from torch.nn import Sigmoid
from models.egnn_clean import egnn_clean as eg
import net_utils
class GCN(torch.nn.Module):
def __init__(self, **kwargs):
super(GCN, self).__init__()
input_features_size = kwargs['input_features_size']
hidden_channels = kwargs['hidden']
edge_features = kwargs['edge_features']
num_classes = kwargs['num_classes']
num_egnn_layers = kwargs['egnn_layers']
self.edge_type = kwargs['edge_type']
self.num_layers = kwargs['layers']
self.device = kwargs['device']
self.egnn_1 = eg.EGNN(in_node_nf=input_features_size,
hidden_nf=hidden_channels,
n_layers=num_egnn_layers,
out_node_nf=num_classes,
in_edge_nf=edge_features,
attention=True,
normalize=False,
tanh=True)
self.egnn_2 = eg.EGNN(in_node_nf=num_classes,
hidden_nf=hidden_channels,
n_layers=num_egnn_layers,
out_node_nf=int(num_classes / 2),
in_edge_nf=edge_features,
attention=True,
normalize=False,
tanh=True)
self.egnn_3 = eg.EGNN(in_node_nf=input_features_size,
hidden_nf=hidden_channels,
n_layers=num_egnn_layers,
out_node_nf=int(num_classes / 2),
in_edge_nf=edge_features,
attention=True,
normalize=False,
tanh=True)
self.egnn_4 = eg.EGNN(in_node_nf=int(num_classes / 2),
hidden_nf=hidden_channels,
n_layers=num_egnn_layers,
out_node_nf=int(num_classes / 4),
in_edge_nf=edge_features,
attention=True,
normalize=False,
tanh=True)
self.fc1 = net_utils.FC(num_classes + int(num_classes / 2) * 2 + int(num_classes / 4),
num_classes + 50, relu=False, bnorm=True)
self.final = net_utils.FC(num_classes + 50, num_classes, relu=False, bnorm=False)
self.bnrelu1 = net_utils.BNormRelu(num_classes)
self.bnrelu2 = net_utils.BNormRelu(int(num_classes / 2))
self.bnrelu3 = net_utils.BNormRelu(int(num_classes / 4))
self.sig = Sigmoid()
def forward_once(self, data):
x_res, x_emb_seq, edge_index, x_batch, x_pos = data['atoms'].embedding_features_per_residue, \
data['atoms'].embedding_features_per_sequence, \
data[self.edge_type].edge_index, \
data['atoms'].batch, \
data['atoms'].pos
ppi_shape = x_emb_seq.shape[0]
if ppi_shape > 1:
edge_index_2 = list(zip(*list(itertools.combinations(range(ppi_shape), 2))))
edge_index_2 = [torch.LongTensor(edge_index_2[0]).to(self.device),
torch.LongTensor(edge_index_2[1]).to(self.device)]
else:
edge_index_2 = tuple(range(ppi_shape))
edge_index_2 = [torch.LongTensor(edge_index_2).to(self.device),
torch.LongTensor(edge_index_2).to(self.device)]
output_res, pre_pos_res = self.egnn_1(h=x_res,
x=x_pos.float(),
edges=edge_index,
edge_attr=None)
output_res_2, pre_pos_res_2 = self.egnn_2(h=output_res,
x=pre_pos_res.float(),
edges=edge_index,
edge_attr=None)
output_seq, pre_pos_seq = self.egnn_3(h=x_emb_seq,
x=net_utils.get_pool(pool_type='mean')(x_pos.float(), x_batch),
edges=edge_index_2,
edge_attr=None)
output_res_4, pre_pos_seq_4 = self.egnn_4(h=output_res_2,
x=pre_pos_res_2.float(),
edges=edge_index,
edge_attr=None)
output_res = net_utils.get_pool(pool_type='mean')(output_res, x_batch)
output_res = self.bnrelu1(output_res)
output_res_2 = net_utils.get_pool(pool_type='mean')(output_res_2, x_batch)
output_res_2 = self.bnrelu2(output_res_2)
output_seq = self.bnrelu2(output_seq)
output_res_4 = net_utils.get_pool(pool_type='mean')(output_res_4, x_batch)
output_res_4 = self.bnrelu3(output_res_4)
output = torch.cat([output_res, output_res_2, output_seq, output_res_4], 1)
return output
def forward(self, data):
passes = []
for i in range(self.num_layers):
passes.append(self.forward_once(data))
x = torch.cat(passes, 1)
x = self.fc1(x)
x = self.final(x)
x = self.sig(x)
return x
| Python |
3D | jianlin-cheng/TransFun | models/egnn_clean/egnn_clean.py | .py | 8,270 | 217 | from torch import nn
import torch
from torch.nn import Sigmoid, Linear
from torch_geometric.nn import global_mean_pool
import net_utils
class E_GCL(nn.Module):
"""
E(n) Equivariant Convolutional Layer
re
"""
def __init__(self, input_nf, output_nf, hidden_nf, edges_in_d=0, act_fn=nn.SiLU(), residual=True, attention=False, normalize=False, coords_agg='mean', tanh=False):
super(E_GCL, self).__init__()
input_edge = input_nf * 2
self.residual = residual
self.attention = attention
self.normalize = normalize
self.coords_agg = coords_agg
self.tanh = tanh
self.epsilon = 1e-8
edge_coords_nf = 1
self.edge_mlp = nn.Sequential(
nn.Linear(input_edge + edge_coords_nf + edges_in_d, hidden_nf),
act_fn,
nn.Linear(hidden_nf, hidden_nf),
act_fn)
self.node_mlp = nn.Sequential(
nn.Linear(hidden_nf + input_nf, hidden_nf),
act_fn,
nn.Linear(hidden_nf, output_nf))
layer = nn.Linear(hidden_nf, 1, bias=False)
torch.nn.init.xavier_uniform_(layer.weight, gain=0.001)
coord_mlp = []
coord_mlp.append(nn.Linear(hidden_nf, hidden_nf))
coord_mlp.append(act_fn)
coord_mlp.append(layer)
if self.tanh:
coord_mlp.append(nn.Tanh())
self.coord_mlp = nn.Sequential(*coord_mlp)
if self.attention:
self.att_mlp = nn.Sequential(
nn.Linear(hidden_nf, 1),
nn.Sigmoid())
def edge_model(self, source, target, radial, edge_attr):
if edge_attr is None: # Unused.
out = torch.cat([source, target, radial], dim=1)
else:
out = torch.cat([source, target, radial, edge_attr], dim=1)
out = self.edge_mlp(out)
if self.attention:
att_val = self.att_mlp(out)
out = out * att_val
return out
def node_model(self, x, edge_index, edge_attr, node_attr):
row, col = edge_index
agg = unsorted_segment_sum(edge_attr, row, num_segments=x.size(0))
if node_attr is not None:
agg = torch.cat([x, agg, node_attr], dim=1)
else:
agg = torch.cat([x, agg], dim=1)
out = self.node_mlp(agg)
if self.residual:
out = x + out
return out, agg
def coord_model(self, coord, edge_index, coord_diff, edge_feat):
row, col = edge_index
trans = coord_diff * self.coord_mlp(edge_feat)
if self.coords_agg == 'sum':
agg = unsorted_segment_sum(trans, row, num_segments=coord.size(0))
elif self.coords_agg == 'mean':
agg = unsorted_segment_mean(trans, row, num_segments=coord.size(0))
else:
raise Exception('Wrong coords_agg parameter' % self.coords_agg)
coord = coord + agg
return coord
def coord2radial(self, edge_index, coord):
row, col = edge_index
coord_diff = coord[row] - coord[col]
radial = torch.sum(coord_diff**2, 1).unsqueeze(1)
if self.normalize:
norm = torch.sqrt(radial).detach() + self.epsilon
coord_diff = coord_diff / norm
return radial, coord_diff
def forward(self, h, edge_index, coord, edge_attr=None, node_attr=None):
row, col = edge_index
radial, coord_diff = self.coord2radial(edge_index, coord)
edge_feat = self.edge_model(h[row], h[col], radial, edge_attr)
coord = self.coord_model(coord, edge_index, coord_diff, edge_feat)
h, agg = self.node_model(h, edge_index, edge_feat, node_attr)
return h, coord, edge_attr
class EGNN(nn.Module):
def __init__(self, in_node_nf, hidden_nf, out_node_nf, in_edge_nf=0, device='cpu', act_fn=nn.SiLU(), n_layers=4, residual=True, attention=False, normalize=False, tanh=False):
'''
:param in_node_nf: Number of features for 'h' at the input
:param hidden_nf: Number of hidden features
:param out_node_nf: Number of features for 'h' at the output
:param in_edge_nf: Number of features for the edge features
:param device: Device (e.g. 'cpu', 'cuda:0',...)
:param act_fn: Non-linearity
:param n_layers: Number of layer for the EGNN
:param residual: Use residual connections, we recommend not changing this one
:param attention: Whether using attention or not
:param normalize: Normalizes the coordinates messages such that:
instead of: x^{l+1}_i = x^{l}_i + Σ(x_i - x_j)phi_x(m_ij)
we get: x^{l+1}_i = x^{l}_i + Σ(x_i - x_j)phi_x(m_ij)/||x_i - x_j||
We noticed it may help in the stability or generalization in some future works.
We didn't use it in our paper.
:param tanh: Sets a tanh activation function at the output of phi_x(m_ij). I.e. it bounds the output of
phi_x(m_ij) which definitely improves in stability but it may decrease in accuracy.
We didn't use it in our paper.
'''
super(EGNN, self).__init__()
self.hidden_nf = hidden_nf
self.device = device
self.n_layers = n_layers
self.embedding_in = nn.Linear(in_node_nf, self.hidden_nf)
self.embedding_out = nn.Linear(self.hidden_nf, out_node_nf)
for i in range(0, n_layers):
self.add_module("gcl_%d" % i, E_GCL(self.hidden_nf, self.hidden_nf, self.hidden_nf, edges_in_d=in_edge_nf,
act_fn=act_fn, residual=residual, attention=attention,
normalize=normalize, tanh=tanh))
self.to(self.device)
def forward(self, h, x, edges, edge_attr):
h = self.embedding_in(h)
for i in range(0, self.n_layers):
h, x, _ = self._modules["gcl_%d" % i](h, edges, x, edge_attr=edge_attr)
h = self.embedding_out(h)
return h, x
def unsorted_segment_sum(data, segment_ids, num_segments):
result_shape = (num_segments, data.size(1))
result = data.new_full(result_shape, 0) # Init empty result tensor.
segment_ids = segment_ids.unsqueeze(-1).expand(-1, data.size(1))
result.scatter_add_(0, segment_ids, data)
return result
def unsorted_segment_mean(data, segment_ids, num_segments):
result_shape = (num_segments, data.size(1))
segment_ids = segment_ids.unsqueeze(-1).expand(-1, data.size(1))
result = data.new_full(result_shape, 0) # Init empty result tensor.
count = data.new_full(result_shape, 0)
result.scatter_add_(0, segment_ids, data)
count.scatter_add_(0, segment_ids, torch.ones_like(data))
return result / count.clamp(min=1)
def get_edges(n_nodes):
rows, cols = [], []
for i in range(n_nodes):
for j in range(n_nodes):
if i != j:
rows.append(i)
cols.append(j)
edges = [rows, cols]
return edges
def get_edges_batch(n_nodes, batch_size):
edges = get_edges(n_nodes)
edge_attr = torch.ones(len(edges[0]) * batch_size, 1)
edges = [torch.LongTensor(edges[0]), torch.LongTensor(edges[1])]
if batch_size == 1:
return edges, edge_attr
elif batch_size > 1:
rows, cols = [], []
for i in range(batch_size):
rows.append(edges[0] + n_nodes * i)
cols.append(edges[1] + n_nodes * i)
edges = [torch.cat(rows), torch.cat(cols)]
return edges, edge_attr
if __name__ == "__main__":
# Dummy parameters
batch_size = 8
n_nodes = 4
n_feat = 1
x_dim = 3
# Dummy variables h, x and fully connected edges
h = torch.ones(batch_size * n_nodes, n_feat)
x = torch.ones(batch_size * n_nodes, x_dim)
edges, edge_attr = get_edges_batch(n_nodes, batch_size)
# Initialize EGNN
egnn = EGNN(in_node_nf=n_feat, hidden_nf=32, out_node_nf=1, in_edge_nf=1)
# Run EGNN
h, x = egnn(h, x, edges, edge_attr)
| Python |
3D | jianlin-cheng/TransFun | models/egnn_clean/__init__.py | .py | 0 | 0 | null | Python |
3D | jianlin-cheng/TransFun | preprocessing/preprocess.py | .py | 6,121 | 158 | import os
import subprocess
import pandas as pd
import torch
import esm
import torch.nn.functional as F
import Constants
from preprocessing.utils import pickle_save, pickle_load, count_proteins_biopython
# Script to test esm
def test_esm():
# Load ESM-1b model
model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
# Prepare data_bp (first 2 sequences from ESMStructuralSplitDataset superfamily / 4)
data = [
("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
("protein2 with mask", "KALTARQQEVFDLIRD<mask>ISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
("protein3", "K A <mask> I S Q"),
]
for i, j in data:
print(len(j))
batch_labels, batch_strs, batch_tokens = batch_converter(data)
# Extract per-residue representations (on CPU)
with torch.no_grad():
results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]
print(token_representations.shape)
# Generate per-sequence representations via averaging
# NOTE: token 0 is always a beginning-of-sequence token, so the first residue is token 1.
sequence_representations = []
for i, (_, seq) in enumerate(data):
sequence_representations.append(token_representations[i, 1: len(seq) + 1].mean(0))
for i in sequence_representations:
print(len(i))
# # Look at the unsupervised self-attention map contact predictions
# import matplotlib.pyplot as plt
# for (_, seq), attention_contacts in zip(data_bp, results["contacts"]):
# plt.matshow(attention_contacts[: len(seq), : len(seq)])
# plt.title(seq)
# plt.show()
# Generate ESM embeddings in bulk
# In this function, I create embedding for each fasta sequence in the fasta file
# Extract file is taken from the github directory
def generate_bulk_embedding(fasta_file, output_dir, path_to_extract_file):
subprocess.call('python extract.py esm1b_t33_650M_UR50S {} {} --repr_layers 0 32 33 '
'--include mean per_tok --truncate'.format("{}".format(fasta_file),
"{}".format(output_dir)),
shell=True, cwd="{}".format(path_to_extract_file))
# print(count_proteins_biopython(Constants.ROOT + "eval/{}_1.fasta".format("test")))
# exit()
# generate_bulk_embedding(Constants.ROOT + "eval/{}.fasta".format("cropped"),
# "/data_bp/pycharm/TransFunData/data_bp/bnm",
# "/data_bp/pycharm/TransFun/preprocessing")
# generate_bulk_embedding(Constants.ROOT + "eval/{}.fasta".format("shorter"),
# "/data_bp/pycharm/TransFunData/data_bp/shorter",
# "/data_bp/pycharm/TransFun/preprocessing")
exit()
# Generate data_bp for each group
def generate_data():
def get_stats(data):
go_terms = {}
for i in data:
for j in i.split(","):
if j in go_terms:
go_terms[j] = go_terms[j] + 1
else:
go_terms[j] = 1
return go_terms
categories = [('molecular_function', 'GO-terms (molecular_function)'),
('biological_process', 'GO-terms (biological_process)'),
('cellular_component', 'GO-terms (cellular_component)')]
x_id = '### PDB-chain'
train_set = pickle_load(Constants.ROOT + "final_train")
valid_set = pickle_load(Constants.ROOT + "final_valid")
test_set = pickle_load(Constants.ROOT + "final_test")
for i in categories:
print("Generating for {}".format(i[0]))
if not os.path.isdir(Constants.ROOT + i[0]):
os.mkdir(Constants.ROOT + i[0])
df = pd.read_csv("/data_bp/pycharm/TransFunData/data_bp/final_annot.tsv", skiprows=12, delimiter="\t")
df = df[df[i[1]].notna()][[x_id, i[1]]]
train_df = df[df[x_id].isin(train_set)]
train_df.to_pickle(Constants.ROOT + i[0] + "/train.pickle")
stats = get_stats(train_df[i[1]].to_list())
pickle_save(stats, Constants.ROOT + i[0] + "/train_stats")
print(len(stats))
valid_df = df[df[x_id].isin(valid_set)]
valid_df.to_pickle(Constants.ROOT + i[0] + "/valid.pickle")
stats = get_stats(valid_df[i[1]].to_list())
pickle_save(stats, Constants.ROOT + i[0] + "/valid_stats")
print(len(stats))
test_df = df[df[x_id].isin(test_set)]
test_df.to_pickle(Constants.ROOT + i[0] + "/test.pickle")
stats = get_stats(test_df[i[1]].to_list())
pickle_save(stats, Constants.ROOT + i[0] + "/test_stats")
print(len(stats))
# generate_data()
# Generate labels for data_bp
def generate_labels(_type='GO-terms (molecular_function)', _name='molecular_function'):
# ['GO-terms (molecular_function)', 'GO-terms (biological_process)', 'GO-terms (cellular_component)']
# if not os.path.isfile('/data_bp/pycharm/TransFunData/data_bp/{}.pickle'.format(_name)):
file = '/data_bp/pycharm/TransFunData/data_bp/nrPDB-GO_2021.01.23_annot.tsv'
data = pd.read_csv(file, sep='\t', skiprows=12)
data = data[["### PDB-chain", _type]]
data = data[data[_type].notna()]
classes = data[_type].to_list()
classes = set([one_word for class_list in classes for one_word in class_list.split(',')])
class_keys = list(range(0, len(classes)))
classes = dict(zip(classes, class_keys))
data_to_one_hot = {}
for index, row in data.iterrows():
tmp = row[_type].split(',')
x = torch.tensor([classes[i] for i in tmp])
x = F.one_hot(x, num_classes=len(classes))
x = x.sum(dim=0).float()
assert len(tmp) == x.sum(dim=0).float()
data_to_one_hot[row['### PDB-chain']] = x.to(dtype=torch.int)
pickle_save(data_to_one_hot, '/data_bp/pycharm/TransFunData/data_bp/{}'.format(_name))
# generate_labels()
| Python |
3D | jianlin-cheng/TransFun | preprocessing/extract.py | .py | 5,095 | 137 | #!/usr/bin/env python3 -u
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import argparse
import pathlib
import torch
from esm import Alphabet, FastaBatchedDataset, ProteinBertModel, pretrained
def create_parser():
parser = argparse.ArgumentParser(
description="Extract per-token representations and model outputs for sequences in a FASTA file" # noqa
)
parser.add_argument(
"model_location",
type=str,
help="PyTorch model file OR name of pretrained model to download (see README for models)",
)
parser.add_argument(
"fasta_file",
type=pathlib.Path,
help="FASTA file on which to extract representations",
)
parser.add_argument(
"output_dir",
type=pathlib.Path,
help="output directory for extracted representations",
)
parser.add_argument("--toks_per_batch", type=int, default=4096, help="maximum batch size")
parser.add_argument(
"--repr_layers",
type=int,
default=[-1],
nargs="+",
help="layers indices from which to extract representations (0 to num_layers, inclusive)",
)
parser.add_argument(
"--include",
type=str,
nargs="+",
choices=["mean", "per_tok", "bos", "contacts"],
help="specify which representations to return",
required=True,
)
parser.add_argument(
"--truncate",
action="store_true",
help="Truncate sequences longer than 1024 to match the training setup",
)
parser.add_argument("--nogpu", action="store_true", help="Do not use GPU even if available")
return parser
def main(args):
model, alphabet = pretrained.load_model_and_alphabet(args.model_location)
model.eval()
if torch.cuda.is_available() and not args.nogpu:
model = model.cuda()
print("Transferred model to GPU")
dataset = FastaBatchedDataset.from_file(args.fasta_file)
batches = dataset.get_batch_indices(args.toks_per_batch, extra_toks_per_seq=1)
data_loader = torch.utils.data.DataLoader(
dataset, collate_fn=alphabet.get_batch_converter(), batch_sampler=batches
)
print(f"Read {args.fasta_file} with {len(dataset)} sequences")
args.output_dir.mkdir(parents=True, exist_ok=True)
return_contacts = "contacts" in args.include
assert all(-(model.num_layers + 1) <= i <= model.num_layers for i in args.repr_layers)
repr_layers = [(i + model.num_layers + 1) % (model.num_layers + 1) for i in args.repr_layers]
with torch.no_grad():
for batch_idx, (labels, strs, toks) in enumerate(data_loader):
print(
f"Processing {batch_idx + 1} of {len(batches)} batches ({toks.size(0)} sequences)"
)
if torch.cuda.is_available() and not args.nogpu:
toks = toks.to(device="cuda", non_blocking=True)
# The model is trained on truncated sequences and passing longer ones in at
# infernce will cause an error. See https://github.com/facebookresearch/esm/issues/21
if args.truncate:
toks = toks[:, :1022]
out = model(toks, repr_layers=repr_layers, return_contacts=return_contacts)
logits = out["logits"].to(device="cpu")
representations = {
layer: t.to(device="cpu") for layer, t in out["representations"].items()
}
if return_contacts:
contacts = out["contacts"].to(device="cpu")
for i, label in enumerate(labels):
args.output_file = args.output_dir / f"{label}.pt"
args.output_file.parent.mkdir(parents=True, exist_ok=True)
result = {"label": label}
# Call clone on tensors to ensure tensors are not views into a larger representation
# See https://github.com/pytorch/pytorch/issues/1995
if "per_tok" in args.include:
result["representations"] = {
layer: t[i, 1 : len(strs[i]) + 1].clone()
for layer, t in representations.items()
}
if "mean" in args.include:
result["mean_representations"] = {
layer: t[i, 1 : len(strs[i]) + 1].mean(0).clone()
for layer, t in representations.items()
}
if "bos" in args.include:
result["bos_representations"] = {
layer: t[i, 0].clone() for layer, t in representations.items()
}
if return_contacts:
result["contacts"] = contacts[i, : len(strs[i]), : len(strs[i])].clone()
torch.save(
result,
args.output_file,
)
if __name__ == "__main__":
parser = create_parser()
args = parser.parse_args()
main(args)
| Python |
3D | jianlin-cheng/TransFun | preprocessing/create_go.py | .py | 27,035 | 617 | import os
import csv
import subprocess
import networkx as nx
import numpy as np
import obonet
import pandas as pd
from Bio.Seq import Seq
from Bio import SeqIO, SwissProt
from Bio.SeqRecord import SeqRecord
import Constants
from preprocessing.utils import pickle_save, pickle_load, get_sequence_from_pdb, fasta_for_msas, \
count_proteins_biopython, count_proteins, fasta_for_esm, fasta_to_dictionary, read_dictionary, \
get_proteins_from_fasta, create_seqrecord, read_test_set, alpha_seq_fasta_to_dictionary, collect_test, is_ok, \
test_annotation, get_test_classes
def extract_id(header):
return header.split('|')[1]
def compare_sequence(uniprot_fasta_file, save=False):
"""
Script is used to compare sequences of uniprot & alpha fold.
:param uniprot_fasta_file: input uniprot fasta file.
:param save: whether to save the proteins that are similar & different.
:return: None
"""
identical = []
unidentical = []
detected = 0
for seq_record in SeqIO.parse(uniprot_fasta_file, "fasta"):
uniprot_id = extract_id(seq_record.id)
# print("Uniprot ID is {} and sequence is {}".format(uniprot_id, str(seq_record.seq)))
# Check if alpha fold predicted structure
src = os.path.join(Constants.ROOT, "alphafold/AF-{}-F1-model_v2.pdb.gz".format(uniprot_id))
if os.path.isfile(src):
detected = detected + 1
# compare sequence
alpha_fold_seq = get_sequence_from_pdb(src, "A")
uniprot_sequence = str(seq_record.seq)
if alpha_fold_seq == uniprot_sequence:
identical.append(uniprot_id)
else:
unidentical.append(uniprot_id)
print("{} number of sequence structures detected, {} identical to uniprot sequence & {} "
"not identical to uniprot sequence".format(detected, len(identical), len(unidentical)))
if save:
pickle_save(identical, Constants.ROOT + "uniprot/identical")
pickle_save(unidentical, Constants.ROOT + "uniprot/unidentical")
def filtered_sequences(uniprot_fasta_file):
"""
Script is used to create fasta files based on alphafold sequence, by replacing sequences that are different.
:param uniprot_fasta_file: input uniprot fasta file.
:return: None
"""
identified = set(pickle_load(Constants.ROOT + "uniprot/identical"))
unidentified = set(pickle_load(Constants.ROOT + "uniprot/unidentical"))
input_seq_iterator = SeqIO.parse(uniprot_fasta_file, "fasta")
identified_seqs = [record for record in input_seq_iterator if extract_id(record.id) in identified]
input_seq_iterator = SeqIO.parse(uniprot_fasta_file, "fasta")
unidentified_seqs = []
for record in input_seq_iterator:
uniprot_id = extract_id(record.id)
if uniprot_id in unidentified:
src = os.path.join(Constants.ROOT, "alphafold/AF-{}-F1-model_v2.pdb.gz".format(uniprot_id))
record.seq = Seq(get_sequence_from_pdb(src))
unidentified_seqs.append(record)
new_seq = identified_seqs + unidentified_seqs
print(len(identified_seqs), len(unidentified_seqs), len(new_seq))
SeqIO.write(new_seq, Constants.ROOT + "uniprot/{}.fasta".format("cleaned"), "fasta")
def get_protein_go(uniprot_sprot_dat=None, save_path=None):
"""
Get all the GO terms associated with a protein in a clean format
Creates file with structure: ACCESSION, ID, DESCRIPTION, WITH_STRING, EVIDENCE, GO_ID
:param uniprot_sprot_dat:
:param save_path:
:return: None
"""
handle = open(uniprot_sprot_dat)
all = [["ACC", "ID", "DESCRIPTION", "WITH_STRING", "EVIDENCE", "GO_ID", "ORGANISM", "TAXONOMY"]]
for record in SwissProt.parse(handle):
primary_accession = record.accessions[0]
entry_name = record.entry_name
cross_refs = record.cross_references
organism = record.organism
taxonomy = record.taxonomy_id
for ref in cross_refs:
if ref[0] == "GO":
assert len(ref) == 4
go_id = ref[1]
description = ref[2]
evidence = ref[3].split(":")
with_string = evidence[1]
evidence = evidence[0]
all.append(
[primary_accession, entry_name, description, with_string,
evidence, go_id, organism, taxonomy])
with open(save_path, "w") as f:
wr = csv.writer(f, delimiter='\t')
wr.writerows(all)
def generate_go_counts(fname="", go_graph="", cleaned_proteins =None):
"""
Get only sequences that meet the criteria of sequence length and sequences.
:param cleaned_proteins: proteins filtered for alphafold sequence
:param go_graph: obo-basic graph
:param fname: accession2go file
:param chains: the proteins we are considering
:return: None
"""
df = pd.read_csv(fname, delimiter="\t")
df = df[df['EVIDENCE'].isin(Constants.exp_evidence_codes)]
df = df[df['ACC'].isin(cleaned_proteins)]
protein2go = {}
go2info = {}
# for index, row in df.iterrows():
for line_number, (index, row) in enumerate(df.iterrows()):
acc = row[0]
evidence = row[4]
go_id = row[5]
# if (acc in chains) and (go_id in go_graph) and (go_id not in Constants.root_terms):
if go_id in go_graph:
if acc not in protein2go:
protein2go[acc] = {'goterms': [go_id], 'evidence': [evidence]}
namespace = go_graph.nodes[go_id]['namespace']
go_ids = nx.descendants(go_graph, go_id)
go_ids.add(go_id)
go_ids = go_ids.difference(Constants.root_terms)
for go in go_ids:
protein2go[acc]['goterms'].append(go)
protein2go[acc]['evidence'].append(evidence)
name = go_graph.nodes[go]['name']
if go not in go2info:
go2info[go] = {'ont': namespace, 'goname': name, 'accessions': set([acc])}
else:
go2info[go]['accessions'].add(acc)
return protein2go, go2info
def one_line_format(input_file, dir):
"""
Script takes the mm2seq cluster output and converts to representative seq1, seq2, seq3 ....
:param input_file: The clusters as csv file
:return: None
"""
data = {}
with open(input_file) as file:
lines = file.read().splitlines()
for line in lines:
x = line.split("\t")
if x[0] in data:
data[x[0]].append(x[1])
else:
data[x[0]] = list([x[1]])
result = [data[i] for i in data]
with open(dir + "/final_clusters.csv", "w") as f:
wr = csv.writer(f, delimiter='\t')
wr.writerows(result)
def get_prot_id_and_prot_name(cafa_proteins):
print('Mapping CAFA PROTEINS')
cafa_id_mapping = dict()
with open(Constants.ROOT + 'uniprot/idmapping_selected.tab') as file:
for line in file:
_tmp = line.split("\t")[:2]
if _tmp[1] in cafa_proteins:
cafa_id_mapping[_tmp[1]] = _tmp[0]
if len(cafa_id_mapping) == 97105:
break
return cafa_id_mapping
def target_in_swissprot_trembl_no_alpha():
gps = ["LK_bpo", "LK_mfo", "LK_cco", "NK_bpo", "NK_mfo", "NK_cco"]
targets = set()
for ts in gps:
ts_func_old = read_test_set("/data_bp/pycharm/TransFunData/data_bp/195-200/{}".format(ts))
targets.update(set([i[0] for i in ts_func_old]))
ts_func_new = read_test_set("/data_bp/pycharm/TransFunData/data_bp/205-now/{}".format(ts))
targets.update(set([i[0] for i in ts_func_new]))
target = []
for seq_record in SeqIO.parse(Constants.ROOT + "uniprot/uniprot_trembl.fasta", "fasta"):
if extract_id(seq_record.id) in targets:
target.append(seq_record)
print(len(target))
if len(target) == len(targets):
break
for seq_record in SeqIO.parse(Constants.ROOT + "uniprot/uniprot_sprot.fasta", "fasta"):
if extract_id(seq_record.id) in targets:
target.append(seq_record)
print(len(target))
if len(target) == len(targets):
break
SeqIO.write(target, Constants.ROOT + "uniprot/{}.fasta".format("target_and_sequence"), "fasta")
# target_in_swissprot_trembl_no_alpha()
def cluster_sequence(seq_id, proteins=None, add_target=False):
"""
Script is used to cluster the proteins with mmseq2.
:param threshold:
:param proteins:
:param add_target: Add CAFA targets
:param input_fasta: input uniprot fasta file.
:return: None
1. sequence to cluster is cleaned sequence.
2. Filter for only selected proteins
3. Add proteins in target not in the filtered list:
3.1
3.2
"""
input_fasta = Constants.ROOT + "uniprot/cleaned.fasta"
print("Number of proteins in raw cleaned is {}".format(count_proteins(input_fasta)))
print("Number of selected proteins in raw cleaned is {}".format(len(proteins)))
wd = Constants.ROOT + "{}/mmseq".format(seq_id)
if not os.path.exists(wd):
os.mkdir(wd)
if proteins:
fasta_path = wd + "/fasta_{}".format(seq_id)
if os.path.exists(fasta_path):
input_fasta = fasta_path
else:
input_seq_iterator = SeqIO.parse(input_fasta, "fasta")
cleaned_fasta = [record for record in input_seq_iterator if extract_id(record.id) in proteins]
SeqIO.write(cleaned_fasta, fasta_path, "fasta")
assert len(cleaned_fasta) == len(proteins)
input_fasta = fasta_path
# Add sequence for target not in the uniprotKB
if add_target:
cleaned_missing_target_sequence = Constants.ROOT + "uniprot/cleaned_missing_target_sequence.fasta"
if os.path.exists(cleaned_missing_target_sequence):
input_fasta = cleaned_missing_target_sequence
else:
missing_targets_205_now = []
missing_targets_195_200 = []
# Adding missing target sequence
all_list = set([extract_id(i.id) for i in (SeqIO.parse(input_fasta, "fasta"))])
extra_alpha_fold = alpha_seq_fasta_to_dictionary(Constants.ROOT + "uniprot/alphafold_sequences.fasta")
extra_trembl = fasta_to_dictionary(Constants.ROOT + "uniprot/target_and_sequence.fasta",
identifier='protein_id')
for ts in Constants.TEST_GROUPS:
ts_func_old = read_test_set("/data_bp/pycharm/TransFunData/data_bp/195-200/{}".format(ts))
ts_func_old = set([i[0] for i in ts_func_old])
ts_func_new = read_test_set("/data_bp/pycharm/TransFunData/data_bp/205-now/{}".format(ts))
ts_func_new = set([i[0] for i in ts_func_new])
print("Adding 195-200 {}".format(ts))
for _id in ts_func_old:
# Alphafold sequence always takes precedence
if _id not in all_list:
if _id in extra_alpha_fold:
_mp = extra_alpha_fold[_id]
missing_targets_195_200.append(SeqRecord(id=_mp[0].replace("AFDB:", "").
replace("AF-", "").
replace("-F1", ""),
name=_mp[1],
description=_mp[2],
seq=_mp[3]))
# print("found {} in alphafold".format(_id))
elif _id in extra_trembl:
_mp = extra_trembl[_id]
missing_targets_195_200.append(SeqRecord(id=_mp[0],
name=_mp[1],
description=_mp[2],
seq=_mp[3]))
# print("found {} in trembl".format(_id))
else:
print("Found in none for {}".format(_id))
print("Adding 205-now {}".format(ts))
for _id in ts_func_new:
# Alphafold sequence always takes precedence
if _id not in all_list:
if _id in extra_alpha_fold:
_mp = extra_alpha_fold[_id]
missing_targets_205_now.append(SeqRecord(id=_mp[0].replace("AFDB:", "").
replace("AF-", "").
replace("-F1", ""),
name=_mp[1],
description=_mp[2],
seq=_mp[3]))
# print("found {} in alphafold".format(_id))
elif _id in extra_trembl:
_mp = extra_trembl[_id]
missing_targets_205_now.append(SeqRecord(id=_mp[0],
name=_mp[1],
description=_mp[2],
seq=_mp[3]))
# print("found {} in trembl".format(_id))
else:
print("Found in none for {}".format(_id))
# save missing sequence
SeqIO.write(missing_targets_195_200, Constants.ROOT + "uniprot/{}.fasta".format("missing_targets_195_200"),
"fasta")
SeqIO.write(missing_targets_205_now, Constants.ROOT + "uniprot/{}.fasta".format("missing_targets_205_now"),
"fasta")
input_seq_iterator = list(SeqIO.parse(input_fasta, "fasta"))
SeqIO.write(input_seq_iterator + missing_targets_195_200 + missing_targets_205_now, Constants.ROOT +
"uniprot/{}.fasta".format("cleaned_missing_target_sequence"), "fasta")
input_fasta = cleaned_missing_target_sequence
# input_seq_iterator = SeqIO.parse(Constants.ROOT +
# "uniprot/{}.fasta".format("cleaned_missing_target_sequence"), "fasta")
#
# cleaned_fasta = set()
# for record in input_seq_iterator:
# if record.id.startswith("AFDB"):
# cleaned_fasta.add(record.id.split(':')[1].split('-')[1])
# else:
# cleaned_fasta.add(extract_id(record.id))
#
# print(len(collect_test() - cleaned_fasta), len(cleaned_fasta))
print("Number of proteins in cleaned_missing_target_sequence is {}".format(count_proteins(input_fasta)))
command = "mmseqs createdb {} {} ; " \
"mmseqs cluster {} {} tmp --min-seq-id {};" \
"mmseqs createtsv {} {} {} {}.tsv" \
"".format(input_fasta, "targetDB", "targetDB", "outputClu", seq_id, "targetDB", "targetDB",
"outputClu", "outputClu")
subprocess.call(command, shell=True, cwd="{}".format(wd))
one_line_format(wd + "/outputClu.tsv", wd)
def accession2sequence(fasta_file=""):
"""
Extract sequnce for each accession into dictionary.
:param fasta_file:
:return: None
"""
input_seq_iterator = SeqIO.parse(fasta_file, "fasta")
acc2seq = {extract_id(record.id): str(record.seq) for record in input_seq_iterator}
pickle_save(acc2seq, Constants.ROOT + "uniprot/acc2seq")
def collect_test_clusters(cluster_path):
# collect test and clusters
total_test = collect_test()
computed = pd.read_csv(cluster_path, names=['cluster'], header=None).to_dict()['cluster']
computed = {i: set(computed[i].split('\t')) for i in computed}
cafa3_cluster = set()
new_cluster = set()
train_cluster_indicies = []
for i in computed:
# cafa3
if total_test[0].intersection(computed[i]):
cafa3_cluster.update(computed[i])
# new set
elif total_test[1].intersection(computed[i]):
new_cluster.update(computed[i])
else:
train_cluster_indicies.append(i)
print(len(cafa3_cluster))
print(len(new_cluster))
exit()
return test_cluster, train_cluster_indicies
def write_output_files(protein2go, go2info, seq_id):
onts = ['molecular_function', 'biological_process', 'cellular_component']
selected_goterms = {ont: set() for ont in onts}
selected_proteins = set()
print("Number of GO terms is {} proteins is {}".format(len(go2info), len(protein2go)))
# for each go term count related proteins; if they are from 50 to 5000
# then we can add them to our data_bp.
for goterm in go2info:
prots = go2info[goterm]['accessions']
num = len(prots)
namespace = go2info[goterm]['ont']
if num >= 60:
selected_goterms[namespace].add(goterm)
selected_proteins = selected_proteins.union(prots)
# Convert the accepted go terms into list, so they have a fixed order
# Add the names of corresponding go terms.
selected_goterms_list = {ont: list(selected_goterms[ont]) for ont in onts}
selected_gonames_list = {ont: [go2info[goterm]['goname'] for goterm in selected_goterms_list[ont]] for ont in onts}
# print the count of each go term
for ont in onts:
print("###", ont, ":", len(selected_goterms_list[ont]))
terms = {}
for ont in onts:
terms['GO-terms-' + ont] = selected_goterms_list[ont]
terms['GO-names-' + ont] = selected_gonames_list[ont]
terms['GO-terms-all'] = selected_goterms_list['molecular_function'] + \
selected_goterms_list['biological_process'] + \
selected_goterms_list['cellular_component']
terms['GO-names-all'] = selected_gonames_list['molecular_function'] + \
selected_goterms_list['biological_process'] + \
selected_goterms_list['cellular_component']
pickle_save(terms, Constants.ROOT + 'go_terms')
fasta_dic = fasta_to_dictionary(Constants.ROOT + "uniprot/cleaned.fasta")
protein_list = set()
terms_count = {'mf': set(), 'bp': set(), 'cc': set()}
with open(Constants.ROOT + 'annot.tsv', 'wt') as out_file:
tsv_writer = csv.writer(out_file, delimiter='\t')
tsv_writer.writerow(["Protein", "molecular_function", "biological_process", "cellular_component", "all"])
for chain in selected_proteins:
goterms = set(protein2go[chain]['goterms'])
if len(goterms) > 2 and is_ok(str(fasta_dic[chain][3])):
# selected goterms
mf_goterms = goterms.intersection(set(selected_goterms_list[onts[0]]))
bp_goterms = goterms.intersection(set(selected_goterms_list[onts[1]]))
cc_goterms = goterms.intersection(set(selected_goterms_list[onts[2]]))
if len(mf_goterms) > 0 or len(bp_goterms) > 0 or len(cc_goterms) > 0:
terms_count['mf'].update(mf_goterms)
terms_count['bp'].update(bp_goterms)
terms_count['cc'].update(cc_goterms)
protein_list.add(chain)
tsv_writer.writerow([chain, ','.join(mf_goterms), ','.join(bp_goterms), ','.join(cc_goterms),
','.join(mf_goterms.union(bp_goterms).union(cc_goterms))])
assert len(terms_count['mf']) == len(selected_goterms_list['molecular_function']) \
and len(terms_count['mf']) == len(selected_goterms_list['molecular_function']) \
and len(terms_count['mf']) == len(selected_goterms_list['molecular_function'])
print("Creating Clusters")
cluster_path = Constants.ROOT + "{}/mmseq/final_clusters.csv".format(seq_id)
if not os.path.exists(cluster_path):
cluster_sequence(seq_id, protein_list, add_target=True)
# Remove test proteins & their cluster
# Decided to remove irrespective of mf, bp | cc
# It should be fine.
# test_cluster, train_cluster_indicies = collect_test_clusters(cluster_path)
# train_list = protein_list - test_cluster
# assert len(protein_list.intersection(test_cluster)) == len(protein_list.intersection(collect_test())) == 0
# print(len(protein_list), len(protein_list.intersection(cafa3)), len(protein_list.intersection(new_test)))
print("Getting test cluster")
cafa3, new_test = collect_test()
train_list = protein_list - (cafa3.union(new_test))
assert len(train_list.intersection(cafa3)) == len(train_list.intersection(new_test)) == 0
validation_len = 6000 #int(0.2 * len(protein_list))
validation_list = set()
for chain in train_list:
goterms = set(protein2go[chain]['goterms'])
mf_goterms = set(goterms).intersection(set(selected_goterms_list[onts[0]]))
bp_goterms = set(goterms).intersection(set(selected_goterms_list[onts[1]]))
cc_goterms = set(goterms).intersection(set(selected_goterms_list[onts[2]]))
if len(mf_goterms) > 0 and len(bp_goterms) > 0 and len(cc_goterms) > 0:
validation_list.add(chain)
if len(validation_list) >= validation_len:
break
pickle_save(validation_list, Constants.ROOT + '/{}/valid'.format(seq_id))
train_list = train_list - validation_list
print("Total number of train nrPDB=%d" % (len(train_list)))
annot = pd.read_csv(Constants.ROOT + 'annot.tsv', delimiter='\t')
for ont in onts + ['all']:
_pth = Constants.ROOT + '{}/{}'.format(seq_id, ont)
if not os.path.exists(_pth):
os.mkdir(_pth)
tmp = annot[annot[ont].notnull()][['Protein', ont]]
tmp_prot_list = set(tmp['Protein'].to_list())
tmp_prot_list = tmp_prot_list.intersection(train_list)
computed = pd.read_csv(cluster_path, names=['cluster'], header=None)
# train_indicies = computed.index.isin(train_cluster_indicies)
# computed = computed.loc[train_indicies].to_dict()['cluster']
computed = computed.to_dict()['cluster']
computed = {ont: set(computed[ont].split('\t')) for ont in computed}
new_computed = {}
index = 0
for i in computed:
_tmp = tmp_prot_list.intersection(computed[i])
if len(_tmp) > 0:
new_computed[index] = _tmp
index += 1
_train = set.union(*new_computed.values())
print("Total proteins for {} is {} in {} clusters".format(ont, len(_train), len(new_computed)))
assert len(cafa3.intersection(_train)) == 0 and len(validation_list.intersection(_train)) == 0
pickle_save(new_computed, _pth + '/train')
def pipeline(compare=False, curate_protein_goterms=False, generate_go_count=False,
generate_msa=False, generate_esm=False, seq_id=0.3):
"""
section 1
1. First compare the sequence in uniprot and alpha fold and retrieve same sequence and different sequences.
2. Replace mismatched sequences with alpha fold sequence & create the fasta from only alphafold sequences
3. Just another comparison to be sure, we have only alphafold sequences.
section 2
GO terms associated with a protein
section 3
1. Convert Fasta to dictionary
2. Read OBO graph
3. Get proteins and related go terms & go terms and associated proteins
:param generate_msa:
:param generate_esm:
:param generate_go_count:
:param curate_protein_goterms:
:param compare: Compare sequence between uniprot and alphafold
:return:
"""
# section 1
if compare:
compare_sequence(Constants.ROOT + "uniprot/uniprot_sprot.fasta", save=True) # 1
filtered_sequences(Constants.ROOT + "uniprot/uniprot_sprot.fasta") # 2 create cleaned.fasta
compare_sequence(Constants.ROOT + "uniprot/cleaned.fasta", save=False) # 3
# section 2
if curate_protein_goterms:
get_protein_go(uniprot_sprot_dat=Constants.ROOT + "uniprot/uniprot_sprot.dat",
save_path=Constants.ROOT + "protein2go.csv") # 4 contains proteins and go terms.
# section 3
if generate_go_count:
cleaned_proteins = fasta_to_dictionary(Constants.ROOT + "uniprot/cleaned.fasta")
go_graph = obonet.read_obo(open(Constants.ROOT + "obo/go-basic.obo", 'r')) # 5
protein2go, go2info = generate_go_counts(fname=Constants.ROOT + "protein2go.csv", go_graph=go_graph,
cleaned_proteins=list(cleaned_proteins.keys()))
pickle_save(protein2go, Constants.ROOT + "protein2go")
pickle_save(go2info, Constants.ROOT + "go2info")
protein2go = pickle_load(Constants.ROOT + "protein2go")
go2info = pickle_load(Constants.ROOT + "go2info")
print("Writing output for sequence identity {}".format(seq_id))
write_output_files(protein2go, go2info, seq_id=seq_id)
if generate_msa:
fasta_file = Constants.ROOT + "cleaned.fasta"
protein2go_primary = set(protein2go)
fasta_for_msas(protein2go_primary, fasta_file)
if generate_esm:
fasta_file = Constants.ROOT + "cleaned.fasta"
fasta_for_esm(protein2go, fasta_file)
# print(count_proteins(Constants.ROOT + "uniprot/{}.fasta".format("target_and_sequence")))
#
seq = [0.3, 0.5, 0.9, 0.95]
for i in seq:
pipeline(compare=False,
curate_protein_goterms=False,
generate_go_count=False,
generate_msa=False,
generate_esm=False,
seq_id=i)
exit()
groups = ['molecular_function', 'cellular_component', 'biological_process']
for i in seq:
for j in groups:
train = pd.read_pickle(Constants.ROOT + "{}/{}/train.pickle".format(i, j))
valid = set(pd.read_pickle(Constants.ROOT + "{}/{}/valid.pickle".format(i, j)))
test_cluster,_ = collect_test_clusters(Constants.ROOT + "{}/mmseq/final_clusters.csv".format(i))
test = collect_test()
print(i, j, len(test_cluster), len(test), len(test_cluster - test), len(test - test_cluster))
# assert len(train.intersection(test_cluster)) == 0
# assert len(train.intersection(test)) == 0
assert len(valid.intersection(test_cluster)) == 0
assert len(valid.intersection(test)) == 0
| Python |
3D | jianlin-cheng/TransFun | preprocessing/utils.py | .py | 13,800 | 405 | import math
import os, subprocess
import shutil
import pandas as pd
import torch
from Bio import SeqIO
import pickle
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from biopandas.pdb import PandasPdb
from collections import deque, Counter
import csv
from sklearn.metrics import roc_curve, auc
from torchviz import make_dot
import Constants
from Constants import INVALID_ACIDS, amino_acids
def extract_id(header):
return header.split('|')[1]
def count_proteins(fasta_file):
num = len([1 for line in open(fasta_file) if line.startswith(">")])
return num
def read_dictionary(file):
reader = csv.reader(open(file, 'r'), delimiter='\t')
d = {}
for row in reader:
k, v = row[0], row[1]
d[k] = v
return d
def create_seqrecord(id="", name="", description="", seq=""):
record = SeqRecord(Seq(seq), id=id, name=name, description=description)
return record
# Count the number of protein sequences in a fasta file with biopython -- slower.
def count_proteins_biopython(fasta_file):
num = len(list(SeqIO.parse(fasta_file, "fasta")))
return num
def get_proteins_from_fasta(fasta_file):
proteins = list(SeqIO.parse(fasta_file, "fasta"))
# proteins = [i.id.split("|")[1] for i in proteins]
proteins = [i.id for i in proteins]
return proteins
def fasta_to_dictionary(fasta_file, identifier='protein_id'):
if identifier == 'protein_id':
loc = 1
elif identifier == 'protein_name':
loc = 2
data = {}
for seq_record in SeqIO.parse(fasta_file, "fasta"):
if "|" in seq_record.id:
data[seq_record.id.split("|")[loc]] = (
seq_record.id, seq_record.name, seq_record.description, seq_record.seq)
else:
data[seq_record.id] = (seq_record.id, seq_record.name, seq_record.description, seq_record.seq)
return data
def cafa_fasta_to_dictionary(fasta_file):
data = {}
for seq_record in SeqIO.parse(fasta_file, "fasta"):
data[seq_record.description.split(" ")[0]] = (
seq_record.id, seq_record.name, seq_record.description, seq_record.seq)
return data
def alpha_seq_fasta_to_dictionary(fasta_file):
data = {}
for seq_record in SeqIO.parse(fasta_file, "fasta"):
_protein = seq_record.id.split(":")[1].split("-")[1]
data[_protein] = (seq_record.id, seq_record.name, seq_record.description, seq_record.seq)
return data
def pickle_save(data, filename):
with open('{}.pickle'.format(filename), 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
def pickle_load(filename):
with open('{}.pickle'.format(filename), 'rb') as handle:
return pickle.load(handle)
def download_msa_database(url, name):
database_path = "./msa/hh_suite_database/{}".format(name)
if not os.path.isdir(database_path):
os.mkdir(database_path)
# download database, note downloading a ~1Gb file can take a minute
database_file = "{}/{}.tar.gz".format(database_path, name)
subprocess.call('wget -O {} {}'.format(database_file, url), shell=True)
# unzip the database
subprocess.call('tar xzvf {}.tar.gz'.format(name), shell=True, cwd="{}".format(database_path))
def search_database(file, database):
base_path = "./msa/{}"
output_path = base_path.format("outputs/{}.hhr".format(file))
input_path = base_path.format("inputs/{}.fasta".format(file))
oa3m_path = base_path.format("oa3ms/{}.03m".format(file))
database_path = base_path.format("hh_suite_database/{}/{}".format(database, database))
if not os.path.isfile(oa3m_path):
subprocess.call(
'hhblits -i {} -o {} -oa3m {} -d {} -cpu 4 -n 1'.format(input_path, output_path, oa3m_path, database_path),
shell=True)
# Just used to group msas to keep track of generation
def partition_files(group):
from glob import glob
dirs = glob("/data_bp/fasta_files/{}/*/".format(group), recursive=False)
for i in enumerate(dirs):
prt = i[1].split('/')[4]
if int(i[0]) % 100 == 0:
current = "/data_bp/fasta_files/{}/{}".format(group, (int(i[0]) // 100))
if not os.path.isdir(current):
os.mkdir(current)
old = "/data_bp/fasta_files/{}/{}".format(group, prt)
new = current + "/{}".format(prt)
if old != current:
os.rename(old, new)
# Just used to group msas to keep track of generation
def fasta_for_msas(proteins, fasta_file):
root_dir = '/data_bp/uniprot/'
input_seq_iterator = SeqIO.parse(fasta_file, "fasta")
num_protein = 0
for record in input_seq_iterator:
if num_protein % 200 == 0:
parent_dir = root_dir + str(int(num_protein / 200))
print(parent_dir)
if not os.path.exists(parent_dir):
os.mkdir(parent_dir)
protein = extract_id(record.id)
if protein in proteins:
protein_dir = parent_dir + '/' + protein
if not os.path.exists(protein_dir):
os.mkdir(protein_dir)
SeqIO.write(record, protein_dir + "/{}.fasta".format(protein), "fasta")
# Files to generate esm embedding for.
def fasta_for_esm(proteins, fasta_file):
protein_path = Constants.ROOT + "uniprot/{}.fasta".format("filtered")
input_seq_iterator = SeqIO.parse(fasta_file, "fasta")
filtered_seqs = [record for record in input_seq_iterator if extract_id(record.id) in proteins]
if not os.path.exists(protein_path):
SeqIO.write(filtered_seqs, protein_path, "fasta")
def get_sequence_from_pdb(pdb_file, chain_id):
pdb_to_pandas = PandasPdb().read_pdb(pdb_file)
pdb_df = pdb_to_pandas.df['ATOM']
assert (len(set(pdb_df['chain_id'])) == 1) & (list(set(pdb_df['chain_id']))[0] == chain_id)
pdb_df = pdb_df[(pdb_df['atom_name'] == 'CA') & ((pdb_df['chain_id'])[0] == chain_id)]
pdb_df = pdb_df.drop_duplicates()
residues = pdb_df['residue_name'].to_list()
residues = ''.join([amino_acids[i] for i in residues if i != "UNK"])
return residues
def is_ok(seq, MINLEN=49, MAXLEN=1022):
"""
Checks if sequence is of good quality
:param MAXLEN:
:param MINLEN:
:param seq:
:return: None
"""
if len(seq) < MINLEN or len(seq) >= MAXLEN:
return False
for c in seq:
if c in INVALID_ACIDS:
return False
return True
def is_cafa_target(org):
return org in Constants.CAFA_TARGETS
def is_exp_code(code):
return code in Constants.exp_evidence_codes
def read_test_set(file_name):
with open(file_name) as file:
lines = file.readlines()
lines = [line.rstrip('\n').split("\t")[0] for line in lines]
return lines
def read_test_set_x(file_name):
with open(file_name) as file:
lines = file.readlines()
lines = [line.rstrip('\n').split("\t") for line in lines]
return lines
def read_test(file_name):
with open(file_name) as file:
lines = file.readlines()
lines = [line.rstrip('\n') for line in lines]
return lines
def collect_test():
cafa3 = pickle_load(Constants.ROOT + "test/test_proteins_list")
cafa3 = set([i[0] for i in cafa3])
new_test = set()
for ts in Constants.TEST_GROUPS:
# tmp = read_test_set(Constants.ROOT + "test/195-200/{}".format(ts))
# total_test.update(set([i[0] for i in tmp]))
tmp = read_test_set(Constants.ROOT + "test/205-now/{}".format(ts))
new_test.update(set([i[0] for i in tmp]))
return cafa3, new_test
def test_annotation():
# Add annotations for test set
data = {}
for ts in Constants.TEST_GROUPS:
tmp = read_test_set("/data_bp/pycharm/TransFunData/data_bp/195-200/{}".format(ts))
for i in tmp:
if i[0] in data:
data[i[0]][ts].add(i[1])
else:
data[i[0]] = {'LK_bpo': set(), 'LK_mfo': set(), 'LK_cco': set(), 'NK_bpo': set(), 'NK_mfo': set(),
'NK_cco': set()}
data[i[0]][ts].add(i[1])
tmp = read_test_set("/data_bp/pycharm/TransFunData/data_bp/205-now/{}".format(ts))
for i in tmp:
if i[0] in data:
data[i[0]][ts].add(i[1])
else:
data[i[0]] = {'LK_bpo': set(), 'LK_mfo': set(), 'LK_cco': set(), 'NK_bpo': set(), 'NK_mfo': set(),
'NK_cco': set()}
data[i[0]][ts].add(i[1])
return data
# GO terms for test set.
def get_test_classes():
data = set()
for ts in Constants.TEST_GROUPS:
tmp = read_test_set("/data_bp/pycharm/TransFunData/data_bp/195-200/{}".format(ts))
for i in tmp:
data.add(i[1])
tmp = read_test_set("/data_bp/pycharm/TransFunData/data_bp/205-now/{}".format(ts))
for i in tmp:
data.add(i[1])
return data
def create_cluster(seq_identity=None):
def get_position(row, pos, column, split):
primary = row[column].split(split)[pos]
return primary
computed = pd.read_pickle(Constants.ROOT + 'uniprot/set1/swissprot.pkl')
computed['primary_accession'] = computed.apply(lambda row: get_position(row, 0, 'accessions', ';'), axis=1)
annotated = pickle_load(Constants.ROOT + "uniprot/anotated")
def max_go_terms(row):
members = row['cluster'].split('\t')
largest = 0
max = 0
for index, value in enumerate(members):
x = computed.loc[computed['primary_accession'] == value]['prop_annotations'].values # .tolist()
if len(x) > 0:
if len(x[0]) > largest:
largest = len(x[0])
max = index
return members[max]
if seq_identity is not None:
src = "/data_bp/pycharm/TransFunData/data_bp/uniprot/set1/mm2seq_{}/max_term".format(seq_identity)
if os.path.isfile(src):
cluster = pd.read_pickle(src)
else:
cluster = pd.read_csv("/data_bp/pycharm/TransFunData/data_bp/uniprot/set1/mm2seq_{}/final_clusters.tsv"
.format(seq_identity), names=['cluster'], header=None)
cluster['rep'] = cluster.apply(lambda row: get_position(row, 0, 'cluster', '\t'), axis=1)
cluster['max'] = cluster.apply(lambda row: max_go_terms(row), axis=1)
cluster.to_pickle("/data_bp/pycharm/TransFunData/data_bp/uniprot/set1/mm2seq_{}/max_term".format(seq_identity))
cluster = cluster['max'].to_list()
computed = computed[computed['primary_accession'].isin(cluster)]
return computed
def class_distribution_counter(**kwargs):
"""
Count the number of proteins for each GO term in training set.
"""
data = pickle_load(Constants.ROOT + "{}/{}/{}".format(kwargs['seq_id'], kwargs['ont'], kwargs['session']))
all_proteins = []
for i in data:
all_proteins.extend(data[i])
annot = pd.read_csv(Constants.ROOT + 'annot.tsv', delimiter='\t')
annot = annot.where(pd.notnull(annot), None)
annot = annot[annot['Protein'].isin(all_proteins)]
annot = pd.Series(annot[kwargs['ont']].values, index=annot['Protein']).to_dict()
terms = []
for i in annot:
terms.extend(annot[i].split(","))
counter = Counter(terms)
# for i in counter.most_common():
# print(i)
# print("# of ontologies is {}".format(len(counter)))
return counter
def save_ckp(state, is_best, checkpoint_path, best_model_path):
"""
state: checkpoint we want to save
is_best: is this the best checkpoint; min validation loss
checkpoint_path: path to save checkpoint
best_model_path: path to save best model
"""
f_path = checkpoint_path
# save checkpoint data_bp to the path given, checkpoint_path
torch.save(state, f_path)
# if it is a best model, min validation loss
if is_best:
best_fpath = best_model_path
# copy that checkpoint file to best path given, best_model_path
shutil.copyfile(f_path, best_fpath)
def load_ckp(checkpoint_fpath, model, optimizer, device):
"""
checkpoint_path: path to save checkpoint
model: model that we want to load checkpoint parameters into
optimizer: optimizer we defined in previous training
"""
# load check point
checkpoint = torch.load(checkpoint_fpath, map_location=torch.device(device))
# initialize state_dict from checkpoint to model
model.load_state_dict(checkpoint['state_dict'])
# initialize optimizer from checkpoint to optimizer
optimizer.load_state_dict(checkpoint['optimizer'])
# initialize valid_loss_min from checkpoint to valid_loss_min
valid_loss_min = checkpoint['valid_loss_min']
# return model, optimizer, epoch value, min validation loss
return model, optimizer, checkpoint['epoch'], valid_loss_min
def draw_architecture(model, data_batch):
'''
Draw the network architecture.
'''
output = model(data_batch)
make_dot(output, params=dict(model.named_parameters())).render("rnn_lstm_torchviz", format="png")
def compute_roc(labels, preds):
# Compute ROC curve and ROC area for each class
fpr, tpr, _ = roc_curve(labels.flatten(), preds.flatten())
roc_auc = auc(fpr, tpr)
return roc_auc
def generate_bulk_embedding(path_to_extract_file, fasta_file, output_dir):
subprocess.call('python {} esm1b_t33_650M_UR50S {} {} --repr_layers 0 32 33 '
'--include mean per_tok --truncate'.format(path_to_extract_file,
"{}".format(fasta_file),
"{}".format(output_dir)),
shell=True) | Python |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.