Dataset columns: url (string, 14–2.42k chars), text (string, 100–1.02M chars), date (string, 19 chars), metadata (string, 1.06k–1.1k chars).
https://math.stackexchange.com/questions/1462977/how-to-state-that-a-sequence-is-cauchy-in-terms-of-limsup-and-liminf
# How to state that a sequence is Cauchy in terms of $\limsup$ and $\liminf$?

How can one state that a sequence is Cauchy in terms of $\limsup$ and $\liminf$? For example, is it true that a sequence $(a_n)_{n=1}^{\infty}$ is Cauchy iff $\displaystyle\limsup_{n\to\infty}|a_{n+k}-a_n|=0$ for all $k\in\mathbb{N}$?

• I'm not sure why you used the limsup there: because the absolute value can never be lower than $0$, if the limsup equals $0$ then the ordinary limit exists and equals $0$. Oct 3 '15 at 20:40
• But in $\mathbb{R}$ you can say that a sequence is Cauchy iff its limsup equals its liminf (as a finite number). Oct 3 '15 at 20:45

The condition that you state is not equivalent to being Cauchy. To see this, consider $$a_n =\sum_{t=1}^n \frac{1}{t}.$$ Then $$0\leq a_{n+k} - a_n =\sum_{t=n+1}^{n+k} \frac{1}{t}\leq \sum_{t=n+1}^{n+k}\frac{1}{n}\leq \frac{k}{n}\to 0$$ as $n\to\infty$ for every fixed $k$, but as is well known, $a_n \to \infty$, so the sequence is not Cauchy.

As noted by @Börge, a real sequence $(a_n)_n$ is Cauchy if and only if it is convergent, which holds if and only if $\limsup_n a_n = \liminf_n a_n \in \Bbb{R}$.

Another way would be to require $$a_{n+k}-a_n \to 0$$ as $n \to \infty$, uniformly with respect to $k$.
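For completeness (this restatement is not part of the original answer, but follows directly from the definition of a Cauchy sequence): the uniformity in $k$ can itself be packaged into a single limit,
$$(a_n)_{n\ge 1}\ \text{is Cauchy}\iff\lim_{n\to\infty}\,\sup_{k\in\mathbb{N}}\bigl|a_{n+k}-a_n\bigr|=0,$$
and since the quantity inside the limit is nonnegative, the limit may equivalently be written as a $\limsup$. The counterexample above fails this test: $\sup_{k}\,(a_{n+k}-a_n)=\sum_{t>n}\frac{1}{t}=\infty$ for every $n$.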
2021-10-19 22:24:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283337593078613, "perplexity": 102.75534302441105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00420.warc.gz"}
https://www.posdefecology.net/post/compiling-r/
# Prerequisites

First, request a build node, which will have internet access. An hour of time should be enough to install R and a few packages. Building R (especially the suggested packages) can be sped up by using multiple build jobs.

```sh
srun -p build --mem=10G --cpus-per-task=8 --time=1:00:00 --pty /bin/zsh
```

To use the Intel compilers and link to MKL on Hyak, load the relevant module. In early 2020, this is accomplished by running

```sh
module load icc_19
```

I add this to my .bashrc and .zshrc so that the module is loaded every time I log in. To avoid warnings about building PDF documentation, load TeXLive as well.

```sh
module load contrib/texlive/2017
```

Next, download the R source from CRAN and extract the contents. For R v3.6.2, this is done using wget.

```sh
wget https://cran.r-project.org/src/base/R-3/R-3.6.2.tar.gz
tar -xvf R-3.6.2.tar.gz
```

## Configuration

Following the Intel instructions for building R with MKL, we need to source the compilervars.sh script and set a number of environment variables. This can be done in a script for convenience. Some packages (e.g. TMB) will break if you include OpenMP in the standard compiler flags, so these are moved to the more appropriate *_OPENMP_* environment variables. The following takes care of these steps.

```sh
source /sw/intel-2019/bin/compilervars.sh intel64
export CC="icc"
export CXX="icpc"
export F77="ifort"
export FC="ifort"
export AR="xiar"
export LD="xild"
export CFLAGS="-fPIC -O3 -ipo -xHost -multiple-processes=8"
export CXXFLAGS="-fPIC -O3 -ipo -xHost -multiple-processes=8"
export FFLAGS="-fPIC -O3 -ipo -xHost -multiple-processes=8"
export FCFLAGS="-fPIC -O3 -ipo -xHost -multiple-processes=8"
export LDFLAGS=""
# OpenMP flags? Need to be here instead of above or TMB breaks?
export R_OPENMP_CFLAGS="-qopenmp"
export SHLIB_OPENMP_CFLAGS="-qopenmp"
export SHLIB_OPENMP_CXXFLAGS="-qopenmp"
export SHLIB_OPENMP_FFLAGS="-qopenmp"
```

Finally, run the configure script inside the R-* directory. The --prefix flag should be changed to the directory where the built version should be installed.

```sh
./configure --prefix=/gscratch/*/bin/R-3.6.2 \
  --enable-R-shlib \
  --with-blas=$MKL \
  --with-lapack \
  --enable-BLAS-shlib
```

# Build and install R

This part is pretty easy once configuration is complete. The -j flag can be used to specify the maximum number of parallel jobs to run. This should be equal to the --cpus-per-task value specified when you requested the build node.

```sh
make -j8
```

Installation can be accomplished using

```sh
make install
```

Finally, add the R and Rscript executables to your PATH.

# Testing the installation

In order to use multiple cores on a compute node, the --cpus-per-task flag must be greater than one. Note that --tasks-per-node does not allow multiple threads to be used by a single process. Watching CPU usage with e.g. htop will tell you whether multithreaded linear algebra is working. So, using an interactive compute node (such as stf-int), open your newly compiled version of R and run

```r
X <- matrix(rnorm(5000 * 5000), nrow = 5000)
t(X) %*% X
```

You should see multiple CPUs being used at 100% in htop. If only one CPU is maxed out, something went wrong.
2020-04-02 05:18:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35778477787971497, "perplexity": 9583.58111360192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00215.warc.gz"}
https://www.nature.com/articles/s41467-021-21394-y?error=cookies_not_supported&code=2d250674-1086-4a4e-9c66-38751748d670
# Determinants of genome-wide distribution and evolution of uORFs in eukaryotes

An Author Correction to this article was published on 31 March 2021.

## Abstract

Upstream open reading frames (uORFs) play widespread regulatory roles in modulating mRNA translation in eukaryotes, but the principles underlying the genomic distribution and evolution of uORFs remain poorly understood. Here, we analyze ~17 million putative canonical uORFs in 478 eukaryotic species that span most of the extant taxa of eukaryotes. We demonstrate how positive and purifying selection, coupled with differences in effective population size (Ne), has shaped the contents of uORFs in eukaryotes. In addition, gene expression level is important in influencing uORF occurrences across genes in a species. Our analyses suggest that most uORFs might play regulatory roles rather than encode functional peptides. We also show that the Kozak sequence context of uORFs has evolved across eukaryotic clades, and that noncanonical uORFs tend to have weaker suppressive effects than canonical uORFs in translation regulation. This study provides insights into the driving forces underlying uORF evolution in eukaryotes.

## Introduction

Upstream open reading frames (uORFs) are short open reading frames (ORFs) that have start codons located in the 5′ untranslated regions (UTRs) of eukaryotic mRNAs. uORFs can attenuate the translational initiation of downstream coding sequences (CDSs) by sequestering or competing for ribosomes1,2,3,4,5,6,7,8,9,10. An AUG triplet in the 5′ UTR (defined as "uAUG" hereafter) can function as the start codon of a uORF that has a stop codon either preceding the start codon of the downstream CDS (nonoverlapping uORF, nORF) or residing in the body of the downstream CDS (out-of-frame overlapping uORF, oORF)4,11,12,13,14,15,16,17,18. Less frequently, a uAUG can function as the start codon of an ORF whose stop codon overlaps with the stop codon of the downstream CDS (N-terminal extension, NTE)4,19,20,21. The advent of ribosome profiling22,23,24,25,26, a method that determines ribosome occupancy on mRNAs at the codon level, has enabled the genome-wide characterization of uORFs and NTEs that show evidence of translation in various species with high sensitivity and accuracy12,16,17,27,28,29,30,31,32,33,34,35,36. Besides the canonical uORFs (beginning with an AUG start codon and ending with a UAA/UAG/UGA stop codon), modified ribosome profiling methods4,37, which detect initiating ribosomes in cells treated with harringtonine32,34 or lactimidomycin38,39,40, have provided further evidence that many noncanonical uORFs (beginning with a non-AUG codon and ending with a UAA/UAG/UGA stop codon) might be prevalent and functionally important. Collectively, recent studies have demonstrated that uORFs are prevalently translated in eukaryotic cells and that uORF-mediated regulation plays important roles in tuning the translational program during development32,41,42,43,44,45 or stress responses10,27,46,47,48,49,50,51,52,53,54,55.
It is well accepted that canonical uORFs are generally deleterious and are depleted in the 5′ UTRs of eukaryotic genomes56,57,58,59,60, and mutations that generate polymorphic uORFs are also usually deleterious and selected against in humans19,61,62,63,64,65,66 and flies32. Nevertheless, our recent study indicated that many uAUGs recently fixed in Drosophila melanogaster were driven by positive Darwinian selection32, which suggests that some uORFs and NTEs might be adaptive. Despite this exciting progress, the principles underlying the genomic distribution and evolution of uORFs and NTEs are poorly understood. For example, the following questions remain unanswered: (1) What is the role of natural selection in shaping the genome-wide contents of uORFs and NTEs in eukaryotes at the micro- and macroevolutionary scales? (2) Can we detect signatures of positive selection on uORFs and NTEs in clades other than Drosophila? (3) Are the sequence characteristics that influence the efficacy of uORF-mediated translational repression conserved between different eukaryotic species? Answers to these questions will not only help elucidate the role of translational regulation in adaptation, but also advance our understanding of the mechanisms underlying protein homeostasis in health and disease. Here, we systematically characterize 16,907,129 uAUGs in 478 eukaryotic species and explore various factors and forces that determine the genome-wide distributions of uORFs and NTEs across genes and species. Our results suggest that differences in uORF occurrences across genes are mainly influenced by gene expression levels, while the interspecific variability of uORFs is shaped by the effective population size (Ne). We also compare the conservation patterns of start codons versus coding regions of the canonical uORFs in different clades, disentangle the relationship between the Kozak sequence context and the translational efficiency of uORFs, and explore the evolution of Kozak contextual characteristics across eukaryotes. Our analyses present a broad overview of the interspecies variability of uAUGs in eukaryotes and provide insights into the general principles underlying the distribution and sequence evolution of uORFs and NTEs in eukaryotes.

## Results

### Characterization of putative canonical uORFs and NTEs in 478 eukaryotes

We developed a bioinformatic pipeline and characterized uAUGs in the genomes of 478 eukaryotic species, including 242 fungi, 20 protists, and 216 multicellular eukaryotes that comprise plants and animals. As most species surveyed in this study currently have no ribosome profiling data, and it is very challenging to reliably predict noncanonical uORFs in silico, we focused only on the putative canonical uORFs that start with the AUG start codon. In what follows, the uORFs analyzed in this study are restricted to the putative canonical uORFs unless explicitly stated otherwise (all the annotated uORFs and NTEs are presented in figshare67). The number of annotated protein-coding genes in the 242 fungi ranged from 3623 (Pneumocystis murina) to 32,847 (Fibularhizoctonia sp.). A total of 3,469,095 uAUGs were identified in these fungal genomes, with the number ranging from 1233 (Malassezia sympodialis) to 94,695 (Verticillium longisporum) (Supplementary Data 1).
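For readers who want the classification logic spelled out: a uAUG starts a nORF, an oORF, or an NTE depending on where its first in-frame stop codon falls relative to the main CDS and whether it shares the CDS reading frame. The following minimal Python sketch illustrates that rule on a single transcript; it is a simplified illustration of ours (toy sequences and function names are hypothetical), not the paper's actual annotation pipeline, which works from Ensembl gene models.

```python
# Sketch: classify each uAUG as starting a nonoverlapping uORF (nORF),
# an out-of-frame overlapping uORF (oORF), or an N-terminal extension (NTE).
STOPS = {"TAA", "TAG", "TGA"}

def classify_uaugs(utr5: str, cds: str):
    """Return (uAUG position in the 5' UTR, class) for every upstream AUG."""
    tx = (utr5 + cds).upper()
    cds_start = len(utr5)
    results = []
    for pos in range(len(utr5) - 2):
        if tx[pos:pos + 3] != "ATG":
            continue
        # Walk codon by codon from the uAUG until the first in-frame stop.
        stop = None
        for i in range(pos, len(tx) - 2, 3):
            if tx[i:i + 3] in STOPS:
                stop = i  # position of the stop codon's first base
                break
        if stop is None:
            continue  # no in-frame stop within the transcript
        in_frame_with_cds = (pos - cds_start) % 3 == 0
        if stop + 3 <= cds_start:
            kind = "nORF"   # terminates before the main CDS
        elif in_frame_with_cds:
            kind = "NTE"    # shares the CDS frame and its stop codon
        else:
            kind = "oORF"   # stop codon lies inside the CDS, out of frame
        results.append((pos, kind))
    return results

# Toy example (hypothetical sequences):
print(classify_uaugs("GCATGGCTTAACC", "ATGGCCAAATAA"))
```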
Since many protists use alternative nuclear genetic codes involving stop-codon reassignments68,69,70,71,72,73 or obligatory frameshifting at internal stop codons74, here we only focused on 20 protists that use the standard genetic code (Supplementary Data 1). Among the 20 protists, the number of annotated protein-coding genes ranged from 5389 (Plasmodium vivax) to 38,544 (Emiliania huxleyi), and the number of uAUGs ranging from 1903 (Plasmodium falciparum) to 99,859 (Cystoisospora suis), which resulted in a total of 391,565 uAUGs in these protist genomes (Supplementary Data 1). The 216 multicellular plants and animals, whose last common ancestor was dated to 1.5 billion years ago75, span the following taxa: (1) 41 plants, including mosses, eudicotyledons, and monocotyledons; (2) 38 invertebrates, including sponges, ctenophores, flatworms, cephalopods, mites, centipedes, crustaceans, springtails, insects, and tunicates; and (3) 137 vertebrates, including hagfishes, fishes, coelacanths, toads, lizards, turtles, crocodiles, birds, and mammals (Fig. 1). Among these species, mammals, including monotremes (n = 1), marsupials (n = 3), an armadillo (n = 1), laurasiatherians (n = 17), a rabbit (n = 1), rodents (n = 21), and primates (n = 24), constituted the largest clade (n = 68 species). Nematodes were excluded from the analyses because trans-splicing alters the 5′ UTR sequences of many mRNAs in their transcriptomes45,76. The number of annotated protein-coding genes in the 216 multicellular plants and animals ranged from 10,581 (Bombus terrestris) to 107,545 (Triticum aestivum), and 3388 (Schistosoma mansoni) to 68,741 (T. aestivum) of these genes exhibited annotated 5′ UTRs for at least one transcript (Fig. 1a and Supplementary Data 1). In these species, the annotated 5′ UTRs were usually shorter than 500 nt (the median length of the annotated 5′ UTRs ranged from 22 nt in Arabidopsis lyrata to 477 nt in Physcomitrella patens; Supplementary Fig. S1 and Supplementary Data 1). The number of uAUGs ranged from 3249 (Drosophila willistoni) to 798,433 (Theobroma cacao). Altogether, we identified a total of 13,046,469 uAUGs in the 216 multicellular plants and animals, although the number varied greatly across species. The vast majority (>97%) of the uAUGs identified in the 478 eukaryotic species were start codons of putative canonical uORFs. Specifically, in a species, the percentage (mean ± s.e.) of nORFs, oORFs, and NTEs was 83.45 ± 0.41%, 14.24 ± 0.34%, and 2.31 ± 0.15%, respectively. The detailed information for the uORFs (nORFs and oORFs) and NTEs is presented in Supplementary Data 1. ### Purifying selection is the major force shaping the prevalence of uAUGs in eukaryotic genomes The number of uAUGs varied wildly across species, either due to the differences in the sequencing coverage of genomes, the accuracy and completeness of 5′ UTR annotation, the number of protein-coding genes, the length of 5′ UTRs, or mutational bias in 5′ UTRs77. To control for the compounding factors, in each species, we compared the observed number of uAUGs (O) versus the expected number (E) that was obtained with the assumption of randomness by randomly shuffling the 5′ UTR sequences. We maintained the same dinucleotide frequencies in each sequence during shuffling for two reasons. First, the stacking energy of a new base pair is influenced by the neighboring base pairs in an RNA molecule78,79. 
Second, the biased mutations in certain dinucleotide contexts, such as from CpG to TpG mutations in mammals, might also affect the occurrences of uAUGs. The O/E ratio enabled the efficient measurement of selective pressure on uAUG depletion in a given species. As expected32,56,57,58,59, the O/E ratio of uAUGs was significantly lower than 1 in nearly all the examined species (473 out of 478 species, Fig. 1a and Supplementary Data 1). As a negative control, we also calculated the O/E ratio of all the other 63 possible triplets in 5′ UTRs and 3′ UTRs separately in each species. Of note, AUG had the lowest relative O/E ratio (5′ UTRs over 3′ UTRs) among all the 64 possible triplets (Supplementary Fig. S2), supporting the notion that purifying selection is the major force shaping the prevalence of uAUGs in the eukaryotic genomes. Interestingly, some AUG-like triplets (e.g., AUU, UUG, AUC, and GUG) tended to have higher O/E ratios in 5′ UTRs than in 3′ UTRs in all the clades. Such AUG-like triplets were either selectively maintained in 5′ UTRs as they can be used as noncanonical start codons, or alternatively, were the consequence of the depletion of uAUGs because point mutations can easily convert AUG to AUG-like triplets (e.g., from AUG → UUG) in the 5′ UTRs. However, further studies are required to separate these two possibilities. Within a species, the O/E ratio of uAUGs was significantly lower in the 5′ UTR regions within a distance L from the start codons of CDSs (cAUGs) than in the remaining 5′ UTR regions (P = 3.5 × 10−37, two-sided Wilcoxon signed-rank test when L was set to 100 nt; other values of L did not affect the conclusion, see Supplementary Fig. S3). This pattern is consistent with previous observations that uAUGs closer to CDSs showed a higher tendency to be depleted from 5′ UTRs57. Notably, the O/E ratio of oORFs was significantly lower than that of nORFs (Supplementary Fig. S4), suggesting oORFs tend to be more repressive and thus under stronger purifying selection than nORFs. Interestingly, NTEs showed lower O/E ratios than both oORFs and nORFs in 457 out of 478 species (Supplementary Fig. S4), suggesting that novel NTEs were selected against as they might alter protein functions21. X-linked mutations experience stronger selection than autosomal mutations if the fitness effects of the mutations are (partially) recessive80,81,82. If purifying selection is the dominant force acting on the occurrences of uAUGs in a genome, we expect to observe lower O/E ratios of uAUGs on X chromosomes than on autosomes. Indeed, significantly lower O/E ratios of uAUGs were found in X chromosomes than in autosomes, and this finding was obtained with both vertebrates and insects (Fig. 1b). In birds, which present female heterogamety (males ZZ, females ZW), selection is more efficient on the Z chromosome than autosomes83. Accordingly, a slightly lower O/E ratio of uAUGs was observed on the Z chromosome than on autosomes (Fig. 1b). Thus, the comparison between sex chromosomes and autosomes reinforces the thesis that purifying selection is the major force governing the prevalence of uORFs and NTEs in eukaryotes. Overall, these results suggest that uAUGs were selected against in 5′ UTRs, and the NTEs, which only accounted for a small fraction (~2.31% on average) of the uAUGs, were also shaped by strong purifying selection during evolution. 
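The O/E statistic used above is simply the observed uAUG count divided by the count expected from shuffled 5′ UTRs that retain dinucleotide composition. The paper uses uShuffle for an exact dinucleotide-preserving permutation; the sketch below approximates that step with a first-order Markov resample of each UTR (an approximation of ours, not the published algorithm), and only illustrates how the ratio and its permutation interval are assembled.

```python
import random
from collections import defaultdict

def markov_shuffle(seq: str) -> str:
    """Resample a sequence from its own first-order (dinucleotide) Markov
    model. This approximates, but does not exactly preserve, dinucleotide
    counts; the paper uses uShuffle for an exact dinucleotide shuffle."""
    if len(seq) < 2:
        return seq
    nxt = defaultdict(list)
    for a, b in zip(seq, seq[1:]):
        nxt[a].append(b)
    out = [seq[0]]
    for _ in range(len(seq) - 1):
        choices = nxt.get(out[-1])
        out.append(random.choice(choices) if choices else random.choice(seq))
    return "".join(out)

def count_aug(seqs):
    # uAUGs are counted as "ATG" in DNA coordinates.
    return sum(s.count("ATG") for s in seqs)

def oe_ratio(utrs, n_perm=1000, seed=1):
    """Observed/expected uAUG count with a permutation-based 95% interval,
    following the form O/E = n_obs/n_exp with CI [n_obs/n_0.975, n_obs/n_0.025]."""
    random.seed(seed)
    n_obs = count_aug(utrs)
    perm = sorted(count_aug([markov_shuffle(u) for u in utrs])
                  for _ in range(n_perm))
    n_exp = perm[n_perm // 2]                       # median of the permutations
    lo, hi = perm[int(0.025 * n_perm)], perm[int(0.975 * n_perm)]
    return n_obs / n_exp, (n_obs / hi, n_obs / lo)

# Toy run on made-up UTR sequences:
utrs = ["".join(random.choice("ACGT") for _ in range(200)) for _ in range(50)]
print(oe_ratio(utrs, n_perm=200))
```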
Since uORFs (nORFs and oORFs) and NTEs might have different mechanisms in regulating gene expression and function, in what follows, we only focused on the putative canonical uORFs. ### Gene expression level as an important factor influencing the genome-wide distributions of uORFs across genes In humans, genes with uORFs exhibited lower expression levels than genes without uORFs84. Similarly, our analysis of previously published mRNA and protein abundance data of fly, human, mouse, mustard plant, and yeast revealed uORFs were infrequently detected in housekeeping genes, and there were significant anticorrelations between the gene expression level and the number of uORFs (Supplementary Fig. S5a and Supplementary Data 2). Meanwhile, gene ontology analysis revealed that genes containing putative uORFs tend to be enriched in the categories of signal transduction, transcription factors, and membrane proteins (Supplementary Fig. S5b; Supplementary Data 3). These patterns still held when we focused on the uORFs supported by previously published ribosome profiling data in fly32 and other species collected in the GWIPs-viz database85 (Supplementary Table S1; Supplementary Fig. S5b). Noteworthy, the anticorrelation between uORF occurrences and gene expression level well reconciles with the gene ontology analyses as housekeeping genes tend to be highly (or broadly) expressed86. Since gene expression level affects the efficacy of natural selection87,88, we further asked whether the efficacy of purifying selection is reduced in removing deleterious uORFs in lowly expressed genes. We grouped genes of a species into 20 equal-sized bins based on increasing expression levels and calculated the O/E ratio of uORFs in each bin. In all the five species we examined, the O/E ratio was lower than 1 in each bin (Supplementary Fig. S5c), suggesting that purifying selection was the dominant evolutionary force acting on the uORF occurrence regardless of gene expression levels. Interestingly, we observed significant anticorrelations between the gene expression level and O/E ratio of uORFs in each species, suggesting that purifying selection acting on uORFs is relatively weak for lowly expressed mRNAs. Thus, our results suggest that gene expression level is an important factor influencing uORF distribution across genes in a eukaryotic species. Excessive uORFs in highly expressed genes might cause insufficient protein output, which is harmful to the organisms. We postulate that purifying selection has removed deleterious uORFs in the highly expressed genes more efficiently than in the lowly expressed genes. On the other hand, genes in specific functional categories, such as transcriptional factors, which are likely to be lowly expressed, might be preferentially suppressed by uORFs at the translational level for optimizing protein production. Further studies are needed to investigate the relative importance of the two mechanisms in shaping the anticorrelation between gene expression level and uORF occurrence. ### Differences in Ne influence interspecies differences in uORF occurrences The O/E ratio of uORFs varied widely across the eukaryotic species (Fig. 1a and Supplementary Data 1), suggesting that the efficacy of natural selection differs across these species. Because the efficiency of natural selection is determined by Ne89, we questioned whether the differences in the O/E ratios of uORFs between different eukaryotes are due to the differences in Ne. 
We reasoned that the O/E ratio should be lower in species with a larger Ne because purifying selection is the dominant force acting on uORFs, and deleterious uORFs will be depleted more efficiently by purifying selection. Indeed, we uncovered a significant negative correlation between the O/E ratio and Ne (Spearman’s  =−0.67, P = 0.011) for 14 animals for which the Ne value was estimated in previous studies (Fig. 1c and Supplementary Table S2). Because the Ne value was unknown for most eukaryotes investigated in this study, we calculated the genome-wide average dN/dS ratio (ω, number of nonsynonymous changes per nonsynonymous site over the number of synonymous changes per synonymous site) of CDSs between closely related species as an indirect measure of the average Ne for a clade based on the following rationale: if a clade includes species with a large Ne, deleterious nonsynonymous mutations in CDSs will be more efficiently removed by natural selection, resulting in a smaller ω value for that clade. Therefore, if the purifying selection is the major force acting on uORF prevalence, a positive correlation between the O/E of uORFs and the ω of CDSs would be expected across different species. We aligned orthologous CDS sequences at the genomic scale for 37 pairs of closely related species, and calculated the genome-wide ω value for each pair of species (Supplementary Table S3). In this analysis, we assumed that two closely related species would have the same ω values and obtained both the O/E ratios of uORFs and the ω values of CDSs for 56 species. We uncovered a significant positive correlation between the O/E ratio and the ω value ( = 0.70, P = 1.8 × 10−9; Fig. 1d), which further confirms that the differences in Ne determine the differences in uORF depletion among eukaryotic genomes. Interestingly, a significant positive correlation between the median 5′ UTR length and the ω was also observed ( = 0.54, P = 1.4 × 10−5; Supplementary Fig. S6), suggesting that the 5′ UTR length is also under selective constraints. This finding is not surprising because the number of uORFs is generally positively correlated with the 5′ UTR length90. To exclude the possibility that the observed positive correlations were confounded by the phylogenetic relationships of the eukaryotic species, we also performed phylogenetic independent contrasts91 and still detected significant positive correlations between the O/E ratio and the ω (P = 0.017) and between the 5′ UTR length and the ω (P = 0.021). Together, our analyses suggest that purifying selection is the dominant force governing the contents of uORFs in eukaryotes and that the degree of uORF depletion in a species is mainly determined by the Ne of that species. ### Role of positive selection in influencing the prevalence of uORFs in eukaryotes Although uAUGs are generally depleted in the 5′ UTRs of Drosophila, our previous results indicated that a considerable fraction of the uORFs recently fixed in D. melanogaster were driven by positive Darwinian selection32. Our results are consistent with the notion that the very large Ne of D. melanogaster increases the efficacy of both positive selection and purifying selection89. Nevertheless, whether positive selection drives the fixation of uORFs in a eukaryote with a small Ne, such as humans, remains unclear. 
To address this research gap, we analyzed the new uORFs that were newly fixed in the lineages leading to extant humans after divergence from Pongo abelii, Gorilla gorilla, or Pan troglodytes using the asymptotic McDonald-Kreitman test (asymptoticMK)92,93. We detected weak signals of positive selection on the newly fixed uORFs in all three branches, and the value of αasym, which represents the fraction of newly formed uORFs driven to fixation by positive selection, was 0.24 (95% confidence interval [CI], −0.04–0.51), 0.20 (95% CI, −0.09 to 0.49), and 0.19 (95% CI, −0.10 to 0.48) in the three branches, respectively (Fig. 2a). Noteworthy, C>T mutations at CpG dinucleotides are highly frequent in mammals94, and new AUGs can be generated from CpG to TpG mutations through two approaches95: (1) from ACG to ATG, and (2) from CGTG to CATG (Fig. 2b). Thus, we further examined new uORFs derived from the CpG contexts and the remaining new uORFs separately. Roughly speaking, ~33% of the new uORFs fixed in each of the three branches were generated by CpG to TpG mutations. Interestingly, the CpG-derived uORFs were under strong positive selection (the αasym was 0.48 (95% CI, 0.18–0.78), 0.45 (95% CI, 0.13–0.76), and 0.44 (95% CI, 0.11~0.76) in the three branches, respectively), while the αasym for the remaining uORFs was close to 0 (Fig. 2b). Noteworthy, the αasym values were even higher when we focused on the new uORFs that were derived from the CpG contexts in the highly expressed genes (Supplementary Table S4). Of note, for the new uORFs fixed in D. melanogaster we previously analyzed32, a higher αasym value was also observed for the highly expressed genes (Supplementary Table S4). Therefore, although the prevalence of uORFs in a species was generally under purifying selection, we still found a fraction of uORFs might be favored by positive selection even in primates that typically have a small Ne. To further explore how positive and purifying selection coupled with differences in Ne shaped the repertoire of uORFs in a given species, we mathematically modeled the O/E ratio of uORFs by treating this ratio as the average fixation probability of mutations with different fitness effects. Considering that uORFs are generally deleterious, we assumed that 20%, 75%, and 5% of newly originated uORFs are neutral, deleterious, and beneficial, respectively. We also assumed that both beneficial and deleterious mutations present the same absolute selection coefficient in a diploid organism and that the fitness reduction in heterozygotes is half of that in homozygous mutants. We then calculated the overall fixation probability of newly originated uORFs relative to the neutral expectation, which is, by definition, similar to the O/E ratio. In our modeling, the relative fixation probability of newly originated uORFs gradually decreased with increases in the Ne (Fig. 2c), which resembled our observation that species with larger Ne values tend to exhibit lower O/E ratios. Moreover, a higher fraction of fixed uORFs that are driven by positive selection was obtained with a higher Ne value (Fig. 2d). Together, our results suggest that both purifying selection and positive selection act on uORF occurrences during eukaryotic evolution and that differences in Ne, which affects the efficiency of both types of natural selection, plays a major role in shaping the differences in uORF prevalence among eukaryotic species. 
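The population-genetic model invoked here (and specified in the Methods) is Kimura's diffusion approximation for the fixation probability of a new mutation with selection coefficient s and dominance h; the O/E-like quantity is then the mixture p1 + p2·f(−s) + p3·f(s) over neutral, deleterious, and beneficial uORF-creating mutations. The sketch below evaluates that mixture numerically with the paper's stated parameter choices (h = 0.5 and p1, p2, p3 = 0.2, 0.75, 0.05); the selection coefficient and Ne values in the example loop are illustrative choices of ours, and the function names are not from the paper.

```python
import numpy as np
from scipy.integrate import quad

def relative_fixation_prob(s: float, Ne: float, h: float = 0.5) -> float:
    """Fixation probability of a new mutation relative to a neutral one,
    f(s) = 2*Ne * int_0^{1/(2Ne)} G(x) dx / int_0^1 G(x) dx,
    with G(x) = exp(-4*Ne*s*h*x - 2*Ne*s*(1-2h)*x**2)."""
    G = lambda x: np.exp(-4 * Ne * s * h * x - 2 * Ne * s * (1 - 2 * h) * x**2)
    num, _ = quad(G, 0.0, 1.0 / (2.0 * Ne))
    den, _ = quad(G, 0.0, 1.0)
    return 2.0 * Ne * num / den

def expected_oe(s: float, Ne: float, p_neutral=0.20, p_del=0.75, p_ben=0.05):
    """Overall fixation probability of new uORFs relative to neutrality,
    mixing neutral, deleterious (-s), and beneficial (+s) mutations."""
    return (p_neutral
            + p_del * relative_fixation_prob(-s, Ne)
            + p_ben * relative_fixation_prob(+s, Ne))

# With mostly deleterious new uORFs, purging becomes more efficient as Ne
# grows, so the relative fixation probability drops over this range.
for Ne in (1e3, 2.5e3, 5e3, 1e4, 2e4):
    print(f"Ne = {Ne:8.0f}   relative fixation = {expected_oe(1e-4, Ne):.3f}")
```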
### Selective constraints on start codons of uORFs in eukaryotes Next, we questioned how the uORFs were maintained during eukaryotic evolution. The start codon is the most important definitive characteristic of a uORF4, and a uORF with a more conserved start codon tends to be more repressive toward the translation of the downstream CDS19,32. Here, we first quantitatively measured the selective pressures on the AUG start codons of uORFs (uoAUGs) in vertebrates, insects, and yeasts. Among the 78,003 uoAUGs identified in the human reference genome, 98.7% have conserved uoAUGs in other vertebrates (Fig. 3a). Interestingly, 1030 (1.3%) uoAUGs were only observed in humans and not in any other species, suggesting that these have a recent origin. Whether the human-specific uoAUGs are associated with unique human features remains to be investigated. For each uoAUG identified in the human reference genome, we calculated the branch length score (BLS) based on the conservation patterns of the orthologous sites among 100 vertebrate species using a previously described method96 (Fig. 3b). To estimate the number of uoAUGs that are more conserved than the neutral expectation, we also calculated the BLS for all 63 other triplets present in 5′ UTRs based on the assumption that these triplets evolve neutrally. The start codons of 173,290 noncanonical uORFs identified in humans by McGillivray et al.17 were excluded from the neutral controls. Compared with the other triplets, uoAUGs showed significantly higher BLS values (P = 7.6 × 10−58, two-sided Wilcoxon rank-sum test [WRST]; Fig. 3c), suggesting that the uoAUGs are under selective constraints during evolution. At a BLS cutoff of 0.5, the signal-to-noise ratio (fraction of uoAUGs that meet a minimum BLS cutoff divided by the fraction of other triplets with the same minimum BLS) was 3.71, and this value increased with increases in the BLS cutoff (Fig. 3d). Moreover, the BLS values of translated uoAUGs supported by ribosome profiling data from human samples were significantly larger than those of untranslated uoAUGs (P = 8.6 × 10−262, two-sided WRST; Fig. 3c). Accordingly, at a BLS cutoff of 0.5, a markedly higher signal-to-noise ratio (4.40) was obtained for the translated uoAUGs (Fig. 3d), suggesting that uoAUGs from which translation is initiated are under even stronger functional constraints. We also calculated the BLS values for the start codons of the 173,290 noncanonical uORFs previously identified in humans by McGillivary et al.17. Since conservation was used as a feature to identify the noncanonical uORFs in that study, it is not surprising that these noncanonical start codons were slightly (~1.2 times) more conserved than the other random triplets (P = 2.1 × 10−77, two-sided WRST; Fig. 3d). However, they were significantly less conserved than the canonical uoAUGs (P = 1.3 × 10−12, two-sided WRST). The uoAUGs identified in D. melanogaster were also more conserved than the random triplets in 5′ UTRs across the 27 examined insect species (Fig. 3e, f and Supplementary Fig. S7), and the uoAUGs with translational evidence from ribosome profiling data were more conserved (Figs. 3e, f). Analogously, the uoAUGs in S. cerevisiae were significantly more conserved than the other triplets in the 5′ UTRs across the seven yeast species we examined, no matter we used all the uoAUGs or the translated ones only (P < 9.5 × 10−11, two-sided WRST; Supplementary Fig. S8). 
Altogether, the start codons of the canonical uORFs, particularly the translated ones, are more likely to be maintained by functional constraints during eukaryotic evolution. ### Coding regions of uORFs are overall under neutral evolution How many uORFs can encode functional peptides remains unclear4,10,32,97. If a uORF encodes a functional peptide, one expects that the coding region of that uORF should be under selective constraints. In contrast, if the function of a uORF is to tune the translation of the downstream CDS by sequestering or competing for ribosomes, the coding regions of uORFs might be under neutral evolution or weaker selective constraints. Thus, we investigated the conservation patterns of the uORF peptides in the vertebrates, insects, and yeasts. While NTEs were not included as uORFs in our analysis, we further excluded CDS-overlapping portions of oORFs due to the confounding effects of selective constraints on CDS evolution. Briefly, for each of the 48,286 human uORFs that encode peptides of at least ten amino acids, we searched the putative homologous peptide sequences in other vertebrate species and calculated the BLS for that peptide (homologous sequences that have stop codons or frameshifts within 80% of the start regions were excluded). 48.8% of these human uORFs putatively presented conserved peptide sequences only in primates; 36.6% of them putatively exhibited conserved peptide sequences in mammals other than primates, and 1.82% of them putatively exhibited conserved peptide sequences in fishes. Of note, the BLS values of the uORF coding sequences were significantly lower than those of the uoAUGs (Fig. 4a; Supplementary Fig. S9 for other cutoffs of the minimum number of AAs required for uORF peptides). Analogously, for the uORFs identified in D. melanogaster, the coding regions of uORFs were also less conserved than uoAUGs in the 27 examined insect species (Fig. 4b and Supplementary Fig. S9). A similar pattern was also observed in the seven-way alignments of yeasts (Supplementary Fig. S10). Of note, a strong anticorrelation was observed between the BLSs and the lengths of uORF peptides in both humans and flies (see Fig. 4c and d), suggesting the peptides encoded by long uORFs are less likely to be maintained during evolution because they were more likely disrupted by stop codons or frameshifts. Also, if the major function of uORFs is to regulate CDS translation, a longer uORF might be less advantageous than a shorter one because the translation of a longer uORF consumes more energy and metabolites, which might be harmful to the host organisms. (The analysis was not conducted in yeasts because for 69% of the uORF peptides in S. cerevisiae we could not reliably identify the orthologous sequences in other yeast species). Together, these results suggest that the coding regions of uORFs tend to be less conserved than start codons of uORFs. To further test the selective pressure on the coding sequences of uORFs, we calculated the ω for coding regions of uORFs between humans and macaques. To reduce the noise in estimating ω values, we ranked the uORFs based on the Kozak scores of their start codons and equally grouped the uORFs into 1000 bins. For each bin, we concatenated the alignments of the uORF coding sequences and calculated the ω value. In contrast to CDSs, which present ω values markedly lower than 1, the ω value of the uORF coding region was roughly equal to 1 between humans and macaques (median ω = 1.05; Fig. 5a and Supplementary Fig. S11a). 
Similarly, the ω of uORFs was also close to 1 between D. melanogaster and D. simulans (median ω = 0.99 for all uORFs or 0.98 for translated uORFs only; Fig. 5b and Supplementary Fig. S11b). Moreover, we also grouped the single nucleotide polymorphisms (SNPs) in uORFs of humans (1000 Genomes Project98) and flies (Drosophila Genetic Reference Panel99) based on the derived allele frequencies (DAF) and calculated the ratio of nonsynonymous SNPs to synonymous SNPs (pN/pS) in each bin. In parallel, we performed the same analyses on SNPs in CDSs. In CDSs of both humans and flies, the pN/pS ratios were substantially lower than the values expected under randomness, and the pN/pS ratio was significantly negatively correlated with the DAF bins in both species (Fig. 5c, d; Supplementary Fig. S11c, d). In contrast, in uORFs of both humans and flies, the pN/pS ratio fluctuated around expected values, and there was no correlation between pN/pS and DAF bins. Thus, these contrasting patterns indicated that at the population level, the nonsynonymous SNPs in CDSs were under strong purifying selection, while the nonsynonymous SNPs in uORFs were nearly neutral. Together, these analyses further revealed that the coding regions of uORFs are overall under neutral evolution in both primates and flies. To estimate the proportion of uORFs that might encode conserved peptides, for each uORF, we also calculated PhyloCSF score, which predicts whether a genomic region potentially represents a conserved protein-coding region or not based on multiple sequence alignments100 (a positive PhyloCSF score means that region is more likely to encode a peptide). As a negative control, we also calculated the PhyloCSF scores for 20,000 randomly selected ORFs in 3′ UTRs (downstream ORFs, dORFs), as these dORFs have little chance of translation. Among the 36,655 uORFs that are ≥10 codons and evidenced of translation in humans, only 361 (0.985%) had positive PhyloCSF scores (Supplementary Fig. S12a). In contrast, the PhyloCSF score was positive for 0.545% (109 out of 20,000) dORFs. Thus, after controlling for the background noises, only 0.44% (161) of the translated uORFs showed evidence of encoding conserved peptides. In Drosophila, 1.19% (152 of 12,745) translated uORFs and 0.39% (78 out of 20000 dORFs) had positive PhyloCSF scores (Supplementary Fig. S12b), yielding an estimate of 0.80% (102 of 12,754) of the translated uORFs might encode conserved peptides. Overall, these analyses suggest that <1% canonical uORFs might encode conserved peptides. To test whether our evolutionary analyses of uORFs were supported by experimental evidence, we analyzed the mass spectrometry (MS) data from 38 samples of different developmental stages or tissues of D. melanogaster (Supplementary Data 4)101,102,103,104,105. Among the 23,321 uORFs that met our parameter settings (Methods), 57 (0.24%) had peptides detected in at least one sample (Supplementary Data 5). Interestingly, the BLS analysis revealed that the MS-supported uORFs present slightly more conserved coding regions than the other uORFs (Fig. 5e), suggesting these MS-supported uORF peptides might be functionally important. Collectively, our results support the notion that most uORFs play regulatory roles and their start codons are maintained due to functional constraints, and only a tiny fraction (<1%) of the uORFs might encode peptides that are maintained by natural selection during evolution. 
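The pN/pS-by-DAF analysis described above reduces to counting nonsynonymous and synonymous SNPs within derived-allele-frequency bins. A small pandas sketch of that bookkeeping follows; the column names and toy data are ours, and the per-site normalization constants would in practice come from counting nonsynonymous and synonymous sites in the uORF (or CDS) sequences.

```python
import pandas as pd
import numpy as np

def pn_ps_by_daf(snps: pd.DataFrame, n_bins: int = 10,
                 nonsyn_sites: float = 1.0, syn_sites: float = 1.0) -> pd.DataFrame:
    """pN/pS per derived-allele-frequency (DAF) bin.

    `snps` needs two columns (hypothetical names):
      - 'effect': 'nonsynonymous' or 'synonymous'
      - 'daf': derived allele frequency in (0, 1)
    Dividing by the nonsynonymous/synonymous site counts turns the raw
    SNP-count ratio into a per-site ratio comparable to a neutral value of 1.
    """
    df = snps.copy()
    df["bin"] = pd.cut(df["daf"], bins=np.linspace(0, 1, n_bins + 1))
    counts = (df.groupby(["bin", "effect"], observed=False)
                .size().unstack(fill_value=0))
    pn = counts.get("nonsynonymous", 0) / nonsyn_sites
    ps = counts.get("synonymous", 0) / syn_sites
    return pd.DataFrame({"pN/pS": pn / ps})

# Toy example with made-up SNPs:
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "effect": rng.choice(["nonsynonymous", "synonymous"], size=2000),
    "daf": rng.uniform(0.01, 0.99, size=2000),
})
print(pn_ps_by_daf(toy, n_bins=5))
```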
### Evolution of Kozak sequence contextual characteristics that influence uORF translation The Kozak sequence context (−6 to +4 nucleotides) around the uoAUG plays a prominent role in influencing the translational initiation of that uORF16,32,106,107. As the nucleotide compositions differ between eukaryotes108,109,110, the preferential Kozak sequence context around cAUGs also differs across species111,112. Nevertheless, whether Kozak contextual characteristics around uoAUGs evolve remains unclear. To address this research gap, in each of the 478 species, we reconstructed a position weight matrix of the Kozak sequence context (PWMK) for all the CDSs (Supplementary Data 6 and Supplementary Fig. S13). Subsequently, in each species, we calculated the Kozak score for each cAUG or uoAUG with the PWMK of that species as previously described32. To test the performance of the Kozak score in predicting the translational initiation of a uORF (or CDS), we analyzed translation initiation site (TIS) profiling data from three species (human, mouse, and fly)32,38,40. We detected the translation of 26,344, 16,245, and 15,195 canonical uORFs that were supported by TIS data in at least one sample for human, mouse, and fly, respectively (Supplementary Table S1). For each uoAUG in a sample, we calculated the normalized TIS signal by dividing its initiating ribosome-protected fragment (RPF) count by its mean coverage in the matched RNA-Seq data. Strong positive correlations were found between the Kozak score and the normalized TIS signal for both cAUGs (Supplementary Fig. S14) and uoAUGs (Fig. 6a and Supplementary Fig. S15), suggesting that start codons with an optimized Kozak sequence context exhibit a higher translation initiation efficiency for both CDSs and uORFs. Interestingly, the number of uORFs was negatively correlated with the Kozak score of the cAUGs in most species (Fig. 6b), suggesting that uORFs tend to suppress genes translated at low levels, as previously suggested59. Also, the Kozak scores of the uORFs were significantly lower than those of the CDSs in each species (Supplementary Fig. S16a), supporting the notion that uORFs are generally located in less optimal contexts than CDSs16,32,113. To test whether the sequence contexts of uoAUGs are optimized, in each species, we also calculated the Kozak scores of the AUG triplets in 3′ UTRs (downstream AUGs, dAUGs) as neutral controls. The Kozak scores of uoAUGs were significantly higher than those of dAUGs in most (82.4%, 112 out of 136) vertebrates, (61.0%, 25 out of 41) plants, and (71.9%, 174 out of 242) fungi; however, an opposite trend was observed in invertebrates, and no obvious trend was observed in protists (Supplementary Fig. S16b). These results suggest that the optimization of the Kozak sequence context of uORFs is different across eukaryotic clades. To examine whether the Kozak contextual characteristics of uORFs evolved, in each of the 478 species, we calculated the pairwise Euclidian distance of the PWMK for uORFs (or CDSs) between two species (“Methods”). Interestingly, for both uORFs and CDSs, the distance between two species from a clade tend to be significantly shorter than that between one species in that clade and another species outside of that clade (Fig. 6c). A similar pattern was observed for the uORF PWMK as well (Fig. 6d). 
Together, our results suggest that, although the Kozak sequence context plays a pivotal role in regulating the translational initiation of uORFs and CDSs in eukaryotes, its contextual characteristics evolved during eukaryotic evolution. ### Comparing the canonical versus noncanonical uORFs in repressing CDS translation in human populations Recent studies have demonstrated that noncanonical uORFs are very abundant17,34, and many of them might have diverse functions4,20. Moreover, hundreds of noncanonical uORFs are conserved between different yeast species, suggesting they might be functionally important114. In the above analyses, we mainly focused on the canonical uORFs because (1) the majority of the species analyzed in this study had no ribosome profiling data available, and (2) it is still challenging to identify noncanonical uORFs without experimental data. To test whether the noncanonical uORFs influence the translation of CDSs, we extracted high-quality genotyping, mRNA-Seq, and Ribo-Seq data of 60 human lymphoblastoid cell lines from previous studies98,115, and examined whether variations in uORF start codons influence the translation efficiency of the main CDSs among different samples (Fig. 7a). Among the potentially functional uORFs in humans predicted by McGillivray et al.17, 146 canonical and 796 noncanonical uORFs had genetic variants in their start codons among these samples (only variants with minor allele frequency ≥5% were considered in the analysis). We performed linear regressions to assess the regulatory impact of uORF alteration on the translation of down-stream CDSs, with a positive slope value in the regression meaning that the presence of a uORF in certain individuals is associated with a decrease in the translation efficiency of the downstream CDS in those individuals, and vice versa (“Methods”). A general trend was the slope values were overall positive for the canonical uORFs, while the slope values for the noncanonical uORFs fluctuated around 0 (Fig. 7b). This comparison suggests that in human populations, the noncanonical uORFs overall have relatively limited repressive effects on CDS translation compared to the canonical uORFs, although we cannot exclude the possibility that a small fraction of the noncanonical uORFs might have strong repressive effects on the translation of downstream CDSs. To experimentally verify the influence of both types of uORFs on CDS translation, we sampled 80 human uORFs and performed luciferase reporter assays in HEK293FT cells (Supplementary Fig. S17). These tested uORFs, which included 42 canonical and 38 noncanonical ones, were predicted potentially functional by McGillivray et al.17 and had polymorphic start codons in human populations. For each uORF, we compared the repressive effect of the annotated uORF allele versus that of the non-uORF allele in suppressing translation of the reporter gene. Although occasionally the non-uORF allele had a stronger repressive effect than the uORF allele, the general trend was that the uORF allele had a stronger effect than the non-uORF allele in suppressing translation (Fig. 7c, d). Moreover, a significantly higher proportion of the canonical (55%, 23/42) than the noncanonical (26%, 10/38) uORFs exhibited the pattern that the annotated uORF allele showed a significantly stronger repressive effect on the CDS translation than the non-uORF allele (P = 0.013, Fisher’s exact test, Fig. 7c, d). 
Also, the difference in CDS translation suppression between the uORF and the non-uORF allele is significantly larger for the canonical than the noncanonical uORFs (P = 0.006, one-sided WRST). Altogether, these results reinforced the thesis that the noncanonical uORFs overall have weaker repressive effects on CDS translation than the canonical uORFs. ## Discussion In this study, we analyzed ~17 million uAUGs, 97.69 ± 0.15% of which are start codons of putative canonical uORFs in 478 eukaryotic species that span the majority of extant taxa of eukaryotes. Although the prevalence of canonical uORFs in a species was generally under purifying selection, we still found a fraction of new canonical uORFs might be favored by positive selection even in primates that typically have a small Ne. These observations are consistent with the evolution model of uORFs we previously proposed4,32. Under that model, the majority of newly formed uORFs are deleterious and quickly removed from the population, and a relatively smaller fraction of the new uORFs are beneficial and rapidly fixed in populations under positive selection. After fixation, the functional uORFs, particularly the start codons, are maintained by natural selection during evolution. Hence, although in a species the occurrence of uORFs is influenced by positive or purifying selection, the opposing effects of positive selection and purifying selection acting on new uORFs result in a pattern that uORFs are overall depleted in 5′ UTRs. As shown in our population genetic modeling, the efficacies of both positive and purifying selection on uORF fixation in a species are influenced by the effective population size. Moreover, we also found that the gene expression level affects the efficacy of natural selection acting on uORF occurrences. Thus, our results have systematically demonstrated how positive and purifying selection, coupled with differences in gene expression level and Ne, influence the genome-wide distribution and contents of uORFs in eukaryotes. Together, our analyses reveal the general principles underlying the distribution and sequence evolution of uORFs in eukaryotes. As uORFs often control posttranscriptional gene expression in combination with other regulators such as microRNAs90, further studies are required to elucidate how uORFs coevolve with other regulatory elements. We found that start codons of canonical uORFs, particularly the translated ones, tend to be maintained by functional constraints during evolution. These results might also be pertinent to the translational buffering mechanism, which indicates that protein expression levels are more conserved between species than mRNAs116,117,118,119,120. Nevertheless, our analyses suggest the coding regions of uORFs are overall under neutral evolution. It is not uncommon that some uORF-encoded peptides are conserved across species; however, the conservation of such a peptide does not necessarily mean that peptide might be functional since the coding region of a uORF can be constrained to optimizing translation elongation of that uORF54,121,122. Overall, our results suggest that the major function of uORFs is to fine-tune CDS translation rather than to encode conserved peptides. Nevertheless, we do not deny that some uORFs can encode functional peptides, as clearly demonstrated by the previous studies15,123,124. Of note, both our PhyloCSF analyses and MS data analyses suggest that a small fraction (<1%) of uORFs might produce peptides. 
We found the start codons of the noncanonical uORFs McGillivray et al.17 identified in humans are overall slightly (~1.2 times) more conserved than the other random triplets across vertebrates. Moreover, our re-analyses of the previously published gene expression data revealed that the noncanonical uORFs tend to have weaker repressive effects on CDS translation than the canonical uORFs, and this pattern was further confirmed by our luciferase reporter assays. Of note, these results do not necessarily suggest that noncanonical uORFs are functionally unimportant, as it has been well established that many noncanonical uORFs might have diverse functions in various biological processes4,20, such as stress responses125,126 or tumor initiation127. Overall, our current understanding of the prevalence and function of the noncanonical uORFs are still very limited. Further studies are required to reliably identify the noncanonical uORFs and elucidate their regulatory functions and evolutionary principles. Protists have a very high phylogenetic diversity128, and many protists use alternative nuclear genetic codes involving stop-codon reassignments68,69 and obligatory frameshifting at internal stop codons74. In protists with no dedicated stop codons71, such as Condylostoma magnum70,71, Parduczia sp.71, Blastocrithidia72, and Amoebophrya sp. ex Karlodinium veneficum73, translation from any possible uAUG is supposed to terminate near the end of a transcript and overlaps with the main CDS, which results in a different protein. Thus, the occurrence of uORFs in protists with alternative genetic decoding schemes might differ considerably from that of most other eukaryotes. In this study, we only focused on 20 protists that use the standard genetic code. Although the O/E ratio of uAUGs was significantly <1 in all the fungi, multicellular plants and animals we examined, such a pattern was observed in only 15 of the 20 protists. The O/E ratio of uAUGs was close to or higher than 1 in the remaining five protists, including Cystoisospora suis (1.161, 95% CI 1.154–1.169), Toxoplasma gondii (0.998, 95% CI 0.989–1.1.007), Nannochloropsis gaditana (0.997, 95% CI 0.986–1.007), and two malaria vectors Plasmodium yoelii (1.016, 95% CI 1.008–1.025), and Plasmodium vivax (0.989, 95% CI 0.975–1.004). However, these five protists tended to have significantly longer 5′ UTRs than the other 15 protists (Supplementary Fig. S18), suggesting this observation might be an artifact caused by inaccurate 5′ UTR annotations in these five species. Indeed, the O/E ratio of uAUGs in the 5′ UTR regions that are proximal to CDS (within 100 or 150 nt) were significantly lower than 1 in all the five protists (Supplementary Table S5), suggesting that uAUG occurrence in 5′ UTR regions proximal to CDSs is still under purifying selection in these protists. The Kozak sequence context around the uoAUG plays a crucial role in controlling the translation of a uORF16,32,106,107, which subsequently regulates translation of the downstream CDS. There has been a growing interest in engineering uORFs for precise translation control of the main protein products129,130,131. Our results revealed the Kozak sequence context evolved across eukaryotic clades, which suggests that the species-specific Kozak sequence contextual features should be considered in designing uORFs for a specific desired trait. 
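As a concrete illustration of the Kozak-score machinery referred to above: a position weight matrix is built from the −6 to +4 contexts of annotated CDS start codons, and each uAUG's context is then scored against it. The sketch below is a simplified version under assumptions of our own (log2 odds against a uniform background, a pseudocount of 1, and made-up toy contexts); the paper's exact PWM construction may differ in detail.

```python
import numpy as np

BASES = "ACGT"

def build_pwm(contexts, pseudocount=1.0):
    """Position weight matrix (log2 odds vs. a uniform background) from
    equal-length sequences spanning the -6..+4 Kozak context of CDS AUGs."""
    L = len(contexts[0])
    counts = np.full((L, 4), pseudocount)
    for seq in contexts:
        for i, base in enumerate(seq.upper()):
            if base in BASES:
                counts[i, BASES.index(base)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.25)            # uniform background assumed

def kozak_score(pwm, context):
    """Sum of per-position log-odds for one -6..+4 context sequence."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(context.upper())
               if b in BASES)

# Toy example with hypothetical contexts (positions -6..-1, the AUG, and +4):
cds_contexts = ["GCCACCATGG", "ACCACCATGG", "GCCGCCATGG", "AAAACAATGC"]
pwm = build_pwm(cds_contexts)
print(round(kozak_score(pwm, "GCCACCATGG"), 2),   # strong context
      round(kozak_score(pwm, "TTTTTTATGT"), 2))   # weak context
```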
## Methods ### Identification of putative canonical uORFs We downloaded the gene models and cDNA sequences of all eukaryotes that are annotated in the Ensembl Genome Browser (release 96)132, Ensembl Metazoa (release 43), Ensembl Plants (release 43), Ensembl Protists (release 46), and Ensembl Fungi (release 46). Transcript ends of yeast mRNAs were obtained from a previous study133. Putative uORFs and NTEs that start with AUG codons and end with stop codons (UAA/UAG/UGA) were identified from the annotated 5′ UTRs of protein-coding genes. uORFs and NTEs with start codons located in CDSs of other transcripts were excluded from the analysis. Only the species for which 5′ UTR annotation information was available for more than 25% of the protein-coding genes were considered in the analyses. Among all the 479 species meet this criteria, Ichthyophthirius multifiliis was excluded since UAA and UAG are reassigned to encode glutamine in this species134, which would interfere with the uORF and NTE prediction. ### Calculation of the O/E ratio A permutation analysis was performed to determine the ratio of the observed to the expected number of uAUGs (O/E ratio) for each species. For genes that exhibited more than one transcript, only the longest transcript was used in the analysis. Unusually long 5′ UTRs in each species (longer than the mean + 3s.d. of the lengths of the 5′ UTRs in that species) were excluded because these are likely annotation artifacts. We denoted the number of AUG triplets in the 5′ UTRs of a species as nobs. We subsequently shuffled the 5′ UTRs with 1000 replicates while maintaining the same dinucleotide frequency using uShuffle135. We calculated the median and 2.5% and 97.5% quantiles of the number of AUGs in the shuffled 5′ UTRs and denoted these numbers as nexp, n0.025, and n0.975, respectively. We then calculated the O/E ratios for the species as nobs/nexp with a 95% confidence interval of [nobs/n0.975, nobs/n0.025]. The O/E ratio of other triplets in 5′ UTRs or 3′ UTRs was calculated using the same procedure. ### Estimation of the genome-wide ω of protein-coding genes To estimate the genome-wide average ω of protein-coding genes between two closely-related species in a clade, we performed a reciprocal best BLAST136 of protein sequences between the two species (E < 10−10). We identified orthologs of protein-coding genes at the genomic scale for 37 pairs of closely related species, which spanned 56 species. For each pair of orthologs between two species, we aligned their protein sequences with MUSCLE (3.8.31)137 using the default parameters and generated codon alignments with tranalign from the EMBOSS package138. We then calculated ω using yn00 from PAML139 with the codon alignments as input. The median ω of all pairs of orthologs between two species was used as the genome-wide ω of protein-coding genes. For species that were compared with multiple other species, the median ω values obtained from different comparisons were averaged. ### Phylogenetic independent contrasts For the 56 metazoan species for which ω values were estimated, we obtained the phylogenetic tree from the Open Tree of Life140. We used BUSCO141 to identify single-copy protein orthologs that were conserved in all 56 species, concatenated the protein sequences of the single-copy orthologs in each species, and performed multiple alignments using MUSCLE with the default parameters. Poorly aligned regions in the resulting alignment were removed using trimAl142 with the “-automated1” method. 
The branch lengths of the tree were calculated using codeml from PAML with the JTT substitution model ("seqtype = 2, runmode = 0, model = 2, aaRateFile = jones.dat"). Phylogenetic independent contrasts were performed using the "pic" function in the ape package143. The O/E ratio, ω, and median 5′ UTR length of each species were log-transformed before the contrasts.

### McDonald–Kreitman test of newly fixed uORFs in humans and primates

To identify fixed differences in AUG triplets in 5′ UTRs and introns, we downloaded whole-genome pairwise alignments between humans (hg19 freeze) and other primates (Pan troglodytes, Gorilla gorilla, Pongo abelii, and Macaca mulatta) from the UCSC Genome Browser144. AUG triplets that were newly fixed in the human or hominid lineages were inferred by parsimony, with M. mulatta as the outgroup. We obtained all human SNPs and their ancestral allele information from the phase 3 data of the 1000 Genomes Project98. Both fixed and polymorphic AUG differences located in repetitive regions were excluded from the downstream analysis. Newly fixed or polymorphic AUGs in 5′ UTRs that form NTEs were removed. AsymptoticMK92,93 tests were performed to detect the signal of positive selection. The data for the asymptoticMK tests in flies were obtained from our previous study32. To determine the effect of gene expression on positive selection, fixed and segregating mutations were divided into two halves based on the median expression level of genes with fixed new AUGs in 5′ UTRs. The average protein abundances across different tissues145 were used for humans, and the average Reads per Kilobase per Million mapped reads (RPKM) values in Ribo-Seq of 12 different developmental stages or tissues were used for flies32.

### The fixation probability of new uORFs

For a new autosomal mutation with a selective coefficient s in a diploid population of size N_e, the fixation probability of the mutation relative to that of a neutral mutation was calculated as

$$f(s) = 2N_e \int_0^{1/(2N_e)} G(x)\,dx \Big/ \int_0^1 G(x)\,dx,$$

where $$G(x) = \exp\left[-4N_e s h x - 2N_e s (1 - 2h) x^2\right]$$ and h is the dominance coefficient146. For mutations that introduce new uORFs into the population, the fractions of neutral, deleterious, and beneficial mutations are denoted as p_1, p_2, and p_3, respectively. Based on the assumption that the selective coefficients for deleterious and beneficial mutations have the same absolute value, we can obtain the overall relative fixation probability of mutations as $$p_1 + p_2 f(-s) + p_3 f(s)$$. In the simulation, we used a fixed h = 0.5, and p_1, p_2, and p_3 were set to 0.2, 0.75, and 0.05, respectively.

### Processing of ribosome profiling data

We obtained pre-calculated ribosome profiling coverage data for humans, mice, rats, zebrafish, and A. thaliana from the GWIPs-viz database85. Fly ribosome profiling data generated by our group32 and by other researchers147 that covered all the major developmental stages were also used in this study. Each RPF was assigned to a P-site with plastid148. For a uORF in a species, the number of RPFs whose P-sites were within the uORF was calculated with BigWigAverageOverBed149. For uORFs that overlapped with CDSs, the overlapping regions were excluded. A uORF was considered translated if it was covered by the P-site of at least one RPF read across the different ribosome profiling datasets in a species.
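Returning to the fixation-probability formula in "The fixation probability of new uORFs" subsection, the short Python sketch below evaluates f(s) by numerical integration and combines it with the mutation-class fractions given above (h = 0.5, p_1 = 0.2, p_2 = 0.75, p_3 = 0.05). The values of s and N_e in the example loop are purely illustrative, not estimates from the study.

```python
import numpy as np
from scipy.integrate import quad

def relative_fixation_prob(s, Ne, h=0.5):
    # f(s): fixation probability of a new mutation relative to a neutral one,
    # following the diffusion formula quoted in the Methods.
    G = lambda x: np.exp(-4 * Ne * s * h * x - 2 * Ne * s * (1 - 2 * h) * x ** 2)
    numerator, _ = quad(G, 0, 1 / (2 * Ne))
    denominator, _ = quad(G, 0, 1)
    return 2 * Ne * numerator / denominator

def overall_relative_fixation(s, Ne, p1=0.2, p2=0.75, p3=0.05, h=0.5):
    # p1 + p2*f(-s) + p3*f(s): neutral, deleterious, and beneficial classes.
    return p1 + p2 * relative_fixation_prob(-s, Ne, h) + p3 * relative_fixation_prob(s, Ne, h)

# Illustrative parameter values only.
for Ne in (1e4, 1e5, 1e6):
    print(f"Ne = {Ne:.0e}: {overall_relative_fixation(s=1e-5, Ne=Ne):.3f}")
```

With h = 0.5 the quadratic term in G(x) vanishes, and the example loop illustrates how, as N_e grows, selection on the deleterious and beneficial classes increasingly dominates the overall relative fixation probability.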
Genome-wide coverage tracks of initiating ribosome profiling and matched mRNA-Seq data from human and mouse cell lines were downloaded from the GWIPs-viz database. The RNA-Seq data from S2 cells and the corresponding Ribo-Seq data after harringtonine treatment were obtained from our previous study32. As previously described40, we first counted the number of initiating RPFs whose P-sites were within the 1-nt flanking region (i.e., −1 to +4) of each uORF or CDS start codon and then normalized the initiating RPF count with the mean coverage of the RNA-Seq data in the same region. We only used start codons with at least 2 initiating RPFs and at least 4 mRNA reads for the downstream analysis of human and mouse cell lines, and those with at least 5 initiating RPF reads and at least 10 mRNA reads were used for the analysis of S2 cells.

### Gene ontology analysis

Gene ontology (GO) annotations for human, mouse, rat, zebrafish, fly, A. thaliana, and yeast were downloaded from the Gene Ontology Resource (2019-06-09 release). Because not all genes under a GO term were provided in the GO annotation files, we parsed the gene annotation files to obtain the complete list of genes under each term using topGO150. For each species, all the GO terms belonging to Molecular Function, Biological Process, and Cellular Component were combined in the enrichment analysis. The GO terms enriched in uORF-containing genes or uORF-free genes were determined using Fisher’s exact tests. Multiple testing correction was performed with the Benjamini–Hochberg method151, and significant terms were determined at a false discovery rate of 0.1 for each species. Nonredundant representative terms that were significantly enriched in at least five species were chosen for visualization.

### Branch length score calculation

We downloaded the 100-way vertebrate genome alignments based on human (hg19), the 27-way insect alignments based on D. melanogaster (dm6), the 7-way alignments of yeast species based on S. cerevisiae (sacCer3), and the corresponding phylogenetic trees from the UCSC Genome Browser, and we used the Galaxy platform152 to parse the multiple sequence alignments of 5′ UTRs in vertebrates or insects. For the start codon of each human uORF (uoAUG), we calculated the sum of the branch lengths of the subtree composed of the species in which the uAUG was present at the orthologous sites (B_0) and then calculated the BLS value by dividing B_0 by the total branch length of the phylogenetic tree of the 100 species. Similarly, the BLS was calculated for the start codon of each uORF in D. melanogaster across the 27 insect species. For each predicted uORF peptide in humans, we searched the peptide against the orthologous sequences of other species in the 5′ UTR alignments using Exonerate (V2.2)153. uoAUGs located in repeat regions (downloaded from the UCSC Genome Browser) were excluded. For oORFs, only the portion not overlapping with CDSs was considered in the analysis. To avoid spurious matching, we only considered human uORF peptides containing at least m amino acids (m was set at 10, 15, and 20 in the analysis). We identified uORFs with conserved peptides in other species using the following criteria: (1) the first codon of the matched sequence was AUG; (2) no stop codons or frameshifts were present in the first 80% of the matched sequence; and (3) between humans and the studied species, the identity of the uORF peptide was required to be greater than the 2.5% quantile of the genome-wide identity of the main protein sequences.
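To make the BLS definition above concrete, here is a small, self-contained Python sketch. The four-species tree topology and branch lengths are made up for illustration; the actual calculation used the UCSC 100-way vertebrate (or 27-way insect and 7-way yeast) trees, and the spanning-subtree convention shown here is only one reasonable way to realize "the subtree composed of the species in which the uAUG was present".

```python
# Toy tree with four species given as (child, parent, branch_length) edges;
# topology ((human, macaque), (mouse, rat)) with made-up branch lengths.
edges = [
    ("human",    "primates", 0.06), ("macaque", "primates", 0.07),
    ("mouse",    "rodents",  0.18), ("rat",     "rodents",  0.20),
    ("primates", "root",     0.10), ("rodents", "root",     0.12),
]

children = {}
for child, parent, _ in edges:
    children.setdefault(parent, []).append(child)
leaves = {c for c, _, _ in edges if c not in children}

def leaves_under(node):
    kids = children.get(node, [])
    if not kids:
        return {node}
    out = set()
    for k in kids:
        out |= leaves_under(k)
    return out

def branch_length_score(present_species):
    # Sum of branch lengths of the subtree spanning the species that retain the
    # uAUG, divided by the total branch length of the tree (B_0 / total).
    present = set(present_species) & leaves
    total = sum(length for _, _, length in edges)
    b0 = sum(length for child, _, length in edges
             if 0 < len(leaves_under(child) & present) < len(present))
    return b0 / total

print(branch_length_score(["human", "macaque", "mouse"]))
```

In this toy example the uAUG is retained in human, macaque, and mouse, so every branch except the rat terminal branch contributes to B_0.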
For each uORF, we also calculated the BLSs for peptide sequences based on the presence of conserved peptides in other vertebrates as described above. A similar analysis was performed for the fly and yeast uORFs. Based on these alignments of uORF peptides, we generated alignments of uORF coding regions between humans and macaques and between D. melanogaster and D. simulans. Due to the short length of uORFs, we ranked the uORFs based on their Kozak score and divided them into 1000 bins with equal numbers of uORFs. For the uORFs in each bin, we concatenated their alignments and calculated ω values using yn00 as described above.

### pN/pS analysis

To study the population variation within uORFs, we merged the genomic intervals of human uORFs and excluded the regions overlapping with CDSs and repeats. We then extracted the SNPs overlapping with uORF regions from the phase 3 data of the 1000 Genomes Project. SNPs in the CDS-overlapping portion of oORFs were excluded. We annotated the effect of SNPs on human uORFs (nonsynonymous or synonymous) using custom scripts and excluded ambiguous SNPs that were annotated as both nonsynonymous and synonymous in different uORFs. For comparison, we also extracted the SNPs in CDS regions and determined their effects on CDSs using SnpEff154. The same analysis was performed for uORFs of D. melanogaster using the freeze 2 data of the Drosophila Genetic Reference Panel99.

### PhyloCSF score calculation

The alignments of human uORFs with at least ten codons were extracted from the 100-way vertebrate genome alignment based on humans as described above. The PhyloCSF score for each uORF was calculated with the PhyloCSF software100 using the parameter set "100vertebrates". As a negative control, we annotated all the possible ORFs in 3′ UTRs (dORFs) with at least ten codons using getorf from the EMBOSS suite138. dORFs overlapping with any CDS or uORF were excluded. We randomly selected 20,000 unique dORFs from the remaining dORFs and calculated PhyloCSF scores with the same procedure as for uORFs. The same analysis was performed for uORFs and dORFs in flies, except that the parameter set "23flies" was used when calculating PhyloCSF scores.

### MS data analysis

MS datasets for multiple tissues, developmental stages, and cell lines of D. melanogaster were obtained from ProteomeCentral155. Information on these datasets is listed in Supplementary Data 4. In the peptide searches, we used a custom database composed of the annotated proteome of D. melanogaster and all the peptides encoded by regions between two consecutive in-frame stop codons in cDNA sequences with at least 7 amino acids. To recover as many uORF-encoded peptides as possible, each sample was searched with three different search engines (MaxQuant v1.6.5156, OpenMS v2.3.0157, and pFind3158) at a 1% false discovery rate. Enzyme specificity was set to trypsin, and at most two missed cleavages were allowed. Cysteine carbamidomethylation was included as the fixed modification and methionine oxidation as the variable modification. Both the precursor and fragment tolerances were set to 20 ppm for higher-energy collisional dissociation datasets. The fragment tolerance was set to 0.5 Da for collision-induced dissociation (CID) datasets. Peptides with <7 amino acids were excluded during searching. Peptides that matched the built-in contaminants in MaxQuant, yeast proteins, or the annotated fly proteome were removed with PeptideMatchCMD v1.0159, allowing mismatches between leucine and isoleucine.
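The last filtering step above boils down to discarding any candidate peptide that already occurs in a known protein when leucine and isoleucine are treated as interchangeable (the two residues are isobaric and cannot be distinguished by MS). Below is a simplified Python sketch of that idea; the actual analysis used PeptideMatchCMD, and the sequences here are hypothetical.

```python
def il_normalize(pep):
    # Leucine (L) and isoleucine (I) are isobaric and indistinguishable by MS,
    # so treat them as equivalent when matching peptides.
    return pep.replace("I", "L")

def remove_known_peptides(candidates, known_proteins):
    # Drop any candidate peptide that occurs (I/L-equivalently) in a known protein.
    known = [il_normalize(p) for p in known_proteins]
    return [pep for pep in candidates
            if not any(il_normalize(pep) in prot for prot in known)]

# Hypothetical sequences, for illustration only.
known_proteins = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]
candidates = ["QISFVKSHFSR",   # present in the known protein        -> removed
              "QLSFVKSHFSR",   # differs only by an I/L substitution -> removed
              "MAGWNSPTAQR"]   # not found in any known protein      -> kept
print(remove_known_peptides(candidates, known_proteins))
```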
The remaining peptides were mapped to the peptides encoded by all the putative canonical uORFs. uORFs with uniquely mapped peptides were kept as MS-supported uORFs.

### Calculation of the Kozak score

For each species, we retrieved the six nucleotides upstream of the CDS start codons and the one nucleotide downstream of these codons and built a position probability matrix (PWM) as the Kozak sequence context. We then determined the Kozak score for the start codon of a uORF or CDS, as well as for each AUG in 3′ UTRs, by calculating the log-odds ratio of their flanking sequences using the above-derived PWM32. The Euclidean distance between two PWMs of uORF or CDS Kozak sequences was calculated using TFBSTools160.

### Effect of uORF variation on CDS translation in human populations

The RNA-Seq and ribosome profiling data from lymphoblastoid cell lines (LCLs) were obtained from a previous study115. High-quality genotyping data from 60 LCLs were obtained from the 1000 Genomes Project98. After pre-processing, the RNA-Seq reads and RPFs were mapped to the human reference genome with STAR161. Reads mapped to the CDS region of each protein-coding gene were tabulated with htseq-count162. CDS read counts were normalized across the different cell lines with DESeq2163, separately for RNA-Seq and RPFs. The translation efficiency of a gene in a sample was calculated as the ratio of the normalized RPF read count to the normalized RNA-Seq read count. To control for false positives, only SNPs that disrupt the canonical and noncanonical uORFs annotated by McGillivray et al.17 were analyzed. SNPs with a minor allele frequency of <5% among the 60 LCLs were excluded. A SNP was classified as a canonical uORF variant if either the wild-type or the mutant start codon is AUG, and as noncanonical otherwise. For each uORF variant, a linear regression was performed between the CDS translational efficiency and the number of non-uORF alleles (0, 1, or 2) across the different LCLs.

### Experimental verification of uORF variants

The effects of uORF variants were assayed with dual-luciferase reporter assays (psiCHECK-2 vector, Promega). HEK293FT cells were purchased from the Cell Bank of the Chinese Academy of Sciences. RNA was extracted using the TRIzol reagent (15596018, Thermo Fisher Scientific), and cDNAs were synthesized using the PrimeScript First-strand cDNA Synthesis Kit (6110B, TaKaRa). For each uORF variant, the wild-type (WT) or mutated 5′ UTR was cloned from the cDNAs by PCR. All the primers used for cloning the 5′ UTR fragments are listed in Supplementary Data 7. The reporter plasmid was linearized using NheI (R3131S, NEB). The linearized product and the 5′ UTR sequence were assembled using the NEBuilder HiFi DNA Assembly Cloning Kit (E5520S, NEB). Plasmids were extracted using a QIAGEN Miniprep kit (27106, QIAGEN) according to the manufacturer’s instructions. The constructed vectors were transfected into HEK293FT cells using Lipofectamine 3000 Transfection Reagent (L3000015, Thermo Fisher Scientific). The cells were cultivated in Dulbecco’s modified Eagle’s medium (DMEM) with 10% FBS for 32 h. The psiCHECK-2 dual-luciferase reporter assay system (Promega) was then used to measure the Renilla luciferase levels driven by the WT or mutant 5′ UTR, which were normalized to the firefly luciferase signal as an internal control. At least four biological replicates were performed for each WT or mutated 5′ UTR plasmid.
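As a minimal illustration of the Kozak scoring described in the "Calculation of the Kozak score" subsection above, the Python sketch below builds a position probability matrix from a handful of hypothetical CDS start-codon contexts (6 nt upstream plus 1 nt downstream) and scores candidate contexts as log-odds against a uniform background. The example contexts, the pseudocount, and the uniform background are illustrative assumptions; the study derived the matrix from all annotated CDS start codons of each species and computed log-odds as in ref. 32.

```python
import numpy as np

BASES = "ACGU"

def build_ppm(contexts, pseudocount=1.0):
    # Position probability matrix over the 7 positions flanking the start codon
    # (6 nt upstream + 1 nt downstream), estimated from annotated CDS contexts.
    counts = np.full((4, len(contexts[0])), pseudocount)
    for ctx in contexts:
        for j, base in enumerate(ctx):
            counts[BASES.index(base), j] += 1
    return counts / counts.sum(axis=0)

def kozak_score(ctx, ppm, background=0.25):
    # Log-odds of the observed context against a uniform background (an assumption;
    # the study computed log-odds as in ref. 32).
    return float(sum(np.log2(ppm[BASES.index(b), j] / background)
                     for j, b in enumerate(ctx)))

# Hypothetical CDS start-codon contexts (illustrative only).
cds_contexts = ["GCCACCG", "GCCGCCA", "ACCACCG", "GCAACCG", "GCCACCA"]
ppm = build_ppm(cds_contexts)
print(kozak_score("GCCACCG", ppm))   # strong, Kozak-like context
print(kozak_score("UUUAUUU", ppm))   # weak, AU-rich context
```

A Kozak-like context scores well above the AU-rich one, mirroring how the start-codon contexts of uORFs and CDSs are compared in the analyses.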
### Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. ## Data availability The putative uORFs and NTEs annotated in this study are available from figshare67 (https://doi.org/10.6084/m9.figshare.9980441.v4). The following public data were analyzed in this study: (1) gene annotations, cDNA sequences and genome sequences from Ensembl Genome Browser (https://www.ensembl.org and http://ensemblgenomes.org); (2) the transcript ends of yeast mRNAs from Gene Expression Omnibus (GEO) under the accession number GSE49026; (3) functional annotation of gene categories from The Gene Ontology Resource (http://geneontology.org); (4) gene expression data in model organisms from previous studies as listed in Supplementary Data 2; (5) Ribo-Seq data from GWIPs-viz database (https://gwips.ucc.ie) and our previous study32; (6) the effective population size reported in previous studies as listed in Supplementary Table 2; (7) single nucleotide polymorphisms from the 1000 Genomes Project (https://www.internationalgenome.org/data) and DGRP2 (http://dgrp2.gnets.ncsu.edu); (8) multiple genome alignments from UCSC Genome Browser (https://genome.ucsc.edu); (9) the annotation of potential functional uORFs in humans from McGillivray et al.17; (10) mass spectrometry datasets from ProteomeCentral (http://proteomecentral.proteomexchange.org) as listed in Supplementary Data 4; 11) RNA-Seq and Ribo-Seq data of human lymphoblastoid cell lines from GEO under the accession number GSE61742 and the Gilad/Pritchard group (http://eqtl.uchicago.edu/RNA_Seq_data). Source Data have been deposited in figshare (https://doi.org/10.6084/m9.figshare.12612068.v2) and are provided with this paper. ## Code availability The data investigated in this study were analyzed using R statistical software (v3.6). The custom scripts used in this study are available from figshare (https://doi.org/10.6084/m9.figshare.12612068.v2). ## References 1. 1. Jackson, R. J., Hellen, C. U. & Pestova, T. V. The mechanism of eukaryotic translation initiation and principles of its regulation. Nat. Rev. Mol. Cell Biol. 11, 113–127 (2010). 2. 2. Sonenberg, N. & Hinnebusch, A. G. Regulation of translation initiation in eukaryotes: mechanisms and biological targets. Cell 136, 731–745 (2009). 3. 3. Ruiz-Orera, J. & Alba, M. M. Translation of small open reading frames: roles in regulation and evolutionary innovation. Trends Genet. 35, 186–198 (2018). 4. 4. Zhang, H., Wang, Y. & Lu, J. Function and evolution of Upstream ORFs in eukaryotes. Trends Biochem. Sci. 44, 782–794 (2019). 5. 5. Hinnebusch, A. G., Ivanov, I. P. & Sonenberg, N. Translational control by 5’-untranslated regions of eukaryotic mRNAs. Science 352, 1413–1416 (2016). 6. 6. Morris, D. R. & Geballe, A. P. Upstream open reading frames as regulators of mRNA translation. Mol. Cell. Biol. 20, 8635–8642 (2000). 7. 7. Wethmar, K. The regulatory potential of upstream open reading frames in eukaryotic gene expression. Wiley Interdiscip. Rev.: RNA 5, 765–768 (2014). 8. 8. Wethmar, K., Smink, J. J. & Leutz, A. Upstream open reading frames: molecular switches in (patho)physiology. Bioessays 32, 885–893 (2010). 9. 9. Medenbach, J., Seiler, M. & Hentze, MatthiasW. Translational control via protein-regulated upstream open reading frames. Cell 145, 902–913 (2011). 10. 10. Orr, M. W., Mao, Y., Storz, G. & Qian, S.-B. Alternative ORFs and small ORFs: shedding light on the dark proteome. Nucleic Acids Res. 48, 1029–1042 (2020). 11. 11. 
Calviello, L. et al. Detecting actively translated open reading frames in ribosome profiling data. Nat. Methods 13, 165–170 (2016). 12. 12. Johnstone, T. G., Bazzini, A. A. & Giraldez, A. J. Upstream ORFs are prevalent translational repressors in vertebrates. EMBO J. 35, 706–723 (2016). 13. 13. Whiffin, N. et al. Characterising the loss-of-function impact of 5’ untranslated region variants in 15,708 individuals. Nat. Commun. 11, 2523 (2020). 14. 14. Brar, G. A. et al. High-resolution view of the yeast meiotic program revealed by ribosome profiling. Science 335, 552–557 (2012). 15. 15. Aspden, J. L. et al. Extensive translation of small open reading frames revealed by Poly-Ribo-Seq. eLife 3, e03528 (2014). 16. 16. Chew, G. L., Pauli, A. & Schier, A. F. Conservation of uORF repressiveness and sequence features in mouse, human and zebrafish. Nat. Commun. 7, 11663 (2016). 17. 17. McGillivray, P. et al. A comprehensive catalog of predicted functional upstream open reading frames in humans. Nucleic Acids Res. 46, 3326–3338 (2018). 18. 18. Niu, R. et al. uORFlight: a vehicle toward uORF-mediated translational regulation mechanisms in eukaryotes. Database 2020, https://doi.org/10.1093/database/baaa007 (2020). 19. 19. Calvo, S. E., Pagliarini, D. J. & Mootha, V. K. Upstream open reading frames cause widespread reduction of protein expression and are polymorphic among humans. Proc. Natl Acad. Sci. USA 106, 7507–7512 (2009). 20. 20. Chen, J. et al. Pervasive functional translation of noncanonical human open reading frames. Science 367, 1140–1146 (2020). 21. 21. Benitez-Cantos, M. S. et al. Translation initiation downstream from annotated start codons in human mRNAs coevolves with the Kozak context. Genome Res. 30, 974–984 (2020). 22. 22. Calviello, L. & Ohler, U. Beyond read-counts: Ribo-seq data analysis to understand the functions of the transcriptome. Trends Genet. 33, 728–744 (2017). 23. 23. Andreev, D. E. et al. Insights into the mechanisms of eukaryotic translation gained with ribosome profiling. Nucleic Acids Res. 45, 513–526 (2017). 24. 24. Brar, G. A. & Weissman, J. S. Ribosome profiling reveals the what, when, where and how of protein synthesis. Nat. Rev. Mol. Cell Biol. 16, 651–664 (2015). 25. 25. Ingolia, N. T. Ribosome profiling: new views of translation, from single codons to genome scale. Nat. Rev. Genet. 15, 205–213 (2014). 26. 26. Ingolia, N. T. Ribosome footprint profiling of translation throughout the genome.Cell 165, 22–33 (2016). 27. 27. Ingolia, N. T., Ghaemmaghami, S., Newman, J. R. & Weissman, J. S. Genome-wide analysis in vivo of translation with nucleotide resolution using ribosome profiling. Science 324, 218–223 (2009). 28. 28. Guenther, U. P. et al. The helicase Ded1p controls use of near-cognate translation initiation codons in 5’ UTRs. Nature 559, 130–134 (2018). 29. 29. Lei, L. et al. Ribosome profiling reveals dynamic translational landscape in maize seedlings under drought stress. Plant J.: Cell Mol. Biol. 84, 1206–1218 (2015). 30. 30. Hsu, P. Y. et al. Super-resolution ribosome profiling reveals unannotated translation events in Arabidopsis. Proc. Natl Acad. Sci. USA 113, E7126–e7135 (2016). 31. 31. Bazin, J. et al. Global analysis of ribosome-associated noncoding RNAs unveils new modes of translational regulation. Proc. Natl Acad. Sci. USA 114, E10018–E10027 (2017). 32. 32. Zhang, H. et al. Genome-wide maps of ribosomal occupancy provide insights into adaptive evolution and regulatory roles of uORFs during Drosophila development. PLoS Biol. 16, e2003903 (2018). 33. 
33. Dunn, J. G., Foo, C. K., Belletier, N. G., Gavis, E. R. & Weissman, J. S. Ribosome profiling reveals pervasive and regulated stop codon readthrough in Drosophila melanogaster. eLife 2, e01179 (2013). 34. 34. Ingolia, N. T., Lareau, L. F. & Weissman, J. S. Ribosome profiling of mouse embryonic stem cells reveals the complexity and dynamics of mammalian proteomes. Cell 147, 789–802 (2011). 35. 35. Stumpf, CraigR. et al. The translational landscape of the mammalian cell cycle. Mol. Cell 52, 574–582 (2013). 36. 36. Fritsch, C. et al. Genome-wide search for novel human uORFs and N-terminal protein extensions using ribosomal footprinting. Genome Res. 22, 2208–2218 (2012). 37. 37. Wang, Y., Zhang, H. & Lu, J. Recent advances in ribosome profiling for deciphering translational regulation. Methods https://doi.org/10.1016/j.ymeth.2019.05.011 (2019). 38. 38. Lee, S. et al. Global mapping of translation initiation sites in mammalian cells at single-nucleotide resolution. Proc. Natl Acad. Sci. USA 109, E2424–E2432 (2012). 39. 39. Garreau de Loubresse, N. et al. Structural basis for the inhibition of the eukaryotic ribosome. Nature 513, 517–522 (2014). 40. 40. Gao, X. et al. Quantitative profiling of initiating ribosomes in vivo. Nat. Methods 12, 147–153 (2015). 41. 41. Resch, A. M., Ogurtsov, A. Y., Rogozin, I. B., Shabalina, S. A. & Koonin, E. V. Evolution of alternative and constitutive regions of mammalian 5’UTRs. BMC Genom. 10, 162 (2009). 42. 42. Chen, J. et al. Kinetochore inactivation by expression of a repressive mRNA. Elife 6, https://doi.org/10.7554/eLife.27417 (2017). 43. 43. Cheng, Z. et al. Pervasive, coordinated protein-level changes driven by transcript isoform switching during meiosis. Cell 172, 910–923.e916 (2018). 44. 44. Kurihara, Y. et al. Transcripts from downstream alternative transcription start sites evade uORF-mediated inhibition of gene expression in Arabidopsis. Proc. Natl Acad. Sci. USA 115, 7831–7836 (2018). 45. 45. Yang, Y. F. et al. Trans-splicing enhances translational efficiency in C. elegans. Genome Res. 27, 1525–1535 (2017). 46. 46. Sidrauski, C., McGeachy, A. M., Ingolia, N. T. & Walter, P. The small molecule ISRIB reverses the effects of eIF2α phosphorylation on translation and stress granule assembly. eLife 4, e05033 (2015). 47. 47. Andreev, D. E. et al. Translation of 5’ leaders is pervasive in genes resistant to eIF2 repression. eLife 4, e03971 (2015). 48. 48. Hinnebusch, A. G. Translational regulation of GCN4 and the general amino acid control of yeast. Annu. Rev. Microbiol 59, 407–450 (2005). 49. 49. Young, S. K., Willy, J. A., Wu, C., Sachs, M. S. & Wek, R. C. Ribosome reinitiation directs gene-specific translation and regulates the integrated stress response. J. Biol. Chem. 290, 28257–28271 (2015). 50. 50. Vattem, K. M. & Wek, R. C. Reinitiation involving upstream ORFs regulates ATF4 mRNA translation in mammalian cells. Proc. Natl Acad. Sci. USA 101, 11269–11274 (2004). 51. 51. Xu, G. et al. Global translational reprogramming is a fundamental layer of immune regulation in plants. Nature 545, 487–490 (2017). 52. 52. Andreev, D. E. et al. Oxygen and glucose deprivation induces widespread alterations in mRNA translation within 20 minutes. Genome Biol. 16, 90 (2015). 53. 53. Andreev, D. E. et al. TASEP modelling provides a parsimonious explanation for the ability of a single uORF to derepress translation during the integrated stress response. Elife 7, https://doi.org/10.7554/eLife.32563 (2018). 54. 54. Gaba, A., Jacobson, A. & Sachs, M. S. 
Ribosome occupancy of the yeast CPA1 upstream open reading frame termination codon modulates nonsense-mediated mRNA decay. Mol. Cell 20, 449–460 (2005). 55. 55. Gerashchenko, M. V., Lobanov, A. V. & Gladyshev, V. N. Genome-wide ribosome profiling reveals complex translational regulation in response to oxidative stress. Proc. Natl Acad. Sci. USA 109, 17394–17399 (2012). 56. 56. Kozak, M. Possible role of flanking nucleotides in recognition of the AUG initiator codon by eukaryotic ribosomes. Nucleic Acids Res. 9, 5233–5252 (1981). 57. 57. Lynch, M., Scofield, D. G. & Hong, X. The evolution of transcription-initiation sites. Mol. Biol. Evol. 22, 1137–1146 (2005). 58. 58. Neafsey, D. E. & Galagan, J. E. Dual modes of natural selection on upstream open reading frames. Mol. Biol. Evol. 24, 1744–1751 (2007). 59. 59. Rogozin, I. B., Kochetov, A. V., Kondrashov, F. A., Koonin, E. V. & Milanesi, L. Presence of ATG triplets in 5’ untranslated regions of eukaryotic cDNAs correlates with a ‘weak’ context of the start codon. Bioinformatics 17, 890–900 (2001). 60. 60. Churbanov, A., Rogozin, I. B., Babenko, V. N., Ali, H. & Koonin, E. V. Evolutionary conservation suggests a regulatory function of AUG triplets in 5’-UTRs of eukaryotic genes. Nucleic Acids Res. 33, 5512–5520 (2005). 61. 61. von Bohlen, A. E. et al. A mutation creating an upstream initiation codon in the SOX9 5’ UTR causes acampomelic campomelic dysplasia. Mol. Genet. Genom. Med. 5, 261–268 (2017). 62. 62. Schulz, J. et al. Loss-of-function uORF mutations in human malignancies. Sci. Rep. 8, 2395 (2018). 63. 63. Barbosa, C., Peixeiro, I. & Romao, L. Gene expression regulation by upstream open reading frames and human disease. PLoS Genet. 9, e1003529 (2013). 64. 64. Cenik, C. et al. Integrative analysis of RNA, translation, and protein levels reveals distinct regulatory variation across humans. Genome Res. 25, 1610–1621 (2015). 65. 65. Wiestner, A., Schlemper, R. J., van der Maas, A. P. & Skoda, R. C. An activating splice donor mutation in the thrombopoietin gene causes hereditary thrombocythaemia. Nat. Genet. 18, 49–52 (1998). 66. 66. Liu, L. et al. Mutation of the CDKN2A 5’ UTR creates an aberrant initiation codon and predisposes to melanoma. Nat. Genet. 21, 128–132 (1999). 67. 67. Zhang, H. et al. The annotation of upstream open reading frames and N-terminal extensions in 478 eukaryotes. figshare. https://doi.org/10.6084/m9.figshare.9980441.v4 (2020). 68. 68. Sengupta, S. & Higgs, P. G. Pathways of genetic code evolution in ancient and modern organisms. J. Mol. Evol. 80, 229–243 (2015). 69. 69. Baranov, P. V., Atkins, J. F. & Yordanova, M. M. Augmented genetic decoding: global, local and temporal alterations of decoding processes and codon meaning. Nat. Rev. Genet. 16, 517–529 (2015). 70. 70. Heaphy, S. M., Mariotti, M., Gladyshev, V. N., Atkins, J. F. & Baranov, P. V. Novel ciliate genetic code variants including the reassignment of all three stop codons to sense codons in Condylostoma magnum. Mol. Biol. evolution 33, 2885–2889 (2016). 71. 71. Swart, E. C., Serra, V., Petroni, G. & Nowacki, M. Genetic codes with no dedicated stop codon: context-dependent translation termination. Cell 166, 691–702 (2016). 72. 72. Záhonová, K., Kostygov, A. Y., Ševčíková, T., Yurchenko, V. & Eliáš, M. An unprecedented non-canonical nuclear genetic code with all three termination codons reassigned as sense codons. Curr. Biol. 26, 2364–2369 (2016). 73. 73. Bachvaroff, T. R. 
A precedented nuclear genetic code with all three termination codons reassigned as sense codons in the syndinean Amoebophrya sp. ex Karlodinium veneficum. PLoS One 14, e0212912 (2019). 74. 74. Lobanov, A. V. et al. Position-dependent termination and widespread obligatory frameshifting in Euplotes translation. Nat. Struct. Mol. Biol. 24, 61–68 (2017). 75. 75. Kumar, S., Stecher, G., Suleski, M. & Hedges, S. B. TimeTree: a resource for timelines, timetrees, and divergence times. Mol. Biol. Evol. 34, 1812–1819 (2017). 76. 76. Nilsen, T. W. Trans-splicing of nematode premessenger RNA. Annu. Rev. Microbiol. 47, 413–440 (1993). 77. 77. Reuter, M., Engelstädter, J., Fontanillas, P. & Hurst, L. D. A test of the null model for 5’ UTR evolution based on GC content. Mol. Biol. Evolution 25, 801–804 (2008). 78. 78. Clote, P., Ferré, F., Kranakis, E. & Krizanc, D. Structural RNA has lower folding energy than random RNA of the same dinucleotide frequency. RNA 11, 578–591 (2005). 79. 79. Workman, C. & Krogh, A. No evidence that mRNAs have lower folding free energies than random sequences with the same dinucleotide distribution. Nucleic Acids Res. 27, 4816–4822 (1999). 80. 80. Charlesworth, B., Coyne, J. A. & Barton, N. H. The relative rates of evolution of sex chromosomes and autosomes. Am. Nat. 130, 113–146 (1987). 81. 81. Meisel, R. P. & Connallon, T. The faster-X effect: integrating theory and data. Trends Genet.: TIG 29, 537–544 (2013). 82. 82. Lu, J. & Wu, C.-I. Weak selection revealed by the whole-genome comparison of the X chromosome and autosomes of human and chimpanzee. Proc. Natl Acad. Sci. USA 102, 4063–4067 (2005). 83. 83. Mank, J. E., Axelsson, E. & Ellegren, H. Fast-X on the Z: Rapid evolution of sex-linked genes in birds. Genome Res. 17, 618–624 (2007). 84. 84. Ye, Y. et al. Analysis of human upstream open reading frames and impact on gene expression. Hum. Genet. 134, 605–612 (2015). 85. 85. Michel, A. M. et al. GWIPS-viz: 2018 update. Nucleic Acids Res. 46, D823–D830 (2017). 86. 86. Eisenberg, E. & Levanon, E. Y. Human housekeeping genes, revisited. Trends Genet. 29, 569–574 (2013). 87. 87. dos Reis, M. & Wernisch, L. Estimating translational selection in eukaryotic genomes. Mol. Biol. Evol. 26, 451–461 (2009). 88. 88. Zhang, J. & Yang, J.-R. Determinants of the rate of protein sequence evolution. Nat. Rev. Genet. 16, 409–420 (2015). 89. 89. Charlesworth, B. Effective population size and patterns of molecular evolution and variation. Nat. Rev. Genet. 10, 195–205 (2009). 90. 90. Zhang, H. et al. Combinatorial regulation of gene expression by uORFs and microRNAs in Drosophila. Sci. Bull. https://doi.org/10.1016/j.scib.2020.10.012 (2020). 91. 91. Felsenstein, J. Phylogenies and the comparative method. Am. Naturalist 125, 1–15 (1985). 92. 92. Haller, B. C. & Messer, P. W. asymptoticMK: a web-based tool for the asymptotic McDonald-Kreitman test. G3 7, 1569–1575 (2017). 93. 93. Messer, P. W. & Petrov, D. A. Frequent adaptation and the McDonald-Kreitman test. Proc. Natl Acad. Sci. USA 110, 8615–8620 (2013). 94. 94. Gonzalez-Perez, A., Sabarinathan, R. & Lopez-Bigas, N. Local determinants of the mutational landscape of the human genome. Cell 177, 101–114 (2019). 95. 95. Kitano, S., Kurasawa, H. & Aizawa, Y. Transposable elements shape the human proteome landscape via formation of cis-acting upstream open reading frames. Genes Cells 23, 274–284 (2018). 96. 96. Stark, A. et al. Discovery of functional elements in 12 Drosophila genomes using evolutionary signatures. Nature 450, 219 (2007). 97. 97. 
Hayashi, N. et al. Identification of Arabidopsis thaliana upstream open reading frames encoding peptide sequences that cause ribosomal arrest. Nucleic Acids Res. 45, 8844–8858 (2017). 98. 98. Auton, A. et al. A global reference for human genetic variation. Nature 526, 68–74 (2015). 99. 99. Mackay, T. F. et al. The Drosophila melanogaster genetic reference panel. Nature 482, 173–178 (2012). 100. 100. Lin, M. F., Jungreis, I. & Kellis, M. PhyloCSF: a comparative genomics method to distinguish protein coding and non-coding regions. Bioinformatics 27, i275–i282 (2011). 101. 101. Xing, X. et al. Qualitative and quantitative analysis of the adult Drosophila melanogaster proteome. Proteomics 14, 286–290 (2014). 102. 102. Casas-Vila, N. et al. The developmental proteome of Drosophila melanogaster. Genome Res. 27, 1273–1285 (2017). 103. 103. Ashley, J. et al. Retrovirus-like Gag protein Arc1 binds RNA and traffics across Synaptic Boutons. Cell 172, 262–274 e211 (2018). 104. 104. Kuznetsova, K. G. et al. Proteogenomics of Adenosine-to-Inosine RNA Editing in the Fruit Fly. J. Proteome Res. 17, 3889–3903 (2018). 105. 105. Sabbadin, F. et al. An ancient family of lytic polysaccharide monooxygenases with roles in arthropod development and biomass digestion. Nat. Commun. 9, 756 (2018). 106. 106. Sample, P. J. et al. Human 5’ UTR design and variant effect prediction from a massively parallel translation assay. Nat. Biotechnol. 37, 803–809 (2019). 107. 107. Noderer, W. L. et al. Quantitative analysis of mammalian translation initiation sites by FACS-seq. Mol. Syst. Biol. 10, 748–748 (2014). 108. 108. Duret, L. & Galtier, N. Biased gene conversion and the evolution of mammalian genomic landscapes. Annu. Rev. Genom. Hum. Genet 10, 285–311 (2009). 109. 109. Katju, V. & Bergthorsson, U. Old trade, new tricks: insights into the spontaneous mutation process from the partnering of classical mutation accumulation experiments with high-throughput genomic approaches. Genome Biol. Evol. 11, 136–165 (2019). 110. 110. Gentles, A. J. & Karlin, S. Genome-scale compositional comparisons in eukaryotes. Genome Res. 11, 540–546 (2001). 111. 111. Cavener, D. R. Comparison of the consensus sequence flanking translational start sites in Drosophila and vertebrates. Nucleic Acids Res. 15, 1353–1361 (1987). 112. 112. Hernandez, G., Osnaya, V. G. & Perez-Martinez, X. Conservation and variability of the AUG initiation codon context in eukaryotes. Trends Biochem. Sci. https://doi.org/10.1016/j.tibs.2019.07.001 (2019). 113. 113. Schleich, S. et al. DENR-MCT-1 promotes translation re-initiation downstream of uORFs to control tissue growth. Nature 512, 208–212 (2014). 114. 114. Spealman, P. et al. Conserved non-AUG uORFs revealed by a novel regression analysis of ribosome profiling data. Genome Res. 28, 214–222 (2018). 115. 115. Battle, A. et al. Genomic variation. Impact of regulatory variation from RNA to protein. Science 347, 664–667 (2015). 116. 116. Signor, S. A. & Nuzhdin, S. V. The evolution of gene expression in cis and trans. Trends Genet. 34, 532–544 (2018). 117. 117. McManus, C. J., May, G. E., Spealman, P. & Shteyman, A. Ribosome profiling reveals post-transcriptional buffering of divergent gene expression in yeast. Genome Res. 24, 422–430 (2014). 118. 118. Artieri, C. G. & Fraser, H. B. Evolution at two levels of gene expression in yeast. Genome Res. 24, 411–421 (2014). 119. 119. Wang, S. H., Hsiao, C. J., Khan, Z. & Pritchard, J. K. 
Post-translational buffering leads to convergent protein expression levels between primates. Genome Biol. 19, 83–83 (2018). 120. 120. Khan, Z. et al. Primate transcript and protein expression levels evolve under compensatory selection pressures. Science 342, 1100 (2013). 121. 121. Lin, Y. et al. Impacts of uORF codon identity and position on translation regulation. Nucleic Acids Res. (2019). 122. 122. Ivanov, I. P. et al. Polyamine control of translation elongation regulates start site selection on antizyme inhibitor mRNA via ribosome queuing. Mol. Cell 70, 254–264.e256 (2018). 123. 123. Mackowiak, S. D. et al. Extensive identification and analysis of conserved small ORFs in animals. Genome Biol. 16, 179 (2015). 124. 124. van der Horst, S., Snel, B., Hanson, J. & Smeekens, S. Novel pipeline identifies new upstream ORFs and non-AUG initiating main ORFs with conserved amino acid sequences in the 5’ leader of mRNAs Arabidopsis thaliana. Rna 25, 292–304 (2019). 125. 125. Kim, J. H., Park, S. M., Park, J. H., Keum, S. J. & Jang, S. K. eIF2A mediates translation of hepatitis C viral mRNA under stress conditions. EMBO J. 30, 2454–2464 (2011). 126. 126. Starck, S. R. et al. Translation from the 5’ untranslated region shapes the integrated stress response. Science 351, aad3867 (2016). 127. 127. Sendoel, A. et al. Translation from unconventional 5’ start sites drives tumour initiation. Nature 541, 494–499 (2017). 128. 128. Burki, F., Roger, A. J., Brown, M. W. & Simpson, A. G. B. The new tree of eukaryotes. Trends Ecol. Evol. 35, 43–55 (2020). 129. 129. Xu, G. et al. uORF-mediated translation allows engineered plant disease resistance without fitness costs. Nature 545, 491–494 (2017). 130. 130. Zhang, H. et al. Genome editing of upstream open reading frames enables translational control in plants. Nat. Biotechnol. 36, 894–898 (2018). 131. 131. Ferreira, J. P., Overton, K. W. & Wang, C. L. Tuning gene expression with synthetic upstream open reading frames. Proc. Natl Acad. Sci. 110, 11284 (2013). 132. 132. Aken, B. L. et al. The Ensembl gene annotation system. Database 2016, baw093 (2016). 133. 133. Park, D., Morris, A. R., Battenhouse, A. & Iyer, V. R. Simultaneous mapping of transcript ends at single-nucleotide resolution and identification of widespread promoter-associated non-coding RNA governed by TATA elements. Nucleic Acids Res. 42, 3736–3749 (2014). 134. 134. Coyne, R. S. et al. Comparative genomics of the pathogenic ciliate Ichthyophthirius multifiliis, its free-living relatives and a host species provide insights into adoption of a parasitic lifestyle and prospects for disease control. Genome Biol. 12, R100 (2011). 135. 135. Jiang, M., Anderson, J., Gillespie, J. & Mayne, M. uShuffle: a useful tool for shuffling biological sequences while preserving the k-let counts. BMC Bioinform. 9, 192 (2008). 136. 136. Camacho, C. et al. BLAST+: architecture and applications. BMC Bioinform. 10, 421 (2009). 137. 137. Edgar, R. C. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 32, 1792–1797 (2004). 138. 138. Rice, P., Longden, I. & Bleasby, A. EMBOSS: the European Molecular Biology Open Software Suite. Trends Genet 16, 276–277 (2000). 139. 139. Yang, Z. PAML 4: phylogenetic analysis by maximum likelihood. Mol. Biol. Evol. 24, 1586–1591 (2007). 140. 140. Hinchliff, C. E. et al. Synthesis of phylogeny and taxonomy into a comprehensive tree of life. Proc. Natl Acad. Sci. USA 112, 12764–12769 (2015). 141. 141. Simao, F. A., Waterhouse, R. 
M., Ioannidis, P., Kriventseva, E. V. & Zdobnov, E. M. BUSCO: assessing genome assembly and annotation completeness with single-copy orthologs. Bioinformatics 31, 3210–3212 (2015). 142. 142. Capella-Gutiérrez, S., Silla-Martínez, J. M. & Gabaldón, T. trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics 25, 1972–1973 (2009). 143. 143. Paradis, E., Claude, J. & Strimmer, K. APE: analyses of phylogenetics and evolution in R language. Bioinformatics 20, 289–290 (2004). 144. 144. Haeussler, M. et al. The UCSC Genome Browser database: 2019 update. Nucleic Acids Res. 47, D853–d858 (2019). 145. 145. Kim, M. S. et al. A draft map of the human proteome. Nature 509, 575–581 (2014). 146. 146. Kimura, M. Diffusion models in population genetics. J. Appl. Probab (1964). 147. 147. Kronja, I. et al. Widespread changes in the posttranscriptional landscape at the Drosophila oocyte-to-embryo transition. Cell Rep. 7, 1495–1508 (2014). 148. 148. Dunn, J. G. & Weissman, J. S. Plastid: nucleotide-resolution analysis of next-generation sequencing and genomics data. BMC Genom. 17, 958 (2016). 149. 149. Kent, W. J., Zweig, A. S., Barber, G., Hinrichs, A. S. & Karolchik, D. BigWig and BigBed: enabling browsing of large distributed datasets. Bioinformatics 26, 2204–2207 (2010). 150. 150. Alexa, A. & Rahnenfuhrer, J. topGO: enrichment analysis for gene ontology. R package (2019). 151. 151. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc.: Ser. B (Methodol.) 57, 289–300 (1995). 152. 152. Afgan, E. et al. The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2018 update. Nucleic Acids Res. 46, W537–W544 (2018). 153. 153. Slater, G. S. & Birney, E. Automated generation of heuristics for biological sequence comparison. BMC Bioinform. 6, 31 (2005). 154. 154. Cingolani, P. et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly 6, 80–92 (2012). 155. 155. Vizcaíno, J. A. et al. ProteomeXchange provides globally coordinated proteomics data submission and dissemination. Nat. Biotechnol. 32, 223–226 (2014). 156. 156. Tyanova, S., Temu, T. & Cox, J. The MaxQuant computational platform for mass spectrometry-based shotgun proteomics. Nat. Protoc. 11, 2301–2319 (2016). 157. 157. Röst, H. L. et al. OpenMS: a flexible open-source software platform for mass spectrometry data analysis. Nat. Methods 13, 741–748 (2016). 158. 158. Chi, H. et al. Comprehensive identification of peptides in tandem mass spectra using an efficient open search engine. Nat Biotechnol. https://doi.org/10.1038/nbt.4236 (2018). 159. 159. Chen, C., Li, Z., Huang, H., Suzek, B. E. & Wu, C. H. A fast Peptide Match service for UniProt Knowledgebase. Bioinformatics 29, 2808–2809 (2013). 160. 160. Tan, G. & Lenhard, B. TFBSTools: an R/bioconductor package for transcription factor binding site analysis. Bioinformatics 32, 1555–1556 (2016). 161. 161. Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29, 15–21 (2013). 162. 162. Anders, S., Pyl, P. T. & Huber, W. HTSeq-a Python framework to work with high-throughput sequencing data. Bioinformatics 31, 166–169 (2015). 163. 163. Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 15, 550 (2014). 
## Acknowledgements This work was supported by grants from the National Natural Science Foundation of China (No. 91731301) and the Ministry of Science and Technology of the People’s Republic of China (2016YFA0500800) awarded to J.L. and the China Postdoctoral Science Foundation (2019M650003) to Y.W. H.Z. and Y.W. are supported by grants from the National Postdoctoral Innovative Talents Supporting Program. Some of the analyses were performed on the High-Performance Computing Platform of the Center for Life Sciences. The authors thank the National Center for Protein Sciences at Peking University in Beijing, China, for the assistance with the analysis of mass spectrometry data. ## Author information ### Contributions J.L. supervised the entire project and conceived and designed the research. H.Z., Y.W., X.W., X.T., and C.W. contributed to the data analyses. X.W. and X.T. performed the experiments. J.L., H.Z., and Y.W. wrote the manuscript. ### Corresponding author Correspondence to Jian Lu. ## Ethics declarations ### Competing interests The authors declare no competing interests. Peer review information Nature Communications thanks Pavel Baranov and the other anonymous reviewer for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Zhang, H., Wang, Y., Wu, X. et al. Determinants of genome-wide distribution and evolution of uORFs in eukaryotes. Nat Commun 12, 1076 (2021). https://doi.org/10.1038/s41467-021-21394-y
2021-05-09 22:48:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5999919176101685, "perplexity": 11995.641453470422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00424.warc.gz"}
http://www.randform.org/blog/?p=1115
## na-ming ga-ming

This is not this

In their upcoming paper "The emergence of simple languages in an experimental coordination game" (Proceedings of the National Academy of Sciences, 2007), Reinhard Selten and Massimo Warglien describe an experiment in which the development of languages was evaluated via a bonus system within a game-like environment: At the beginning of the game pairs of gamers were chosen randomly. On a screen each pair was confronted with a sequence of objects like triangles, squares and circles, which could differ by color and filling pattern. The task was to find names for the objects given a limited set of letters. Then one person of a pair was chosen randomly to act as a "sender" for objects from the given repertoire of the sender – and if I understand correctly (I couldn't find an online version of the article but just this report: Sprachevolution im Labor) – the sender had to send the object to his/her partner (the "receiver"), who in turn had to repeat the name of the object. If both names – that of the sender and that of the receiver – were identical, the pair was awarded 10 Eurocents, which resulted in a cost of approx. 2–3 cents per letter for the lab. After one round the pair could choose new names in order to make the partner more successful. Altogether the pairs played 60 rounds. In between the rounds the players were confronted with new objects, so that the question "How do I describe objects which didn't exist before?" became important.

As it turned out, the most successful participants developed a kind of "compositional grammar", i.e. they found "adjectives" to describe their objects: "R" was for "round" and "S" for "square"; "M" for "red" and "Z" for "blue". "RM" then meant "red circle", and "SM" was "red square". For a small recurring sequence this may not always be the most effective strategy, but in the long run it turned out to be the most successful one, since it made the naming of new objects easier: if you have a collection of red circles and squares and add a blue one, then it is immediately clear what "ZS" means. As it turned out, participants who chose a compositional grammar earned on average 3 times more money than those without.
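To see why the compositional strategy generalizes so easily to unseen objects, here is a tiny, purely illustrative Python sketch (not a re-implementation of the experiment) using the letter codes from the post: one letter per attribute means every new shape/color combination already has a name that both players can decode without negotiating a new word.

```python
# Attribute letters taken from the post: R = round, S = square, M = red, Z = blue.
LETTER  = {"round": "R", "square": "S", "red": "M", "blue": "Z"}
MEANING = {v: k for k, v in LETTER.items()}

def compositional_name(shape, color):
    # One letter per attribute: any new shape/color combination gets a name for free.
    return LETTER[shape] + LETTER[color]

def decode(name):
    return [MEANING[ch] for ch in name]

print(compositional_name("round", "red"), decode("RM"))    # RM ['round', 'red']
print(compositional_name("square", "red"), decode("SM"))   # SM ['square', 'red']
# A blue square appears for the first time: "ZS" (or "SZ") is immediately
# interpretable without the pair having to negotiate a brand-new word.
print(decode("ZS"))                                         # ['blue', 'square']
```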
2018-08-20 22:25:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4125381410121918, "perplexity": 1075.6969039460407}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217354.65/warc/CC-MAIN-20180820215248-20180820235248-00539.warc.gz"}
https://events.berkeley.edu/index.php/calendar/sn/?event_ID=128531&date=2019-09-25&tab=all_events
## Topology Seminar: Small knots of large Heegaard genus Seminar | September 25 | 4:10-5 p.m. | 3 Evans Hall William Worden, Rice University Department of Mathematics Building off ideas developed by Agol, we construct a family of hyperbolic knots $K_n$ whose complements contain no closed incompressible surfaces (i.e., they are small) and have Heegaard genus exactly $n$. These are the first known examples of small knots having large Heegaard genus. In the first part of the talk we will describe a beautiful construction due to Agol for building hyperbolic 3-manifolds that decompose into a union of regular ideal octahedra. Using this technology, we will then show how to build the knots $K_n$, and outline the proof showing that they have the desired properties. nickmbmiller@berkeley.edu
2020-01-21 21:19:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7012560963630676, "perplexity": 894.2841322403126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250605075.24/warc/CC-MAIN-20200121192553-20200121221553-00289.warc.gz"}
http://www.physicsforums.com/showthread.php?p=4265978
## The rank of a block matrix as a function of the rank of its submatrices

Hello everyone, I would like to post this problem here in this forum. Having the following block matrix: $$M=\begin{bmatrix} S_1 &C\\ C^T &S_2\\ \end{bmatrix}$$ I would like to find the function $f$ that satisfies $$rank(M)=f(rank(S_1), rank(S_2)).$$ $$S_1$$ and $$S_2$$ are covariance matrices, i.e., symmetric and positive semi-definite. $$C$$ is the cross covariance, which may be positive semi-definite. Can you help me? I sincerely thank you! :) All the best, GoodSpirit

Mentor
Are you sure that this function exists? $$M=\begin{bmatrix} 1 &1\\ 1 &1\\ \end{bmatrix}$$ => rank(M)=1 $$M=\begin{bmatrix} 1 &.5\\ .5 &1\\ \end{bmatrix}$$ => rank(M)=2

Hi mfb, Thank you for answering! :) True! It depends on something more! M is also a covariance matrix, so C is related to S1 and S2. It is a good idea to make the rank of M dependent on the rank of C. The rank of M may also depend on the eigenvalues that are shared by S1 and S2. Thank you again. All the best, GoodSpirit
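mfb's counterexample can be checked numerically in a couple of lines; the snippet below (not part of the original thread) simply confirms that the two matrices have the same diagonal-block ranks, rank(S_1) = rank(S_2) = 1, yet different overall ranks, so no function of the diagonal-block ranks alone can determine rank(M).

```python
import numpy as np

# Both examples have S1 = S2 = [1] (rank-1 diagonal blocks) but different C.
M1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])    # C = 1.0
M2 = np.array([[1.0, 0.5],
               [0.5, 1.0]])    # C = 0.5

print(np.linalg.matrix_rank(M1), np.linalg.matrix_rank(M2))  # 1 2
```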
2013-06-19 18:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27355077862739563, "perplexity": 1446.3582558788237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709006458/warc/CC-MAIN-20130516125646-00001-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.dealii.org/developer/doxygen/deal.II/classSUNDIALS_1_1IDA_1_1AdditionalData.html
Reference documentation for deal.II version Git bef661081b 2019-09-23 12:55:27 -0400 SUNDIALS::IDA< VectorType >::AdditionalData Class Reference #include <deal.II/sundials/ida.h> ## Public Types enum  InitialConditionCorrection { none = 0, use_y_diff = 1, use_y_dot = 2 } ## Public Member Functions AdditionalData (const double initial_time=0.0, const double final_time=1.0, const double initial_step_size=1e-2, const double output_period=1e-1, const double minimum_step_size=1e-6, const unsigned int maximum_order=5, const unsigned int maximum_non_linear_iterations=10, const double absolute_tolerance=1e-6, const double relative_tolerance=1e-5, const bool ignore_algebraic_terms_for_errors=true, const InitialConditionCorrection &ic_type=use_y_diff, const InitialConditionCorrection &reset_type=use_y_diff, const unsigned int maximum_non_linear_iterations_ic=5) void add_parameters (ParameterHandler &prm) ## Public Attributes double initial_time double final_time double initial_step_size double minimum_step_size double absolute_tolerance double relative_tolerance unsigned int maximum_order double output_period bool ignore_algebraic_terms_for_errors InitialConditionCorrection ic_type InitialConditionCorrection reset_type unsigned maximum_non_linear_iterations_ic unsigned int maximum_non_linear_iterations ## Detailed Description ### template<typename VectorType = Vector<double>> class SUNDIALS::IDA< VectorType >::AdditionalData Additional parameters that can be passed to the IDA class. Definition at line 242 of file ida.h. ## ◆ InitialConditionCorrection template<typename VectorType = Vector<double>> IDA is a Differential Algebraic solver. As such, it requires initial conditions also for the first order derivatives. If you do not provide consistent initial conditions, (i.e., conditions for which F(y_dot(0), y(0), 0) = 0), you can ask SUNDIALS to compute initial conditions for you by specifying InitialConditionCorrection for the initial conditions both at the initial_time (ic_type) and after a reset has occurred (reset_type). Enumerator none Do not try to make initial conditions consistent. use_y_diff Compute the algebraic components of y and differential components of y_dot, given the differential components of y. This option requires that the user specifies differential and algebraic components in the function get_differential_components. use_y_dot Compute all components of y, given y_dot. Definition at line 254 of file ida.h. ## ◆ AdditionalData() template<typename VectorType = Vector<double>> SUNDIALS::IDA< VectorType >::AdditionalData::AdditionalData ( const double initial_time = 0.0, const double final_time = 1.0, const double initial_step_size = 1e-2, const double output_period = 1e-1, const double minimum_step_size = 1e-6, const unsigned int maximum_order = 5, const unsigned int maximum_non_linear_iterations = 10, const double absolute_tolerance = 1e-6, const double relative_tolerance = 1e-5, const bool ignore_algebraic_terms_for_errors = true, const InitialConditionCorrection & ic_type = use_y_diff, const InitialConditionCorrection & reset_type = use_y_diff, const unsigned int maximum_non_linear_iterations_ic = 5 ) inline Initialization parameters for IDA. 
Global parameters:
- initial_time: Initial time
- final_time: Final time
- initial_step_size: Initial step size
- output_period: Time interval between each output

Running parameters:
- minimum_step_size: Minimum step size
- maximum_order: Maximum BDF order
- maximum_non_linear_iterations: Maximum number of nonlinear iterations

Error parameters:
- absolute_tolerance: Absolute error tolerance
- relative_tolerance: Relative error tolerance
- ignore_algebraic_terms_for_errors: Ignore algebraic terms for error computations

Initial condition correction parameters:
- ic_type: Initial condition correction type
- reset_type: Initial condition correction type after restart
- maximum_non_linear_iterations_ic: Initial condition Newton max iterations

Definition at line 306 of file ida.h.

## ◆ add_parameters()

template<typename VectorType = Vector<double>> void SUNDIALS::IDA< VectorType >::AdditionalData::add_parameters ( ParameterHandler & prm ) inline

Add all AdditionalData() parameters to the given ParameterHandler object. When the parameters are parsed from a file, the internal parameters are automatically updated. The following parameters are declared:

set Final time = 1.000000
set Initial time = 0.000000
set Time interval between each output = 0.2
subsection Error control
  set Absolute error tolerance = 0.000001
  set Ignore algebraic terms for error computations = true
  set Relative error tolerance = 0.00001
  set Use local tolerances = false
end
subsection Initial condition correction parameters
  set Correction type at initial time = none
  set Correction type after restart = none
  set Maximum number of nonlinear iterations = 5
end
subsection Running parameters
  set Initial step size = 0.1
  set Maximum number of nonlinear iterations = 10
  set Maximum order of BDF = 5
  set Minimum step size = 0.000001
end

These are one-to-one with the options you can pass at construction time. The options you pass at construction time are set as default values in the ParameterHandler object prm. You can later modify them by parsing a parameter file using prm. The values of the parameter will be updated whenever the content of prm is updated. Make sure that this class lives longer than prm. Undefined behaviour will occur if you destroy this class, and then parse a parameter file using prm.

Definition at line 381 of file ida.h.

## ◆ initial_time

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::initial_time

Initial time for the DAE.

Definition at line 463 of file ida.h.

## ◆ final_time

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::final_time

Final time.

Definition at line 468 of file ida.h.

## ◆ initial_step_size

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::initial_step_size

Initial step size.

Definition at line 473 of file ida.h.

## ◆ minimum_step_size

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::minimum_step_size

Minimum step size.

Definition at line 478 of file ida.h.

## ◆ absolute_tolerance

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::absolute_tolerance

Absolute error tolerance for adaptive time stepping.

Definition at line 483 of file ida.h.

## ◆ relative_tolerance

template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::relative_tolerance

Relative error tolerance for adaptive time stepping.
Definition at line 488 of file ida.h.
## ◆ maximum_order template<typename VectorType = Vector<double>> unsigned int SUNDIALS::IDA< VectorType >::AdditionalData::maximum_order Maximum order of BDF. Definition at line 493 of file ida.h.
## ◆ output_period template<typename VectorType = Vector<double>> double SUNDIALS::IDA< VectorType >::AdditionalData::output_period Time period between each output. Definition at line 498 of file ida.h.
## ◆ ignore_algebraic_terms_for_errors template<typename VectorType = Vector<double>> bool SUNDIALS::IDA< VectorType >::AdditionalData::ignore_algebraic_terms_for_errors Ignore algebraic terms for errors. Definition at line 503 of file ida.h.
## ◆ ic_type template<typename VectorType = Vector<double>> InitialConditionCorrection SUNDIALS::IDA< VectorType >::AdditionalData::ic_type Type of correction for initial conditions. If you do not provide consistent initial conditions (i.e., conditions for which $$F(\dot{y}(0), y(0), 0) = 0$$), you can ask SUNDIALS to compute initial conditions for you by using the ic_type parameter at construction time. Notice that you could in principle use this capability to solve for steady-state problems by setting $$\dot{y}$$ to zero, and asking to compute $$y(0)$$ that satisfies $$F(0, y(0), 0) = 0$$; however, the nonlinear solver used inside IDA may not be robust enough for complex problems with several million unknowns. Definition at line 519 of file ida.h.
## ◆ reset_type template<typename VectorType = Vector<double>> InitialConditionCorrection SUNDIALS::IDA< VectorType >::AdditionalData::reset_type Type of correction for initial conditions to be used after a solver restart. If you do not have consistent initial conditions after a restart (i.e., conditions for which $$F(\dot{y}(t_{restart}), y(t_{restart}), t_{restart}) = 0$$), you can ask SUNDIALS to compute the new initial conditions for you by using the reset_type parameter at construction time. Definition at line 531 of file ida.h.
## ◆ maximum_non_linear_iterations_ic template<typename VectorType = Vector<double>> unsigned SUNDIALS::IDA< VectorType >::AdditionalData::maximum_non_linear_iterations_ic Maximum number of iterations for Newton method in IC calculation. Definition at line 536 of file ida.h.
## ◆ maximum_non_linear_iterations template<typename VectorType = Vector<double>> unsigned int SUNDIALS::IDA< VectorType >::AdditionalData::maximum_non_linear_iterations Maximum number of iterations for Newton method during time advancement. Definition at line 541 of file ida.h. The documentation for this class was generated from the following file:
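The flattened reference above can be hard to map back to code, so here is a minimal, hypothetical sketch of how the documented pieces fit together — the constructor with its defaulted arguments, the public `ic_type`/`reset_type` attributes, and `add_parameters()` with a `ParameterHandler`. The surrounding solver setup (vectors, callbacks, parameter file name) is assumed, not taken from this page.

```cpp
// Hedged sketch, not an official example: configure IDA's AdditionalData
// and optionally expose the same settings through a ParameterHandler.
#include <deal.II/base/parameter_handler.h>
#include <deal.II/lac/vector.h>
#include <deal.II/sundials/ida.h>

using namespace dealii;

int main()
{
  using VectorType = Vector<double>;

  // Override a few of the documented defaults via the constructor.
  SUNDIALS::IDA<VectorType>::AdditionalData data(
    /*initial_time     */ 0.0,
    /*final_time       */ 1.0,
    /*initial_step_size*/ 1e-2,
    /*output_period    */ 1e-1);

  // The correction types are public attributes and can be set directly.
  data.ic_type    = SUNDIALS::IDA<VectorType>::AdditionalData::use_y_diff;
  data.reset_type = SUNDIALS::IDA<VectorType>::AdditionalData::use_y_diff;

  // Optionally drive the same settings from a parameter file; the current
  // values become the defaults declared in prm.
  ParameterHandler prm;
  data.add_parameters(prm);
  // prm.parse_input("ida.prm");   // hypothetical file; would update 'data'

  SUNDIALS::IDA<VectorType> solver(data);
  // solver.solve_dae(y, y_dot);   // only after wiring up the required callbacks
  return 0;
}
```

Note the lifetime caveat from the documentation: `data` must outlive `prm` if the parameter file is parsed later.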
2019-09-23 20:00:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44516611099243164, "perplexity": 12220.121338821007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514578201.99/warc/CC-MAIN-20190923193125-20190923215125-00398.warc.gz"}
https://www.tutorialspoint.com/koko-eating-bananas-in-cplusplus
# Koko Eating Bananas in C++

Suppose we have N piles of bananas; the i-th pile has piles[i] bananas. The guards have gone away and will come back in H hours. Koko can choose her bananas-per-hour eating speed K. Each hour she picks some pile and eats K bananas from it. If the pile has fewer than K bananas, she eats all of them and does not eat any more bananas during that hour. Koko likes to eat slowly, but still wants to finish eating all the bananas before the guards come back. We have to find the minimum integer K such that she can eat all the bananas within H hours. For example, if the input is [3,6,7,11] and H = 8, then the output is 4.

To solve this, we will follow these steps −

• Define a method called ok(); this takes the array a, a speed x and the limit H
  • time := 0
  • for i in range 0 to size of a
    • time := time + a[i] / x (integer division)
    • time := time + 1 when a[i] mod x is not 0
  • return true when time <= H
• From the main method do the following
  • n := size of the piles array, low := 1, high := 0
  • for i in range 0 to n − 1
    • high := max of piles[i] and high
  • while low < high
    • mid := low + (high − low) / 2
    • if ok(piles, mid, H) is true, then set high := mid, otherwise low := mid + 1
  • return high

Let us see the following implementation to get a better understanding −

## Example

#include <bits/stdc++.h>
using namespace std;
typedef long long int lli;
class Solution {
public:
   bool ok(vector<int>& a, int x, int H){
      int time = 0;
      for(int i = 0; i < a.size(); i++){
         time += a[i] / x;           // full hours at speed x
         time += (a[i] % x ? 1 : 0); // one extra hour for a partial pile
      }
      return time <= H;
   }
   int minEatingSpeed(vector<int>& piles, int H) {
      int n = piles.size();
      lli low = 1;
      lli high = 0;
      for(int i = 0; i < n; i++) high = max((lli)piles[i], high);
      while(low < high){
         int mid = low + (high - low) / 2;
         if(ok(piles, mid, H)){
            high = mid;
         } else {
            low = mid + 1;
         }
      }
      return high;
   }
};
int main(){
   vector<int> v = {3,6,7,11};
   Solution ob;
   cout << (ob.minEatingSpeed(v, 8));
}

### Input

[3,6,7,11]
8

## Output

4

Published on 10-Apr-2020 09:37:15
2022-01-20 03:23:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18861998617649078, "perplexity": 6901.451330979675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00454.warc.gz"}
https://www.autobusesmex.com/lineas-de-autobuses/primera-plus/
## Primera Plus - Bus schedules

### 2019 departure schedules and ticket prices for Primera Plus

| PRIMERA PLUS route (origin to destination) | 2019 schedules | Ticket price |
| --- | --- | --- |
| AEROPUERTO T1/T2 CDMX to SAN JUAN DEL RÍO | TERMINAL 1: 09:00, 18:15 / TERMINAL 2: 08:40, 17:55 | \$278.00 |
| AEROPUERTO T1/T2 CDMX to QUERÉTARO | TERMINAL 1: 01:20, 03:00, 05:30, 06:30, 07:30, 08:30, 09:30, 10:30, 11:00, 11:30, 12:00, 12:20, 12:40, 13:00, 13:30, 14:00, 14:20, 14:40, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:20, 19:40, 20:00, 20:30, 21:00, 21:30, 22:00, 22:30, 23:00, 23:50 / TERMINAL 2: 01:00, 02:40, 05:10, 06:10, 07:10, 08:10, 09:10, 10:10, 10:40, 11:10, 11:40, 12:00, 12:20, 12:40, 13:10, 13:40, 14:00, 14:20, 14:40, 15:10, 15:40, 16:10, 16:40, 17:10, 17:40, 18:10, 18:40, 19:00, 19:20, 19:40, 20:10, 20:40, 21:10, 21:40, 22:10, 22:40, 23:30 | \$413.00 |
| AEROPUERTO T1/T2 CDMX to CELAYA | TERMINAL 1: 07:30, 15:10, 16:30, 18:00, 20:00, 22:00 / TERMINAL 2: 07:10, 14:50, 16:10, 17:40, 19:40, 21:40 | \$479.00 |
| AGUASCALIENTES to CELAYA | 07:00, 09:00, 11:20, 14:10, 17:00 | \$424.00 |
| AGUASCALIENTES to CENTRAL DEL NORTE CDMX | 00:45, 08:00, 09:30, 11:00, 12:30, 14:00, 15:30, 17:00, 22:15, 23:15, 23:40 | \$609.00 |
| AGUASCALIENTES to GUADALAJARA | 04:30, 07:00, 09:00, 11:00, 13:00, 14:00, 15:00, 16:00, 17:00, 18:00, 20:20, 23:45 | \$356.00 |
| AGUASCALIENTES to IRAPUATO | 07:40, 11:00, 14:30, 16:00, 18:00, 23:45 | \$325.00 |
| AGUASCALIENTES to LEÓN | 05:20, 05:50, 06:20, 06:40, 07:00, 07:20, 07:40, 08:10, 08:40, 09:00, 09:20, 09:50, 10:20, 10:40, 11:00, 11:20, 11:40, 12:10, 12:40, 13:10, 13:40, 14:10, 14:30, 15:00, 15:30, 16:00, 16:30, 17:00, 17:30, 18:00, 18:30, 19:00, 19:30, 20:00, 20:30, 21:00, 22:45, 23:45 | \$206.00 |
| AGUASCALIENTES to MORELIA | 06:40, 07:40, 09:20, 11:00, 14:30, 16:30, 19:00, 23:45 | \$533.00 |
| AGUASCALIENTES to QUERÉTARO | 07:20, 08:40, 11:40, 13:40, 16:00, 18:30, 22:45 | \$495.00 |
| CENTRAL DEL NORTE CDMX to AGUASCALIENTES | 00:30, 08:00, 10:15, 12:00, 13:30, 15:10, 17:00, 18:00, 21:00, 22:00, 23:00, 23:45 | \$609.00 |
| CENTRAL DEL NORTE CDMX to GUADALAJARA | 00:20, 00:40, 01:00, 01:20, 05:20, 06:10, 07:10, 07:40, 08:20, 09:00, 09:40, 10:20, 11:00, 11:40, 12:20, 13:00, 13:40, 14:20, 15:00, 15:40, 16:20, 17:00, 17:40, 18:40, 19:40, 20:40, 22:00, 22:20, 22:40, 23:00, 23:15, 23:30, 23:40, 23:50, 23:59 | \$806.00 |
| CENTRAL DEL NORTE CDMX to GUANAJUATO | 06:00, 08:30, 10:40, 12:20, 13:40, 15:00, 16:20, 17:40, 19:00, 20:20, 23:59 | \$614.00 |
| CENTRAL DEL NORTE CDMX to LEÓN | 00:45, 01:30, 04:40, 05:20, 06:00, 06:40, 07:10, 07:40, 08:10, 08:40, 09:10, 09:40, 10:10, 10:40, 11:10, 11:40, 12:10, 12:40, 13:10, 13:40, 14:10, 14:40, 15:10, 15:40, 16:10, 16:40, 17:10, 17:40, 18:10, 18:40, 19:10, 19:40, 20:10, 20:40, 21:20, 22:00, 22:40, 23:20, 23:59 | \$560.00 |
| CENTRAL DEL NORTE CDMX to MAZATLÁN | 16:40, 20:00, 21:00 | \$1,330.00 |
| CENTRAL DEL NORTE CDMX to PUERTO VALLARTA | 19:00, 20:00, 21:00, 22:00 | \$1,185.00 |
| CENTRAL DEL NORTE CDMX to QUERÉTARO | 04:50, 05:30, 06:00, 06:15, 06:30, 06:45, 07:00, 07:15, 07:30, 07:45, 08:00, 08:15, 08:30, 08:45, 09:00, 09:15, 09:30, 09:45, 10:00, 10:15, 10:30, 10:45, 11:00, 11:15, 11:30, 11:45, 12:00, 12:15, 12:30, 12:45, 13:00, 13:15, 13:30, 13:45, 14:00, 14:15, 14:30, 14:45, 15:00, 15:15, 15:30, 16:00, 16:15, 16:30, 16:45, 17:00, 17:15, 17:30, 17:45, 18:00, 18:15, 18:45, 19:00, 19:15, 19:30, 19:45, 20:00, 20:15, 20:30, 20:50, 21:10, 21:40, 22:10, 22:40, 23:10, 23:40 | \$322.00 |
| GUADALAJARA to AGUASCALIENTES | 07:30, 09:00, 10:30, 12:00, 13:00, 14:00, 15:00, 16:00, 17:00, 18:00, 19:20, 20:40 | \$356.00 |
| GUADALAJARA to CENTRAL DEL NORTE CDMX | 00:20, 00:40, 01:00, 01:30, 03:15, 06:00, 07:00, 07:50, 08:40, 09:30, 10:20, 11:00, 11:30, 12:00, 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:40, 16:20, 17:00, 18:30, 20:00, 21:30, 22:00, 22:20, 22:40, 23:00, 23:10, 23:20, 23:30, 23:40, 23:50, 23:59 | \$806.00 |
| GUADALAJARA to GUANAJUATO | 01:45, 07:40, 09:40, 11:00, 12:00, 13:30, 15:00, 16:30, 17:30, 18:40 | \$479.00 |
| GUADALAJARA to MANZANILLO | 01:15, 02:00, 03:00, 05:30, 06:50, 06:55, 07:00, 08:20, 08:25, 08:30, 09:50, 10:00, 10:50, 11:00, 12:00, 13:00, 14:00, 14:50, 15:00, 16:20, 17:10, 17:20, 18:20, 18:30, 19:30, 21:00, 23:15 | \$482.00 |
| GUANAJUATO to CENTRAL DEL NORTE CDMX | 05:30, 07:00, 09:30, 11:00, 12:00, 13:00, 14:20, 15:40, 17:00, 18:30, 23:59 | \$614.00 |
2019-02-17 19:31:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 141.83893256710843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482478.14/warc/CC-MAIN-20190217192932-20190217214932-00045.warc.gz"}
https://www.jobilize.com/course/section/friction-forces-force-momentum-and-impulse-by-openstax?qcr=www.quizover.com
0.2 Force, momentum and impulse (Page 16/35)

Friction forces

When the surface of one object slides over the surface of another, each body exerts a frictional force on the other. For example, if a book slides across a table, the table exerts a frictional force onto the book and the book exerts a frictional force onto the table (Newton's Third Law). Frictional forces act parallel to surfaces.

A force is not always powerful enough to make an object move; for example, a small applied force might not be able to move a heavy crate. The frictional force opposing the motion of the crate is equal to the applied force but acting in the opposite direction. This frictional force is called static friction. When we increase the applied force (push harder), the frictional force will also increase until the applied force overcomes it. This frictional force can vary from zero (when no other forces are present and the object is stationary) to a maximum that depends on the surfaces. When the applied force is greater than the maximum frictional force, the crate will move. Once the object moves, the frictional force will decrease and remain at that level, which is also dependent on the surfaces, while the objects are moving. This is called kinetic friction. In both cases the maximum frictional force is related to the normal force and can be calculated as follows:

For static friction: $F_f \le \mu_s N$, where $\mu_s$ is the coefficient of static friction and $N$ is the normal force.

For kinetic friction: $F_f = \mu_k N$, where $\mu_k$ is the coefficient of kinetic friction and $N$ is the normal force.

Remember that static friction is present when the object is not moving and kinetic friction while the object is moving. For example, when you drive at constant velocity in a car on a tar road, you have to keep the accelerator pushed in slightly to overcome the friction between the tar road and the wheels of the car. However, while moving at a constant velocity the wheels of the car are rolling, so this is not a case of two surfaces "rubbing" against each other and we are in fact looking at static friction. If you should brake hard, causing the car to skid to a halt, we would be dealing with two surfaces rubbing against each other and hence kinetic friction. The higher the value of the coefficient of friction, the more 'sticky' the surface is, and the lower the value, the more 'slippery' the surface is.

The frictional force ($F_f$) acts in the horizontal direction and can be calculated in a similar way to the normal force as long as there is no movement. If we use the same example as in [link] and we choose the rightward direction as positive,

$\begin{aligned} F_f + F_x &= 0 \\ F_f + (+8) &= 0 \\ F_f &= -8 \\ F_f &= 8\ \mathrm{N}\ \text{to the left} \end{aligned}$

A 50 kg crate is placed on a slope that makes an angle of 30$^\circ$ with the horizontal. The box does not slide down the slope. Calculate the magnitude and direction of the frictional force and the normal force present in this situation.

1. Draw a force diagram and fill in all the details on the diagram. This makes it easier to understand the problem.
2. The normal force acts perpendicular to the surface (and not vertically upwards). Its magnitude is equal to the component of the weight perpendicular to the slope.
Therefore:

$\begin{aligned} N &= F_g \cos 30^\circ \\ &= 490 \cos 30^\circ \\ &\approx 424\ \mathrm{N}\ \text{perpendicular to the surface} \end{aligned}$

3. The frictional force acts parallel to the sloped surface. Its magnitude is equal to the component of the weight parallel to the slope. Therefore:

$\begin{aligned} F_f &= F_g \sin 30^\circ \\ &= 490 \sin 30^\circ \\ &= 245\ \mathrm{N}\ \text{up the slope} \end{aligned}$
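A minimal numeric sketch of the same kind of check (my own addition; the coefficient of static friction used below is an assumed value, not given in the text):

```cpp
// Hypothetical check: does a 50 kg crate stay put on a 30 degree slope?
// Uses N = m*g*cos(theta) and the static-friction limit F_f <= mu_s * N.
#include <cmath>
#include <cstdio>

int main() {
    const double pi    = 3.14159265358979323846;
    const double g     = 9.8;                  // m/s^2, as in the worked example
    const double m     = 50.0;                 // kg
    const double theta = 30.0 * pi / 180.0;    // slope angle in radians
    const double mu_s  = 0.7;                  // assumed coefficient of static friction

    const double N          = m * g * std::cos(theta); // normal force, ~424 N
    const double F_parallel = m * g * std::sin(theta); // weight component along slope, 245 N
    const double F_f_max    = mu_s * N;                // maximum available static friction

    std::printf("N = %.0f N, needed friction = %.0f N, max static friction = %.0f N\n",
                N, F_parallel, F_f_max);
    std::printf("%s\n", F_parallel <= F_f_max ? "crate stays put" : "crate slides");
    return 0;
}
```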
2020-02-21 22:20:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42383646965026855, "perplexity": 899.9128595086206}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145538.32/warc/CC-MAIN-20200221203000-20200221233000-00041.warc.gz"}
https://solvedlib.com/n/phosphomus-32-has-hall-lite-ol-14-0-duys-sturting-with-4,21305325
# Phosphorus-32 has a half-life of 14.0 days. Starting with 4.00 g of 32P, how many grams will remain after 42.0 days?

###### Question:

Phosphorus-32 has a half-life of 14.0 days. Starting with 4.00 g of 32P, how many grams will remain after 42.0 days? Express your answer numerically, in grams.
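The page itself gives no solution. For orientation only (my own addition; the elapsed time in the scanned original is garbled and is read here as 42.0 days, i.e. three half-lives), the standard first-order decay relation gives:

$$m(t) = m_0\left(\tfrac{1}{2}\right)^{t/t_{1/2}},\qquad m = 4.00\ \mathrm{g}\times\left(\tfrac{1}{2}\right)^{42.0/14.0} = 4.00\ \mathrm{g}\times\tfrac{1}{8} = 0.500\ \mathrm{g}.$$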
2022-09-30 00:30:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4763438403606415, "perplexity": 3920.6104697715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00298.warc.gz"}
https://www.hackmath.net/en/math-problem/1064
# Two valves

Two valves together fill a pool in 11 days. After 7 days the first valve was stopped, and the second valve then filled the rest of the pool on its own in another 7 days. How many days would it take each of the valves individually to fill the pool?

Result

x = 25.67 d
y = 19.25 d

#### Solution:

$x=1/(1/11-1/(7/(1-7/11)))=25.67 \ \text{d}$

$\dfrac{1}{x}+\dfrac{1}{y} = \dfrac{1}{11} \ \\ 7 \cdot \dfrac{1}{y} = 1- \dfrac{7}{11} \ \\ y = 7 / (1 - 7/11) = 19.25 \ \text{d} \ \\ \dfrac{1}{x} = \dfrac{1}{11} - \dfrac{1}{y} = \dfrac{1}{11} - \dfrac{1}{19.25} \approx 0.039 \ \\ x = 25.67 \ \text{d}$
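As a quick consistency check (added here; not part of the original solution), the pool is indeed exactly full after the described schedule, and the two individual rates add up to the joint rate:

$$7\cdot\tfrac{1}{11} + 7\cdot\tfrac{1}{19.25} = 0.636 + 0.364 = 1, \qquad \tfrac{1}{25.67} + \tfrac{1}{19.25} \approx \tfrac{1}{11}.$$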
2020-05-29 01:05:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5347612500190735, "perplexity": 3003.273759360691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00172.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1179.37059
Zbl 1179.37059

Khanin, K.; Teplinsky, A.

Herman's theory revisited. (English) [J] Invent. Math. 178, No. 2, 333-344 (2009). ISSN 0020-9910; ISSN 1432-1297/e

Authors' abstract: We prove that a $C^{2+\alpha}$-smooth orientation-preserving circle diffeomorphism with rotation number in Diophantine class $D_{\delta}$, $0\leq \delta < \alpha \leq 1$, $\alpha - \delta \neq 1$, is $C^{1+\alpha - \delta}$-smoothly conjugate to a rigid rotation. This is the first sharp result on the smoothness of the conjugacy. We also derive the most precise version of Denjoy's inequality for such diffeomorphisms. [Ljubiša Kocić (Niš)]

MSC 2000: *37E10 Maps of the circle

Keywords: Diophantine class; cross ratio distortion; circle diffeomorphisms; Denjoy's type inequality

Cited in: Zbl 1224.37022
2013-05-21 01:40:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283208012580872, "perplexity": 9965.95927683733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699632815/warc/CC-MAIN-20130516102032-00093-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/901765/hypothesis-testing-help-verify-and-interpret-the-solution
# Hypothesis testing. Help verify and interpret the solution.

This is a question from Statistical Theory I have encountered. I have almost solved it, but have some trouble interpreting the solution. Something seems weird, and I am not sure whether I am entirely correct.

X is a random variable with density $\frac3{\theta^3} x^2 \,{1}_{(0,\theta)}$. We want to test the hypotheses: $H_0: \theta = 1$ vs. $H_1: \theta = 1.1$ with significance level $\alpha = P(\text{reject }H_0 \mid H_0 \text{ is true})$. We gather $n = 100$ observations.

Okay, we perform the likelihood ratio test: $$\Lambda = \frac{L(x_i,\theta_1)}{L(x_i,\theta_0)} = \frac{(\frac{3}{1.1^3})^n(\Pi x_i)^2\, 1_{(0,1.1)}}{3^n(\Pi x_i)^2\, 1_{(0,1)}} = \begin{cases} c_n, & \text{if } \max x_i \leq 1 \\ +\infty, & \text{if } \max x_i > 1 \\ \end{cases}$$ where $c_n = \frac{1}{1.1^{3n}}$, so $c = c_{100}$.

I run into trouble when trying to match the significance level. The test should go like this: reject $H_0$ if $\Lambda > C = C(\alpha)$. If $C < c$ we always reject $H_0$; therefore the significance level is 1. If $C > c$ we reject the null hypothesis only if $\max x_i > 1$. But under $H_0$ this never happens, therefore $\alpha = 0$.

The power of the test is $\pi = 1 - P(\text{do not reject } H_0 \mid H_1 \text{ is true})$, which is either $1$ or some other constant, which I have calculated to be $1-0.75^n$, depending on whether $C$ is above or below $c$.

So, first of all, can somebody please verify whether I have made any significant mistakes? (I may have gotten some number wrong, but am I conceptually correct?) How do I calculate $C = C(\alpha)$? What is the precise significance and power of this test for, say, $\alpha = 0.01$? The fact that the test is so "$\alpha$-independent" makes me suspicious.

OK, you found the ratio correctly. Now you need to construct the critical region in the form $\Omega = \{x:\Lambda(x) > C_{\alpha}\}$, where $C_{\alpha}$ corresponds to your significance level $\alpha$. Let's vary $C$ and look at what we have.

If $C=0$, then our critical region consists of all possible $x$, i.e. $\Omega_1 = [0, 1.1]$. In this case the probability of a Type I error equals 1. And we have the same result for all $C \in [0,c_n)$.

If $C \ge c_n$, then our critical region consists only of $x$ with $\max x_i \ge 1$, i.e. $\Omega_2 = \{ x: \max x_i \ge 1 \}$. In this case the probability of a Type I error equals 0.

So we see that neither $\Omega_1$ nor $\Omega_2$ is the critical region we are looking for, even though we have gone through all possible values of $C$. In this case a randomization process can be applied: we consider the critical function $$\varphi \left( x \right) = \left\{ \begin{gathered} 1, \text{ for } x \in \Omega_2 \\ \alpha, \text{ for } x \in \Omega_1 \setminus \Omega_2 \\ \end{gathered} \right.$$ If $x\in \Omega_2$, then $H_0$ is rejected. If $x \in \Omega_1 \setminus \Omega_2$, then $H_0$ is rejected with probability $\frac{\alpha-\alpha_0}{p_0}$, where $\alpha_0 = \Pr(\Omega_2 \mid H_0)=0$ and $p_0 = \Pr(\Omega_1 \setminus \Omega_2 \mid H_0) = \Pr(\Omega_1 \mid H_0) - \Pr(\Omega_2 \mid H_0)=1$. Note also that you accept $H_0$ with probability 1 on $\Omega \setminus \Omega_1 = \varnothing$, as $\Omega_1$ contains all possible $x$ in our case. This criterion is proved to be optimal in the sense of minimizing the probability of a Type II error.

• So, if I understood correctly, in this case I reject $H_0$ when it's obvious it's not true (i.e. I have observed something impossible under $H_0$), and otherwise I flip an $\alpha$ coin?
• Note also that in the general case, after we have found the "nearest" (in terms of the ratio) $\Omega_1$ and $\Omega_2$ such that $\Pr(\Omega_2) < \alpha \le \Pr(\Omega_1)$, the critical function would be $\varphi(x) = 1$ for $x \in \Omega_2$, $\varphi(x) = (\alpha - \alpha_0)/p_0$ for $x \in \Omega_1 \setminus \Omega_2$, and $\varphi(x) = 0$ for $x \in \Omega \setminus \Omega_1$, where $\Omega$ is the sample space and $\alpha_0$, $p_0$ are defined above. Aug 18 '14 at 13:48
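For completeness, a quick computation of the resulting power (my own addition, following the randomized test described in the answer): under $H_1$, $\Pr(\max x_i \le 1) = (1/1.1^3)^n \approx 0.7513^n$, so

$$\pi = \Pr(\Omega_2 \mid H_1) + \alpha\,\Pr(\Omega_1 \setminus \Omega_2 \mid H_1) = \bigl(1 - 1.1^{-3n}\bigr) + \alpha\, 1.1^{-3n} = 1 - (1-\alpha)\,0.7513^{\,n},$$

which for $n = 100$ is numerically indistinguishable from 1, consistent with the $1 - 0.75^n$ estimate in the question.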
2021-11-27 18:20:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422098994255066, "perplexity": 93.20403835086152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00311.warc.gz"}
https://nforum.ncatlab.org/discussion/8908/
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeAug 29th 2018 • (edited Aug 29th 2018)

added three references, as per Harry's request here. But it remains very incomplete

• CommentRowNumber2. • CommentAuthorUrs • CommentTimeAug 30th 2018

Discussion as an operad in spectra (in stable homotopy theory) is in

• Gijs Heuts, around def. 4.12 of Lie algebras and $v_n$-periodic spaces (arXiv:1803.06325)

based on

• Michael Ching, Bar constructions for topological operads and the Goodwillie derivatives of the identity, Geom. Topol., 9:833–933, 2005 (arXiv:math/0501429)
• Michael Ching, Bar-cobar duality for operads in stable homotopy theory, Journal of Topology, 2012 (arXiv:1009.5034)
2019-12-06 20:18:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386628031730652, "perplexity": 7471.670174287752}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540490972.13/warc/CC-MAIN-20191206200121-20191206224121-00526.warc.gz"}
https://oedes.readthedocs.io/en/latest/physics.html
# Physical models

## Concentrations of charges in thermal equilibrium

Probability that an electronic state with energy is occupied is given by the Fermi-Dirac distribution: with denoting the Fermi energy, denoting the Boltzmann constant and denoting the temperature. The states in the conduction band are distributed in energy according to the density of states function . The total concentration of electrons is given by the integral In the valence band, almost all states are occupied by the electrons. It is therefore useful to track unoccupied states, holes, instead of the occupied states. The concentration of holes is given by: Noting that and changing the integration variable, both concentrations can be written in a common form as: (1)

Practical note: The SI base unit of energy is the Joule (J); however, the energies such as are very small and should be expressed in electronvolts (eV). The SI base unit of concentration is , although is often encountered. The value of at room temperature is approximately 26 meV. The SI unit of temperature is the Kelvin, .

## Band energies

In an idealized case, the energies and of the conduction and valence bands are (2) where is the electron affinity energy, and is the bandgap energy. is the electrostatic potential. q is the elementary charge.

## Electrostatic potential

The electric field is related to the electrostatic potential as (3) In a linear, isotropic, homogeneous medium the electric displacement field is with permittivity where is the vacuum permittivity, and is the relative permittivity of the material. The electric displacement field satisfies Gauss's equation (4) where is the density of free charge with denoting the elementary charge. Above, denotes other charges, such as ionized dopants. Combining the above equations gives the usual Poisson's equation for electrostatics: (5) The SI unit of electrostatic potential is the Volt, and the unit of electric field is Volt/meter. The unit of permittivity is Farad/meter. The unit of charge density is .

## Approximation for low concentrations

If the concentration of charge carriers is low enough, only states on the edge of the band gap are important. In such a case, the density of states can be assumed to be a sharp energetic level, in the case of electrons and in the case of holes. Substituting into (1) gives At low charge carrier concentrations, the Fermi-Dirac distribution is simplified as The approximation is considered valid when . Approximate charge carrier concentrations are (6)

## Gaussian density of states

In the case of Gaussian DOS, the density of states shape function is the Gaussian distribution function scaled by the total density of states : Concentrations of species are given by the integral

## Conservation equation

The conservation equation is: where denotes the concentration, is time, and is the flux density. denotes the source term, which is positive for generating particles, and negative for sinking particles of type i. The SI unit of the source term is . The conservation equation must be satisfied for each species separately. In the case of transport of electrons and holes, this gives (7) where the source term S contains for example generation and recombination terms The conservation of electric charge must be satisfied everywhere. Therefore, the source terms acting at a given point must not create a net electric charge. In the case of a system of electrons and holes, this requires

## Current density

Current density is related to the density flux by the charge of a single particle .
Obviously, for electrons and for holes , therefore (8) Note that a convention is adopted to denote the electric current with an uppercase letter , and the flux density with a lowercase letter . The SI unit of density flux is , while the unit of electric current density is .

## Equilibrium conditions

In equilibrium conditions, the Fermi level energy has the same value everywhere. The electrostatic potential can vary, and the density of free charge does not need to be zero. Equations (1), (2), (4) are satisfied simultaneously. The current flux, the source terms, and the time dependence are all zero, so conservation (7) is trivially satisfied.

## Nonequilibrium conditions

In non-equilibrium conditions, the transport is introduced as a perturbation from equilibrium. The Fermi energy level is replaced with a quasi Fermi level, which is different for each species. In (1), the equilibrium Fermi level for electrons is replaced with a quasi Fermi level . Similarly, the equilibrium Fermi level for holes is replaced with a quasi Fermi level for holes , giving (9) Quasi Fermi levels have an associated quasi Fermi potential according to the formula for the energy of an electron in an electrostatic field : (10) The transport is modeled by approximating the electric current density as (11) where denotes the respective mobilities. The SI unit of mobility is , although is often used. Equations (2), (5), (9), (11), (7) are simultaneously satisfied in non-equilibrium conditions.

## Drift-diffusion system

The standard form of density fluxes in the drift-diffusion system is (12) or more generally, allowing arbitrary charge per particle (13) is the diffusion coefficient, with SI unit .

## Drift-diffusion system: low concentration limit

To obtain the conventional drift-diffusion formulation (12), the low concentration approximation (6) should be used. After introducing quasi Fermi levels, as it is done in (9), one obtains From that, the quasi Fermi energies are calculated as Using (2), and assuming constant ionization potential , bandgap , constant total densities of states , and constant temperature , substituting into (11), and using (3) In terms of density flux (8), this reads (14) where thermal voltage

## Einstein's relation

Equation (14) is written in the standard drift-diffusion form (12) when the diffusion coefficient satisfies (15) This is called Einstein's relation.

## Drift-diffusion system: general case

Using functions defined in (1), bands (2) and approximation (9), current densities under assumptions are (16)

## Generalized Einstein's relation

In equation (16), assuming In order to express equation (16) in the standard drift-diffusion form (12), the diffusion coefficient must satisfy This is the so-called generalized Einstein's relation.

## Intrinsic concentrations

Intrinsic concentrations , , and the intrinsic Fermi level satisfy the electric neutrality conditions

## Direct recombination

Direct recombination introduces a source term where can be chosen freely.

## Unidimensional form

By substituting and , the equations (5), (7), (12) of the basic drift-diffusion device model are

## Total electric current density

Total electric current is a sum of currents due to transport of each species and the displacement current Total electric current satisfies the conservation law This can be verified by taking the time derivative of (4), using (7) and considering that the sum of all charge created by the source terms must be zero.
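The rendered equations in the sections above did not survive extraction. As a reference point only, here are the standard textbook forms that the drift-diffusion and Einstein-relation discussion appears to describe (my reconstruction from the surrounding definitions, not recovered from the page; the numbering (12)–(15) is the document's):

$$\mathbf{j}_n = -\mu_n n\,\mathbf{E} - D_n \nabla n, \qquad \mathbf{j}_p = +\mu_p p\,\mathbf{E} - D_p \nabla p, \qquad \mathbf{E} = -\nabla\psi,$$

$$D_i = \mu_i \frac{k_B T}{q} = \mu_i V_T \quad\text{(Einstein's relation)}, \qquad \frac{D_i}{\mu_i} = \frac{1}{q}\,\frac{c_i}{\partial c_i/\partial E_{F,i}} \quad\text{(generalized form)}.$$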
## Electrode current

The current passing through the surface of an electrode is the integral of the total electric current density over that surface,

$$I = \int_{\text{electrode}} \mathbf{J}\cdot d\mathbf{S}. \qquad (17)$$

## Metal

In a metal, the relation between the electrostatic potential $\psi$, the workfunction energy $W$ and the Fermi level is $E_F = -W - q\psi$. On the other hand, the Fermi potential corresponds to the applied voltage $U$, so that $E_F = -qU$. This leads to the electrostatic potential at the metal surface $\psi = U - W/q$.

## Ohmic contact

An ohmic contact is an idealization assuming that there is no charge accumulation at the contact, and that the applied voltage is equal to the quasi Fermi potentials (10) of the charged species, $\phi_n = \phi_p = U$. The above three conditions uniquely determine the charge concentrations $n$, $p$ and the electrostatic potential $\psi$ at the contact.

## Electrochemical transport

The electrochemical potential for an ionic species is

$$\mu_i = k_B T \ln c_i + z_i q\,\psi + \delta\mu_i.$$

It should be noticed that the "potential" defined this way has the unit of energy, unlike the electrostatic potential and the quasi Fermi potentials. Above, $\delta\mu_i$ denotes corrections, for example due to steric interactions. The electrochemical potential should not be confused with the mobility $\mu_i$. The density flux is approximated as

$$j_i = -\frac{D_i\, c_i}{k_B T}\,\nabla\mu_i,$$

yielding the standard form (13) using Einstein's relation (15). Electrochemical species should be included in Poisson's equation by including proper source terms of the form $z_i q\, c_i$. A variant of Poisson's equation (5), where the free charges are ions, can be written as

$$\nabla\cdot(\varepsilon\,\nabla\psi) = -q \sum_i z_i\, c_i.$$

## Steric corrections

To account for the finite size of ions, the electrochemical potential in the form introduced in [LE13] is useful. There the correction $\delta\mu_i$ depends on the volume $v_i$ of a particle of type $i$ and on $\Gamma$, the unoccupied fraction of space,

$$\Gamma = 1 - \sum_j v_j\, c_j,$$

where the summation is taken over all species occupying space, including the solvent.

[LE13] Jinn-Liang Liu and Bob Eisenberg. Correlated Ions in a Calcium Channel Model: A Poisson–Fermi Theory. The Journal of Physical Chemistry B, 117(40):12051–12058, 2013. PMID: 24024558. doi:10.1021/jp408330f.
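Relating to the steric-correction section above, the unoccupied volume fraction Γ is straightforward to evaluate numerically. The sketch below is purely illustrative; the concentrations and particle volumes are assumed values:

```python
def unoccupied_fraction(concentrations, volumes):
    """Gamma = 1 - sum_j v_j * c_j: the fraction of space not occupied by any
    species (including the solvent), as used in the steric correction above."""
    return 1.0 - sum(v * c for v, c in zip(volumes, concentrations))

# Assumed example: two ionic species plus solvent, SI units (m^-3 and m^3)
c = [6.0e26, 6.0e26, 3.0e28]
v = [1.0e-29, 1.5e-29, 3.0e-29]
print(f"Gamma = {unoccupied_fraction(c, v):.3f}")
```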
2018-12-17 06:24:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9228748679161072, "perplexity": 1473.5092789530922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00323.warc.gz"}
https://www.quizover.com/trigonometry/section/performing-row-operations-on-a-matrix-by-openstax
# 7.6 Solving systems with Gaussian elimination

## Writing a system of equations from an augmented matrix form

Find the system of equations from the augmented matrix. When the columns represent the variables $x$, $y$, and $z$, each row of the augmented matrix corresponds to one equation, with the entry to the right of the bar giving the constant term.

Write the system of equations from the augmented matrix.

$$\left[\begin{array}{ccc|c} 1 & -1 & 1 & 5 \\ 2 & -1 & 3 & 1 \\ 0 & 1 & 1 & -9 \end{array}\right]$$

$$\begin{aligned} x - y + z &= 5 \\ 2x - y + 3z &= 1 \\ y + z &= -9 \end{aligned}$$

## Performing row operations on a matrix

Now that we can write systems of equations in augmented matrix form, we will examine the various row operations that can be performed on a matrix, such as addition, multiplication by a constant, and interchanging rows. Performing row operations on a matrix is the method we use for solving a system of equations. In order to solve the system of equations, we want to convert the matrix to row-echelon form, in which there are ones down the main diagonal from the upper left corner to the lower right corner, and zeros in every position below the main diagonal. We use row operations corresponding to equation operations to obtain a new matrix that is row-equivalent in a simpler form. Here are the guidelines to obtaining row-echelon form.

1. In any nonzero row, the first nonzero number is a 1. It is called a leading 1.
2. Any all-zero rows are placed at the bottom of the matrix.
3. Any leading 1 is below and to the right of a previous leading 1.
4. Any column containing a leading 1 has zeros in all other positions in the column.

To solve a system of equations we can perform the following row operations to convert the coefficient matrix to row-echelon form and do back-substitution to find the solution.

1. Interchange rows. (Notation: $R_i \leftrightarrow R_j$)
2. Multiply a row by a constant. (Notation: $cR_i$)
3. Add the product of a row multiplied by a constant to another row. (Notation: $R_i + cR_j$)
Each of the row operations corresponds to the operations we have already learned to solve systems of equations in three variables. With these operations, there are some key moves that will quickly achieve the goal of writing a matrix in row-echelon form.

To obtain a matrix in row-echelon form for finding solutions, we use Gaussian elimination, a method that uses row operations to obtain a 1 as the first entry so that row 1 can be used to convert the remaining rows.

## Gaussian elimination

The Gaussian elimination method refers to a strategy used to obtain the row-echelon form of a matrix. The goal is to write matrix $A$ with the number 1 as the entry down the main diagonal and have all zeros below. The first step of the Gaussian strategy includes obtaining a 1 as the first entry, so that row 1 may be used to alter the rows below.

Given an augmented matrix, perform row operations to achieve row-echelon form.

1. The first equation should have a leading coefficient of 1. Interchange rows or multiply by a constant, if necessary.
2. Use row operations to obtain zeros down the first column below the first entry of 1.
3. Use row operations to obtain a 1 in row 2, column 2.
4. Use row operations to obtain zeros down column 2, below the entry of 1.
5. Use row operations to obtain a 1 in row 3, column 3.
6. Continue this process for all rows until there is a 1 in every entry down the main diagonal and there are only zeros below.
7. If any rows contain all zeros, place them at the bottom.
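This procedure translates almost directly into code. Below is a minimal Python/NumPy sketch (not part of the original lesson) that reduces an augmented matrix to row-echelon form and then back-substitutes; the example system is an assumed one, chosen to have a unique solution:

```python
import numpy as np

def row_echelon(aug):
    """Reduce an augmented matrix [A | b] to row-echelon form (with partial pivoting)."""
    m = aug.astype(float)
    rows, cols = m.shape
    for i in range(min(rows, cols - 1)):
        pivot = i + int(np.argmax(np.abs(m[i:, i])))   # pick the largest pivot for stability
        if np.isclose(m[pivot, i], 0.0):
            continue
        m[[i, pivot]] = m[[pivot, i]]                  # interchange rows: R_i <-> R_pivot
        m[i] = m[i] / m[i, i]                          # scale so the leading entry is 1
        for j in range(i + 1, rows):
            m[j] = m[j] - m[j, i] * m[i]               # R_j - c*R_i to zero out below the pivot
    return m

def back_substitute(m):
    """Back-substitution, assuming the system has a unique solution."""
    n = m.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = m[i, -1] - m[i, i + 1:-1] @ x[i + 1:]
    return x

# Assumed example system: 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
aug = np.array([[2, 1, -1, 8],
                [-3, -1, 2, -11],
                [-2, 1, 2, -3]])
print(back_substitute(row_echelon(aug)))   # expected [ 2.  3. -1.]
```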
2018-07-18 00:54:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7820814847946167, "perplexity": 509.47777400890755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589980.6/warc/CC-MAIN-20180718002426-20180718022426-00614.warc.gz"}
https://astronomy.stackexchange.com/questions/27629/upgrading-telescope-from-5-to-8-or-10/27630
# Upgrading telescope from 5" to 8" or 10"?

I currently have a Celestron Nexstar 130STL. I am trying to upgrade my telescope to the next one. I examined the Meade 8" ACF GoTo telescope, but I was not too thrilled about it. It gave me a better view, but the details were not too different from my 130 mm. Maybe my expectations were too high. Alternatively, I have the option of getting an Orion 10" motorized Dob. I am slightly concerned about the size and the portability. I would appreciate any recommendations, thanks!

• How is the local seeing? Round here we have continuous high level Cirrus that makes anything over 120mm useless for all but 5-10 days a year. – Wayfaring Stranger Sep 12 '18 at 15:03
• The best telescope for you is the one that actually gets used - so portability and ease-of-use might outweigh getting a larger aperture. – antlersoft Sep 12 '18 at 15:10
• I really agree with antlersoft! thanks for the inputs!! – Kay Sep 12 '18 at 19:00
• I live in a metro area. I typically drive out one or two hours if I want to see faint objects. If I need a quick fix, I go to a local golf course, a 20 minute drive. – Kay Sep 12 '18 at 19:17

> I am trying to upgrade my telescope to the next one.

To compare the light-gathering power of two telescopes, all other properties being equal, you can calculate the ratio of the squares of their radii. In this sense, a telescope with a larger aperture will provide a "better" view than one with a smaller one. Keep in mind, though, that there are other factors to consider: the impact of different optical systems on light gathering power, the use of filters and eyepieces, the unpredictable effects of atmospheric interference, and skill in using whatever telescope you buy. You'll also have to consider specifically what you want to do with the scope (visual observation, astrophotography, citizen science, etc).

> I would appreciate any recommendation, thanks!

This forum tends to be for scientific questions in astronomy, not for equipment recommendations. There's a forum primarily for equipment discussion on a separate site called Cloudy Nights that you might want to visit. You can certainly post here, but you may not find people who are able to advise you on specific equipment issues.

• I dislike the phrase "Increases exponentially with aperture". The light gathering capability should increase with the square of the aperture. It's not exponential, but it is a decent growth rate. – Ingolifs Sep 12 '18 at 3:05
• I was actually hesitating about posting this question since it was asking for personal opinion. However, I am glad I asked. I learned a lot from you, thanks! BTW, I used to teach in a planetarium, but I am trying to teach kids under the real sky. – Kay Sep 12 '18 at 19:13
• @Kay Sure thing. – Alphecca Sep 13 '18 at 0:01

The first thing I would recommend is to join an astronomy club or society. There's most likely one near you, and membership is generally very affordable (in the US, prices are usually $50 or less per year). Most clubs conduct frequent star parties, which would give you a chance to see a wide variety of telescopes in action and talk to their owners to find out more.

As Alphecca replied, the light gathering power of a telescope is proportional to the square of the radius of the aperture. For example, your NexStar 130SLT has an aperture of 130 mm (just over 5"). If you use the formula for area (pi * r^2), this gives you an aperture area of about 13,273 mm^2 (square millimeters). If you were to upgrade to an 8" telescope (203 mm aperture), you'd get an aperture area of about 32,365 mm^2, which is about 240% the light gathering area of your 5" scope. If you go to a 10" (254 mm), you end up with about 50,671 mm^2, or about 380% as much as the 5", or 157% as much as the 8".
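For anyone who wants to redo this arithmetic for other apertures, here is a tiny Python sketch (added for illustration; it simply compares circular collecting areas):

```python
import math

def light_gathering_ratio(aperture_mm, reference_mm=130):
    """Ratio of circular collecting areas (pi*r^2) relative to a reference aperture."""
    area = lambda d: math.pi * (d / 2.0) ** 2
    return area(aperture_mm) / area(reference_mm)

for d in (203, 254, 305):  # 8", 10" and 12" apertures in millimeters
    print(f"{d} mm gathers about {light_gathering_ratio(d) * 100:.0f}% of the light of a 130 mm scope")
```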
What does this actually mean? That's a little more complex.

First, there is more light gathering. Contrary to the common opinion of those not experienced in using telescopes, magnification is not the key concern with a telescope; light gathering is. But there's more to it than just that, and I'll circle back around shortly.

In the case of light gathering, larger apertures are better. The telescope, in a sense, acts as a funnel for light, collecting from a larger area and concentrating it on a smaller area. In this sense, a larger telescope can, quite simply, gather more light. But there's a catch: it's gathering ALL of the light that hits it, not just the light from the target object. It will gather the light of your target object and any stray light, such as light pollution. To deal with this, we need to increase contrast to make the dim deep sky object stand out from the sky. This is heavily influenced, then, by things such as focal ratio and your eyepiece selection. I spend a fair amount of time on Reddit r/telescopes, and we've had some discussions about this. One of the other regulars there wrote a pretty good article on medium.com about this. So, to put it mildly, there's more involved than just getting a bigger aperture. Though, in fairness, having the biggest aperture you can afford and handle as far as weight and size goes is always the best way to go - you will never have too LITTLE aperture.

The real questions then become what you want to do with it. If you want to do astrophotography, you're going in entirely the wrong direction. For visual observation, however, an 8 or 10 inch telescope is a good option. I personally recommend an 8 inch for any beginner that can afford one. But, based on what you said about being disappointed with the Meade 8", I think there is a question of expectation. If you expect to see what you see in photographs, then you're just plain out of luck. Even a very large scope (say a 24" Dob) won't do that for you. Our eyes are simply not capable of long exposure imaging as are cameras. But a good observer who has learned to train their eye properly can still see some pretty amazing sights. And this is another good reason to join a club: to spend time observing with other, more experienced observers and learning how to truly see through the eyepiece.

Visual observation is not just a matter of pointing the telescope in the right spot and taking a look. Observing requires spending time letting your eye and brain work together to really see what's out there. I just said our eyes aren't capable of long exposures. This is true, but it's also true that extended observing does do something that's not entirely unlike long exposure imaging. As you spend time at the eyepiece, your eyes collect more and more light and your brain does start to synthesize a more complete image. It starts to fill in gaps and show you more and more. This is especially the case when observing from darker skies where there is less light pollution to muddle the view. A great example in my experience is M51. From my club's dark site, when it's up in the sky, I can almost always make out the two galactic cores, but frequently the spiral structure is hard to see.
But if I have a comfortable seated viewing position and spend a few minutes, my eyes pick up hints of the spiral arms, which my brain starts to fill in. A good technique for developing this is sketching. If you sketch the object, your brain processes it differently and you start to catch hints and glimpses of more detail which end up in your sketch.

Your current telescope is a GoTo, as is the Meade you mentioned, and you talked about a 10" motorized Dob. While GoTo technology and tracking can be very helpful, they are also often more of a hindrance than expected. Most particularly, the cost of the GoTo mounting inflates the overall cost of the instrument. An 8" standard Dob can be bought for under \$500 (occasionally under \$400). An 8" GoTo telescope is likely to run over \$1,000. The GoTo equipment typically will at least double the price. For the price of an 8" GoTo Dob you could get a 12" manual truss Dob with almost 550% the light gathering area of your 5". (Not to mention a larger aperture offers greater detail resolution capability for planetary observation.)

In the end, my recommendation is to start by joining a club and spending some time learning more. Then you can make a more informed decision on a purchase (additionally, many clubs have a thriving buy/sell/trade culture with members constantly selling off something to fund their next purchase - I've bought most of my equipment at very good prices this way). Good luck and clear skies!

• Wow, what a good lesson provided. Thank you so much for taking time to teach me. This is a superb introductory article about selecting a telescope for beginners. Thank you again!!! – Kay Sep 13 '18 at 23:52
• Happy to help. We have a really good sticky-post on Reddit r/telescopes (reddit.com/r/telescopes) answering "what telescope should I get." As Alphecca mentioned, this forum is less about buying a telescope and more about the science, but r/telescopes is very much about buying, using, and working with amateur telescopes. – J.M. Haynes Sep 14 '18 at 3:02
2020-10-21 22:36:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.277365118265152, "perplexity": 1702.434571254212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00489.warc.gz"}
http://mathoverflow.net/revisions/114426/list
3 improved clarity of exposition and fixed a slight error

In one sense, the answer to the question of when a Riemannian metric has an orthonormal coframing that diagonalizes the curvature in the manner requested by the OP is an algebraic problem: As is well-known, the space of curvature tensors (when regarded as quadratic forms on $\Lambda^2$ that satisfy the first Bianchi identity) has dimension $D_n = \tfrac{1}{12}n^2(n^2{-}1)$, while the set of those diagonalizable in some orthonormal coframe is the $\mathrm{O}(n)$-orbit of a linear subspace of dimension $\tfrac{1}{2}n(n{-}1)$, so (when $n\ge 3$) it is a cone $\mathcal{R}_n$ of dimension $n(n{-}1)$. Thus, a Riemannian metric will have to satisfy a set of at least $$R_n = \tfrac{1}{12}n^2(n^2{-}1) - n(n{-}1) = \tfrac{1}{12}n(n{-}1)(n{-}3)(n{+}4)$$ polynomial relations on its curvature in order for such a diagonalization to be possible at every point. Writing out a set of generators for these relations is not likely to be easy and probably won't be enlightening, even for $n=4$, which is when it first becomes nontrivial. Moreover, there is no guarantee that this ideal $\mathcal{I}_n$ of polynomial relations is generated by only $R_n$ polynomials or that you won't still have to impose some inequalities to make sure that the curvature is diagonalizable by an element in $\mathrm{O}(n)$ rather than by an element of $\mathrm{O}(n,\mathbb{C})$ that doesn't lie in $\mathrm{O}(n)$. However, this approach would, in theory, give the answer to the OP's literal question.

On the other hand, one might want to interpret the question as asking how one could 'generate' all of the metrics that satisfy this diagonalizability property, at least locally. This is a more interesting (and more challenging) problem. Willie and Thomas have each given examples of classes of such metrics that essentially depend on one function of $n$ variables: Willie cited the conformally flat metrics, which are locally of the form $e^u g_0$ where $g_0$ is the standard metric on $\mathbb{R}^n$, and Thomas cited the induced metrics on hypersurfaces in a space form of dimension $n{+}1$, each of which, locally, can be described as the graph of one function of $n$ variables. The interesting question is whether these are, themselves, special cases of some more general class of metrics with the desired property. Might there be a class of examples that depend on more than one arbitrary function of $n$ variables? Another interesting question is whether their examples 'reach' all the curvature tensors that satisfy the relations in $\mathcal{I}_n$ and, if not, whether there are other examples that do.

This latter question is easier to answer than the former. It is easy to see, just by an algebraic count, that neither the conformally flat metrics nor those induced on hypersurfaces in space forms can actually 'reach' all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$. (The two sets of orbits that they reach do overlap, but they are distinct, proper closed subsets of $\mathcal{R}_n$.) On the other hand, examples provided by É. Cartan of nondegenerate submanifolds of dimension $n$ in $\mathbb{R}^{2n}$ that have flat normal bundle turn out to have their curvature tensors in $\mathcal{R}_n$ and, using these, one can reach an open subset of the orbits in $\mathcal{R}_n$.
Now, Cartan's examples depend locally on $n^2{-}n$ arbitrary functions of two variables (not $n$ variables), and it turns out that they satisfy many more differential equations (of higher order) than just the equations in $\mathcal{I}_n$. For example, in Cartan's examples, the diagonalizing coframing $\omega=(\omega_i)$ turns out to be integrable, i.e., $\omega_i\wedge d\omega_i = 0$ for all $i$, so that the metric itself can be diagonalized in a local coordinate chart and thus is locally of the form $$g = e^{2f_1}\ {dx_1}^2 +e^{2f_2}\ {dx_2}^2 + \cdots + e^{2f_n}\ {dx_n}^2.$$ Meanwhile, the condition for a metric in this form to have its curvature tensor be diagonal with respect to the coframing $\omega = (\omega_i) = (e^{f_i}\ dx_i)$ and, hence, take values in $\mathcal{R}_n$ turns out to be an involutive system of second order PDE for the functions $f_i$ whose general local solution depends on $n^2{-}n$ arbitrary functions of two variables. These turn out to be slightly more general than the ones that arise as Cartan's examples, and, using solutions of this type, one can reach all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$.

However, the question of how to 'generate' the 'general' metric whose curvature tensor takes values in $\mathcal{R}_n$ for $n\ge 4$ seems to be a very difficult problem. It is an overdetermined system for the metric that is not involutive, and computing its first two prolongations, even in the $n=4$ case, yields a system that is extremely algebraically complicated and still not involutive. Thus, I do not know (and I believe that it is not known) whether the general local solution of this problem depends (modulo diffeomorphism) on more than one arbitrary function of $n$ variables.

2 fixed some typos and added information

In one sense, the answer to the question of when a Riemannian metric has an orthonormal coframing that diagonalizes the curvature in the manner requested by the OP is an algebraic problem: As is well-known, the space of curvature tensors (when regarded as quadratic forms on $\Lambda^2$ that satisfy the first Bianchi identity) has dimension $D_n = \tfrac{1}{12}n^2(n^2{-}1)$, while the set of those diagonalizable in some orthonormal coframe is the $\mathrm{O}(n)$-orbit of a linear subspace of dimension $\tfrac{1}{2}n(n{-}1)$, so (when $n\ge 3$) it is a cone $\mathcal{R}_n$ of dimension $n(n{-}1)$. Thus, a Riemannian metric will have to satisfy a set of at least $$R_n = \tfrac{1}{12}n^2(n^2{-}1) - n(n{-}1) = \tfrac{1}{12}n(n{-}1)(n{-}3)(n{+}4)$$ polynomial relations on its curvature in order for such a diagonalization to be possible at every point. Writing out a set of generators for these relations is not likely to be easy and probably won't be enlightening, even for $n=4$, which is when it first becomes nontrivial. Moreover, there is no guarantee that this ideal $\mathcal{I}_n$ of polynomial relations is generated by only $R_n$ polynomials or that you won't still have to impose some inequalities to make sure that the curvature is diagonalizable by an element in $\mathrm{O}(n)$ rather than by an element of $\mathrm{O}(n,\mathbb{C})$ that doesn't lie in $\mathrm{O}(n)$. However, this approach would, in theory, give the answer to the OP's literal question.
On the other hand, one might want to interpret the question as asking how one could 'generate' all of the metrics that satisfy this diagonalizability property, at least locally. This is a more interesting (and more challenging) problem. Willie and Thomas have each given examples of classes of such metrics that essentially depend on one function of $n$ variables: Willie cited the conformally flat metrics, which are locally of the form $e^u g_0$ where $g_0$ is the standard metric on $\mathbb{R}^n$, and Thomas cited the induced metrics on hypersurfaces in a space form of dimension $n{+}1$, which, locally, can be described as the graph of one function of $n$ variables. The interesting question is whether these are, themselves, special cases of some more general class of metrics with the desired property. Might there be a class of examples that depend on more than one arbitrary function of $n$ variables? Another interesting question is whether their examples 'reach' all the curvature tensors that satisfy the relations $R_n$ and, if not, whether there are other examples that do.

This latter question is easier to answer than the former. It is easy to see, just by an algebraic count, that neither the conformally flat metrics nor those induced on hypersurfaces in space forms can actually 'reach' all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$. (In fact, these two sets of orbits that they each reach do overlap, but they are distinct, proper subsets of $\mathcal{R}_n$.) On the other hand, examples provided by É. Cartan of nondegenerate submanifolds of dimension $n$ in $\mathbb{R}^{2n}$ that have flat normal bundle turn out to have their curvature tensors in $\mathcal{R}_n$ and, using these, one can reach every orbit of $\mathcal{R}_n$. However, Cartan's examples depend locally on $n^2{-}n$ arbitrary functions of two variables (not $n$ variables), and it turns out that they satisfy many more differential equations (of higher order) than just the $R_n$ equations on the curvature. (For example, in Cartan's examples, the diagonalizing coframe $\omega=(\omega_i)$ turns out to be integrable, i.e., $\omega_i\wedge d\omega_i = 0$ for all $i$, so that the metric itself can be diagonalized in a local coordinate chart, i.e., it is locally of the form $$g = e^{2f_1}\ {dx_1}^2 +e^{2f_2}\ {dx_2}^2 + \cdots + e^{2f_n}\ {dx_n}^2.$$ Conversely, the condition for such a metric to have its curvature tensor be diagonal with respect to the coframing $\omega_i = e^{f_i}\ dx_i$ and, hence, lie in $\mathcal{R}_n$ turns out to be an involutive system of second order PDE for the functions $f_i$ whose general local solution depends on $n^2{-}n$ arbitrary functions of two variables. Using solutions of this type, one can reach all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$.) However, the question of how to 'generate' the 'general' metric whose curvature tensor lies in $\mathcal{R}_n$ for $n\ge 4$ seems to be a very difficult problem. It's an overdetermined system for the metric that is not involutive, and computing its first two prolongations, even in the $n=4$ case, yields a system that is extremely algebraically complicated and still not involutive. Thus, I do not know (and I believe that it is not known) whether the general local solution of this problem depends (modulo diffeomorphism) on more than one arbitrary function of $n$ variables.

1 There are still a few interesting things to say about this question, so I thought I'd add some comments.
In one sense, the answer to the question of when a Riemannian metric has an orthonormal coframing that diagonalizes the curvature in the manner requested by the OP is an algebraic problem: As is well-known, the space of curvature tensors (when regarded as quadratic forms on $\Lambda^2$ that satisfy the first Bianchi identity) has dimension $D_n = \tfrac{1}{12}n^2(n^2{-}1)$, while the set of those diagonalizable in some orthonormal coframe is the $\mathrm{O}(n)$-orbit of a linear subspace of dimension $\tfrac{1}{2}n(n{-}1)$, so (when $n\ge 3$) it is a cone $\mathcal{R}_n$ of dimension $n(n{-}1)$. Thus, a Riemannian metric will have to satisfy a set of at least $$R_n = \tfrac{1}{12}n^2(n^2{-}1) - n(n{-}1) = \tfrac{1}{12}n(n{-}1)(n{-}3)(n{+}4)$$ polynomial relations in order for such a diagonalization to be possible at every point. Writing out a set of generators for these relations is not likely to be easy and probably won't be enlightening, even for $n=4$, which is when it first becomes nontrivial. Moreover, there is no guarantee that the ideal of polynomial relations is generated by only $R_n$ relations or that you won't still have to impose some inequalities to make sure that the element is diagonalizable by an element in $\mathrm{O}(n)$ rather than by an element of $\mathrm{O}(n,\mathbb{C})$ that doesn't lie in $\mathrm{O}(n)$. This would, however, give the answer to the OP's literal question.

On the other hand, one might want to interpret the question as asking how one could 'generate' all of the metrics that satisfy this diagonalizability property, at least locally. This is a more interesting (and more challenging) problem. Willie and Richard have each given examples of classes of such metrics that essentially depend on one function of $n$ variables: Willie cited the conformally flat metrics, which are locally of the form $e^u g_0$ where $g_0$ is the standard metric on $\mathbb{R}^n$, and Richard cited the induced metrics on hypersurfaces in a space form of dimension $n{+}1$ (which, locally, can be described as the graph of one function of $n$ variables). The interesting question is whether these are, themselves, special cases of a more general class of metrics with the desired property. Might there be a class of examples that depend on more than one arbitrary function of $n$ variables? Another interesting question is: Do their examples 'reach' all the curvature tensors that satisfy the relations $R_n$ and, if not, then are there other examples that do?

This latter question is easier to answer than the former. It is easy to see, just by an algebraic count, that neither the conformally flat metrics nor those induced on hypersurfaces in space forms can actually 'reach' all of the $\mathrm{O}(n)$-orbits in $\mathcal{R}_n$. (In fact, the sets of orbits that they each reach overlap but are distinct, proper subsets of $\mathcal{R}_n$.) On the other hand, examples provided by É. Cartan of nondegenerate submanifolds of dimension $n$ in $\mathbb{R}^{2n}$ that have flat normal bundle also have their curvature tensors in $\mathcal{R}_n$ and, using these, one can reach every orbit of $\mathcal{R}_n$. However, Cartan's examples depend locally on $n^2{-}n$ arbitrary functions of two variables (not $n$ variables), and it turns out that they satisfy many more differential equations (of higher order) than just the $R_n$ equations on the curvature.
(For example, the diagonalizing coframe $\omega=(\omega_i)$ turns out to be integrable, i.e., $\omega_i\wedge d\omega_i = 0$ for all $i$, so that the metric itself can be diagonalized in a local coordinate chart, i.e., it is locally of the form $$g = e^{2f_1}\ {dx_1}^2 +e^{2f_2}\ {dx_2}^2 + \cdots + e^{2f_n}\ {dx_n}^2.$$ Conversely, in order for such a metric to have its curvature tensor lie in $\mathcal{R}_n$ turns out to be an involutive system of PDE for the functions $f_i$ whose general local solution depends on $n^2{-}n$ arbitrary functions of two variables.) However, the question of how to 'generate' the 'general' metric whose curvature tensor lies in $\mathcal{R}_n$ for $n\ge 4$ seems to be a very difficult problem. It's an overdetermined system for the metric that is not involutive, and computing its first two prolongations, even in the $n=4$ case, yields a system that is extremely algebraically complicated and still not involutive. Thus, I do not know (and I believe that it is not known) whether the general local solution of this problem depends (modulo diffeomorphism) on more than one arbitrary function of $n$ variables.
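As a quick numeric sanity check of the dimension count used in all of the revisions above, here is a short Python sketch (added for illustration) confirming that $D_n - n(n{-}1)$ agrees with the factored expression for $R_n$:

```python
def curvature_space_dim(n):
    """D_n = n^2 (n^2 - 1) / 12, the dimension of the space of curvature tensors."""
    return n * n * (n * n - 1) // 12

def num_relations(n):
    """R_n = D_n - n(n-1), the number of polynomial relations quoted in the answer."""
    return curvature_space_dim(n) - n * (n - 1)

for n in range(3, 8):
    # Check against the factored form n(n-1)(n-3)(n+4)/12 given in the answer.
    assert num_relations(n) == n * (n - 1) * (n - 3) * (n + 4) // 12
    print(n, curvature_space_dim(n), n * (n - 1), num_relations(n))
```

For $n=4$ this gives $D_4 = 20$ and $R_4 = 8$, while $R_3 = 0$, consistent with the statement that the condition first becomes nontrivial at $n=4$.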
2013-05-20 01:44:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8596829175949097, "perplexity": 184.33024714736732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698203920/warc/CC-MAIN-20130516095643-00085-ip-10-60-113-184.ec2.internal.warc.gz"}
https://repository.kaust.edu.sa/handle/10754/124545
As a condition of graduation, KAUST requires master's students who complete a thesis to deposit it in the KAUST digital archive. Similarly, doctoral students must submit an electronic copy of their dissertation to the KAUST digital archive. ### Recent Submissions • #### Towards Improved Rechargeable Zinc Ion Batteries: Design Strategies for Vanadium-Based Cathodes and Zinc Metal Anodes (2021-12-21) [Dissertation] Committee members: Da Costa, Pedro M. F. J.; Ooi, Boon S.; Fan, Hongjin The need for renewable energy is increasing as a result of global warming and other environmental challenges. Renewable energy systems are intermittent in nature and require energy storage solutions. Lithium-ion batteries are the first choice for storing electrical energy due to their high energy density, long cycle life, and small size. However, their widespread use in grid-scale applications is limited by high cost, low lithium resources, and security issues. Among the various options, the rechargeable zinc ion water battery has the advantages of high economic efficiency, high safety, and environmental friendliness, and there are great expectations for energy storage on a network scale. Inspired by these benefits, people have put a lot of effort into developing and manufacturing zinc-based energy storage devices. As the main component of zinc ion battery, the cathode material plays an important role in the storage / release of zinc ions during insertion and extraction. Vanadium-based materials are attracting attention due to their various oxidation states, diverse structures, and abundant natural resources. However, the details of suitable cathode materials and Zn2+ storage mechanism for rechargeable zinc ion battery are not yet fully understood. In this thesis, firstly, the prepared zinc pyrovanadate delivers good zinc ion storage properties owing to its open-framework crystal structure and multiple oxidation states. Mechanistic details of the Zn-storage mechanism in zinc pyrovanadate were also elucidated. Then, a calcium vanadium oxide bronze with expanding cavity size, smaller molecular weight, and higher electrical conductivity are proposed to deeply understand the impact of the crystal structure on battery performance. To improve the stability of the cathode in rechargeable zinc ion battery, an artificial solid electrolyte interphase strategy has been proposed by inducing an ultrathin HfO2 layer via the Atomic layer deposition method, which effectively alleviates the dissolution of active material. Finally, a nitrogen-doped 3D laser scribed graphene with a large surface area and uniform distribution of nucleation sites has been used as the interlayer to control Zn nucleation behavior and suppress Zn dendrite growth, which brings new possibilities for the practical rechargeable zinc ion battery. • #### Investigation of the Long-Term Operational Stability of Perovskite/Silicon Tandem Solar Cells (2021-12-14) [Thesis] Committee members: Laquai, Frédéric; Lanza, Mario; Ooi, Boon S. With the global energy demand projected to grow rapidly, it is imperative to divest from traditional greenhouse gas-based power production toward renewable energy sources such as solar. In recent years, solar photovoltaics (PV) hold a large share among renewables sources. Currently, the market is dominated by crystalline silicon solar cells due to their low levelized cost of energy (LCOE) values. 
However, to sustain this progress, the power conversion efficiency of PV devices must be further improved since tiny costs cut from the other expenses is difficult. On the other hand, the margin for the PCE improvement in c-Si technology is also quite limited since the technology is approaching its practical limits. At this stage, coupling c-Si devices with another efficient solar cell in tandem configuration is a promising way to overcome this challenge. Perovskite solar cells (PSCs) represent a breakthrough solar technology to enable this target due to their proven high efficiency and potential cost-effectiveness. Whereas perovskite/silicon tandem solar cells are promising, their operational stabilities are still a significant concern for market entry. Here, the degradation mechanism of n-i-p perovskite/Si tandem solar cells was investigated. Thermal stability tests have shown severe degradation in such tandem devices. On the other hand, tandem devices were relatively stable when placed in a humidity cabinet with 25% relative humidity (RH). Conversely, temperature degraded devices showed cracks all over the perovskite surface and rupture in the top electrode after 1000 hrs at 85 oC. Additionally, silver iodide formation was depicted in XRD and XPS analysis. To enhance the stability, methods to reduce the hysteresis were studied. First, potassium chloride (KCl) was applied as a passivation agent to the electron transport layer (ETL) to reduce surface defects. Second, 2D passivation was applied to reduce trap density and enhance the crystallinity of the perovskite film. Finally, organic molecules were placed between the hole transport layer (HTL) and metal-oxide interface as interlayers to prevent diffusion of metal oxide to the HTL and accumulation of the dopant at the metal-oxide interface. After passivation and interface layers, stability enhanced but further improvement is still required. • #### Poly-Silicon Passivating Contacts for Crystalline Silicon Solar Cells (2021-12-14) [Dissertation] Committee members: Laquai, Frédéric; Ooi, Boon S.; Isabella, Olindo Passivating-contact technologies fabricated from polycrystalline-silicon (poly-Si) are increasingly considered by the crystalline silicon (c-S) PV industry to be key enablers towards record performance. This is largely thanks to their ability to provide excellent carrier collection and surface passivation, while being compatible with industrial scale production. Poly-Si based passivating contacts consist of a stack of an ultrathin silicon oxide (SiOx) film on the surface of crystalline silicon (c-Si), covered by a doped silicon film. Thin films of SiOx can be grown by several different methods: chemically, thermally, or via UV-ozone exposure. However, each of these methods presents challenges towards industrial implementation. Here, we report an alternative method to grow SiOx films using an in-situ plasma process, where we subsequently deposit the doped poly-Si layer in the same process chamber by plasma enhanced chemical vapor deposition (PECVD). This process presents several advantages, such as ease of fabrication, inherently single-side oxide growth and poly-Si deposition, and the combined deposition in one chamber, lowering capital expenditure. 
Subsequently, we studied the structure of the SiOx films and the doped poly-Si(p+) capping layers using X-ray photoelectron spectroscopy (XPS) and ultraviolet photoelectron spectroscopy (UPS) in order to determine the films’ elemental composition, and the band alignment at the semiconductor/oxide interfaces. A less p-type polysilicon was observed grown on top of a wet SiOx/c-Si with the origin tentatively attributed to depletion of the boron dopant via pin holes evidenced by AFM. A surface photo-voltage (SPV) was observed by XPS under in-situ light bias (AM 1.5) and a representation of the band alignment of the c-Si/SiOx/p-polysilicon under illumination is derived. The SPV was attributed to the photo accumulation of holes at the p-polysilicon and a splitting of quasi-fermi levels with its magnitude correlated to the device measured iVoc . Finally, a valuable application for this contact technology is the integration of silicon with perovskite solar cells, in the so-called monolithic tandem configuration. This approach is very promising to develop a new generation of PV with unmatched performances. Here, poly-Si contacts offer a variety of advantages, thanks to their broader material selection and to the stability at high processing temperature. • #### Localized Heating in Membrane Distillation for Performance Enhancement (2021-12) [Dissertation] Committee members: Sarathy , S. Mani; Mishra, Himanshu; Warsinger, David Membrane distillation (MD) is an emerging technology capable of treating high-saline feeds and operating with low-grade heat energy. However, commercial implementation of MD is limited by so-called temperature polarization, which is the deviation in the temperature at the feed-membrane interface with respect to the bulk fluid. This work presents solutions to alleviate temperature polarization in MD by employing a localized heating concept to deliver heat at the vicinity of the feed-membrane interface. This can be realized in multiple ways, including Joule heating, photothermal heating, electromagnetic induction heating, and nanofluid heating. In the first experiment, a Joule heating concept was implemented and tested, and the results showed a 45% increase in permeate flux and a 57% decrease in specific energy consumption. This concept was further improved by implementing a new dead-end MD configuration, which led to a 132% increase in the gained output ratio. In addition, the accumulation of foulants on the membrane surface could be successfully controlled by intermittent flushing of feedwater. Three-dimensional CFD calculations of conjugate heat transfer revealed a more uniform heat transfer and temperature gradient across the membrane due to the increased feedwater temperature over a larger membrane area. In another approach, a photothermal MD concept was used to heat the feed water locally. A 2-D photothermal material, MXene, recently known for its photothermal property, was used to coat commercial MD membranes. The coated membranes were evaluated under one-sun illumination to yield a permeate flux of 0.77 kg.m$^{−2}$h$^{−1}$ with a photothermal efficiency of 65.3% for a feed concentration of 0.36 g.L$^{−1}$. The system can produce around 6 liters of water per day per square meter of membrane. An energy analysis was also performed to compare the efficiency of various energy sources. Considering the sun as a primary energy source, the performance of different heating modes was compared in terms of performance and scale-up opportunities. 
Overall this work demonstrates that the application of localized heating will enable the scale-up and the use of renewable energy sources to make the MD process more efficient and sustainable. • #### Polycyclic Aromatic Hydrocarbons and Soot Particle Formation in the Combustion Process (2021-12) [Dissertation] Committee members: Roberts, William L.; Dally, Bassam; Gascon, Jorge; Yuan, Xuan The threat to the environment and human health posed by the emission of soot particles and their precursors during the combustion process has attracted widespread attention for some time. Generation of soot particles includes the precursor’s formation, particle nucleation, and the growth and oxidation of soot particles; these processes are experimentally and numerically studied in this dissertation. Fuel composition is one of the most important parameters in the study of the combustion emissions. In the first portion of this research, quantified soot precursors were detected in a jet stirred reactor and a flow reactor of several gasoline surrogates, which covered various fuel compositions and different MON numbers. A kinetic model was made to capture the polycyclic aromatic formations and help to clarify the chemistry behind them. Major reaction pathways were discussed, as well as the role of important intermediate species, such as acetylene, and resonantly stabilized radicals like allyl, propargyl, cyclopentadienyl, and benzyl in the formation of polycyclic aromatic hydrocarbons. In the second section, a Fourier-transform ion cyclotron resonance mass spectrometry was first used to probe the chemical constituents of soot particles. By examining the soot particle generated in the early stage of nucleation, some information about the nucleation process was gained. The aromatics in the infant soot particles were all peri-condensed, of a size and shape easily linked by Van der Waals forces to form aromatic dimers and bigger clusters under the specified flame conditions. Compositions in the mature soot particles indicated that soot particles grow through the carbonization process. As a hydrogen carrier, ammonia was considered a good additive for controlling soot formation. In the third portion of this work, chemical effects of ammonia on soot formation were studied. Ammonia can suppress soot formation by reducing the precursor’s formation. Chemical kinetic analysis revealed that C-N species generated in ethylene-ammonia flames removed carbon from participating in soot precursor formation, thereby reducing soot formation, however, high concentrations of toxic hydrogen cyanide may be formed, which warrants further investigation. • #### Eikonal Solution Using Physics-Informed Neural Networks for Global Seismic Travel Time Modelling (2021-12) [Thesis] Committee members: Peter, Daniel; Ravasi, Matteo Being able to determine how much time it takes for a seismic wave to travel from one point to another is essential in geophysics. One can achieve this goal under the asymptotic ray assumption and end up with the so-called Eikonal equation. The equation finds itself to be beneficial across science and engineering. In geophysics, especially the global seismology field, the solution of this equation is primarily used to perform travel time tomography and earthquake relocation application. 
In this research I propose a novel scheme to solve the Eikonal equation with two main objectives in mind: computing more accurate first-arrival travel times using a three-dimensional (3-D) velocity model, while remaining as efficient as the standard procedure. The proposed method uses a physics-informed neural network (PINN). The forward problem is formulated such that the physical equation is the driving component of the minimization of the objective function. The velocity model used in this research is the second generation of the three-dimensional global adjoint tomographic model, GLAD-M25, to account for the anelastic behaviour of the Earth. From the numerical tests, I observed one unique feature in using PINNs to solve the Eikonal equation: I demonstrate that I can use a velocity model which has incomplete velocity information in it and still model the travel time accurately in some regions. The results show that the proposed method achieves a significant improvement in the velocity validation and, more importantly, is able to calculate the first-arrival travel time using a full three-dimensional global tomographic model (GLAD-M25). The validation process is done by comparing the input velocity data with the recovered velocity from the modelled travel time. The residuals at all depths lie within the -1 to 1 % error range, and the recovered velocity and input data are aligned, with a cosine similarity value around 0.999. The main limitation pertaining to the first-iteration model proposed in this research is its training cost. For each epoch, given the large number of batches, the training takes around 52.383 minutes. However, once the model is trained, the inference process is comparable to a standard Eikonal solver.

• #### Production of Linear Alpha Olefins via Heterogeneous Metal-Organic Framework (MOF) Catalysts (2021-12) [Dissertation] Committee members: Pinnau, Ingo; Castaño, Pedro; Huang, Kuo-Wei; Yan, Ning

Linear Alpha Olefins (LAOs) are one of the most important commodities in the chemical industry, and they are currently mainly produced via homogeneous catalytic processes. Heterogeneous catalysts have always been desirable from an industrial viewpoint due to their advantages of low operation cost, ease of separation, and catalyst reusability. However, the development of highly active, selective, and stable heterogeneous catalysts for the production of LAOs has been a challenge throughout the last 60 years. In this dissertation, we designed and prepared a series of heterogeneous catalysts by incorporating structural moieties of homogeneous benchmark catalysts into metal-organic frameworks (MOFs), aiming to provide a feasible solution to this long-standing challenge. First, we reviewed the background and state of the art of this field and put forward the main objectives of our research. Then, we thoroughly discussed a novel heterogeneous catalyst (Ni-ZIF-8) that we developed for ethylene dimerization to produce 1-butene, focusing on its design principle, detailed characterizations, catalytic performance evaluation, and reaction mechanisms. Ni-ZIF-8 exhibits an average ethylene turnover frequency greater than 1,000,000 h$^{-1}$ (1-butene selectivity >85%), far exceeding the activities of previously reported heterogeneous and many homogeneous catalysts under similar conditions. Compared with homogeneous nickel catalysts, Ni-ZIF-8 has significantly higher stability and showed constant activity during four hours of continuous reaction for at least two reaction cycles.
The combination of isotopic labeling studies and Density Functional Theory calculations demonstrated that ethylene dimerization on Ni-ZIF-8 follows the Cossee-Arlman mechanism, and that the full exposure and square-planer coordination of the nickel sites account for the observed high activity. After that, we further optimized the Ni-ZIF-8 catalytic system from the perspective of practical applications. We achieved double productivity of 1-butene by optimizing the synthetic conditions and explored its usability and performances under solvent-free conditions. Then, we extended our catalyst design concept to prepare heterogeneous catalysts comprising other metals and MOFs, which provided a suitable platform for studying the effects of the metallic center and coordination environment on the catalytic production of LAOs. Finally, we gave our perspectives on the further development of heterogeneous catalysts for the production of LAOs. • #### Effects of Pharmacotherapy, Neurodevelopment, Sex and Structural Asymmetry on Regional Intrinsic Homotopic Connectivity in Youths with Attention Deficit Hyperactivity Disorder. (2021-12) [Thesis] Committee members: Hauser, Charlotte; Al-Naffouri, Tareq Y. Functional magnetic resonance imaging studies have long demonstrated a high degree of correlated activity between the left and right hemispheres of the brain. Interregional correlations between the time series of each brain voxel or region and its homotopic pair have recently been identified by methods such as homotopic resting-state functional connectivity (H-RSFC). However, little is known about whether interhemispheric regions in patients with Attention-deficit/hyperactivity disorder (ADHD) are functionally abnormal. The aim of this thesis is to examine the association between H-RSFC and medication status, age, sex, and volumetric asymmetry index (AI). In our approach, region-based activity was obtained using three different methods. To test for associations, two linear mixed-effects models were used. Across results, H-RSFC variation was found in subcortical regions and portions of cortical regions. In addition, changes in functional connectivity were found to be linked with structural asymmetry in two cortical regions. More importantly, shifting in homotopic functional activation was found as a result of medication intake in youths with ADHD. These findings demonstrate the utility of homotopic resting-state functional connectivity for measuring differences among pharmacotherapy intake, gender, neurodevelopment, and structural asymmetry. • #### A Bayesian Approach to D2D Proximity Estimation using Radio CSI Measurements (2021-12) [Thesis] Committee members: Alouini, Mohamed-Slim; Ombao, Hernando; Bader, Ahmed Channel State Information (CSI) refers to a set of measurements used to characterize a radio communication link. Radio infrastructure collects CSI and derives useful metrics that indicate changes to modulation and coding to be made to improve the link performance (e.g. throughput, reliability). The CSI, however, has a wider potential use. It contains an environment-specific signature that can be used to extract information about users’ position and activity. In our work, we explore the problem of proximity estimation, which consists of identifying how close a pair of devices are to each other. 
By assuming that Cellular Base Stations (BSs) are distributed spatially according to a Poisson Point Process (PPP), and that the channel is under Rayleigh fading, we were able to probabilistically model radio measurements and use Bayesian inference to estimate the separation between two devices given their measurements only. We first explore a shadowless channel model, then we investigate how spatially-correlated shadowing can prove useful for estimation. For both cases, Bayesian estimators are proposed and tested through simulations. We also perform experiments and evaluate how well the estimators fit to actual data. • #### Deep Learning Action Anticipation for Real-time Control of Water Valves: Wudu use case (2021-12) [Thesis] Committee members: Ghanem, Bernard; Elhoseiny, Mohamed H.; Bader, Ahmed; Masood, Mudassir Human-machine interaction could support many daily activities in making it more convenient. The development of smart devices has flourished the underlying smart systems that process smart and personalized control of devices. The first step in controlling any device is observation; through understanding the surrounding environment and human activity, a smart system can physically control a device. Human activity recognition (HAR) is essential in many smart applications such as self-driving cars, human-robot interaction, and automatic systems such as infrared (IR) taps. For human-centric systems, there are some requirements to perform a physical task in real-time. For human-machine interactions, the anticipation of human actions is essential. IR taps have delay limitations because of the proximity sensor that signals the solenoid valve only when the user’s hands are exactly below the tap. The hardware and electronics delay causes inconvenience in use and water waste. In this thesis, an alternative control based on deep learning action anticipation is proposed. Humans interact with taps for various tasks such as washing hands, face, brushing teeth, just to name a few. We focus on a small subset of these activities. Specifically, we focus on the activities carried out sequentially during an Islamic cleansing ritual called Wudu. Skeleton modality is widely used in HAR because of having abstract information that is scale-invariant and robust against imagery variances. We used depth cameras to obtain accurate 3D human skeletons of users performing Wudu. The sequences were manually annotated with ten atomic action classes. This thesis investigated the use of different Deep Learning networks with architectures optimized for real-time action anticipation. The proposed methods were mainly based on the Spatial-Temporal Graph Convolutional Network. With further improvements, we proposed a Gated Recurrent Unit (GRU) model with Spatial-Temporal Graph Convolution Network (ST-GCN) backbone to extract local temporal features. The GRU process the local temporal latent features sequentially to predict future actions. The proposed models scored 94.14% recall on binary classification to turn on and off the water tap. And higher than 81.58-89.08% recall on multiclass classification. • #### Characterizing the chemical contaminants diversity and toxic potential of untreated hospital wastewater (2021-12) [Thesis] Committee members: Saikaly, Pascal; Mahfouz, Magdy M. This study characterizes 21 wastewater samples collected from Al-Amal hospital between the period of 12 April till 8 July 2020. Al Almal is a hospital that provides drug addiction and psychological treatment to patients. 
Using solid-phase extraction and liquid chromatography with tandem mass spectrometry (LC-MS/MS), chemical contaminants profiles in these wastewater samples were determined in a non-targeted manner. These chemicals were then individually analyzed in an in-silico manner by checking against databases and literature to determine if they were mutagenic. By determining the proportion of mutagenic chemicals against the non-mutagenic ones, we aim to determine if untreated hospital wastewater may potentially negatively impact the downstream municipal biological wastewater treatment process. It was determined that 64% of the identified chemicals were not tested for their mutagenic effect, and hence no prior information is available in the literature and databases. Instead, we further performed in-vitro mutagenicity tests using Ames test to determine if the wastewater sample, with all of its chemical constituents, would be mutagenic. Ames test results showed that majority of the samples were non-mutagenic except for 1 sample that imposed a mutagenic effect on Salmonella enterica serovar Typhimurium TA98 and 3 samples with mutagenic effect on TA100. In addition, 1 sample showed a toxic effect on TA100. However, in all 5 instances, these samples only imposed a mutagenic and toxic effect at high concentrations of > 10x. The findings in this study suggest that a specialty hospital like Al Amal does not contribute substantially to mutagenic wastewater streams to the municipal sewer, and hence unlikely to significantly perturb the downstream biological treatment processes. However, there may still be a need to consider ad-hoc contributions of mutagenic and/or toxic wastewater streams from the hospitals. • #### Signal Processing and Optimization Techniques for High Accuracy Indoor Localization, Tracking, and Attitude Determination (2021-12) [Dissertation] Committee members: Al-Naffouri, Tareq Y.; Shihada, Basem; Laleg-Kirati, Taous-Meriem; Park, Shinkyu; Lohan, Elena S. High-accuracy indoor localization and tracking systems are essential for many modern applications and technologies. However, accurate location estimation of mov- ing targets remains challenging. Various factors can degrade the estimation accuracy, including the Doppler effect, interference, and high noise. This thesis addresses the challenges of indoor localization and tracking systems and proposes several solutions. Using a novel signal design, which we named Differential Zadoff-Chu, we developed al- gorithms that accurately estimate the distances of static and moving targets, even un- der random Doppler shifts. We then developed a high-resolution multi-target ranging algorithm that estimates the ranges to targets at proximity based on the Levenberg- Marquardt algorithm. These ranging algorithms require a line of sight (LOS) between the transmitter and the receiver. Therefore, we designed an algorithm to classify re- ceived signals as LOS and non-LOS by exploiting a room’s geometry. Transforming distances into a 2D or 3D location and orientation requires solving an optimization problem. We propose using three nodes arranged as an isosceles triangle to deter- mine the position and orientation of a target. Utilizing the geometry of the isosceles triangle, we developed a highly accurate location and orientation estimation algo- rithm by solving a constrained optimization problem. Finally, we propose a Kalman filter to improve the tracking accuracy of moving targets even under non-LOS condi- tions. 
This filter fuses the position and orientation estimated using our Riemannian localization algorithm with the position and orientation estimated using an inertial measurement unit (IMU) to obtain a more accurate estimate of a moving target’s position and orientation. We validated the proposed algorithms via numerical simu- lations and real experiments using low-cost ultrasound hardware. The results showed that the proposed algorithms outperformed current state-of-the-art in accuracy and complexity. • #### High Speed Imaging of Splashing by Fuel Droplet Impacts inside Combustion Engine (2021-12) [Thesis] Committee members: Magnotti, Gaetano; Ng, Kim Choon The impact of fuel drops on the walls of combustion chambers is unavoidable in direct-injection automotive engines. These drop-solid interactions can lead to splashing of the lubrication oil, its dilution or removal, which can damage the piston or the liner from dewetting. This can also cause irregular and inferior combustion or soot formation. Understanding the drop-splashing dynamics is therefore important, especially as modern IC engines are being down-sized to achieve higher thermal efficiency. Typical cylinders of IC engines contain metal liners on their walls, which have fine azimuthal grooves to support the lubricating oil as the piston moves inside the cylinder. In this thesis we study how these grooves affect the deposition or splashing of impacting diesel drops, while the solid surface is kept dry without the lubricating oil. For these experiments we use sections of actual cylinder liners and apply high-speed video imaging to capture the details of the drop impacts. The first set of experiments used normal impacts on horizontal substrates. These experiments include a range of drop sizes and impact velocities, to identify impact conditions in Reynolds and Weber number space where the transition from deposition to splashing occurs. We also study the maximum radial spreading factor of the impact lamella, finding about 8% larger spreading along the grooves than perpendicular to them. In the second set of experiments we look at the impact on inclined substrates, where the inclination angle is between 30o–60o. This produces strong asymmetry in the maximum spreading, with the tangential velocity governing the maximum radial motion. The inclined impacts change the splashing threshold, requiring larger impact velocities for splashing. The splashing threshold deviates quantitatively from earlier theories, but shows the same qualitative trends. Furthermore, a new splashing mechanism is observed, where the impact forms a prominent ejecta crown from the downstream edge. This crown ruptures first from the grooves at the sides and subsequently the capillarity detaches the downstream levitated liquid sheet from the substrate generating a myriad of splashed droplets. Preliminary observations with impacts on wet substrates show much stronger crown-formation from the lubricating oil film, with potential for dewetting. • #### Understanding biological motions with improved resolution and accuracy by NMR (2021-12) [Dissertation] Committee members: Hamdan, Samir; Fischle, Wolfgang; Hirt, Heribert; Cierpicki, Tomasz Biological motion constitutes a key and indispensable element of all biomolecules, as dynamics tightly link spatial architecture with function. Several computational and experimental techniques have been developed to study biomolecular dynamics. 
Nevertheless, few label-free and atomic or sub-atomic resolution techniques are able to capture biological motions at close to native conditions. Indeed, the only label-free technique giving atomic level access to dynamics from picoseconds down to seconds is nuclear magnetic resonance (NMR) spectroscopy. In this dissertation, I identify the imperfections and inaccuracies accompanying the routine and well-accepted methods of probing protein dynamics via 15N spin relaxation NMR measurements. Subsequently, I propose and develop solutions and experimental approaches to overcome the limitations and eliminate artefacts. The routine procedures applying heavy water as an internal locking standard lead to artifacts in every type of relaxation rate of 15N amides due to reaction with exchangeable deuterons. The deviations from correct values are most pronounced for highly dynamic and exposed protein fragments. I introduce a novel set of directly detected 15N spin relaxation experiments yielding an unprecedent resolution resolving the signal overlap, although of lower sensitivity. I propose a more accurate. Finally, I present how the 15N spin relaxation techniques and improved routines can be applied to understand biological processes that cannot be described without monitoring molecular motions. Using the example of human BTB domains, which are directly linked to human cancer, I demonstrate the ability to detect cryptic binding sites on the surfaces of proteins. The cryptic binding site was verified by a comprehensive NMR-monitored fragment-based screening that revealed a hit-rate only for MIZ1BTB, which was the only protein displaying slow segmental motions. I also managed to track subtle and biologically-relevant dynamic modulations of an exposed H3 histone tail affected by H1 histones or other histone variants. Enhancement of H3 tail dynamics led to increased H3K36 methylation, while restriction of motions resulted in the opposite effect. These observed correlations unequivocally support the essential role of molecular mobility in biological functions. • #### Corrosion of Buried Metals: Soil Texture and Pore Fluid Saturation (2021-12) [Dissertation] Committee members: Alshareef, Husam N.; Burns, Susan; Ahmed, Shehab The corrosion of buried metals affects geosystems that range from pipelines and nuclear waste disposal to reinforced concrete and archeology. Associated costs exceed 1 trillion dollars per year worldwide, yet current classification methods for soil corrosivity have limited predictive capacity. This study -triggered by the recent development of the Revised Soil Classification System RSCS- seeks to identify the critical soil and environment properties that can improve the prediction of buried metal corrosion. The experimental studies conducted as part of this research recognize the inherently electro-chemo-transport coupled nature of buried metal corrosion, and places emphasis on phenomena that have been inadequately captured in previous studies, such as the effect of soil texture and fines plasticity, partial saturation and moisture cycles, and conditions in Sabkha environments. The comprehensive experimental program involves detailed protocols for specimen preparation, advanced visualization (X-ray micro-CT), corrosion residual characterization (XRD), and detailed image analyses of extracted coupons. 
Experiments include both laboratory mixtures and a wide range of field specimens gathered throughout Saudi Arabia; furthermore, field observations expand soil assessment to native environmental conditions. Theoretical analyses based on mass conservation and electrochemical phenomena complement the experimental study. Experimental and analytical results lead to new soil corrosivity assessment guidelines. Results show the relevance of the sediment pore fluid saturation, sediment texture, air and water connectivity, active corroding areas, the effect of environmental cycles on buried metal corrosion and evolving backfill contamination. • #### Integral Methods for Versatile Fluid Simulation (2021-11-30) [Dissertation] Committee members: Pottmann, Helmut; Heidrich, Wolfgang; Batty, Christopher Physical simulations of natural phenomena usually boil down to solving an ordinary or partial differential equation system. Partial differential equation systems can be formulated either in differential form or in integral form. This dissertation explores integral methods for the simulation of magnetic fluids, so-called ferrofluids, and the surface of the vast ocean. The first two parts of this dissertation aim to contribute to the development of accurate and efficient methods for simulating ferrofluids on the macroscopic (in the order of millimeters) scale. The magnetic nature of these fluids imposes challenges for the simulation. The two most important challenges are to first model the influence of ferrofluids on surrounding magnetic fields and second the influence of magnetic forces on the fluids’ dynamics. To tackle these challenges, two Lagrangian simulation methods have been proposed. The first method discretizes the magnetic substance as clusters of particles carrying radial basis functions and applies magnetic forces between these particles. This is a mesh-free method suitable for particle-based fluid simulation frameworks such as smoothed-particle hydrodynamics. The second method follows another direction, only discretizing the fluid’s surface as triangles and vertices. A surface-based simulation for the fluid part is employed, and a boundary element method is utilized for the magnetic part. The magnetic forces are added as gradients of the magnetic energy defined on the fluid’s surface. The second approach has to solve significantly fewer unknowns in the underlying equations, and uses a more accurate surface tension model compared to the radial basis function approach. The proposed methods are able to reproduce a series of characteristic phenomena of magnetic fluids, both qualitatively and in some cases even quantitatively which leads to a better understanding of such kind of materials. The boundary element method employed in the second part shows advantages beyond ferrofluids. In the third part of this thesis, a boundary element method is coupled with a particle-based fluid simulator for ocean simulation. The wavy motion of the ocean is simulated using large triangle meshes, while water splashes are simulated using particles. This approach is much more efficient in terms of computation time and memory consumption. • #### Towards Affective Vision and Language (2021-11-30) [Thesis] Committee members: Wonka, Peter; Michels, Dominik Developing intelligent systems that can recognize and express human affects is essential to bridge the gap between human and artificial intelligence. This thesis explores the creative and emotional frontiers of artificial intelligence. 
Specifically, in this thesis, we investigate the relation between the affective impact of visual stimuli and natural language by collecting and analyzing a new dataset called ArtEmis. Furthermore, capitalizing on this dataset, we demonstrate affective AI models that can emotionally talk about artwork and generate them given their affective descriptions. In text-to-image generation task, we present HyperCGAN: a conceptually simple and general approach for text-to-image synthesis that uses hypernetworks to condition a GAN model on text. In our setting, the generator and the discriminator weights are controlled by their corresponding hypernetworks, which modulate weight parameters based on the provided text query. We explore different mechanisms to modulate the layers depending on the underlying architecture of a target network and the structure of the conditioning variable. • #### Computational Challenges in Sampling and Representation of Uncertain Reaction Kinetics in Large Dimensions (2021-11-29) [Dissertation] Committee members: Hoteit, Ibrahim; Farooq, Aamir; Alexanderian, Alen This work focuses on the construction of functional representations in high-dimensional spaces.Attention is focused on the modeling of ignition phenomena using detailed kinetics, and on the ignition delay time as the primary quantity of interest (QoI). An iso-octane air mixture is first considered, using a detailed chemical mechanism with 3,811 elementary reactions. Uncertainty in all reaction rates is directly accounted for using associated uncertainty factors, assuming independent log-uniform priors. A Latin hypercube sample (LHS) of the ignition delay times was first generated, and the resulting database was then exploited to assess the possibility of constructing polynomial chaos (PC) representations in terms of the canonical random variables parametrizing the uncertain rates. We explored two avenues, namely sparse regression (SR) using LASSO, and a coordinate transform (CT) approach. Preconditioned variants of both approaches were also considered, namely using the logarithm of the ignition delay time as QoI. A global sensitivity analysis is performed using the representations constructed by SR and CT. Next, the tangent linear approximation is developed to estimate the sensitivity of the ignition delay time with respect to individual rate parameters in a detailed chemical mechanism. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions. The approach is based on integrating the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables, and a linearized approximation is developed to relate the ignition delay sensitivity to the scaled partial derivatives of temperature. In particular, the computations indicate that for detailed reaction mechanisms the TLA leads to robust local sensitivity predictions at a computational cost that is order-of-magnitude smaller than that incurred by finite-difference approaches based on one-at-a-time rate parameters perturbations. In the last part, we explore the potential of utilizing TLA-based sensitivities to identify active subspace and to construct suitable representations. Performance is assessed based contrasting experiences with CT-based machinery developed earlier. • #### Domain-Aware Continual Zero-Shot Learning (2021-11-29) [Thesis]
2022-01-22 21:16:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4814375638961792, "perplexity": 2941.0759386703594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00219.warc.gz"}
https://zbmath.org/?q=an:1373.55010
# zbMATH — the first resource for mathematics Profinite and discrete $$G$$-spectra and iterated homotopy fixed points. (English) Zbl 1373.55010 The paper under review is a contribution to the study of relations between the notions of homotopy fixed point spectrum for the action of a profinite group, and the associated homotopy fixed point spectral sequence. The authors investigate the following important question: Can one simplify the analysis of the homotopy fixed points under the group $$G$$ by reducing it to the study of those under proper closed normal subgroups $$K$$ and the quotients $$G/K$$? This would require two steps. For $$X$$ a fibrant profinite $$G$$-spectrum, determine whether $$X^{hK}$$ is a profinite $$G/K$$-spectrum. If this is the case, then determine whether the comparison map $X^{hG}\to \left(X^{hK}\right)^{hG/K}$ is an equivalence. The authors provide various sets of sufficient conditions on $$G$$ and $$X$$, namely $$X$$ is a $$K$$-Postnikov $$G$$-spectrum and $$G/K$$ has finite virtual cohomological dimension, that allow for obtaining the equivalence. The main application of these results is the important example of the extended Morava stabilizer group $$G_{n}$$ on the Lubin-Tate spectrum $$E_{n}$$. For this purpose, the previous results are extended to homotopy inverse limits of diagrams of spectra as $$E_{n}^{hK}$$ can be identified with the homotopy inverse limit of a suitable diagram, where $$K$$ is a normal subgroup of some closed subgroup $$G$$ of $$G_{n}$$. Therefore, $$E_{n}^{hK}$$ is a profinite $$G/K$$-spectrum and the homotopy fixed point spectrum $$\left(E_{n}^{hK}\right)^{hG/K}$$ is defined and is equivalent to $$E_{n}^{hG}$$. The associated spectral sequence has the $$E_{2}$$-page of the form $$H^{s}_{c}\left(G/K;\pi_{t}\left(E_{n}^{hK}\right)\right)$$. Other constructions of the homotopy fixed point with respect to a continuous action of a closed subgroup of $$G_{n}$$ have also been studied by Devinatz, Hopkins and the first author. However, none of these constructions enjoys the fact that the equivalence of iterated homotopy fixed points $$E_{n}^{hG}\cong\left(E_{n}^{hK}\right)^{hG/K}$$ always holds. The authors expect that $$\left(E_{n}^{hK}\right)^{hG/K}$$ will be a useful tool in chromatic theory. ##### MSC: 55P42 Stable homotopy theory, spectra 55S45 Postnikov systems, $$k$$-invariants 55T15 Adams spectral sequences 55T99 Spectral sequences in algebraic topology Full Text:
2021-09-16 18:08:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312200903892517, "perplexity": 157.35625745062578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00026.warc.gz"}
https://dougiewougie.com/category/maths/
# Category: Maths • ## Triangular numbers (Euler 11) What’s a triangular number? It is the sequence found by summing all the natural numbers, for example the third number is $1+2+3=6$. Interestingly, it counts objects arranged as a triangle. This also has closed form $T_n=\sum_{i=1}^{n}i=\frac{n(n+1)}{2}$. I started with a brute force approach – iterate through the triangular numbers and test if the number of divisors […]
2023-02-09 06:12:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6777291893959045, "perplexity": 518.3899716246741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00200.warc.gz"}
https://rs2007.limsi.fr/AERO_Page_7.html
# Characterization of the flow topology past an open cavity and reduction of the dynamics complexity ## Object Flow over open cavities is a test case for several problems of industrial interest. The cavity can indeed mimic structural discontinuities, such as car open roofs, landing gear cavities in airplanes, or cavities on the top of high speed trains, but also at lower velocity can represent problems of matter and heat transfers in cavity-like systems. This research is conducted within the theme "Unsteady Flows" of the group [[AERO]Unsteady Aerodynamics]. It aims at describing the coupling between the flow structures for a medium range Reynolds number Re between 860 and 32300) subsonic flow, with the purpose of implementing control techniques. One of the present challenging tasks, from a fundamental viewpoint as well as a practical one, lays in our ability to optimally control this kind of flow, so as to increase the aerodynamical performances, at the minimum cost. The experimental configuration investigated is the interaction between a laminar boundary layer and an open cavity, which aspect ratio R = L / H (length over height) is varied from 0.25 to 2.5. ## Description The study is performed in both an experimental setup (Figure 1), using LDV (laser Doppler velocimetry), PIV (particle image velocity) techniques and visualizations of the unsteady dynamic flow features in various planes of observation, and direct 3D numerical simulations based on the Navier-Stokes equations, in the incompressible domain. Our aim is to reduce the flow complexity by identifying some characteristic features of the flow, such as coherent stuctures, or by identifying underlying dynamical systems. $\Rightarrow$ $\Rightarrow$ Figure 1. Cavity scheme and iso-factor Q of a DNS ## Results ### Parametric investigation of the flow morphology Three-dimensional spanwise structures are developing in the flow generated by the interaction between a boundary layer and an open cavity. This 3D-segmentation is even stronger at middle external speed than at high speed corresponding the aero-acoustic interaction. The study of the cavity flow with aspect ratio and Reynolds number has shown three morphological behaviors, explored through flow visualizations and measures of the velocity field. PIV measurements, using an optical flow technique with dynamical programming, have been therefore conducted for cavity aspect ratios from R = 0.5 to 2 and for Reynolds number between 1150 and 10670. They are conducted in a (x,y) plane in cavity midspan and confirmed results obtained by flow visualizations (Figure 2)[Faure et al. 2005 a,b]. The streamlines of an instantaneous field show the dynamical flow morphology with a high accuracy despite of a strong velocity gradient between the cavity flow and the external flow. For R = 2, the flow exhibits a main vortex in the downstream part of the cavity and a secondary counter-rotating vortex in the upstream part of the cavity (Figure 3). This secondary vortex is limited to a corner vortex for R = 1 (Figure 4). For R = 2 to 1, the main vortex spreads on the entire cavity height but is limited to the upper half of the cavity for R = 0.5 (Figure 5) [Faure et al. 2007, b]. Figure 2. Flow morphology visualized by smoke injection in a midspan (x,y) plane inside the cavity for R = 2. Figure 3. Flow morphology and streamlines in a midspan (x,y) plane inside the cavity for R = 2. Figure 4. Flow morphology and streamlines in a midspan (x,y) plane inside the cavity for R = 1. Figure 5. 
Flow morphology and streamlines in a midspan (x,y) plane inside the cavity for R = 0.5. In order to understand the spatial development of flow instabilities, visualizations are carried out in a (x,z) plane located inside the cavity. The issue is to emphasize the three-dimensional development of the flow. In particular, we show that the cavity dynamical structures are not due to secondary shear layer instabilities. The study is conducted by changing the cavity length and height and the external flow velocity, and therefore the existence or non-existence of the longitudinal instabilities is addressed. These instabilities are resulting of an imbalance between centrifugal forces and radial pressure gradient, where the centrifugal forces are caused by the motion of the main cavity vortex. The flow instabilities can be identified as pairs of counter-rotating vortices inside the cavity (Figure 6) and correspond to the Görler linear instability mecanisme (Figure 7). They are also isolated pairs of counter-rotating vortices that do not form a loop, identified as Görtler vortices (Figure 8). Figure 6. Görtler diagram of stability from Swearingen 1987; the stars correspond to our measurements. Figure 7. Development of annular Görtler-like instabilities inside the cavity for R = 1 and Re = 4233: view in a (x,z) plane. Figure 8. Development of Görtler-like instabilities inside the cavity for R = 0.5 and Re = 7500: view in a (x,z) plane. The investigation of the range R = 0.25 to 2.5 and Re = 860 to 14000 of flow visualizations shows the domain of existence of the instabilities, as functions of Reynolds number and aspect ratio (Figure 9). The region where instabilities vortices are present in the plane (R, Re) is forming a compact domain. Figure 9. Existence diagram of the instabilities for different aspect ratio and Reynolds numbers. The instabilities are also confirmed from visualisation by PIV measurements (Figure 10). The presence of the pairs of counter-rotating vortices is identified by the y-component of the vorticity as blue and red regions along the cavity upstream and downstream walls (Figure 11) [Faure et al. 2007, a]. Figure 10. PIV mesurements of the velocity in a (x,z) plane for R = 1.5 and Re = 4450. Figure 11. Component of the vorticity component orthogonal to the (x,z) plane for R = 1.5 and Re = 4450. ### Dynamical reduction and topological characterization Coherent structures have been extracted out of the flow by applying a proper orthogonal decomposition (POD). The decomposition is performed on either the fully 3D velocity fields computed by DNS, 2D restrictions of the numerical flow such as to mimic the experimental data available from 2D PIV snapshots, or directly to the experimental PIV snapshots (Figure 12). Although the convergence towards the "true" coherent structures is slower in the 2D case than in the 3D case, we have shown that the POD can achieve, at least in the longitudinal plane, a fairly good separation of the coherent structures initiated within the shear layer from the structures confined inside the cavity [Pastur et al. :Phys. Rev. E, 72, 065301(R) (2005)]. Figure 12. First four coherent structures identified by POD in 2D PIV snapshots. From time-resolved LDV measurements, we have shown that, over a wide range of the Reynolds number, the shear layer is the theatre of a nonlinear competition between two modes of oscillation, whose frequencies $f_1$ and $f_2$ depend on $Re$,$R$,$H$. In the experiment, both modes tend to exclude each other over time (Figure 12). 
Surprisingly, the DNS flow does not exhibit such a dual mode competition, although the running and limit conditions are analogous to the experimental ones. A breakthrough in the study of the coherent structures has been achieved by synchronizing (time-resolved) LDV mesurements to (space-resolved) PIV snapshots. Indeed, from the LDV well-resolved cycle of oscillation, it has become possible to sort the temporally sparsed PIV snapshots in phase with the shear layer oscillations [Faure et al. 2006]. Figure 13. Axial velocity spectrogram downstream of the cavity for Ue = 1,75 m.s-1 and R = 2. Phase averaging the velocity field snapshots at a given phase $\phi _n$ of the cycle of oscillation, and varying $\phi _n$, provides the spatial sequence of the mean cycle of oscillation shown in Figure 14. Each mean phase field $\bar{u}(x,y;\phi _n)$ still exhibits the unstable wave developping in the shear layer, while smoothing out the instantaneous velocity fluctuations. Therefore, it provides an efficient way for estimating the wavelength of the spatially growing unstable wave. In order to identify which coherent structures may be associated to each mode $f_1$ and $f_2$, one must introduce criterions to decide whether mode $f_1$ or $f_2$ is present in the shear layer, at any given time $t$. This is done using Hilbert-based complex demodulation of the LDV signal, around each mode $f_1$ and $f_2$. Recovering at any time $t$ the instantaneous amplitudes $A_1(t)$ and $A_2(t)$ of the modes $f_1$ and $f_2$, and comparing them to some threshold value (for example the mode mean amplitudes), it becomes possible to separate furthermore the PIV snapshots into two subsets $\mathcal{S}_1$ and $\mathcal{S}_2$, each corresponding to a given mode of the shear layer instantaneous oscillation. The knowledge of the dispersion relation (not known yet), determined by plotting the LDV measured pulsation $\omega$ versus the mean phase field extracted wavenumber $k$, would help in disciminating between the different models of the cavity flow dynamics. Figure 14. Averaged cycle of the cavity flow obtained by phase averaging LDV-synchronized time-sparsed PIV snapshots. LDV measurements, spatially localized downstream of the cavity, exhibit dynamical behaviours whose fractal dimension remain less than 5 (Figure 15). This means that the flow underlying dynamics can reasonably be considered as strongly be deterministic. Figure 15. 2D views of the 3D phase space portrait of the embedded LDV signal. In the DNS flow, we have shown that the fully 3D space-time behaviour of the flow could be reconstructed from 2D restrictions of the 3D coherent structures determined by POD [Podvin et al], which open new perspectives for reconstructing fully 3D flows from simple 2D observations. ## Perspectives Two main aspects of the cavity flow will be under investigation in the next two years. • Using time-resolved (high speed) PIV, we plan to simultaneously determine the velocity and acceleration fields in the cavity flow. From their simultaneous knowledge, it may become possible to identify the dynamical system underlying the temporal dynamics of the coherent structures extracted by the POD. This work will be conducted in the framework of the HiSpeed ANR contract. • The flow topological squeleton does not keep very much degree of freedom to the flow, which has to conform to it. Henceforth, its knowledge would provide a very efficient tool for designing black boxes for flow control applications. 
This work will be supported by the DIB ANR contract, in collaboration with the LEA (Poitiers) and Peugeot. ## References PASTUR, L. R., LUSSEYRAN, F., FAURE, Th. M., FRAIGNEAU, Y., PETHIEU, R., DEBESSE, Ph. (2007) Quantifying the non-linear mode competition in the flow over an open cavity at medium Reynolds number, Experiments in Fluids (accepted) FAURE, Th. M., ADRIANOS, P., LUSSEYRAN, F., PASTUR, L. R. (2007, b) Visualizations of the flow inside an open cavity at medium range Reynolds numbers, Experiments in Fluids, vol. 42, n°2, pp. 169-184 FAURE, Th. M., PECHLIVANIAN, N., LUSSEYRAN, F., PASTUR, L. R. (2007, a) Apparition de structures tourbillonnaires de type Görtler dans une cavité parallélépipédique ouverte de forme variable, Actes du 18ème Congrès Français de Mécanique, CFM2007-0008, Grenoble, 27-31 août 2007 LUSSEYRAN, F., PASTUR, L. R., FAURE, Th. M., PETHIEU, R. (2007) Structures tourbillonnaires cohérentes et intermittence des modes fréquentiels dans un écoulement en cavité ouverte, Actes du 18ème Congrès Français de Mécanique, CFM2007-1197, Grenoble, 27-31 août 2007 PETHIEU, R., PASTUR, L. R., LUSSEYRAN, F., DEBESSE, Ph., FAURE, Th. M. (2007) Caractérisation expérimentale de la compétition non-linéaire de modes de Kelvin-Helmholtz dans un écoulement en cavité, 10ème Rencontre du Non-Linéaire, Paris (France), 14-16 Mars 2007, pp. 143-148 FAURE, Th. M., LUSSEYRAN, F., PASTUR, L. R., PETHIEU, R., DEBESSE, Ph. (2006) Développement d’instabilités dans un écoulement subsonique se développant au-dessus d’une cavité : mesures synchronisées PIV-LDV, 10ème Congrès Francophone de Techniques Laser, Toulouse (France), 19-22 Septembre 2006, pp. 577-584 LUSSEYRAN, F., PASTUR, L. R., DEBESSE, Ph., FAURE, Th. M. (2005) Dynamique locale de l'écoulement dans une cavité ouverte, 5e Colloque de Chaos Temporel et de Chaos Spatio-temporel, Le Havre (France), 12-13 Décembre 2005, pp. 63-68 PASTUR, L. R.,LUSSEYRAN, F., FRAIGNEAU, Y., PODVIN, B. (2005) Determining the Spectral Signature of Spatial Coherent Structures, Physical Review E, Phys. Rev. E, 72, 065301(R). PASTUR, L. R., LUSSEYRAN, F., FAURE, Th. M., FRAIGNEAU, Y., GOUGAT, P. (2005) Dynamical reduction of a subsonic flow over a cavity with an aim of control, 6th SIAM Conference on Control and its Applications, New Orleans (USA), July 11-14, 2005 LUSSEYRAN, F., FAURE, Th., ESCHENBRENNER, C. & FRAIGNEAU, Y.(2004) Shear layer instability and frequency modes inside an open cavity, 21st International Congress of Theoretical and Applied Mechanics, Warsaw (Poland), August 15-21, 2004, FM13S_10188. FAURE, Th. M., DEBESSE, Ph., LUSSEYRAN, F., GOUGAT, P. (2005, b) Structures tourbillonnaires engendrées par l’interaction entre une couche limite laminaire et une cavité, 11ème Colloque de Visualisation et de Traitement d’Images en Mécanique des Fluides, Lyon (France), 6-9 Juin 2005 - FAURE, Th. M., ADRIANOS, P., DEBESSE, Ph., LUSSEYRAN, F., PASTUR, L. R. (2005, a) Dynamique tourbillonnaire 3D dans une cavité ouverte de rapport de forme variable, 6ème Journée de Dynamique des Fluides sur le Plateau, Université Paris XI, Orsay, 14 Novembre 2005 -
2017-03-23 16:22:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5717387199401855, "perplexity": 3098.325305768272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187144.60/warc/CC-MAIN-20170322212947-00135-ip-10-233-31-227.ec2.internal.warc.gz"}
http://italianroots.blogspot.com/
# Some new faces in the Parliament for The Good Politics Of course the problem of ungovernability in Italy is due to the fact that the Left, although obtained the relative majority of votes, it didn't reach the lower bound of the absolute majority of Deputies/Senators in the chambers. In addition there is the apparent irreconcilability among different parties in order to join the number and reach that lower bound. Part of the problem is due to the electoral law that allows a situation like this. It could be even worse: the relative majority of one chamber could have been different from the relative majority of the other. If that was the situation i believe (but i am not sure) that the winner Presidente del Consiglio would have been the winner candidate of the Camera. But the ability for him to produce a Govern would have been unrealistic at all. One can think that the absolute majority into the two chambers shouldn't be a necessary precondition to give the Country a Government. And, in fact, it is not necessary. In Italy we had some cases of Governo di Minoranza (Minority Government): in 1953 with Einaudi and in 1976 with Andreotti. But such a Government is not stable, because it cannot count on a safe support of the Parliament. So, it is pretty obvious that an electoral law like that should be changed, in order to give anyway an absolute majority in both of the chambers to the relative winner. But i still believe that it is unfair to the citizens that the proportions were not respected. The Left had about 30% of the votes in both the chambers. If a better electoral system gave the Left 50% of the Deputies and the Senators to the Left, the preference of 20% of citizens would not be respected. Italian population is about 60 millions people. 20 percent is 12 millions people which, in that case, would have voted for somebody but elected somebody else. Unfair. Under this point of view, a Minority Government wouldn't be so unacceptable. A Governo di Minoranza can still work in the Parliament thanks to article 67 of the constitution, which specify that the mandate is personal. This means that, although a deputy/senator is elected thanks to the support of the party he is candidate for, he is personally responsible to make a decision, which can be different to the guide-line the party suggest. In this way a Government can have the necessary Fiducia votes although the parties that support that Government do not have numbers enough. This makes possible to form a Government in this situation in which the relative winner (Left) does not want to compromise with criminal Clown #1, while M5S does not want to compromise with the Left. There is still a strong feeling that this situation is a wasted opportunity for the Left that was not able to find a stable majority after 20 years of criminal Government of Berlusconi. There is the suspicion that in these 20 years some games were played under the table to artificially create a settlement of the Country despite the will of the citizens. The actors of these games are Massimo D'Alema (background leader of the Left, although with no official charge) and, of course, Berlusconi. And, obviously, international financing and economics interests. Also without considering this conspiracy theory, there is no doubt that the leading people of the Left (D'Alema on top) run the games so that they, or their puppets, keep their claws on the seats in the control room. That's why a lot of citizens are disaffected to Politics. 
They reject Berlusconi's corruption world, but in the same time they feel that a war against corruption cannot be fought with other corruption. This is one of the keys of the success of M5S. Beppe Grillo (Clown #2), which does not have any political quality, but is a great communicator, has the merit of being able to canalize the people's disillusion. With a simple paradigm: people is not disaffected to Politics: they are simply disaffected to this kind of system in which whatever they vote corruption is always the winner. And who can ever disagree with this point of view? The paradox is that electors gave so many votes to M5S not for their program (indeed somehow limited and somewhere self-contradictory), but against the corrupted system of (the other) traditional Parties. The real problem is that such a protest obtained one fourth of the valid votes, enough to invest a big number of deputies/senators. So big that their votes are determinant for any majority in both of the chambers. Enough big that it was unexpected by the M5S itself. Of course also M5S had its blocked lists, but there was no leadership that chose the names on those lists: they were chosen among common citizens by a sort of internet voting (i don't really agree with this method, which didn't look very transparent at all, but for sure i like this much better than the nomination by the management of the party). One of the main point of the program of M5S is the renovation of the electoral system, problem that had been faced both by the Left and the Right with no success because not convenient for their leaderships. The approach of M5S is the right one: if the leaderships cannot or do not want to do the good for the Country, they have to go home. Unfortunately M5S don't have a clear idea on what kind of electoral law they want. In my opinion, Article 67 that preserve the individuality of judgement of the deputies/senators, is a great thing, because it puts the power on the shoulders of concrete people that are supposed to be there for the good of the Country, and not on the Parties, whose aim is to collect consent among electors. Nevertheless the same Article allow any elected one to change idea, which one of the main reason of corruption in the Parliament. As soon as they had to take their first decision, the elected candidates of M5S had to face the problem to decide if to vote upon their own conscience for the good of the Country or follow the lines of the Party in accordance with the declared intentions they were elected for. The problem was if to vote for Laura Boldrini as Presidente della Camera and Pietro Grasso as Presidente del Senato (both elected for the Left, both newbies in the Parliament). Pietro Grasso was a judge that played a big role in investigating and arresting some of the main Mafia bosses in the last decade. He worked with the more famous judges Giovanni Falcone and Paolo Borsellino, both killed by Mafia in the early 90s. So, undoubtly an upright person. Of course criticized by the Right (being that their leader Berlusconi is a criminal, they don't like judges!!!) I also like Laura Boldrini a lot. She worked for FAO in the UN, she had been Italian representative in WFP, she was into UNHCR. She spent a big part of her life for the poors and the marginalized ones. Significant, in my opinion was her settlement speech when she was elected: (...) I arrive at the office after spending so many years to defend and represent the rights of The Last Ones in Italy, as in many suburbs of the world. 
It is an experience that will stay with me forever and that today I put at the service of this Camera. I will make so that this Institution be also the place of to be for those who need it most. My thoughts go out to those who have lost certainties and hopes. We should all work together to restore full dignity to any Right. We will have to engage in a real battle against poverty, and not against the Poors (1). In this Camera the universal rights of our Constitution were written, the most beautiful in the world. The responsibility of this institution is also measured in the ability to know how to represent and garantee each one of them. This Camera will have to listen to the social suffering. A generation that has lost itself, prisoner of precariousness, often forced to bring their talents away from Italy (2). We'll have to make ourselves responsible for the humiliation of women suffering violence masquerading as love. And it is a commitment that from the first day we entrust the responsibility of politics and Parliament (3). We will stand beside those who have fallen without the help or find the strength to rise up again, to the many prisoners who are now living in a state inhuman and degrading treatment as authoritatively denounced the European Court of Human Rights in Strasbourg (4). We will have to give tools to those who have lost their job or has never found, who risks losing even the last relief of Cassa Integrazione (5), the so-called esodati (6), which none of us have forgotten. For many entrepreneurs who are a vital asset for the Italian economy, and which are crushed by the weight of the crisis (7), the earthquake victims and those who suffer the effects of low daily care of our territory (8). We commit ourselves to return confidence to those retired people who have worked all their lives and now they can not go on (9). We must learn to understand the world through the eyes open of who comes from far away, with the intensity and the wonder of a child, with the unexplored inner wealth of a disabled person (10). These rights were written in the Parliament, but they were built out of here, freeing Italy from Fascism and the Italians (11). We remember the sacrifice of those who died for the institutions and for this democracy. Even with this in mind we are ideally close to those today in Florence, together with Luigi Ciotti, that remember all the deaths at the hands mafia (12). To their sacrifice each of us and this country owe a lot. And much, much we owe also to the sacrifice of Aldo Moro and his bodyguards that today we remember with emotion the day on which the anniversary of their murder (13). This is a Parliament largely renewed. Lets shake out of ourselves any delay in giving back full dignity to our Institution that will be able to take back the centrality and responsibility of its role. Let us make this Camera the home of good Politics. Let the Parliament and our work transparent, also in a choice of sobriety that we owe to Italians (14). I will be the president of all, strting from who didn't vote for me, I will for my function be a place of guarantee for each one of you and for the whole country. Italy is part of the core of the founders of the European integration process, we strive to bring the Italian citizens in this challenge, to a project that is able to recover the entire vision and mission that were designed with foresight, by Altiero Spinelli (15). 
Let's work for Europe being a big dream, a crossroads of peoples and cultures, a safe port for the rights of people, a place of freedom, brotherhood and peace. Even the protagonists of religious spiritual urge us to be more daring: for this we've received with joy the actions and words of the new pope, who came symbolically "from the end of the world". To Pope Francis, a greeting full of hope for all of us. Let me also salute the International Institutions, Associations and Organizations of the United Nations in which I worked for 24 years and let me - since it has been so far my efforts - a thought for the many, too many nameless dead the our Mediterranean houses (16). A sea that will have to increasingly become a bridge to other places, other cultures, other religions (17). I feel strongly the call of the Presidente della Repubblica about the unity of the Country (18), a reminder that this court is called to fully collect with conviction. Politics must return to be a hope, a service, a passion. We are starting a journey, today we start a journey. I will try to bring together with each one of you, with humility and care, the demand for change that now all Italians, especially our children, ask to Politics. (Translation of mine) Notes: (1) Lots of the measures to contrast the crisis engaged by Elsa Fornero (minister of the outgoing Monti's Government) affect mainly the lower classes (2) The state of Italian industry, and once again Fornero's measures made so that the the crisis affected more the youth. Almost 40% of young people are unemployed, and if they ever find a job, in most of cases it is a temporary job on which one cannot found his choices of life. (3) Statistics of violence to women in Italy are impressive. (4) She refers to the overcrowded italian prisons, which make unhuman the conditions of the prisoners, as denounced by the European Court. (5) "Cassa Integrazione" is a State support for employers whose work is temporarily suspended due for the crisis. The fund is lowering down because of the crisis itself. (6) "Esodati" are those people who were asked to quit working in change to an anticipate retirement program, due for the crisis again. Mrs. Fornero declared that this form of anticipated retirement was not valid, so that they don't get those money and they are no reintegrated to their job. (7) Small enterpreneurs feel so much the weight of this crisis that we had a lot of cases of suicides because they cannot stay in business and they cannot pay the work of their employees either. (8) She refers to the last disastrous heartquake happened last year in Parma surroundings, who are waiting in vain for money from the State. But also for the heartquake we had few years ago in Ancona, and also for other disasters, for example floods due to overbuilding (9) The retirement fund should be integrated proportionally to the inflaction. Instead retired people's purchasing power is reducing. This too is an effect of the crisis. (10) one word also for immigrants, right of children, handicapped people. (11) Italy is a republic founded on anti-fascism (12) During the day this speach was pronounced in Florence there was a big popular demonstration against Mafia remembering those ones that died fighting against it. (13) Aldo Moro was a great state-man, killed, along with his bodyguards, by some terrorist group in the early 70s. 
(14) She perfectly points out the problem of a Parliament that till now was decaying, and there is now a good opportunity to raise to the dignity it supposedly should have. (15) This is addressed to who proposes (not very realistically, indeed) to exit European Community, as a controverse point of the program of M5S (16) This refers to all those poor people immigrating from Africa that die on the boats before reaching Italy (17) In the speach there is some polemics to the Bossi-Fini law (made by the Right) that made more difficult for immigrants to obtain a regular visa. (18) Polemics also against Lega that wants to divide rich north from poor south. Article 67 gives the deputies/senators freedom to vote for whatever also against the directives of the party they belong to. For that reason it also allows a criminal commerce of deputies/senators, favoring corruption (especially if the majority in the chambers is not wide). That's why the internal guideline of M5S, in order to fight corruption, is to find an agreement each other before voting a measure in Parliament and give all the same vote. This indeed eliminates the risk that somebody vote for a personal interest (in this way they nullify de facto the effect of Article 67). While Boldrini (Camera) won with the votes of the Left (thanks to Porcellum the Left has the majority of deputies in the Camera), for Grasso the things were different. Being that the number of senators of the Left were not enough, he was elected thanks to some votes of M5S also. Also considering the level of the other candidate (Renato Schifani of the Right), it would have been really a shame if the senators of M5S wouldn't vote for Grasso. This shows the importance of Article 67, also for the deputies/senators of M5S, who despise that article so much. Of course Clown #2 (which is not a deputy nor a senator - he's just a front man of M5S) was pissed for the fact that the vote of "his" senators were divided (most of them abstained and few voted for Grasso). But that is another political game: Grillo wishes that Bersani fails in forming a Government, or, even better, that Bersani finds an agreement with Berlusconi in order to form a wide-coalition Government (as it is in Germany with the "Grosse Koalition", or as it happened last year with Monti as an extreme measure). Such a Government would be based on an unstable majority, so it would probably fall after something like one year. The citizens would be called to elections again, and after another disaster the traditional parties (Left and Right), that will appear incapable to give a government to the Nation, would loose consent. This way M5S will have more and more popularity. Too bad, in the mean time, Italy will suffer a period in which important decisions to recover from such a social and economical dramatic situation cannot be taken. Anyway, if the situation is like that due to a bad electoral system, going to vote with the same electoral system would appear a nonsense. And to change the electoral law we need a Government. The election of Grasso and Boldrini is a wonderful thing. They both come from the Left coalition (this is why M5S deputies/senators initially didn't want to vote for them). But in the intention of the Left, the candidates for the Left for those offices should have been Anna Finocchiaro and Dario Franceschini. 
I don't like them both, and anyway, whatever judgement one can give them, there is no doubts that they both belong to that kind of Politics i described in the previous posts, which created disaffection to the citizens because the parties completely ignore the real needs of the citizens, and they work much strongly for sharing the powers instead of solving the problems. Not to mention any case of corruption here. Both Finocchiaro and Franceschini belong to the "D'Alema way". It is clear enough that nor Finocchiaro nor Franceschini could ever have any vote from M5S. Which thing would have made even more difficult the task to form a Government. In other words, although i somehow don't share the principles and the programs of M5S, i strongly believe that the ruinous party-centered system of power can be fixed only by a political force like M5S. Both Left and Right, and also other minor parties and coalitions have all the guilt not to be able to represent the values of the citizens. To tell the truth i believed that at the end Bersani would have found an agreement with Berlusconi, which would have leaded to three disastrous effects: Firstly there would have been a terrible government just when Italy needs some equality and solidarity (another Monti-kind Government would't be bearable by the lower classes). Secondly the duration of the government would have been short anyway, and after that the Left would have lost even more consent for any new election, which would have made the Right (if not M5S alone) at the power again. And as a Third point, Berlusconi with an Institutional charge, would have avoided to be processed again. Now, with the election of Boldrini and Grasso, this alliance looks more far. Thanks heavens. The last perfect step of Bersani now would be a back step, with the proposal to Napolitano (Presidente della Repubblica) of another person, as Presidente del Consiglio. Somebody upright and irreproachable like Boldrini and Grasso. Somebody that could be supported by M5S too (although they said they would never support a "traditional" government). And i would be happy of it. And i also bet that a large share of Italians would be happy too. # Antitrust, corruption, sobriety in Politics One other key point in M5S programs is an efficient antitrust law, to fill the legislative hole that allows Berlusconi to own a so big share of media. It's obvious that in a country where media enter all the houses, who owns them shouldn't be allowed to run for an institutional office, because it wouldn't be a fair competition. Popularity of M5S was in fact possible also because they based all of their communication on the Internet instead of the traditional media (Italy is having a total coverage of reasonable speed internet only in these last years). To tell the truth fair media is a point also of the programs of all the other political forces (except Berlusconi's Right, of course). But none of them did solve the problem in the past, which thing makes people believe one of these options: the Left agreed with Berlusconi under the table (some kind of power seat in change of the freedom to do whatever he wants with Media) or, at least, the Left is incapable to do a simple law against free propaganda. In both the cases, the Left shows unable to solve the problem. In this way Berlusconi will always be able to do whatever he wants, supported by some kind of propaganda (a lot of people, me included, just hope in the Power of Death to solve what the Left cannot). 
# Money Another effort of M5S is to try to reduce corruption in the Parliament. In order to do that they suggest the reduction of salary for the public charges, strict control of the exchange of money, abolition of party public financing, limitation of the time in which one single person can cover a mandate. Everybody looks like agreeing with these points, especially in a crisis period like this. I believe we should be careful also with these points. First of all i believe that a reasonable salary should be given to the deputies and senators, because otherwise Politics would become a thing for rich people, and i believe that this would be the right opposite of democracy. Politics should be a place to govern society, not to protect the privileges of the higher classes of people. Moreover i believe that the parties should be covered of the expenses with public money, because otherwise they would need to find private sponsors. And private companies would pay money only if they have something in change. I'd like a system of parties that try to do things for the Country, not for the lobbies. Finally, although i believe that there must be a change in the people in the Parliament, i also believe that the work of the Politician is something that is learnt thanks to experience. One for all, a politician i like is the Presidente della Repubblica Giorgio Napolitano. He is a honest old grandpa that devoted all his life for politics. I believe it is unfair that the politicians can decide their own salary, that the parties waste money, and that the politicians self protect their own seats for personal interests. This problem put serious obstacles to democracy. But, also, i believe that changing the rules is a very delicate subject, if we want to protect democracy. # Parliamentarian immunity Finally, a subject for which M5S is so popular is that they are against parliamentarian immunity. They actually want to abolish it at all. This kind of immunity was abused in a lot of cases. One for all Berlusconi. Corruption, implication with Mafia, even underage prostitution. Subjects like these are accuses the Magistrature is trying to make Berlusconi responsible of, but he is avoiding processes thanks to immunity. It's pretty obvious that a situation like this must be changed. But it is also true that Immunity was introduced to protect Legislative and Executive powers from any possible attack of the Judical power. The equilibrium among these powers is the base of Italian democracy. It needs a lot of care to change the rules that support this equilibrium. Moreover immunity is thought to protect deputies and senators from each other. An accuse of some crime cannot be used to block the works in progress into the Parliament or the Government. If a politician cannot do his job because he is busy answering the Justice for potential fake accusations, we are in trouble. M5S propose to make ineligible those ones that are or have been investigated for some crimes. If we apply this rule, i believe that somebody would build fake accusations in order to drive the judges to investigate some political enemy, just in order to get rid of those enemies. # Other weird points on M5S program One point that they suggest, in which i am very fascinated since before M5S existed at all, is the philosophy of Degrowth. I believe it is something to take in consideration because the world resources are not infinite. So, the global economy cannot just indefinitely grow. 
Therefore if economy grows in some countries, it has to reduce in some others, and this is the base of poverty in the world. What happens is that the rich countries are more powerful, and they can grow. Poor countries instead do not have the power to contrast this, so they are getting poorer and poorer. Moreover i believe that this happens within the single country. So that the difference between rich and poor (in Italian we call it "forbice"="scissors") grow. This is not fair, if we want a just world. That's why we, rich countries, have to stop growing. The only acceptable way to stop growing is to level the wealth of everybody to ensure that everybody would be able to access to the essential needs. But it is clear enough that Degrowth can be applied only globally. If an economy like the United States, for example, unilaterally decide to stop growing, in few months other more aggressive economies like China, for example, would reduce United States to the level of third world. That's why Degrowth cannot be seriously part of a single nation political program. # Exiting from the Euro? One problem that aggravate the crisis in Italy is the unbalance of distribution of European economy. Rich nations like Germany have some privileges that poor countries like Greece don't have. And understandably German people want to keep those privileges. The effect is that Greece is going down and down because Europe want it to refund the debts. This is a myopic way to see the problem, because punishing Greece because its economy is doing bad, make its economy do even worse. The real truth is that European community doesn't make sense if there is not a political integration among the countries. If Germany (and the rich countries) do not want to help economy in Greece there is no reason for an European union. And if we abolish Europe everybody would loose, Germany included. To make a comparison, if a State of the US, say for example Mississippi, suffer more than others for crisis, the United States won't ever think to expel Mississippi from the union. Instead the rich states would help the poor ones. This point of view is obvious because it is socially accepted that USA are a inseparable union of States. Somebody from Boston and somebody else from Los Angeles both feel like Americans. Under this point of view, if we really believe in Europe, a German should consider a Greek part of the same Nation. There cannot be bankrupt in Greece and privileges in Germany. Italian economy is not at the level of Greece, but we still are one of the worse countries of the union. Somehow Italians feel abandoned by Europe because we are paying for the fact that we are expected to be at the level of other more rich countries. This feed the anti-Europe feeling of somebody. The fact is that economists say that if we exit from Europe it will be the total bankrupt of Italy, being that our economy cannot compete. Nevertheless M5S proposes to find the way to exit the union. This would be a suicide, but the option is somehow popular among the citizens. # What i think of M5S I believe that Beppe Grillo is a Clown. In a different way than Berlusconi. Berlusconi acts like a clown because he wants to do whatever he likes no matter the wealth of Italy. Grillo instead is a comedian, and he is proposing a different way of doing politics. The things are more complex than how it was reduced by Peer Steinbrück (a German Politician that commenting Italian elections said that in Italy Two Clowns won - hence the title of these posts of mine). 
The problem is deeper in Italian Politics, and there is nothing that can be depicted so funnily by some German dude. M5S is the answer to the need of Italians to be part of their own democracy, because the power, that should be in the hands of citizens, had progressively moved to the hands of a politicians caste. I believe that it is time to renew the behavior of Politics, and to do that some rules must be fixed. I also believe that the task to do this cannot be done by who take advantage of the situation. In my opinion Italy needs some honest people that impersonate the needs of the citizens and work for them. M5S is the answer of this need. Not because they are more capable than the other politicians, but because those old faces do not represent us anymore. Politics need a renovation. It's not yet very clear the way that renovation should be done: the only clear thing is that we need that renovation. I want to get rid of Berlusconi and those ones that allowed him to be there umpunished for so many years. Grillo is a Clown, but the deputies/senators of M5S are just common people (and the fact that they are "common" people is a good news itself) that try to do their best for the Country. In several points i do not agree with their program, and i also don't trust in their capability to obtain some good results. But, for sure, they are a control for real politicians to do their best (that they guiltily didn't do till now). # What are we voting for I voted for the Left, because my personal values are more similar to the values that are traditionally associated with the Left. I cannot vote for the Right, both because their ideals are different from mine, and because with Berlusconi leading the right, nothing good will ever come out from that coalition. I didn't vote for M5S because i do not agree with their programs and i do not think that they are enough expert to run a Country. The intention of most of the people that voted for M5S was not to vote for them, but to vote against everybody else. But i don't think that we were called to vote against somebody, and i don't think that the vote express a personal judgement on something. I think we were called to decide which people and which forces should compose the government of the Country. I didn't like the way the Left proposed itself, but i was not called to say what i like. I was called to choose, among the options they gave, which one is the best. And that is the Left. Politicians try to interpret the result of the vote. I don't think it is a correct way. The result of the vote is that one about 30% is for the Left, 30% for the right, 30% for M5S. But my vote was only one. I voted 100% for the Left. If they want to understand the result of the vote they should ask themselves why so many people voted 100% against them. # What's going on now Politics in Italy looks really slow, in this period. We need a government now, but still it looks we are navigating in the middle of the ocean. Though it looks like my posts are even slower than Politics. In the mean time, yesterday Napolitano finished the Consultazioni, so today he's expected to give somebody the mandate to try to form a Government (and ask for a Fiducia vote). It looks like Napolitano is giving the charge to Bersani. But still it looks like the M5S senators won't vote the trust for him. If it will end up like that, all the cards still have to be played, but none looks good to reach some kind of result. In few hours we should know for sure Napolitano's strategy to exit this pool of mud. 
# How did it happen? The result of the last election was about that one third of the electors voted for Left, one third for Right and the last third for M5S. Not exactly divided, but pretty much: SENATO Left: 31.6% Right: 30.7% M5S: 23.8% CAMERA Left: 29.5% Right: 29.1% M5S: 25.5% The Left has the relative majority at the Camera, so, for the rules of the electoral law, it gets the absolute majority of the Deputies. In the Senate, as i said in the previous post, the relative majority of votes to the Left was enough to obtain the relative majority of the Senators, but it was far from the lower bound of the half+1. This was an extreme case, but there is no doubt that an electoral system that can lead, although within an extreme case, to the impossibility to govern the Country has some serious problems. # Porcellum One main subject in the political agenda for every political parties is the reform of the electoral law. This law is named Legge Calderoli (from the name of Roberto Calderoli, the guy that invented such a tricky thing). It is also nicknamed "Porcellum" (it could be translated from Latin as "piggy thing", "dirty trick"). This nickname was invented by Roberto Calderoli himself when he finally realized that the mechanism he invented was really a masterpiece of shit. In year 2005, Porcellum substituted Mattarellum, the previous law (a little better than this), which itself substituted a previous law that was a "perfect proportional" law. In the Proportional the nomination of candidates was based on a proportion to the number of the electoral votes. In other words, in these last couple of decades Italian electoral rules moved from a Proportional to a Majority system. The goal is to have only two (main) parties or coalitions, in which case, of course, there would not be any difference between the two systems (who wins more votes, has the majority of votes, which makes the majority of the elected, both in a proportional and a majority-system law). In my opinion, the proportional law we had before was the ideal system, because it better respected the will of the citizens. In a proportional system, if an idea is supported by a party which has a certain amount of consent among the citizens, the number of elected Parliamentarians belonging to that party that support that idea is proportional to the number of the citizens that support that party. So, the strength of that idea is proportional to the number of electors that like it. Of course the proportional system favorites the fragmentation of the Parliament is small parties. And, under that system, it didn't ever happen, as far as i remember, that one single party won the absolute majority of the two chambers (which of course could happen only if a party obtained the absolute majority of votes). Moreover, with the proportional system, also coalitions among different parties are discouraged, because, for a party, entering a coalition means to accept compromises, which in general means that the strength with which a party supports an idea is limited, which thing is less appealing for the citizens. 
For this facts, the real main difference that came out when we passed from the Proportional to the Majority-system law is that, while in the Proportional, in order to reach the absolute majority, the parties had to form alliances AFTER the elections, with the Majority-system the parties are pushed to form coalitions BEFORE the elections, being that the number of seats assigned to a coalition is increased if that coalition obtains a good result of votes. In other words, with the Majority-system it is more convenient to be part of a big party/coalition instead of a small one. Still, i believe that the Proportional system is better, because, with the Majority system, important values tend too often to be excluded from the the parties/coalitions agenda just because they are not very popular or they were canceled in the process of finding a compromise. Which, in my opinion, is the right opposite of the task of Politics: Politics should support the needs of weaker citizens against the power of the stronger ones. Politics should protect minorities against the power of majorities. I believe that the most important rule of a democratic society is that ALL the citizens are even, also who belongs to small minorities. A good thing in the Majority systems is that it should (should!!!) reduce the risk that a small party (so that, supported by few citizens), has a a big power. This can happen because the number of its deputies/senator, although small, is determinant to form an absolute majority with the ones of a big party. We call this big-power/small-party situation "Ago della Bilancia" ("hand of the scale"). This is what happened, for example, for the PSI small party, which, in the 80's played the game to be allied to one or the other big party (DC, PCI) obtaining favors (political or [!!!] personal) in change of an instrumental alliance of deputies/senators (is it just a coincidence that Bettino Craxi -PSI-, best friend of Clown #1, was Prime Minister?!?). The fact is that Porcellum didn't solve the Ago della Bilancia problem. In fact, even if we exclude the good result for M5S (Clown #2), we would end up to a situation in which in the Senate, the Left didn't obtain the absolute majority anyway. There would have been a need for a compromise between the Left and Monti's Center party, in order to have the absolute majority both in the Camera and in the Senato. The center would have had a big power although only few deputies/senators in Parliament. The Ago della Bilancia would have been Monti, which would have had the power to dismiss the Government whenever he wanted. In other words, with this majority system, the goal to form the alliances BEFORE the elections, with the advantage to have clear political programs BEFORE the citizens are called to vote, is reached. But that does not mean a big lot if AFTER the elections the winner party/coalition ends up to change its program again, in order to find a compromise with other parties. Another difference versus the Proportional system is that in the Porcellum the candidate Prime Minister is declared by the coalition BEFORE the elections (involving therefore the electors), and not AFTER. This is an obvious consequence of the fact that the coalitions, so that their political programs, are already given BEFORE the elections (while, if the alliance are settled after the elections, also the Government program must be mediated among the programs of the allied parties). 
The candidate Presidente del Consiglio is supposed to be the best person to actualize that program if ever that coalition wins the elections. So, formally the Presidente del Consiglio is still nominated by the Presidente della Repubblica, but it's obvious that the best supported nomination would be the candidate of the winner coalition, which would be the one supported by the relative majority of the citizens. Of course also in this point the Porcellum failed, being that, while the candidate of the Left is Pierluigi Bersani (which i don't really like, nevertheless is the one i voted), any compromise that will be found will probably ask for somebody else. If the purpose to vote for the Prime Minister is a kind of control by the citizens, this is not reached by the Porcellum, in fact Berlusconi is still around after 20 years of criminal government. # Corruption in Parliament Another criticism that is traditionally made to the Porcellum is the lack of "Preferenze". In the old Mattarellum, and even before in the Proportional system, the citizen was called to vote with a sign on the symbol of the party/coalition and, optionally, a name of his preferred candidate Deputy/Senator (one voting paper for each chamber). When the Porcellum was introduced, the citizens couldn't express this name (Preferenza) anymore. When it was firstly introduced i didn't think this was a bad idea. In fact i believe that the choice of a candidate deputy/senator among hundreds is a too difficult task for a citizen. In order to choose in a serious way, the citizen should at least know something about each candidate (something more than his/her membership to a party). Moreover i believe that a good or bad feeling of the citizen is more driven by something that has nothing to do with his capability in the political office he is candidate to. Just to make a stupid example, when Obama was elected for the first mandate, i liked Hillary Clinton better than Obama. This for the much i knew (not a big lot, indeed) about their declared programs when they run for the Primaries. Nevertheless i also believed that, against the Republicans, Obama appealed much better to the citizens than how Clinton would have (between the two of them, Obama is the one i would like to sit in a pub with, for a beer). Maybe "sympathy", "pleasantness" could be an important peculiarity for the President of the USA, but for sure, in the Italian Parliament i prefer a capable deputy/senator than a handsome/pretty one. I believe that an appealing candidate for the Senate/Camera has more chance to win in a system that allows Preferenze. His/her capability in Politics would be more hidden to the elector than other appearance attributes like good looking, sympathy... In other words, i believe that the job of Deputy/Senator needs a technical skill that cannot be easily judged by the average citizen (when i apply for a job [I am a software engineer], i prefer to be interviewed and evaluated for my competence about computer science; this quite never happens, being that the people that are in charge to judge me are usually totally unqualified - instead, some psycho-dumb questions like "which are the three qualities and the three defects that describe yourself better?" are more common). Also under this point of view, the Porcellum failed. In the Porcellum the candidates are nominated with "Liste Bloccate" (blocked lists) of the parties/coalitions. 
In other words, any coalition (and any party within the coalition) establishes, before the elections, an ordered list of candidates. After the election, the topmost names of that list are selected to cover the number of deputies/senators seats assigned to that coalition (and to any party within the coalition). In other words, a party can make sure that some candidate will be elected, putting that candidate on the top of the list, despite the citizens like that candidate or not. In this way, if the party leaders want to have a full control of the people in the Parliament, they would put on the top of the list the most controllable candidates. In the best case, when the leaders are honest, we end up to have a Parliament of "sheeps" which task is to vote upon the instruction of their leaders (the Parliament is this way transformed to an oligarchy de facto). In the worst case, if there are criminals among the leaders (and in Italy we do have a main one: Clown #1), the candidates will enforce their crimes. What happens is that who is not accustomed to think with his own head (maybe they can think with some other part of their bodies - at the end this is why they were nominated) is more controllable. But for the same reason he is also more corruptible. And this fact was often used by Clown #1 to move the forces into the Parliament in his favor. Romano Prodi's Government #2, for example, fell in 2008 because for a rejected Fiducia, being that Berlusconi "bought" some deputies of the Left (the news in this period, are just speaking about these facts - google "Sergio De Gregorio" for more infos about it: he confessed he accepted 3 millions euros from Berlusconi for changing his vote). The astonishing thing is that, although the games between corruptors and corrupted are clear enough to the citizens, thanks to the Porcellum the leaders of some parties still perpetrate corruption, while the corrupted deputies/senators are still "elected" by the citizens themselves. One name for all, Domenico Scilipoti passed to the Right side, receiving a "gift" for that from Berlusconi (Scilipoti was firstly elected in the Camera thanks to a blocked list of IdV (Left) in the previous elections). That "cambio di casacca" (change of jacket) helped a couple of times Berlusconi's Government not to fall in the last legislation, despite the defection of some "traitors" of the Right. Nevertheless this time Scilipoti is again re-elected in the blocked list of PdL (Right), and he is still seating his dirty ass in the Camera. The obvious questions are "why the PdL party still put a guy like Scilipoti in its blocked list, although it is prooved he was corrupted?" and "why the citizens still vote for Berlusconi's Party, although it is prooved that he is a corrupter, and that in his lists there are corrupted candidates?" Berlusconi corrupted Scilipoti, so he knows that, in case of need, he can corrupt him again. That's why Scilipoti was in Berlusconi's list. The answer to the second question is more complex, and i will try it below. But first another thought about Porcellum: This law favorites the corruption in the Parliament, in fact if a Deputy/Senator wants to be re-elected, his goal is not to convince the citizens he acted good in his office. Because the citizen has no power to re-elect him, being that he cannot express a Preferenza on the voting paper. Instead what he has to do is to show the leaders of the parties that he will be prone to their will whenever it is convenient to. 
His goal is not to have the electors consent, but to enter the blocked lists as topmost as possible. On the other hand, a Party has no convenience to nominate a "clever enough" candidate (or that looks so to the citizens), because, being that the mandate is personal, if a clever candidate is elected, in order to work for the good of the country he can decide not to follow the direction of the party (this right is established by article 67 of the Constitution). This of course reduce the power of the Party in the Parliament. So, the interest of the Party is to have a blocked list full of stupid and corruptible people. At the end we can say that Porcellum tend to give a lot of power to the parties, in particular to the ones that win the elections (thanks to the Premio di Maggioranza). Therefore the Parties, especially those ones that win the elections (which have the absolute majority in the Parliament), do not put their energy to change the electoral system, although they agree that Porcellum is a shameful law. In this scenario, the parties all look dishonest to the eyes of the elector. An elector, if traditionally votes for the Left, evaluating the fact that Berlusconi corrupted Scilipoti will certainly be enforced not to vote for Berlusconi, But, in the same time, his affection to the Left is weakened, being that Scilipoti was firstly elected in the Parliament because the Left put him in its blocked list thanks to his stupidity and corruptibility. I am an elector for the Left. But i have to admit that also the Left itself played an important role to the spread of corruption in the Parliament. So, to me, while it is very clear why i am not gonna vote for the Right, it is much less clear why i should ever vote for the Left. I can imagine that for somebody culturally oriented to the Right would do the same type of reasoning ending up to vote Berlusconi anyway, just because for rejecting to vote for the Left (to tell the truth i believe there are thousands of other reason for which Berlusconi should go home, but should all those fact really be on the shoulders of a normal elector?). If by chance i am discussing with somebody that votes for the Right, i usually point out that Berlusconi's party is full of criminals, because i believe that this is the main cause of the disaster that was guiltily perpetrated to the Country for his personal interests. In those cases i would like that not to have back the answer that in the lines of the Left they are not better at all. Of course the subject "they are all the same" is too much populist and simple. Right and Left are not the same, both in the contents and in the appearance (so, still, i am surprised that there is still an amount of people that vote for Berlusconi). But the responsibility of a corrupted Parliament is also of the Left parties, although on a smaller scale. # The same old faces are here again Another consequence of the electoral system is that, one legislation after another, the elected Deputies and Senators are always the same. If the citizen cannot decide who are the bad dudes to send away and which newbies are trustworthy enough to be introduced, then, in order to maintain the power in the same hands, the blocked lists will be filled with the same names of the previous legislation. Also in an utopian situation of perfect honesty (which is not our case at all!), this has disastrous consequences. 
If the party i vote for doesn't reach the majority, or at least a relevant part of the Parliament, in most of cases i believe it is due to the incapacity of that party's candidates to win against the competitors. If in the next election the same party presents as candidates the same people, i am pretty much sure that it will loose again. I vote for the Left coalition because i identify the values of that coalition, but if I am already sure that my vote is not useful to give the country a Parliament that support those values, why am i supposed to waste my vote? In the USA there is the good habit to change the people that loose, even if they are valid people and politicians. For example i like a lot Al Gore. But, when he lost the Presidential election, he disappeared from the political scene. At least from the run to the Presidentials. Moreover, in the USA there is also the good habit to change the people also when they win. The president of the USA mandate is only 4 years. Which can be renewed for only one other mandate, so one person can be in charge of that office for only 8 years. This is a wonderful rule, because in the worst case when the office is given to an incapable mean person (as for example George W. Bush) he cannot make so much disaster as, in proportion, in Italy Berlusconi did for over 20 years. I don't think in Italy the same rule can strictly be applied, because for Italian Constitution the power is in the hands of the Parliament, which is made of a lot of people. If we want to limit to 8 years each Deputy/Senator mandate, it would be a mess to change them all. Nevertheless i believe some tricks can be invented so that the power is not de facto centralized always to the same small group of people. In Italy, instead, we have no limits for the number of mandates one single person can have (there are limits in the local administrations, but not for the Parliament or other national offices. Think only that there is no limit at all for the duration of the mandate of a Presidente del Consiglio: Traditionally when the Parliament is renewed (every 5 years at most), the Government is renewed too with a new Fiducia vote, but theoretically while a Government keeps having the trust of the Parliament, the Prime Minister keeps leading the Country forever. To tell the truth the Left is trying to change the things, in order to have some replacement of people. This goal is pursued with the Primaries. The goal of this kind of elections is to let the citizens decide who will be the candidate of the Left that will run for the charge of Prime Minister the Political elections. The Left few months ago, for the first time, also performed the "Primarie Parlamentari", in order to decide who to put into the blocked lists. In this way they wanted to give back the citizens the right to decide their representative in the Parliament. I don't think that the Primarie is the correct instrument to decide the people that will run for the Parliament seats, but, within this Porcellum law, this is a good solution. It must be said that the freedom to decide is deeply limited on who is the candidates for the Primaries. In the Primaries 2005, for example, there were 2 main Prime Minister candidates: Pierluigi Bersani and Ignazio Marino. At the end also Dario Franceschini decided to run. Of course Bersani won, being that the newbie Marino was not very popular. The suspicious thing was that the settlement of the candidates was perfect to make so that Marino lost and Bersani won. 
This all look like a political game played by some "powerful ghosts" of the Left (Massimo D'Alema and Walter Veltroni), saving the look of democracy that the Primaries give. Everybody thought that Marino was the new man, nevertheless most people voted for Bersani (or Franceschini) because they wanted a strong man to fight a strong battle against Berlusconi. In other words, when a new face faces the scene of the Left, despite the Primaries, the old faces play "dirty" games to cut his wings. Similar dirty games were played this year for the Prime Minister primaries and for the Parlamentarian primaries. That's why Bersani was still the candidate for the Left and the blocked lists were still holding some of the same names (although there were also some newbies). # Antitrust Given the criminal things he did during his Government, Berlusconi and his party lost a lot of consent. Nevertheless the expected disaster didn't happen. Nowadays Berlusconi's consent is still about 25% of the electors. 25% of Italians still believe in Berlusconi! Why? To obtain some consent, he filled his program with lies. For example he promised that he would have given back the IMU tax (a tax on the house that was introduced last year by Monti), without explaining where he would find enough money for that (which, especially in a period of crisis, is the real problem to solve). Being that he didn't expect to win the elections, he didn't also expect also to keep the promise, but that program looked indeed attractive to a lot of people. The real question is: how can a lot of people still believe that raving Clown? Berlusconi based all his political life on the "art of appearing". He owns a half of the main mass-media in Italy, including 3 out of the 6 main TV networks. Obviously those media are allowed to show news, speak of Politics, in other words unfair canvass. In a "normal Country" possession of media would not be compatible with the office of Politics. When Berlusconi came to power, Italian laws were unprepared. There had never been a problem of such a conflict of interests. So, after he had the power he obviously didn't make any law to control this kind of problem. On the opposite, his Government/Parliament made laws with the precise intention to enforce his own business. Berlusconi's power is essentially based on free propaganda of lies. The surprising thing is that in the last 20+ years, during those few and short periods in which Berlusconi was not at the power, the Government didn't settle down any antitrust law. Why? The suspicion is that in those times there were some exchange of interests - if not of money - under the table between Berlusconi and the leaders of the Left. My general opinion is that when somebody has the power, he doesn't want to chenge the rules. He instead wants to keep the status-quo, which means to still keep the power. Of course this is a wrong strategy, because the looser will do anything to change the status-quo. But anyway, i believe that the Government efforts should try to perceive the good of the Country, and not the good of the Party. # Public financing of political parties Another theme that is in discussion in Italian political scene is the Finanziamento pubblico ai Partiti. This subject would sound strange in the US, but in Italy the political parties are supported, for the electoral expenses, by public financing. 
Just considering the skin of the problem, it seems like a shame that, especially during a deep economical crisis, the State wastes money on such a thing as electoral expenses. So, abolishing public financing of Parties is always an appealing theme to the electors. This kind of use of public money is something that irritates the citizens, also because of dishonesty of the Political class, which often used part of those money for personal interests. Just to make a minor example, from some investigations in the last months, it came out that a politician of the Left charged the fiscal receipt of the purchase of a jar of Nutella on the reimbursement of electoral expenses. Which, technically could be also legal (that jar of Nutella could be used within some kind of public event tied to the electoral campaign - i don't know...). But this expense won't ever be accepted by somebody that works hard long hours to feed his children. Of course these investigations found also bigger and more evident illegal expenses, but this case of the Nutella jar, in my opinion, is emblematic. # Parliamentarian immunity Immunita' Parlamentare is a law that protects the Deputies/Senators and the main institutional offices of the State restricting the Judical power of the Magistrature. In theory this is done to avoid interferences among the Three Powers (Executive, Legislative, Judical), making so that the investigations about some kind of crimes cannot condition the work of the Parliament. This law is often criminally used to protect against the public Justice, and so it has the opposite effect of making interference into the works of the Parliament. For example Berlusconi is now under process. Everybody know he is a corrupter. For some crimes he has already be condemned (although the penalty have been invalidated by prescription). Now he is investigated about some rake-off he paid for "buying" some deputies (De Gregorio case) and for underage prostitution (Ruby case). His political games, so, are to try to obtain an institutional charge as soon as possible so that he can abuse of Parliamentarian immunity until the crime will be prescribed. Of course citizens are kind of pissed about this behavior, and in general about this law. A normal person cannot understand why to submit to some laws while there are other people that can do pretty much whatever they want unpunished. The law should be the same for everyone, or not? This is the context in which we went to vote last February. I'll tell my interpretations and opinions on the result in the next (and hopefully last) post. Stay tuned! # Electoral disaster As i was saying in the last post, in Italy you need half of the Parliament, or, better, half +1 of the Deputies in the Camera and half +1 of the Senators in the Senato to obtain a Fiducia vote that can support a new Government. Half+1 is enough, but as bigger the majority is, as better it is, in fact within the Parliament, each deputy/senator is responsible for his own personal opinion (and his vote cannot be conditioned by the party he belongs to), so the deputies/senators can change their trust to the Government. This rule is meant to do so that the deputies/senators do the best for the country and not for their parties, which things, sometimes do not coincide. 
Of course, at the moment of the first Fiducia vote that ratifies the Government, there is reasonable certainty about the positive outcome of the vote, because the Presidente del Consiglio is nominated by the Presidente della Repubblica only after he knows that a large enough part of the Parliament agrees with that choice. In order to assess that choice, the Presidente della Repubblica does a great deal of work before the nomination. This job is called the "Consultazioni". The Presidente della Repubblica formally meets each leader of the parties/parliamentary groups/coalitions, one after the other, and tries to identify possible convergences of political programs that could produce an acceptable compromise among the parties. After the Consultazioni, the Presidente della Repubblica has a good estimate of which agreement is the most stable one to support a candidate. The purpose of the Fiducia vote is then just to officially ratify what the party/group leaders have already told the Presidente della Repubblica.

It can happen that, within a legislature (5 years), the Parliament withdraws its Fiducia from the Government. In fact, if an important measure of the Government for some reason does not obtain a majority of votes, the Government can ask, as an extreme measure, for a new Fiducia vote. If the trust is given, the Government can carry out that measure anyway, even though it did not have the support of the majority. If the trust is not given (the result of the Fiducia vote is negative), the Government is dissolved, the Presidente della Repubblica repeats the Consultazioni, hopefully finds a new compromise among the parties, and nominates another Presidente del Consiglio, who will nominate a new Government, which will in turn be subjected to another Fiducia vote.

It can happen that the Presidente della Repubblica, at this point, cannot find such a compromise among the parties, because their positions are irreconcilable. Note that this is a very unfortunate event: usually there is more than one possible nominee for Presidente del Consiglio who can satisfy at least half+1 of the deputies and senators. But it can happen. This event means that the Parliament cannot perform its main task, which is to express and support a Government. In this case the Presidente della Repubblica dissolves the Parliament and calls early political elections ("Elezioni anticipate"). The citizens go to vote and elect a new Parliament in which, hopefully, the Presidente della Repubblica will be able to find a workable majority in the two chambers in order to nominate a Presidente del Consiglio and, hence, a Government.

# The result of the 2013 Political Elections

The elections we held just last week produced a very weird result. There were the two main opposing coalitions: the Left (whose main party is PD - Partito Democratico) and the Right (whose main party is PdL - Popolo delle Liberta'). Then, historically, there was a third, center coalition (UdC), which is numerically much smaller. Plus other small and insignificant parties. The real news of this year was M5S (Movimento 5 Stelle), which did not yet exist at the previous political election. In a political election, every coalition can propose a candidate Presidente del Consiglio. A coalition commits to supporting that candidate, which makes the task of the Presidente della Repubblica easier and closer to the will of the citizens: he has to choose first among those candidates. Here is the list of the options, with the supported candidate:
1- Left: Pierluigi Bersani
2- Right: Angelino Alfano (Berlusconi is the leader, but the candidate was Alfano)
3- Center: Mario Monti (the outgoing Presidente del Consiglio - this coalition includes UdC)
4- M5S: Gianroberto Casaleggio (the leader of the Movimento is Beppe Grillo)
5- Rivoluzione Civile: Antonio Ingroia
6- Fare per Fermare il Declino: Oscar Giannino

In this discussion I won't take options 5 and 6 into consideration, because they are numerically insignificant.

Since Berlusconi (and a lot of people in his party too) had been judged a corruptor and corruptible, and since in both internal and foreign politics he really acted like a clown (clown #1 of the title of this post), the opposing coalition (the Left) was expected to gain a lot of consensus among the citizens. A lot of Berlusconi voters were expected to vote, if not for the Left, for the Center, since Mario Monti, although he carried out very rigid policies because of the disastrous international and national economic situation, was still quite popular and had helped raise Italy's international credibility. Also, some votes for Berlusconi were expected to shift to M5S, which presented itself as a protest movement against corruption in Politics. In other words, the expected result was a huge growth of the Left, a huge collapse of the Right, a good result for Monti and some good numbers for M5S. A rise in non-voters was also expected (supporters of the Right are never expected to vote for the Left en masse). This could have led to an absolute majority of the Parliament for the Left, and so a good opportunity for a stable Government. There was still the risk of not reaching the absolute majority in the Senate, in which case the Left could have allied, on a valid political program, with Monti's Center. That was an option I wouldn't have liked, but it was still an option for a stable and secure government in a difficult situation, after almost 20 disastrous years of corruption, personal interest and degradation, down to prostitution in the halls of power of Berlusconi's era.

The final result is that Berlusconi lost, but not by much; the Left gained, but not by much; Monti lost more than expected; and M5S, thanks also to its front man Beppe Grillo (clown #2), gained a lot of consensus. The abstention was also not that big. Thanks to the "premio di maggioranza" of the electoral system, in the Camera dei Deputati there is now an absolute majority for the Left (with Bersani as candidate prime minister), but in the Senate there is a mess. The Left has a relative majority of Senators, but that number is not enough to reach the half+1 needed, not even adding the senators elected with Monti's party (which is just a small number). So, the only possible agreements for the Senate are these:

1) Left+Berlusconi, which, as you can imagine, the Left does not want. And I hope that won't be the conclusion, because I pray every day that Italy won't have to bear such a jackass as Berlusconi anymore. It's humiliating.

2) Left+M5S. But, for now, M5S refuses this solution. Actually M5S refuses any possible alliance with the other "traditional" parties, preferring to stay in the opposition. This is understandable, because M5S was born to protest against the political model, in particular against the structure of the Parties, which is accused of generating corruption and of protecting the powerful hierarchs against any renewal of the Political class.
In fact the elected deputies/senators of M5S are perfectly unknown people, and they look like a bunch of Don Quixotes fighting windmills. On the other hand, a lot of the elected deputies/senators of the big parties are the same old faces that have been there for decades. Bersani himself (whom I somehow like, but...) took part in the first (1996-98) and the second (2006-08) Government with Romano Prodi as Prime Minister. And he has been in the PD's hierarchy since before the PD was born.

Another reason why Clown #2 does not want to support any party is the political convenience of his Movimento. If they don't take part in any Government, the Presidente della Repubblica must find an agreement between Left and Right. This will generate a very unstable Government, which will probably fall in less than a year, showing everybody that neither Left nor Right is really able to govern the country, as Clown #2 keeps saying.

So, as things look now, it seems there are no conditions for a possible agreement among the parties that could form a majority able to support any Government. Which, as I was saying above, should lead to early elections, called by the Presidente della Repubblica. Personally I don't like this solution, because we voted just one week ago, so I don't see how the opinion of the citizens would be different now, after nothing has happened. Or, worse, the citizens could take the disaster that followed these past elections as proof that traditional Politics cannot answer their needs, so a new election would lead to an even more powerful Clown #2 (Beppe Grillo), and hence to an even more difficult situation. And, as if this were not enough, there is another problem that complicates the situation even more.

# The White Semester

As I was explaining in the previous post, the Parliament is renewed every 5 years (or earlier, when it no longer grants its trust in a Fiducia vote). The Presidente della Repubblica, instead, is elected every 7 years (or earlier in case of death or resignation). The Parliament is renewed when the Presidente della Repubblica dissolves it, and the Presidente della Repubblica is renewed by means of an election by the Parliament. Therefore there is a contradiction if the mandate of the Presidente della Repubblica expires when there is no Parliament available to elect a new one: in that case there would be no Presidente della Repubblica to ratify a Parliament and no Parliament to elect a new Presidente della Repubblica. There is a special rule meant to avoid this contradiction: during the six months before the expiration of the Presidente della Repubblica's mandate, the Presidente della Repubblica cannot dissolve the Parliament early. This period is called the "Semestre Bianco" (White Semester). It's kind of obvious why "Semester"; don't ask me why "White"...

Anyway, to complicate the current situation, Giorgio Napolitano's mandate expires in May 2013. So we are right in the middle of the Semestre Bianco. Therefore, if there is no way to find an agreement between Left and M5S or between Left and Right, there is total ungovernability, because the Parliament is not able to ratify a Government, but at the same time it cannot be renewed.

To tell the truth, there is one last option: the so-called "Governo di Minoranza" (Minority Government). Although the minority parties (Right and M5S) still declare that they do not support the proposed Government, a big share of their Senators decide to abstain.
In this way the absolute majority in the Senate can be reached with the favorable votes of only a minority of the Senators. This is a deliberate concession by the opposition. It is also called "Non Sfiducia" ("non-mistrust"?!?). In this way the Government can take office, but obviously it cannot count on the support of the Parliament when important actions need to be taken, because as soon as a decision has to be voted on, the deputies/senators who declared they do not trust (or, better, did not declare they trust) the Government will also be asked to vote. Of course this solution is not desirable, since it is not stable. But it can be adopted in order to take a few limited actions for a few months until a new Presidente della Repubblica is elected and is able to call new political elections.

This is my description of the disastrous political situation we have in Italy. It is more interesting to analyze the reasons why we arrived at such a situation. I will give my opinion in the next post. Stay tuned!

## Friday, March 1, 2013

### The two clowns (part 1)

The purpose of this post is to answer some questions that my blog-friend Dick put to me: I am curious what thoughts you might have on this: Italy just had an election to give the proletariat what they want. What motivates voters to make such a decision? Is the driving force a desire for power by the Ruling Class to get elected at any cost? Is it in fact a good thing for the economy of Italy in the long run. [Edit after writing the draft of this post: while I was writing, I realized that the subject is so big and articulated that I will never be able to finish it in a reasonable amount of time, so I will split it into several posts. This first one is about the rules that drive politics in Italy.]

# Rules

I'd like to start with a digression to explain what "political" elections mean under the electoral rules we have in Italy. In Italy this type of election renews the two chambers of Parliament: the "Senato della Repubblica" and the "Camera dei Deputati". It is the most important type of election in Italy, since in our Republic the State is led by the Parliament (and not, as happens in the USA, for example, by the President of the USA). Unfortunately we have an electoral law that, under certain conditions, risks leaving the Parliament ungovernable (even though a few years ago it was changed with the intention of making it safer than the previous one). I think I'll make a digression about this later. Those unfortunate conditions have just occurred at these last elections. Let me try to explain better:

As I said, in Italy we vote every five years to elect the people who sit in the Parliament (315 senators and 630 deputies). Every seven years, the Parliament elects the "Presidente della Repubblica" (usually a super-partes Senator who ensures impartiality). The one currently in office is Giorgio Napolitano. One of the main tasks of the Presidente della Repubblica is to nominate a "Presidente del Consiglio dei Ministri" (the Prime Minister). Mario Monti was the last one. The Prime Minister himself/herself nominates the "Ministri della Repubblica" (Ministers). The number and the tasks of this team (the "Consiglio dei Ministri" - the Government) vary according to the program that the Presidente del Consiglio intends to carry out during his/her mandate. Usually it is a dozen to twenty people.
Although the Prime Minister is nominated by the Presidente della Repubblica, his office rests mainly on the Parliament: since the Government's decisions are voted on by the Parliament, the Government itself needs the explicit support of at least half of the Senators in the Senato and of the Deputies in the Camera. This "explicit support" is called "Fiducia". After the nomination of the Prime Minister by the Presidente della Repubblica, the Prime Minister selects and nominates the Ministers to form the Government, and after that the Parliament is called to a first Fiducia vote, in order to verify that the Government does in fact have the support of half of the Parliament.

The Italian Republic is based on a sharp separation of the three powers: Legislative (exercised by the Parlamento), Executive (exercised by the Consiglio dei Ministri) and Judicial (exercised by the Magistratura). The Magistratura is formally headed by the Presidente della Repubblica, the Parliament is led by the Presidente della Camera and the Presidente del Senato, and the Consiglio dei Ministri is led by the Prime Minister. The Presidente della Repubblica, the Presidente della Camera and the Presidente del Senato are elected by the Parliament. The Prime Minister is nominated by the Presidente della Repubblica and is supported by the Parliament. As you can see, all the powers are based on the Parliament, which is elected by the citizens.

All the actions needed to run the Country, both in relations with foreign countries and in internal policy, are of course taken by the Government. So, as you can see, although the main "institutional" power is held by the Presidente della Repubblica, who also has the task of preventing interference among the three powers, the one who leads the Country is the Prime Minister. As I was saying before, in order to lead the country the Prime Minister and his Consiglio dei Ministri need the trust (Fiducia) of the Parliament. The ENTIRE Parliament: both the Camera dei Deputati and the Senato della Repubblica. This means that a Government needs the support of the absolute majority in the Senato (50%+1 of the senators) and the absolute majority in the Camera (50%+1 of the deputies).

Since in Italy we don't have only 2 parties (as happens in America, omitting Nader), this means that if after the elections there is no party/coalition that obtains the absolute majority in both chambers, the Prime Minister needs to seek an agreement with other parties, until the number of senators and deputies is big enough. Of course such an agreement is possible only if there is an acceptable compromise on the program. If such an agreement cannot be found, the Government is not considered stable, although theoretically it can still govern. The problem is that with a Minority Government, whenever an action needs to be supported by a vote of the Parliament, the result of that vote is not predictable, since at any moment the votes against could outnumber the votes in favor. When an agreement on a single decision cannot be found, either the action is not taken, or the Prime Minister asks the Parliament for a Fiducia vote. If the absolute majority of the Parliament agrees, the action can be taken anyway. If not, the Government is dismissed. In this case the Presidente della Repubblica tries to find another Prime Minister to form another Government that hopefully has the trust of the Parliament. If such a Prime Minister cannot be found, the Parliament is also dissolved, and new elections must be held.
I believe that it is theoretically possible that only one chamber is dismissed, in which case the elections are performed only for that chamber. But this never happened in the history of Italian Republic, and i am not sure what kind of scenary this can drive to. Of course the number of the senators and deputies is given by some rules in the electoral law, basing on the votes of the citizens (although i don't like, the number of the elected is not exactly proportional to the number of the electors that voted for them). One could guess that the proportion of the number of elected Senators of a particular party within the Senato is similar to the proportion of the number of elected Deputies of that party within the Camera. In fact, when an elector goes to vote, he is given two voting papers (one for each chamber): we can presume that he will vote on both for the same party. Unfortunately it's not that simple. The Camera's Deputies are elected by a proportional law on the whole national territory, with a "sbarramento" and a "premio di maggioranza". The "sbarramento" is a mechanism intended to eliminate any too small party. If a party or a coaliotion of parties doesn't reach a minimum percentage of votes, it doesn't gain any deputy at all. That party is excluded from the Camera. The party or coalition that makes the best will add extra deputies enough to reach a minimum of 340 (which is the absolute majority of the Camera. For the Senate the mechanism is very similar, but it is evaluated on a regional base. Italian territory is divided in 20 regions. Each region assigns a number of senators proportional to the amounto of inhabitants of that region (for example Lombardy is worth 47 senators, being that it is has a lot of inhabitants, while Valle d'Aosta only 1 Senator since not a big lot of people live in that mountainous region). The Senators elected on each region compose the entire Senate. Therefore, being that the mechanism to elect the Senate is different from the one to elect the Camera, the power of a particular party within one of the two chambers could be different from the power of that party within the other chamber. Moreover the electors of the Camera are all the citizens over 18 years old, while the electors of the Senate are all the citizens over 25 years old. Being that a portion of the electors vote for the Camera but not for the Senate, the distribution of the votes can be even more different. At the end one party/coalition will have the relative majority of votes in the Camera, which fact will give that party/coalition the absolute majority of deputies. But it is not ensured that the same party/coalition, although it had the majority of the votes in the Senate, it doesn't obtain the absolute majority of the Senators number. And this is exactly what happened in these last elections. ...but this will be delayed to the next episode... 10/10/2007 ## Friday, May 4, 2012 ### Fibonacci sequence, divine proportion, democratic order and the beauty of Nature Watching the tv-series Touch, I came across some things I studied at the high school and the university, which i found very fascinating. I thought it was time to revise those subjects and i went to surf Wikipedia. Of course the show does not dig the topic very deeply, but it gives a good hint for some thoughts. # Fibonacci sequence Leonardo Fibonacci was a guy that lived around year 1200 and invented a particular sequence of integer numbers. 
Here are the first numbers of the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946... Apparently, for those not yet familiar with Fibonacci numbers, this looks like a more or less random set of numbers. One can guess that it's a growing set of numbers (except for the second and third, which are equal), and that the sequence diverges (i.e., as the numbers grow, the difference between consecutive numbers also grows), but nothing more interesting than that can be seen at a first look. The series is defined by a very easy rule, and the numbers that appear in it are also very easily computable (so much so that, to write those first ones above, I didn't consult any table and calculated them in my mind in no more than five minutes). Here's the rule: by definition the first two numbers are 0 and 1. The next ones are given by the sum of the two previous ones. Or, more formally:
N_0 = 0
N_1 = 1
For each n > 1: N_n = N_(n-1) + N_(n-2)
This is a recursive definition, which means that the result, for a particular value of n, is given by the composition of the results for other values of n. Given the definition of the rule, it is easy to compute the sequence:
N_0 = 0
N_1 = 1
N_2 = N_1 + N_0 = 1 + 0 = 1
N_3 = N_2 + N_1 = 1 + 1 = 2
N_4 = N_3 + N_2 = 2 + 1 = 3
N_5 = N_4 + N_3 = 3 + 2 = 5
N_6 = N_5 + N_4 = 5 + 3 = 8
N_7 = N_6 + N_5 = 8 + 5 = 13
N_8 = N_7 + N_6 = 13 + 8 = 21
N_9 = N_8 + N_7 = 21 + 13 = 34
N_10 = N_9 + N_8 = 34 + 21 = 55
N_11 = N_10 + N_9 = 55 + 34 = 89
...
Apart from the elegance of its definition, this construction looks completely artificial and totally useless; on the contrary, we'll see that behind its extreme simplicity there is a big number of surprising practical applications. But no rush, let's meet the "golden ratio" first.
# The Golden Ratio
The Golden Ratio, Golden Section, Golden Number, or, with some excess of drama, the "proportion of God", is a number given by the ratio of two lengths, such that the first is the middle term of the proportion between the sum of the two and the second. Given a segment AB we have to find an internal point C so that the length AC is the middle term proportional between AB and CB, or in other words: AB : AC = AC : CB (AB is to AC as AC is to CB). Naming AC = a and CB = b, the proportion becomes: (a+b) : a = a : b. The golden number φ is then equal to the ratio a : b. Its value can be computed as $\dpi{80} \fn_jvn {\varphi = \frac{1+\sqrt 5}{2}}$ It can be easily shown that also (a+b) : a = a : b = b : (a-b). The number φ, along with its multiplicative inverse $\dpi{80} \fn_jvn {\Phi =\frac{1}{\varphi}}$ has a load of interesting mathematical properties. First of all φ = 1.618033988749894848204586834... is an irrational number. Surprisingly Φ = 0.618033988749894848204586834... (for those who didn't notice, the decimal part is identical!). It is also true that the squared value of φ, φ^2 = 2.618033988749894848204586834... (the decimal part is still identical!). Another strange mathematical property is that φ^2 = φ^1 + φ^0, and in general φ^n = φ^(n-1) + φ^(n-2), which makes the sequence φ^n computable with a recursive function, just like the Fibonacci numbers:
φ^0 = 1
φ^1 = φ
φ^2 = φ^1 + φ^0 = φ + 1
φ^3 = φ^2 + φ^1 = 2φ + 1
φ^4 = φ^3 + φ^2 = 3φ + 2
...
One feature that seems remarkable to me is that the increasing powers of φ give numbers that are more and more "almost-integer". I mean not exactly integer numbers, but irrational ones that approximate integers better and better. As happens with all irrational numbers, φ too can be expressed as a "continued fraction" (this, I really did not remember!). A continued fraction, expressed as a sequence of integers [a_0, a_1, a_2, a_3, a_4, ...]
is the number calculated as $\dpi{80} \fn_jvn {[a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, ...] = a_{0}+\frac{1}{a_{1}+\frac{1}{a_{2}+\frac{1}{a_{3}+\frac{1}{a_{4}+\frac{1}{...}}}}}}$ (of course, for an irrational number the integers that appear in the continued fraction form an infinite sequence). Well, the number φ can be expressed as the continued fraction [1, 1, 1, 1, 1, 1, ...], in this way: $\dpi{80} \fn_jvn {\varphi = 1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\frac{1}{...}}}}}}$ Since the sequence is made of all 1s, which is the smallest positive integer, at each next step of the approximation the number appearing in the denominator adds the largest possible amount. So, at the n-th step this continued fraction yields a rational number that approximates the irrational number φ worse than any other continued fraction, truncated at its n-th step, approximates its own irrational number. In other words, φ is "the most irrational" number: the one that "escapes" the approximation more than all the others. And, for someone like me, who gets carried away with math like a nerd of the worst breed, these properties are already exciting, but there's much more. The Golden Section was discovered by the Greeks in the sixth century BC. For the Greeks, the number 5 had a symbolic importance: it was the sum of the masculine (3) and the feminine (2). This property has contributed to giving a kind of magical flavor to the Golden Section; in fact, if you draw a regular pentagon and its diagonals (obtaining a five-pointed star inscribed in a pentagon), the segments are in the ratio φ to each other: in the figure AB : AC = AC : CB. But since CD = AC - CB, then also AC : CB = CB : CD. AC is also equal to the side of the pentagon, so all the drawn segments in the picture are equal to the first, the last or the middle term of the proportion. In the middle of the star, then, there is another regular pentagon. If you draw the diagonals of this pentagon, you obtain this picture: which obviously has the same properties as the previous one, and so on to infinity. Similar properties can be observed on the "golden triangle"... The symbolic meaning of the Golden Section has influenced art. Phidias used the Golden Section to proportion the statues of the Parthenon (hence the use of the symbol φ - the Greek letter phi, his initial - for naming its value). Leonardo used φ to proportion the Mona Lisa. It's easy to find a lot of other examples just by searching the Net. But what does the Fibonacci sequence have to do with the Golden Section? We take the sequence, exclude the first number (which is zero), and calculate the ratio between the third and the second, between the fourth and the third, between the fifth and the fourth, and so on. I threw this calculation into an Excel spreadsheet, and this is the result: in the first column there is the index of the Fibonacci number reported in the second column, at its right. In the third column there is the value of the ratio between the corresponding number in the second column and the previous one (obviously, not being able to divide by 0, I started from the third number divided by the second). You can easily note that the values of the third column converge very rapidly to the value of φ. On the right, this convergence is shown graphically.
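For readers who would rather reproduce the table in code than in Excel, here is a minimal Python sketch (an illustration added here, not part of the original spreadsheet): it builds the sequence with the recursive rule above and prints the ratio of each Fibonacci number to the previous one, which settles on φ ≈ 1.6180339887 within a dozen terms.

```python
# Minimal sketch: generate Fibonacci numbers with the recursive rule
# N_n = N_(n-1) + N_(n-2) and watch the ratio of consecutive terms
# converge to the golden ratio phi = (1 + sqrt(5)) / 2.
from math import sqrt

phi = (1 + sqrt(5)) / 2

def fibonacci(count):
    """Return the first count Fibonacci numbers: 0, 1, 1, 2, 3, 5, ..."""
    numbers = [0, 1]
    while len(numbers) < count:
        numbers.append(numbers[-1] + numbers[-2])
    return numbers[:count]

fib = fibonacci(20)
for n in range(2, len(fib)):
    ratio = fib[n] / fib[n - 1]
    print(f"N_{n} = {fib[n]:6d}   ratio = {ratio:.10f}   |ratio - phi| = {abs(ratio - phi):.2e}")
```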
Mathematically it can be said that $\dpi{80} \fn_jvn {\lim_{n \to \infty}\frac{N_n}{N_{n-1}} = \varphi }$ (as n approaches infinity, the n-th Fibonacci number divided by the previous one approaches the Golden Section). Another strange fact about the Fibonacci numbers and the Golden Section is the way they appeared in history. The Golden Section was invented by the ancient Greeks, but after the decline of the Hellenistic period it went into disuse and was almost forgotten for over a millennium. Fibonacci in the thirteenth century invented his sequence for applications that were totally unrelated to the properties of the Golden Section, and in fact neither he nor anybody else noticed the correlation, which was discovered only a few centuries later. It's remarkable that Fibonacci was the first one to define a recursive function, although ignoring its importance. Of course, the Golden Section and many other more ancient mathematical objects can be calculated using recursive functions (which I find really ingenious, even fascinating, as the computer dude I am), but their definition in this way was found only after Fibonacci. Okay, you will say: they discovered two mathematical tricks and after more than one millennium they put them together. All very fascinating, but still we haven't seen what any of this is for.
# Applications in nature
Take some squared paper and mark one square with a pen, more or less in the center. Below it, highlight the next square in the same way. To the right of these, draw a square adjacent to the two squares drawn previously, so that its side is 1 + 1 = 2. Above this drawing, trace another adjacent square, with a side equal to the length drawn so far (2 + 1 = 3). To the left of all these squares trace another one whose side rests against the other squares; the side of this last one will be 3 + 2 = 5. Do the same thing below: the new square has side 5 + 3 = 8. Continue like this as long as there is space on the sheet. It's obvious that the sides of the drawn squares are equal to the Fibonacci numbers. Now we can inscribe one quarter of a circle into each drawn square so that each arc is tangent to the one inscribed in the next square and to the one in the previous. The curve we obtain is called the Fibonacci Spiral. To tell the truth, this is not exactly a "spiral": in mathematics, a spiral is a curve whose derivative in polar coordinates is continuous at each point. Here, instead, it is not: the curvature is constant within each square but it has a discontinuity every $\dpi{80} \fn_jvn {\frac{\pi }{2} }$ where it passes to the next square. In other words, a "real" spiral cannot be drawn with a pair of compasses. Anyway, the Fibonacci spiral is a good approximation of the Golden Spiral, which is a "real" spiral (a particular "logarithmic spiral" for which I omit the mathematical details). The beauty of all of this is that in nature there are a lot of examples of this spiral. One scenic example is the arrangement of the seeds in some flowers like the sunflower. The elements of pinecones, of the pineapple, and the corn seeds on the cob are arranged in the same way... Then there is the Golden Angle, which is the angle that divides the perigon (the full turn) into two parts whose ratio is equal to φ. In most plants and trees the leaves on the branches develop such that there is a golden angle between each leaf and the next. There are several cases in which different applications of the Golden Section or the Fibonacci numbers can be noticed.
For example, most flowers have a number of petals equal to a Fibonacci number (from Wikipedia: "Lilies have three petals, buttercups five, delphinia have often eight of them, calendulae thirteen, asters twenty-one and daisies usually have thirty-four or fifty-five or eighty-nine petals"). An explanation of this behavior in nature is given by the fact that, as we saw above, the Golden Section is the "most irrational" number. For example, in the picture above, the fact that between each pair of successive leaves there is a golden angle ensures that each leaf is "covered" by the subsequent ones as little as possible, and so it receives the largest possible amount of light. Another reason is that, since Fibonacci numbers do not follow a repeating order (which is an effect of their definition), each one contributes to the solidity of the whole. The idea is this: if the disposition of corn seeds on the ear were regular, say for example 50 seeds for each round, each seed would be exactly aligned with those of the previous and the next rounds. The cob might break along those lines. Also, the lines on which the seeds lie would be very crowded while the lines in between would be empty. Of course a solution to this last problem could have been to arrange the seeds in a hexagonal pattern, like the cells of a beehive. In this way the seeds would be spread as evenly as possible. But one could still find an alignment (or better, three of them, at $\dpi{80} \fn_jvn {\frac{\pi}{3} }$ from each other), and along these lines the alignment would weaken the cob. In other words, even though neither the Golden Section nor the Fibonacci numbers were invented for this reason, they both describe very well some behaviors of Nature. I imagine that Darwinian evolution developed shapes that follow these rules very well, because they win over all the other schemes. The disposition of the leaves at golden angles around the branches ensures a better insolation of the leaves themselves than any other disposition you can think of.
# The democratic order
Recursivity, in the definition of the Fibonacci sequence and then also in the Golden Section, makes me think of an order which is not imposed from the top, but built at the base, from the collaboration of the individuals themselves who suffer and benefit from the rule. The n-th Fibonacci number is difficult to calculate, unless you know the two previous ones. Knowing them, instead, the calculation is a piece of cake. The arrangement of the n-th leaf around the branch is uniquely determined by the previous leaf, and itself determines the arrangement of the next one. So, the rule is not "centralized", but applied locally. I think this is a good metaphor for democracy. Everyone contributes, through his small self, to building the order for the survival of the entire society he belongs to. Everyone's right place is given by his ancestors, and will itself determine the right place of the future generations. And everybody has the responsibility to work within the rules, which are not imposed from the top but developed out of necessity, and oriented to the conservation of the species. I believe that humanity doesn't need an established power to regulate man's life. Rather, I believe that every man should acknowledge being part of a naturally organized society, and give up some ambition for the common good.
The leaf that usurps a place it does not own leads to a deterioration of all the other leaves' conditions, compromising the efficiency of the entire branch and therefore the survival of all the leaves (including itself). (Lots of references and some pictures are from Wikipedia)
2013-05-26 05:47:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4706779420375824, "perplexity": 1642.6062350469265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706631378/warc/CC-MAIN-20130516121711-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/82424/why-kmeans-cluster-breakup-is-like-this
# Why kmeans cluster breakup is like this [closed] I have a galaxy spectrum data set (22000 in total), similar to electronic wave data: two dimensional (Flux vs Wavelength). A typical wavelength plot looks like below. Now I am doing k-means on this data set to cluster the spectra based on their shape/pattern only (using scikit-learn). Some results of the k-means clustering are baffling me. I have made a flow chart of how the candidates cluster as I increase the number of clusters from k=2 to k=5. The flow chart of the division of candidates is below (this is on a smaller subset). Now my most basic question is: why does the division of points into clusters happen the way it does in the plot, as we traverse down the graph? More specifically, why, for example, in the k=4 case does Cluster=1 form a group with such a mixed bag? Couldn't k-means instead put its constituent 1078 or 580 candidates into a single (cleaner) cluster? Also, is it a coincidence that there is always an identified 13-member group (golden arrow) (or a 47-member one)? • It is an interesting way to visualize k-means. Since you have 2D data, have you tried visualizing it with something like the graph you see here? – sai Oct 1, 2020 at 10:58 • @sai, individual waves are arrays of length 5000, so it is not 2D data. K-means would cluster based on the wavelength distribution over these arrays, so it is high-dimensional data. Am I correct? Oct 1, 2020 at 12:08 • I do not think the length matters. Just the space in which your waves vary, i.e., the min and max values of the flux and wavelength, matters. – sai Oct 1, 2020 at 12:13 • How do you generate that chart? What do the individual arrows/colors mean? Oct 1, 2020 at 13:56 • You can't visualize directly in 2D as you are in 5000D; however you can use a tool such as t-SNE to visualize your data points in 2D (try different values of perplexity) and check in the 2D space from t-SNE how the k-means clusters are distributed (@sai I think the clustering is not performed on individual (time, value) points but rather on the whole signals; signals with similar shapes will be closer). Oct 1, 2020 at 15:10
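A minimal sketch of this kind of pipeline, for concreteness (the array sizes, cluster count, and perplexity below are placeholder assumptions, not the values used above): each spectrum is one row of length 5000, KMeans clusters the whole signals, and t-SNE is used only to project them to 2D for visual inspection, as suggested in the comments.

```python
# Sketch of the pipeline discussed above (placeholder data and parameters):
# cluster whole spectra with k-means, then embed them in 2D with t-SNE
# purely to inspect how the k-means clusters are distributed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
spectra = rng.normal(size=(1000, 5000))      # stand-in for the real flux arrays

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(spectra)         # clustering acts on the full 5000-D shape

embedding = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(spectra)

# Scatter-plot embedding[:, 0] vs embedding[:, 1] colored by labels to see
# whether the k-means clusters form separated groups in the 2D projection.
```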
2023-03-22 02:14:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3631899058818817, "perplexity": 1349.2781359034234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00370.warc.gz"}
https://datascience.stackexchange.com/questions/103304/cross-validation-in-neural-networks
# Cross Validation in Neural Networks I am training a neural network and doing 10-fold cross validation to measure performance. I have read lots of documentation and forum posts saying that the set of weights that should be saved or checkpointed is the one that gives the lowest val_loss, not the highest val_accuracy, since the former usually results in higher testing accuracy. Out of curiosity, I checkpointed both the highest val_accuracy and the lowest val_loss during my training. However, I found that for some folds I am getting better testing accuracy when I use the set of weights with the highest val_accuracy compared to the lowest val_loss. So during my cross-validation, I chose the set of weights that resulted in the higher testing accuracy, regardless of whether it came from the highest val_accuracy or the lowest val_loss, and then just averaged the resulting testing accuracy across the 10 folds. Is my methodology valid?
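A minimal sketch of the checkpointing setup described above, assuming a tf.keras model whose metrics include accuracy (the file names and the "val_accuracy" metric key are assumptions, not taken from the question): two ModelCheckpoint callbacks keep one set of weights for the lowest validation loss and another for the highest validation accuracy, so both can later be evaluated on the test fold.

```python
# Sketch: keep two checkpoints per fold, one at the lowest val_loss and one
# at the highest val_accuracy, so both can be compared on the test data.
import tensorflow as tf

def dual_checkpoints(prefix="fold0"):
    best_loss = tf.keras.callbacks.ModelCheckpoint(
        f"{prefix}_best_val_loss.h5", monitor="val_loss", mode="min",
        save_best_only=True, save_weights_only=True)
    best_acc = tf.keras.callbacks.ModelCheckpoint(
        f"{prefix}_best_val_acc.h5", monitor="val_accuracy", mode="max",
        save_best_only=True, save_weights_only=True)
    return [best_loss, best_acc]

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=dual_checkpoints("fold0"))
# Afterwards, load each weight file in turn and evaluate on the held-out fold.
```

Whether it is then fair to report, for each fold, whichever of the two does better on the test data is exactly the methodological question being asked here; the sketch only shows how both checkpoints can be produced.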
2021-12-05 14:36:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48304879665374756, "perplexity": 1689.784759377393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00541.warc.gz"}
http://mathoverflow.net/questions/36802/is-cosimplicialalgebras-left-proper
# Is CosimplicialAlgebras left proper? The model category structure on co-simplicial commutative $k$-algebras, $CAlg_k^\Delta$, with fibrations degreewise surjections: is it left proper? - Hi Urs, this may or may not be useful to you, but there is a proof of properness for a certain model structure on commutative monoids in symmetric spectra. See Hornbostel (arXiv:1005.4546, Thm 3.17), and the reference he gives to Shipley: A convenient model structure... (article available on her webpage). Maybe some idea used in this proof could also be useful in your setting. –  Andreas Holmstrom Aug 26 '10 at 22:04 Thanks, Andreas. Not sure yet if that helps, but I'll see. Maybe I should mention that I am quite aware of a few statements that are awefully close to the one I am after. For instance I know that simplicial commutative algebras are left proper. And for simplicial algebras in a very general sense there is this powerful article by Charles Rezk "Every homotopy theory of simplicial algebras admits a proper model" (arxiv.org/abs/math/0003065) . Maybe I am being dense and this implies the statement for cosimplicial algebras trivially, but I am not sure. –  Urs Schreiber Aug 27 '10 at 9:11
2015-05-22 19:12:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6331346035003662, "perplexity": 732.7281363948475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926620.50/warc/CC-MAIN-20150521113206-00048-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1557211/approximately-compute-absolute-largest-eigenvalue-of-symmetrix-3x3-matrix
# (approximately) compute absolute largest eigenvalue of symmetric 3x3 matrix I need to compute (an approximation may be good enough) the largest (by absolute value) eigenvalue of a real symmetric 3x3 matrix many ($10^{6-12}$) times. Is there anything better than just computing the eigenvalues (say as described here) and then finding the absolute largest? • @PVAL defined "many". Definitely many more than 20. – Walter Dec 2 '15 at 22:25 For a $3\times3$ matrix, there is nothing wrong with computing all three eigenvalues since the resulting characteristic polynomial is of low degree, and most computer algebra systems are happy to find the roots of low degree polynomials for you. • I'm only interested in $3\times3$ matrices (as stated). Solving for all 3 eigenvalues can be done using the exact formula for the roots of a 3rd order polynomial, but does call $\arccos$ once and $\cos$ twice, which are computationally expensive. – Walter Dec 2 '15 at 22:22
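Since the question allows an approximation, one standard cheap alternative is power iteration, which only needs a handful of matrix-vector products and avoids the trigonometric calls of the closed-form cubic solution. The sketch below is an illustration (not taken from the answer above) using NumPy; for nearly degenerate leading eigenvalues it converges more slowly, so the fixed iteration count is an assumption.

```python
# Power iteration sketch: approximate the eigenvalue of largest absolute value
# of a real symmetric 3x3 matrix via repeated matrix-vector products.
import numpy as np

def dominant_eigenvalue(a, iterations=50, seed=0):
    v = np.random.default_rng(seed).normal(size=a.shape[0])  # random start avoids an
    for _ in range(iterations):                              # unlucky orthogonal guess
        w = a @ v
        norm = np.linalg.norm(w)
        if norm == 0.0:                                      # zero matrix
            return 0.0
        v = w / norm
    return float(v @ a @ v)                                  # Rayleigh quotient (signed)

a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(dominant_eigenvalue(a), np.linalg.eigvalsh(a))         # compare with exact values
```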
2021-05-13 21:11:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977497220039368, "perplexity": 307.98727316428705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00584.warc.gz"}
https://rosettacode.org/wiki/Factors_of_an_integer
CloudFlare suffered a massive security issue affecting all of its customers, including Rosetta Code. All passwords not changed since February 19th 2017 have been expired, and session cookie longevity will be reduced until late March.--Michael Mol (talk) 05:15, 25 February 2017 (UTC) # Factors of an integer Factors of an integer You are encouraged to solve this task according to the task description, using any language you may know. Basic Data Operation This is a basic data operation. It represents a fundamental action on a basic data type. You may see other such operations in the Basic Data Operations category, or: Integer Operations Arithmetic | Comparison Boolean Operations Bitwise | Logical String Operations Concatenation | Interpolation | Comparison | Matching Memory Operations Compute the   factors   of a positive integer. These factors are the positive integers by which the number being factored can be divided to yield a positive integer result. (Though the concepts function correctly for zero and negative integers, the set of factors of zero has countably infinite members, and the factors of negative integers can be obtained from the factors of related positive numbers without difficulty;   this task does not require handling of either of these cases). Note that every prime number has two factors:   1   and itself. <:1:~>|~#:end:>~x}:str:/={^:wei:~%x<:a:x=$~ =}:wei:x<:1:+{>~>x=-#:fin:^:str:}:fin:{{~% ## 360 Assembly Very compact version. * Factors of an integer - 07/10/2015 FACTOR CSECT USING FACTOR,R15 set base register LA R7,PG [email protected] LA R6,1 i L R3,N loop count LOOP L R5,N n LA R4,0 DR R4,R6 n/i LTR R4,R4 if mod(n,i)=0 BNZ NEXT XDECO R6,PG+120 edit i MVC 0(6,R7),PG+126 output i LA R7,6(R7) pgi=pgi+6 NEXT LA R6,1(R6) i=i+1 BCT R3,LOOP loop XPRNT PG,120 print buffer XR R15,R15 set return code BR R14 return to caller N DC F'12345' <== input value PG DC CL132' ' buffer YREGS END FACTOR Output: 1 3 5 15 823 2469 4115 12345 ## ACL2 (defun factors-r (n i) (declare (xargs :measure (nfix (- n i)))) (cond ((zp (- n i)) (list n)) ((= (mod n i) 0) (cons i (factors-r n (1+ i)))) (t (factors-r n (1+ i))))) (defun factors (n) (factors-r n 1)) ## ActionScript function factor(n:uint):Vector.<uint> { var factors:Vector.<uint> = new Vector.<uint>(); for(var i:uint = 1; i <= n; i++) if(n % i == 0)factors.push(i); return factors; } ## Ada with Ada.Text_IO; with Ada.Command_Line; procedure Factors is Number : Positive; Test_Nr : Positive := 1; begin if Ada.Command_Line.Argument_Count /= 1 then Ada.Text_IO.Put (Ada.Text_IO.Standard_Error, "Missing argument!"); Ada.Command_Line.Set_Exit_Status (Ada.Command_Line.Failure); return; end if; Number := Positive'Value (Ada.Command_Line.Argument (1)); Ada.Text_IO.Put ("Factors of" & Positive'Image (Number) & ": "); loop if Number mod Test_Nr = 0 then Ada.Text_IO.Put (Positive'Image (Test_Nr) & ","); end if; exit when Test_Nr ** 2 >= Number; Test_Nr := Test_Nr + 1; end loop; Ada.Text_IO.Put_Line (Positive'Image (Number) & "."); end Factors; ## Aikido import math function factor (n:int) { var result = [] function append (v) { if (!(v in result)) { result.append (v) } } var sqrt = cast<int>(Math.sqrt (n)) append (1) for (var i = n-1 ; i >= sqrt ; i--) { if ((n % i) == 0) { append (i) append (n/i) } } append (n) return result.sort() } function printvec (vec) { var comma = "" print ("[") foreach v vec { print (comma + v) comma = ", " } println ("]") } printvec (factor (45)) printvec (factor (25)) printvec (factor (100)) ## ALGOL 68 Works with: ALGOL 68 
version Revision 1 - no extensions to language used Works with: ALGOL 68G version Any - tested with release 1.18.0-9h.tiny Works with: ELLA ALGOL 68 version Any (with appropriate job cards) - tested with release 1.8-8d Note: The following implements generators, eliminating the need of declaring arbitrarily long int arrays for caching. MODE YIELDINT = PROC(INT)VOID; PROC gen factors = (INT n, YIELDINT yield)VOID: ( FOR i FROM 1 TO ENTIER sqrt(n) DO IF n MOD i = 0 THEN yield(i); INT other = n OVER i; IF i NE other THEN yield(n OVER i) FI FI OD ); []INT nums2factor = (45, 53, 64); FOR i TO UPB nums2factor DO INT num = nums2factor[i]; STRING sep := ": "; print(num); # FOR INT j IN # gen factors(num, # ) DO ( # ## (INT j)VOID:( print((sep,whole(j,0))); sep:=", " # OD # )); print(new line) OD Output: +45: 1, 45, 3, 15, 5, 9 +53: 1, 53 +64: 1, 64, 2, 32, 4, 16, 8 ## ALGOL W begin % return the factors of n ( n should be >= 1 ) in the array factor % % the bounds of factor should be 0 :: len (len must be at least 1) % % the number of factors will be returned in factor( 0 ) % procedure getFactorsOf ( integer value n ; integer array factor( * ) ; integer value len ) ; begin for i := 0 until len do factor( i ) := 0; if n >= 1 and len >= 1 then begin integer pos, lastFactor; factor( 0 ) := factor( 1 ) := pos := 1; % find the factors up to sqrt( n ) % for f := 2 until truncate( sqrt( n ) ) + 1 do begin if ( n rem f ) = 0 and pos <= len then begin % found another factor and there's room to store it % pos := pos + 1; factor( 0 ) := pos; factor( pos ) := f end if_found_factor end for_f; % find the factors above sqrt( n ) % lastFactor := factor( factor( 0 ) ); for f := factor( 0 ) step -1 until 1 do begin integer newFactor; newFactor := n div factor( f ); if newFactor > lastFactor and pos <= len then begin % found another factor and there's room to store it % pos := pos + 1; factor( 0 ) := pos; factor( pos ) := newFactor end if_found_factor end for_f; end if_params_ok end getFactorsOf ; % prpocedure to test getFactorsOf % procedure testFactorsOf( integer value n ) ; begin integer array factor( 0 :: 100 ); getFactorsOf( n, factor, 100 ); i_w := 1; s_w := 0; % set output format % write( n, " has ", factor( 0 ), " factors:" ); for f := 1 until factor( 0 ) do writeon( " ", factor( f ) ) end testFactorsOf ; % test the factorising % for i := 1 until 100 do testFactorsOf( i ) end. Output: 1 has 1 factors: 1 2 has 2 factors: 1 2 3 has 2 factors: 1 3 4 has 3 factors: 1 2 4 ... 96 has 12 factors: 1 2 3 4 6 8 12 16 24 32 48 96 97 has 2 factors: 1 97 98 has 6 factors: 1 2 7 14 49 98 99 has 6 factors: 1 3 9 11 33 99 100 has 9 factors: 1 2 4 5 10 20 25 50 100 ## APL factors←{(0=(⍳⍵)|⍵)/⍳⍵} factors 12345 1 3 5 15 823 2469 4115 12345 factors 720 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 30 36 40 45 48 60 72 80 90 120 144 180 240 360 720 ## AppleScript Translation of: JavaScript -- integerFactors :: Int -> [Int] on integerFactors(n) if n = 1 then {1} else set realRoot to n ^ (1 / 2) set intRoot to realRoot as integer set blnPerfectSquare to intRoot = realRoot -- isFactor :: Int -> Bool script isFactor on lambda(x) (n mod x) = 0 end lambda end script -- Factors up to square root of n, set lows to filter(isFactor, range(1, intRoot)) -- integerQuotient :: Int -> Int script integerQuotient on lambda(x) (n / x) as integer end lambda end script -- and quotients of these factors beyond the square root. 
lows & map(integerQuotient, ¬ items (1 + (blnPerfectSquare as integer)) thru -1 of reverse of lows) end if end integerFactors -- TEST on run integerFactors(120) --> {1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120} end run -- GENERIC LIBRARY FUNCTIONS -- filter :: (a -> Bool) -> [a] -> [a] on filter(f, xs) tell mReturn(f) set lst to {} set lng to length of xs repeat with i from 1 to lng set v to item i of xs if lambda(v, i, xs) then set end of lst to v end repeat return lst end tell end filter -- map :: (a -> b) -> [a] -> [b] on map(f, xs) tell mReturn(f) set lng to length of xs set lst to {} repeat with i from 1 to lng set end of lst to lambda(item i of xs, i, xs) end repeat return lst end tell end map -- range :: Int -> Int -> [Int] on range(m, n) if n < m then set d to -1 else set d to 1 end if set lst to {} repeat with i from m to n by d set end of lst to i end repeat return lst end range -- Lift 2nd class handler function into 1st class script wrapper -- mReturn :: Handler -> Script on mReturn(f) if class of f is script then f else script property lambda : f end script end if end mReturn Output: {1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120} ## AutoHotkey msgbox, % factors(45) "n" factors(53) "n" factors(64) Factors(n) { Loop, % floor(sqrt(n)) { v := A_Index = 1 ? 1 "," n : mod(n,A_Index) ? v : v "," A_Index "," n//A_Index } Sort, v, N U D, Return, v } Output: 1,3,5,9,15,45 1,53 1,2,4,8,16,32,64 ## AutoIt ;AutoIt Version: 3.2.10.0$num = 45 MsgBox (0,"Factors", "Factors of " & $num & " are: " & factors($num)) consolewrite ("Factors of " & $num & " are: " & factors($num)) Func factors($intg)$ls_factors="" For $i = 1 to$intg/2 if ($intg/$i - int($intg/$i))=0 Then $ls_factors=$ls_factors&$i &", " EndIf Next Return$ls_factors&$intg EndFunc Output: Factors of 45 are: 1, 3, 5, 9, 15, 45 ## AWK # syntax: GAWK -f FACTORS_OF_AN_INTEGER.AWK BEGIN { print("enter a number or C/R to exit") } { if ($0 == "") { exit(0) } if ($0 !~ /^[0-9]+$/) { printf("invalid: %s\n",$0) next } n =$0 printf("factors of %s:",n) for (i=1; i<=n; i++) { if (n % i == 0) { printf(" %d",i) } } printf("\n") } Output: enter a number or C/R to exit invalid: -1 factors of 0: factors of 1: 1 factors of 2: 1 2 factors of 11: 1 11 factors of 64: 1 2 4 8 16 32 64 factors of 100: 1 2 4 5 10 20 25 50 100 factors of 32766: 1 2 3 6 43 86 127 129 254 258 381 762 5461 10922 16383 32766 factors of 32767: 1 7 31 151 217 1057 4681 32767 ## BASIC Works with: QBasic This example stores the factors in a shared array (with the original number as the last element) for later retrieval. Note that this will error out if you pass 32767 (or higher). DECLARE SUB factor (what AS INTEGER) REDIM SHARED factors(0) AS INTEGER DIM i AS INTEGER, L AS INTEGER INPUT "Gimme a number"; i factor i PRINT factors(0); FOR L = 1 TO UBOUND(factors) PRINT ","; factors(L); NEXT PRINT SUB factor (what AS INTEGER) DIM tmpint1 AS INTEGER DIM L0 AS INTEGER, L1 AS INTEGER REDIM tmp(0) AS INTEGER REDIM factors(0) AS INTEGER factors(0) = 1 FOR L0 = 2 TO what IF (0 = (what MOD L0)) THEN 'all this REDIMing and copying can be replaced with: 'REDIM PRESERVE factors(UBOUND(factors)+1) 'in languages that support the PRESERVE keyword REDIM tmp(UBOUND(factors)) AS INTEGER FOR L1 = 0 TO UBOUND(factors) tmp(L1) = factors(L1) NEXT REDIM factors(UBOUND(factors) + 1) FOR L1 = 0 TO UBOUND(factors) - 1 factors(L1) = tmp(L1) NEXT factors(UBOUND(factors)) = L0 END IF NEXT END SUB Output: Gimme a number? 17 1 , 17 Gimme a number? 
12345 1 , 3 , 5 , 15 , 823 , 2469 , 4115 , 12345 Gimme a number? 32765 1 , 5 , 6553 , 32765 Gimme a number? 32766 1 , 2 , 3 , 6 , 43 , 86 , 127 , 129 , 254 , 258 , 381 , 762 , 5461 , 10922 , 16383 , 32766 ## Batch File Command line version: @echo off set res=Factors of %1: for /L %%i in (1,1,%1) do call :fac %1 %%i echo %res% goto :eof :fac set /a test = %1 %% %2 if %test% equ 0 set res=%res% %2 Output: >factors 32767 Factors of 32767: 1 7 31 151 217 1057 4681 32767 >factors 45 Factors of 45: 1 3 5 9 15 45 >factors 53 Factors of 53: 1 53 >factors 64 Factors of 64: 1 2 4 8 16 32 64 >factors 100 Factors of 100: 1 2 4 5 10 20 25 50 100 Interactive version: @echo off set /p limit=Gimme a number: set res=Factors of %limit%: for /L %%i in (1,1,%limit%) do call :fac %limit% %%i echo %res% goto :eof :fac set /a test = %1 %% %2 if %test% equ 0 set res=%res% %2 Output: >factors Gimme a number:27 Factors of 27: 1 3 9 27 >factors Gimme a number:102 Factors of 102: 1 2 3 6 17 34 51 102 ## BBC BASIC INSTALL @lib$+"SORTLIB" sort% = FN_sortinit(0, 0) PRINT "The factors of 45 are " FNfactorlist(45) PRINT "The factors of 12345 are " FNfactorlist(12345) END DEF FNfactorlist(N%) LOCAL C%, I%, L%(), L$ DIM L%(32) FOR I% = 1 TO SQR(N%) IF (N% MOD I% = 0) THEN L%(C%) = I% C% += 1 IF (N% <> I%^2) THEN L%(C%) = (N% DIV I%) C% += 1 ENDIF ENDIF NEXT I% CALL sort%, L%(0) FOR I% = 0 TO C%-1 L$+= STR$(L%(I%)) + ", " NEXT > ^ ^ ," "< ## C #include <stdio.h> #include <stdlib.h> typedef struct { int *list; short count; } Factors; void xferFactors( Factors *fctrs, int *flist, int flix ) { int ix, ij; int newSize = fctrs->count + flix; if (newSize > flix) { fctrs->list = realloc( fctrs->list, newSize * sizeof(int)); } else { fctrs->list = malloc( newSize * sizeof(int)); } for (ij=0,ix=fctrs->count; ix<newSize; ij++,ix++) { fctrs->list[ix] = flist[ij]; } fctrs->count = newSize; } Factors *factor( int num, Factors *fctrs) { int flist[301], flix; int dvsr; flix = 0; fctrs->count = 0; free(fctrs->list); fctrs->list = NULL; for (dvsr=1; dvsr*dvsr < num; dvsr++) { if (num % dvsr != 0) continue; if ( flix == 300) { xferFactors( fctrs, flist, flix ); flix = 0; } flist[flix++] = dvsr; flist[flix++] = num/dvsr; } if (dvsr*dvsr == num) flist[flix++] = dvsr; if (flix > 0) xferFactors( fctrs, flist, flix ); return fctrs; } int main(int argc, char*argv[]) { int nums2factor[] = { 2059, 223092870, 3135, 45 }; Factors ftors = { NULL, 0}; char sep; int i,j; for (i=0; i<4; i++) { factor( nums2factor[i], &ftors ); printf("\nfactors of %d are:\n ", nums2factor[i]); sep = ' '; for (j=0; j<ftors.count; j++) { printf("%c %d", sep, ftors.list[j]); sep = ','; } printf("\n"); } return 0; } ### Prime factoring #include <stdio.h> #include <stdlib.h> #include <string.h> /* 65536 = 2^16, so we can factor all 32 bit ints */ char bits[65536]; typedef unsigned long ulong; ulong primes[7000], n_primes; typedef struct { ulong p, e; } prime_factor; /* prime, exponent */ void sieve() { int i, j; memset(bits, 1, 65536); bits[0] = bits[1] = 0; for (i = 0; i < 256; i++) if (bits[i]) for (j = i * i; j < 65536; j += i) bits[j] = 0; /* collect primes into a list. slightly faster this way if dealing with large numbers */ for (i = j = 0; i < 65536; i++) if (bits[i]) primes[j++] = i; n_primes = j; } int get_prime_factors(ulong n, prime_factor *lst) { ulong i, e, p; int len = 0; for (i = 0; i < n_primes; i++) { p = primes[i]; if (p * p > n) break; for (e = 0; !(n % p); n /= p, e++); if (e) { lst[len].p = p; lst[len++].e = e; } } return n == 1 ? 
len : (lst[len].p = n, lst[len].e = 1, ++len); } int ulong_cmp(const void *a, const void *b) { return *(const ulong*)a < *(const ulong*)b ? -1 : *(const ulong*)a > *(const ulong*)b; } int get_factors(ulong n, ulong *lst) { int n_f, len, len2, i, j, k, p; prime_factor f[100]; n_f = get_prime_factors(n, f); len2 = len = lst[0] = 1; /* L = (1); L = (L, L * p**(1 .. e)) forall((p, e)) */ for (i = 0; i < n_f; i++, len2 = len) for (j = 0, p = f[i].p; j < f[i].e; j++, p *= f[i].p) for (k = 0; k < len2; k++) lst[len++] = lst[k] * p; qsort(lst, len, sizeof(ulong), ulong_cmp); return len; } int main() { ulong fac[10000]; int len, i, j; ulong nums[] = {3, 120, 1024, 2UL*2*2*2*3*3*3*5*5*7*11*13*17*19 }; sieve(); for (i = 0; i < 4; i++) { len = get_factors(nums[i], fac); printf("%lu:", nums[i]); for (j = 0; j < len; j++) printf(" %lu", fac[j]); printf("\n"); } return 0; } Output: 3: 1 3 120: 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 1024: 1 2 4 8 16 32 64 128 256 512 1024 3491888400: 1 2 3 4 5 6 7 8 9 10 11 ...(>1900 numbers)... 1163962800 1745944200 3491888400 ## C++ #include <iostream> #include <vector> #include <algorithm> #include <iterator> std::vector<int> GenerateFactors(int n) { std::vector<int> factors; factors.push_back(1); factors.push_back(n); for(int i = 2; i * i <= n; ++i) { if(n % i == 0) { factors.push_back(i); if(i * i != n) factors.push_back(n / i); } } std::sort(factors.begin(), factors.end()); return factors; } int main() { const int SampleNumbers[] = {3135, 45, 60, 81}; for(size_t i = 0; i < sizeof(SampleNumbers) / sizeof(int); ++i) { std::vector<int> factors = GenerateFactors(SampleNumbers[i]); std::cout << "Factors of " << SampleNumbers[i] << " are:\n"; std::copy(factors.begin(), factors.end(), std::ostream_iterator<int>(std::cout, "\n")); std::cout << std::endl; } } ## C# C# 3.0 using System; using System.Linq; using System.Collections.Generic; public static class Extension { public static List<int> Factors(this int me) { return Enumerable.Range(1, me).Where(x => me % x == 0).ToList(); } } class Program { static void Main(string[] args) { Console.WriteLine(String.Join(", ", 45.Factors())); } } C# 1.0 static void Main(string[] args) { do { Console.WriteLine("Number:"); Int64 p = 0; do { try { break; } catch (Exception) { } } while (true); Console.WriteLine("For 1 through " + ((int)Math.Sqrt(p)).ToString() + ""); for (int x = 1; x <= (int)Math.Sqrt(p); x++) { if (p % x == 0) Console.WriteLine("Found: " + x.ToString() + ". " + p.ToString() + " / " + x.ToString() + " = " + (p / x).ToString()); } Console.WriteLine("Done."); } while (true); } Output: Number: 32434243 For 1 through 5695 Found: 1. 32434243 / 1 = 32434243 Found: 307. 32434243 / 307 = 105649 Done. ## Ceylon shared void run() { {Integer*} getFactors(Integer n) => (1..n).filter((Integer element) => element.divides(n)); for(Integer i in 1..100) { print("the factors of i are getFactors(i)"); } } ## Chapel Inspired by the Clojure solution: iter factors(n) { for i in 1..floor(sqrt(n)):int { if n % i == 0 then { yield i; yield n / i; } } } ## Clojure (defn factors [n] (filter #(zero? (rem n %)) (range 1 (inc n)))) (print (factors 45)) (1 3 5 9 15 45) Improved version. Considers small factors from 1 up to (sqrt n) -- we increment it because range does not include the end point. Pair each small factor with its co-factor, flattening the results, and put them into a sorted set to get the factors in order. (defn factors [n] (into (sorted-set) (mapcat (fn [x] [x (/ n x)]) (filter #(zero? 
(rem n %)) (range 1 (inc (Math/sqrt n)))) ))) Same idea, using for comprehensions. (defn factors [n] (into (sorted-set) (reduce concat (for [x (range 1 (inc (Math/sqrt n))) :when (zero? (rem n x))] [x (/ n x)])))) ## COBOL IDENTIFICATION DIVISION. PROGRAM-ID. FACTORS. DATA DIVISION. WORKING-STORAGE SECTION. 01 CALCULATING. 03 NUM USAGE BINARY-LONG VALUE ZERO. 03 LIM USAGE BINARY-LONG VALUE ZERO. 03 CNT USAGE BINARY-LONG VALUE ZERO. 03 DIV USAGE BINARY-LONG VALUE ZERO. 03 REM USAGE BINARY-LONG VALUE ZERO. 03 ZRS USAGE BINARY-SHORT VALUE ZERO. 01 DISPLAYING. 03 DIS PIC 9(10) USAGE DISPLAY. PROCEDURE DIVISION. MAIN-PROCEDURE. DISPLAY "Factors of? " WITH NO ADVANCING ACCEPT NUM DIVIDE NUM BY 2 GIVING LIM. PERFORM VARYING CNT FROM 1 BY 1 UNTIL CNT > LIM DIVIDE NUM BY CNT GIVING DIV REMAINDER REM IF REM = 0 MOVE CNT TO DIS PERFORM SHODIS END-IF END-PERFORM. MOVE NUM TO DIS. PERFORM SHODIS. STOP RUN. SHODIS. MOVE ZERO TO ZRS. INSPECT DIS TALLYING ZRS FOR LEADING ZERO. DISPLAY DIS(ZRS + 1:) EXIT PARAGRAPH. END PROGRAM FACTORS. ## CoffeeScript # Reference implementation for finding factors is slow, but hopefully # robust--we'll use it to verify the more complicated (but hopefully faster) # algorithm. slow_factors = (n) -> (i for i in [1..n] when n % i == 0) # The rest of this code does two optimizations: # 1) When you find a prime factor, divide it out of n (smallest_prime_factor). # 2) Find the prime factorization first, then compute composite factors from those. smallest_prime_factor = (n) -> for i in [2..n] return n if i*i > n return i if n % i == 0 prime_factors = (n) -> return {} if n == 1 spf = smallest_prime_factor n result = prime_factors(n / spf) result[spf] or= 0 result[spf] += 1 result fast_factors = (n) -> prime_hash = prime_factors n exponents = [] for p of prime_hash exponents.push p: p exp: 0 result = [] while true factor = 1 for obj in exponents factor *= Math.pow obj.p, obj.exp result.push factor break if factor == n # roll the odometer for obj, i in exponents if obj.exp < prime_hash[obj.p] obj.exp += 1 break else obj.exp = 0 return result.sort (a, b) -> a - b verify_factors = (factors, n) -> expected_result = slow_factors n throw Error("wrong length") if factors.length != expected_result.length for factor, i in expected_result console.log Error("wrong value") if factors[i] != factor for n in [1, 3, 4, 8, 24, 37, 1001, 11111111111, 99999999999] factors = fast_factors n console.log n, factors if n < 1000000 verify_factors factors, n Output: > coffee factors.coffee 1 [ 1 ] 3 [ 1, 3 ] 4 [ 1, 2, 4 ] 8 [ 1, 2, 4, 8 ] 24 [ 1, 2, 3, 4, 6, 8, 12, 24 ] 37 [ 1, 37 ] 1001 [ 1, 7, 11, 13, 77, 91, 143, 1001 ] 11111111111 [ 1, 21649, 513239, 11111111111 ] 99999999999 [ 1, 3, 9, 21649, 64947, 194841, 513239, 1539717, 4619151, 11111111111, 33333333333, 99999999999 ] ## Common Lisp We iterate in the range 1..sqrt(n) collecting ‘low’ factors and corresponding ‘high’ factors, and combine at the end to produce an ordered list of factors. 
(defun factors (n &aux (lows '()) (highs '())) (do ((limit (1+ (isqrt n))) (factor 1 (1+ factor))) ((= factor limit) (when (= n (* limit limit)) (push limit highs)) (remove-duplicates (nreconc lows highs))) (multiple-value-bind (quotient remainder) (floor n factor) (when (zerop remainder) (push factor lows) (push quotient highs))))) ## D ### Procedural Style import std.stdio, std.math, std.algorithm; T[] factors(T)(in T n) pure nothrow { if (n == 1) return [n]; T[] res = [1, n]; T limit = cast(T)real(n).sqrt + 1; for (T i = 2; i < limit; i++) { if (n % i == 0) { res ~= i; immutable q = n / i; if (q > i) res ~= q; } } return res.sort().release; } void main() { writefln("%(%s\n%)", [45, 53, 64, 1111111].map!factors); } Output: [1, 3, 5, 9, 15, 45] [1, 53] [1, 2, 4, 8, 16, 32, 64] [1, 239, 4649, 1111111] ### Functional Style import std.stdio, std.algorithm, std.range; auto factors(I)(I n) { return iota(1, n + 1).filter!(i => n % i == 0); } void main() { 36.factors.writeln; } Output: [1, 2, 3, 4, 6, 9, 12, 18, 36] ## Dart import 'dart:math'; factors(n) { var factorsArr = []; for(var test = n - 1; test >= sqrt(n).toInt(); test--) if(n % test == 0) { } return factorsArr; } void main() { print(factors(5688)); } ## E This example is in need of improvement: Use a cleverer algorithm such as in the Common Lisp example. def factors(x :(int > 0)) { var xfactors := [] for f ? (x % f <=> 0) in 1..x { xfactors with= f } return xfactors } ## EchoLisp prime-factors gives the list of n's prime-factors. We mix them to get all the factors. ;; ppows ;; input : a list g of grouped prime factors ( 3 3 3 ..) ;; returns (1 3 9 27 ...) (define (ppows g (mult 1)) (for/fold (ppows '(1)) ((a g)) (set! mult (* mult a)) (cons mult ppows))) ;; factors ;; decomp n into ((2 2 ..) ( 3 3 ..) ) prime factors groups ;; combines (1 2 4 8 ..) (1 3 9 ..) lists (define (factors n) (list-sort < (if (<= n 1) '(1) (for/fold (divs'(1)) ((g (map ppows (group (prime-factors n))))) (for*/list ((a divs) (b g)) (* a b)))))) Output: (lib 'bigint) (factors 666) (1 2 3 6 9 18 37 74 111 222 333 666) (length (factors 108233175859200)) 666 ;; 💀 (define huge 1200034005600070000008900000000000000000) (time ( length (factors huge))) (394ms 7776) ## Ela ### Using higher-order function open list factors m = filter (\x -> m % x == 0) [1..m] ### Using comprehension factors m = [x \\ x <- [1..m] | m % x == 0] ## Elixir defmodule RC do def factor(1), do: [1] def factor(n) do (for i <- 1..div(n,2), rem(n,i)==0, do: i) ++ [n] end # Recursive (faster version); def divisor(n), do: divisor(n, 1, []) |> Enum.sort defp divisor(n, i, factors) when n < i*i , do: factors defp divisor(n, i, factors) when n == i*i , do: [i | factors] defp divisor(n, i, factors) when rem(n,i)==0, do: divisor(n, i+1, [i, div(n,i) | factors]) defp divisor(n, i, factors) , do: divisor(n, i+1, factors) end Enum.each([45, 53, 60, 64], fn n -> IO.puts "#{n}: #{inspect RC.factor(n)}" end) IO.puts "\nRange: #{inspect range = 1..10000}" funs = [ factor: &RC.factor/1, divisor: &RC.divisor/1 ] Enum.each(funs, fn {name, fun} -> {time, value} = :timer.tc(fn -> Enum.count(range, &length(fun.(&1))==2) end) IO.puts "#{name}\t prime count : #{value},\t#{time/1000000} sec" end) Output: 45: [1, 3, 5, 9, 15, 45] 53: [1, 53] 60: [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60] 64: [1, 2, 4, 8, 16, 32, 64] Range: 1..10000 factor prime count : 1229, 7.316 sec divisor prime count : 1229, 0.265 sec ## Erlang ### with Built in fuctions factors(N) -> [I || I <- lists:seq(1,trunc(N/2)), N rem I == 0]++[N]. 
### Recursive Another, less concise, but faster version -module(divs). -export([divs/1]). divs(0) -> []; divs(1) -> []; divs(N) -> lists:sort(divisors(1,N))++[N]. divisors(1,N) -> [1] ++ divisors(2,N,math:sqrt(N)). divisors(K,_N,Q) when K > Q -> []; divisors(K,N,_Q) when N rem K =/= 0 -> [] ++ divisors(K+1,N,math:sqrt(N)); divisors(K,N,_Q) when K * K == N -> [K] ++ divisors(K+1,N,math:sqrt(N)); divisors(K,N,_Q) -> [K, N div K] ++ divisors(K+1,N,math:sqrt(N)). Output: 58> timer:tc(divs, factors, [20000]). {2237, [1,2,4,5,8,10,16,20,25,32,40,50,80,100,125,160,200,250,400, 500,625,800,1000,1250,2000,2500,4000|...]} 59> timer:tc(divs, divs, [20000]). {106, [1,2,4,5,8,10,16,20,25,32,40,50,80,100,125,160,200,250,400, 500,625,800,1000,1250,2000,2500,4000|...]} The first number is milliseconds. I'v ommitted repeating the first fuction. ## ERRE PROGRAM FACTORS !$DOUBLE PROCEDURE FACTORLIST(N->L$) LOCAL C%,I,FLIPS%,I% LOCAL DIM L[32] FOR I=1 TO SQR(N) DO IF N=I*INT(N/I) THEN L[C%]=I C%=C%+1 IF N<>I*I THEN L[C%]=INT(N/I) C%=C%+1 END IF END IF END FOR ! BUBBLE SORT ARRAY L[] FLIPS%=1 WHILE FLIPS%>0 DO FLIPS%=0 FOR I%=0 TO C%-2 DO IF L[I%]>L[I%+1] THEN SWAP(L[I%],L[I%+1]) FLIPS%=1 END FOR END WHILE L$="" FOR I%=0 TO C%-1 DO L$=L$+STR$(L[I%])+"," END FOR L$=LEFT$(L$,LEN(L$)-1) END PROCEDURE BEGIN PRINT(CHR$(12);) ! CLS FACTORLIST(45->L$) PRINT("The factors of 45 are ";L$) FACTORLIST(12345->L$) PRINT("The factors of 12345 are ";L$) END PROGRAM Output: The factors of 45 are 1, 3, 5, 9, 15, 45 The factors of 12345 are 1, 3, 5, 15, 823, 2469, 4115, 12345 ## F# If number % divisor = 0 then both divisor AND number / divisor are factors. So, we only have to search till sqrt(number). Also, this is lazily evaluated. let factors number = seq { for divisor in 1 .. (float >> sqrt >> int) number do if number % divisor = 0 then yield divisor if number <> 1 then yield number / divisor //special case condition: when number=1 then divisor=(number/divisor), so don't repeat it } ### Prime factoring let mutable a=6 let mutable b=0 let mutable c=120 let mutable d=2048 let mutable e=402642 let mutable f=1206432 printf "6 :" for j=1 to a do if a%j=0 then b <- b+1 printf " %i "j printfn "" printf "120 :" for j=1 to c do if c%j=0 then b <- b+1 printf " %i "j printfn "" printf "2048 :" for j=1 to d do if d%j=0 then b <- b+1 printf " %i "j printfn "" printf "402642 :" for j=1 to e do if e%j=0 then b <- b+1 printf " %i "j printfn "" printf "120643200 :" for j=1 to f do if f%j=0 then b <- b+1 printf " %i "j Output: OUTPUT : 6 : 1 2 3 6 120 : 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 2048 : 1 2 4 8 16 32 64 128 256 512 1024 2048 402642 : 1 2 3 6 9 18 22369 44738 67107 134214 201321 402642 120643200 : 1 2 3 4 6 8 9 12 16 18 24 32 36 48 59 71 72 96 118 142 144 177 213 236 284 288 354 426 472 531 568 639 708 852 944 1062 1136 12 78 1416 1704 1888 2124 2272 2556 2832 3408 4189 4248 5112 5664 6816 8378 8496 10224 12567 16756 16992 20448 25134 33512 37701 50268 67024 75402 10053 6 134048 150804 201072 301608 402144 603216 1206432 ## Factor USE: math.primes.factors ( scratchpad ) 24 divisors . { 1 2 3 4 6 8 12 24 } ## FALSE [1[\$@$@-][\$@$@$@$@\/*=[$." "]?1+]#.%]f: 45f;! 53f;! 64f;! ## Fish 0v >i:0(?v'0'%+a* >~a,:1:>r{%  ?vr:nr','ov ^:&:;?(&:+1r:< < Must be called with pre-polulated value (Positive Integer) in the input stack. Try at Fish Playground[1]. 
For Input Number : 120 The following output was generated: 1,2,3,4,5,6,8,10,12,15,20,24,30,40,60,120, ## Forth This is a slightly optimized algorithm, since it realizes there are no factors between n/2 and n. The values are saved on the stack and - in true Forth fashion - printed in descending order. : factors dup 2/ 1+ 1 do dup i mod 0= if i swap then loop ; : .factors factors begin dup dup . 1 <> while drop repeat drop cr ; 45 .factors 53 .factors 64 .factors 100 .factors ## Fortran Works with: Fortran version 90 and later program Factors implicit none integer :: i, number write(*,*) "Enter a number between 1 and 2147483647" do i = 1, int(sqrt(real(number))) - 1 if (mod(number, i) == 0) write (*,*) i, number/i end do ! Check to see if number is a square i = int(sqrt(real(number))) if (i*i == number) then write (*,*) i else if (mod(number, i) == 0) then write (*,*) i, number/i end if end program ## FreeBASIC ' FB 1.05.0 Win64 Sub printFactors(n As Integer) If n < 1 Then Return Print n; " =>"; For i As Integer = 1 To n / 2 If n Mod i = 0 Then Print i; " "; Next i Print n End Sub printFactors(11) printFactors(21) printFactors(32) printFactors(45) printFactors(67) printFactors(96) Print Print "Press any key to quit" Sleep Output: 11 => 1 11 21 => 1 3 7 21 32 => 1 2 4 8 16 32 45 => 1 3 5 9 15 45 67 => 1 67 96 => 1 2 3 4 6 8 12 16 24 32 48 96 ## Frink Frink has built-in factoring functions which use wheel factoring, trial division, Pollard p-1 factoring, and Pollard rho factoring. It also recognizes some special forms (e.g. Mersenne numbers) and handles them efficiently. Integers can either be decomposed into prime factors or all factors. The factors[n] function will return the prime decomposition of n. The allFactors[n, include1=true, includeN=true, sort=true, onlyToSqrt=false] function will return all factors of n. The optional arguments include1 and includeN indicate if the numbers 1 and n are to be included in the results. If the optional argument sort is true, the results will be sorted. If the optional argument onlyToSqrt=true, then only the factors less than or equal to the square root of the number will be produced. 
The following produces all factors of n, including 1 and n: allFactors[n] ## FunL Function to compute set of factors: def factors( n ) = {d | d <- 1..n if d|n} Test: for x <- [103, 316, 519, 639, 760] println( 'The set of factors of ' + x + ' is ' + factors(x) ) Output: The set of factors of 103 is {1, 103} The set of factors of 316 is {158, 4, 79, 1, 2, 316} The set of factors of 519 is {1, 3, 173, 519} The set of factors of 639 is {9, 639, 71, 213, 1, 3} The set of factors of 760 is {8, 19, 4, 40, 152, 5, 10, 76, 1, 95, 190, 760, 20, 2, 38, 380} ## FutureBasic include "ConsoleWindow" clear local mode local fn IntegerFactors( f as long ) as Str255 dim as long i, s, l(100), c : c = 0 dim as Str255 factorStr for i = 1 to sqr(f) if ( f mod i == 0 ) l(c) = i c++ if ( f <> i ^ 2 ) l(c) = ( f / i ) c++ end if end if next i s = 1 while ( s = 1 ) s = 0 for i = 0 to c-1 if l(i) > l(i+1) and l(i+1) <> 0 swap l(i), l(i+1) s = 1 end if next i wend for i = 0 to c-1 if ( i < c -1 ) factorStr = factorStr + str$(l(i)) + "," else factorStr = factorStr + str$(l(i)) end if next end fn = factorStr print "Factors of 25 are:"; fn IntegerFactors( 25 ) print "Factors of 45 are:"; fn IntegerFactors( 45 ) print "Factors of 103 are:"; fn IntegerFactors( 103 ) print "Factors of 760 are:"; fn IntegerFactors( 760 ) print "Factors of 12345 are:"; fn IntegerFactors( 12345 ) print "Factors of 32766 are:"; fn IntegerFactors( 32766 ) print "Factors of 32767 are:"; fn IntegerFactors( 32767 ) print "Factors of 57097 are:"; fn IntegerFactors( 57097 ) print "Factors of 12345678 are:"; fn IntegerFactors( 12345678 ) print "Factors of 32434243 are:"; fn IntegerFactors( 32434243 ) Output: Factors of 25 are: 1, 5, 25 Factors of 45 are: 1, 3, 5, 9, 15, 45 Factors of 103 are: 1, 103 Factors of 760 are: 1, 2, 4, 5, 8, 10, 19, 20, 38, 40, 76, 95, 152, 190, 380, 760 Factors of 12345 are: 1, 3, 5, 15, 823, 2469, 4115, 12345 Factors of 32766 are: 1, 2, 3, 6, 43, 86, 127, 129, 254, 258, 381, 762, 5461, 10922, 16383, 32766 Factors of 32767 are: 1, 7, 31, 151, 217, 1057, 4681, 32767 Factors of 57097 are: 1, 57097 Factors of 12345678 are: 1, 2, 3, 6, 9, 18, 47, 94, 141, 282, 423, 846, 14593, 29186, 43779, 87558, 131337, 262674, 685871, 1371742, 2057613, 4115226, 6172839, 12345678 Factors of 32434243 are: 1, 307, 105649, 32434243 ## GAP # Built-in function DivisorsInt(Factorial(5)); # [ 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120 ] # A possible implementation, not suitable to large n div := n -> Filtered([1 .. n], k -> n mod k = 0); div(Factorial(5)); # [ 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120 ] # Another implementation, usable for large n (if n can be factored quickly) div2 := function(n) local f, p; f := Collected(FactorsInt(n)); p := List(f, v -> List([0 .. v[2]], k -> v[1]^k)); return SortedList(List(Cartesian(p), Product)); end; div2(Factorial(5)); # [ 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120 ] ## Go Trial division, no prime number generator, but with some optimizations. It's good enough to factor any 64 bit integer, with large primes taking several seconds. 
package main import "fmt" func main() { printFactors(-1) printFactors(0) printFactors(1) printFactors(2) printFactors(3) printFactors(53) printFactors(45) printFactors(64) printFactors(600851475143) printFactors(999999999999999989) } func printFactors(nr int64) { if nr < 1 { fmt.Println("\nFactors of", nr, "not computed") return } fmt.Printf("\nFactors of %d: ", nr) fs := make([]int64, 1) fs[0] = 1 apf := func(p int64, e int) { n := len(fs) for i, pp := 0, p; i < e; i, pp = i+1, pp*p { for j := 0; j < n; j++ { fs = append(fs, fs[j]*pp) } } } e := 0 for ; nr & 1 == 0; e++ { nr >>= 1 } apf(2, e) for d := int64(3); nr > 1; d += 2 { if d*d > nr { d = nr } for e = 0; nr%d == 0; e++ { nr /= d } if e > 0 { apf(d, e) } } fmt.Println(fs) fmt.Println("Number of factors =", len(fs)) } Output: Factors of -1 not computed Factors of 0 not computed Factors of 1: [1] Number of factors = 1 Factors of 2: [1 2] Number of factors = 2 Factors of 3: [1 3] Number of factors = 2 Factors of 53: [1 53] Number of factors = 2 Factors of 45: [1 3 9 5 15 45] Number of factors = 6 Factors of 64: [1 2 4 8 16 32 64] Number of factors = 7 Factors of 600851475143: [1 71 839 59569 1471 104441 1234169 87625999 6857 486847 5753023 408464633 10086647 716151937 8462696833 600851475143] Number of factors = 16 Factors of 999999999999999989: [1 999999999999999989] Number of factors = 2 ## Gosu var numbers = {11, 21, 32, 45, 67, 96} numbers.each(\ number -> printFactors(number)) function printFactors(n: int) { if (n < 1) return var result ="${n} => " (1 .. n/2).each(\ i -> {result += n % i == 0 ? "${i} " : ""}) print("${result}${n}") } Output: 11 => 1 11 21 => 1 3 7 21 32 => 1 2 4 8 16 32 45 => 1 3 5 9 15 45 67 => 1 67 96 => 1 2 3 4 6 8 12 16 24 32 48 96 ## Groovy A straight brute force approach up to the square root of N: def factorize = { long target -> if (target == 1) return [1L] if (target < 4) return [1L, target] def targetSqrt = Math.sqrt(target) def lowfactors = (2L..targetSqrt).grep { (target % it) == 0 } if (lowfactors == []) return [1L, target] def nhalf = lowfactors.size() - ((lowfactors[-1] == targetSqrt) ? 
1 : 0) [1] + lowfactors + (0..<nhalf).collect { target.intdiv(lowfactors[it]) }.reverse() + [target] } Test: ((1..30) + [333333]).each { println ([number:it, factors:factorize(it)]) } Output: [number:1, factors:[1]] [number:2, factors:[1, 2]] [number:3, factors:[1, 3]] [number:4, factors:[1, 2, 4]] [number:5, factors:[1, 5]] [number:6, factors:[1, 2, 3, 6]] [number:7, factors:[1, 7]] [number:8, factors:[1, 2, 4, 8]] [number:9, factors:[1, 3, 9]] [number:10, factors:[1, 2, 5, 10]] [number:11, factors:[1, 11]] [number:12, factors:[1, 2, 3, 4, 6, 12]] [number:13, factors:[1, 13]] [number:14, factors:[1, 2, 7, 14]] [number:15, factors:[1, 3, 5, 15]] [number:16, factors:[1, 2, 4, 8, 16]] [number:17, factors:[1, 17]] [number:18, factors:[1, 2, 3, 6, 9, 18]] [number:19, factors:[1, 19]] [number:20, factors:[1, 2, 4, 5, 10, 20]] [number:21, factors:[1, 3, 7, 21]] [number:22, factors:[1, 2, 11, 22]] [number:23, factors:[1, 23]] [number:24, factors:[1, 2, 3, 4, 6, 8, 12, 24]] [number:25, factors:[1, 5, 25]] [number:26, factors:[1, 2, 13, 26]] [number:27, factors:[1, 3, 9, 27]] [number:28, factors:[1, 2, 4, 7, 14, 28]] [number:29, factors:[1, 29]] [number:30, factors:[1, 2, 3, 5, 6, 10, 15, 30]] [number:333333, factors:[1, 3, 7, 9, 11, 13, 21, 33, 37, 39, 63, 77, 91, 99, 111, 117, 143, 231, 259, 273, 333, 407, 429, 481, 693, 777, 819, 1001, 1221, 1287, 1443, 2331, 2849, 3003, 3367, 3663, 4329, 5291, 8547, 9009, 10101, 15873, 25641, 30303, 37037, 47619, 111111, 333333]] Using D. Amos'es Primes module for finding prime factors import HFM.Primes (primePowerFactors) import Data.List (product) -- primePowerFactors :: Integer -> [(Integer,Int)] factors = map product . mapM (\(p,m)-> [p^i | i<-[0..m]]) . primePowerFactors Returns list of factors out of order, e.g.: ~> factors 42 [1,7,3,21,2,14,6,42] Or, prime decomposition task can be used (although, a trial division-only version will become very slow for large primes), import Data.List (group) primePowerFactors = map (\x-> (head x, length x)) . group . factorize The above function can also be found in the package arithmoi, as Math.NumberTheory.Primes.factorise :: Integer -> [(Integer, Int)], which performs "factorisation of Integers by the elliptic curve algorithm after Montgomery" and "is best suited for numbers of up to 50-60 digits". Or, deriving cofactors from factors up to the square root: import Control.Arrow ((&&&)) integerFactors :: Int -> [Int] integerFactors n | n < 1 = [] | otherwise = lows ++ (quot n <$> (if intSquared == n -- A perfect square, then tail -- and cofactor of square root would be redundant. else id) (reverse lows)) where (intSquared, lows) = (^ 2) &&& (filter ((0 ==) . rem n) . enumFromTo 1)$ floor (sqrt $fromIntegral n) main :: IO () main = print$ integerFactors 600 Output: [1,2,3,4,5,6,8,10,12,15,20,24,25,30,40,50,60,75,100,120,150,200,300,600] ### List comprehension Naive, functional, no import, in increasing order: factors_naive n = [i | i <-[1..n], mod n i == 0] ~> factors_naive 25 [1,5,25] Factor, cofactor. Get the list of factor–cofactor pairs sorted, for a quadratic speedup: import Data.List factors_co n = sort [ i | i <- [1..floor (sqrt (fromIntegral n))] , (d,0) <- [divMod n i], i <- [i]++[d|d>i] ] A version of the above without the need for sorting, making it to be online (i.e. 
productive immediately, which can be seen in GHCi); factors in increasing order: import Data.List factors_o n = ds ++ [r | (d,0) <- [divMod n r], r <- [r]++[d|d>r]] ++ reverse (map (n div) ds) where r = floor (sqrt (fromIntegral n)) ds = [i | i <- [1..r-1], mod n i == 0] Testing: *Main> :set +s ~> factors_o 120 [1,2,3,4,5,6,8,10,12,15,20,24,30,40,60,120] (0.00 secs, 0 bytes) ~> factors_o 12041111117 [1,7,41,287,541,3787,22181,77551,155267,542857,3179591,22257137,41955091,2936856 37,1720158731,12041111117] (0.09 secs, 50758224 bytes) ## HicEst DLG(NameEdit=N, TItle='Enter an integer') DO i = 1, N^0.5 IF( MOD(N,i) == 0) WRITE() i, N/i ENDDO END ## Icon and Unicon procedure main(arglist) numbers := arglist ||| [ 32767, 45, 53, 64, 100] # combine command line provided and default set of values every writes(lf,"factors of ",i := !numbers,"=") & writes(divisors(i)," ") do lf := "\n" end Output: factors of 32767=1 7 31 151 217 1057 4681 32767 factors of 45=1 3 5 9 15 45 factors of 53=1 53 factors of 64=1 2 4 8 16 32 64 factors of 100=1 2 4 5 10 20 25 50 100 divisors ## J J has a primitive, q: which returns its argument's prime factors. q: 40 2 2 2 5 Alternatively, q: can produce provide a table of the exponents of the unique relevant prime factors __ q: 420 2 3 5 7 2 1 1 1 With this, we can form lists of each of the potential relevant powers of each of these prime factors (^ i.@>:)&.>/ __ q: 420 ┌─────┬───┬───┬───┐ 1 2 41 31 51 7 └─────┴───┴───┴───┘ From here, it's a simple matter (*/&>@{) to compute all possible factors of the original number factrs=: */&>@{@((^ i.@>:)&.>/)@q:~&__ factrs 40 1 5 2 10 4 20 8 40 However, a data structure which is organized around the prime decomposition of the argument can be hard to read. So, for reader convenience, we should probably arrange them in a monotonically increasing list: factors=: [: /:~@, */&>@{@((^ i.@>:)&.>/)@q:~&__ factors 420 1 2 3 4 5 6 7 10 12 14 15 20 21 28 30 35 42 60 70 84 105 140 210 420 A less efficient, but concise variation on this theme: ~.,*/&> { 1 ,&.> q: 40 1 5 2 10 4 20 8 40 This computes 2^n intermediate values where n is the number of prime factors of the original number. Another less efficient approach, in which remainders are examined up to the square root, larger factors obtained as fractions, and the combined list nubbed and sorted might be: Y=. y"_ /:~ ~. 
( , Y%]) ( #~ 0=]|Y) 1+i.>.%:y ) factorsOfNumber 40 1 2 4 5 8 10 20 40 Another approach: odometer =: #: i.@(*/) factors=: (*/@:^"1 odometer@:>:)[email protected]:~&__ ## Java Works with: Java version 5+ public static TreeSet<Long> factors(long n) { TreeSet<Long> factors = new TreeSet<Long>(); for(long test = n - 1; test >= Math.sqrt(n); test--) if(n % test == 0) { } return factors; } ## JavaScript ### Imperative function factors(num) { var n_factors = [], i; for (i = 1; i <= Math.floor(Math.sqrt(num)); i += 1) if (num % i === 0) { n_factors.push(i); if (num / i !== i) n_factors.push(num / i); } n_factors.sort(function(a, b){return a - b;}); // numeric sort return n_factors; } factors(45); // [1,3,5,9,15,45] factors(53); // [1,53] factors(64); // [1,2,4,8,16,32,64] ### Functional #### ES5 Translating the naive list comprehension example from Haskell, using a list monad for the comprehension // Monadic bind (chain) for lists function chain(xs, f) { return [].concat.apply([], xs.map(f)); } // [m..n] function range(m, n) { return Array.apply(null, Array(n - m + 1)).map(function (x, i) { return m + i; }); } function factors_naive(n) { return chain( range(1, n), function (x) { // monadic chain/bind return n % x ? [] : [x]; // monadic fail or inject/return }); } factors_naive(6) Output: [1, 2, 3, 6] Translating the Haskell (lows and highs) example console.log( (function (lstTest) { // INTEGER FACTORS function integerFactors(n) { var rRoot = Math.sqrt(n), intRoot = Math.floor(rRoot), lows = range(1, intRoot).filter(function (x) { return (n % x) === 0; }); // for perfect squares, we can drop the head of the 'highs' list return lows.concat(lows.map(function (x) { return n / x; }).reverse().slice((rRoot === intRoot) | 0)); } // [m .. n] function range(m, n) { return Array.apply(null, Array(n - m + 1)).map(function (x, i) { return m + i; }); } /*************************** TESTING *****************************/ // TABULATION OF RESULTS IN SPACED AND ALIGNED COLUMNS var lstColWidths = range(0, lstRows.reduce(function (a, x) { return x.length > a ? x.length : a; }, 0) - 1).map(function (iCol) { return lstRows.reduce(function (a, lst) { var w = lst[iCol] ? lst[iCol].toString().length : 0; return (w > a) ? 
w : a; }, 0); }); return lstRows.map(function (lstRow) { return lstRow.map(function (v, i) { }).join('') }).join('\n'); } function alignRight(n, lngWidth) { var s = n.toString(); return Array(lngWidth - s.length + 1).join(' ') + s; } // TEST return '\nintegerFactors(n)\n\n' + alignedTable( lstTest.map(integerFactors).map(function (x, i) { return [lstTest[i], '-->'].concat(x); }), 2, alignRight ) + '\n'; })([25, 45, 53, 64, 100, 102, 120, 12345, 32766, 32767]) ); Output: integerFactors(n) 25 --> 1 5 25 45 --> 1 3 5 9 15 45 53 --> 1 53 64 --> 1 2 4 8 16 32 64 100 --> 1 2 4 5 10 20 25 50 100 102 --> 1 2 3 6 17 34 51 102 120 --> 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 12345 --> 1 3 5 15 823 2469 4115 12345 32766 --> 1 2 3 6 43 86 127 129 254 258 381 762 5461 10922 16383 32766 32767 --> 1 7 31 151 217 1057 4681 32767 #### ES6 (function (lstTest) { 'use strict'; // INTEGER FACTORS // integerFactors :: Int -> [Int] let integerFactors = (n) => { let rRoot = Math.sqrt(n), intRoot = Math.floor(rRoot), lows = range(1, intRoot) .filter(x => (n % x) === 0); // for perfect squares, we can drop // the head of the 'highs' list return lows.concat(lows .map(x => n / x) .reverse() .slice((rRoot === intRoot) | 0) ); }, // range :: Int -> Int -> [Int] range = (m, n) => Array.from({ length: (n - m) + 1 }, (_, i) => m + i); /*************************** TESTING *****************************/ // TABULATION OF RESULTS IN SPACED AND ALIGNED COLUMNS let alignedTable = (lstRows, lngPad, fnAligned) => { var lstColWidths = range( 0, lstRows .reduce( (a, x) => (x.length > a ? x.length : a), 0 ) - 1 ) .map((iCol) => lstRows .reduce((a, lst) => { let w = lst[iCol] ? lst[iCol].toString() .length : 0; return (w > a) ? w : a; }, 0)); return lstRows.map((lstRow) => lstRow.map((v, i) => fnAligned( )) .join('') ) .join('\n'); }, alignRight = (n, lngWidth) => { let s = n.toString(); return Array(lngWidth - s.length + 1) .join(' ') + s; }; // TEST return '\nintegerFactors(n)\n\n' + alignedTable(lstTest .map(integerFactors) .map( (x, i) => [lstTest[i], '-->'].concat(x) ), 2, alignRight ) + '\n'; })([25, 45, 53, 64, 100, 102, 120, 12345, 32766, 32767]); Output: integerFactors(n) 25 --> 1 5 25 45 --> 1 3 5 9 15 45 53 --> 1 53 64 --> 1 2 4 8 16 32 64 100 --> 1 2 4 5 10 20 25 50 100 102 --> 1 2 3 6 17 34 51 102 120 --> 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 12345 --> 1 3 5 15 823 2469 4115 12345 32766 --> 1 2 3 6 43 86 127 129 254 258 381 762 5461 10922 16383 32766 32767 --> 1 7 31 151 217 1057 4681 32767 ## jq Works with: jq version 1.4 # This implementation uses "sort" for tidiness def factors: . as $num | reduce range(1; 1 + sqrt|floor) as$i ([]; if ($num %$i) == 0 then ($num /$i) as $r | if$i == $r then . + [$i] else . + [$i,$r] end else . end ) | sort; (45, 53, 64) | "\(.): \(factors)" ; Output: $jq -n -M -r -c -f factors.jq 45: [1,3,5,9,15,45] 53: [1,53] 64: [1,2,4,8,16,32,64] ## Julia function factors(n) f = [one(n)] for (p,e) in factor(n) f = reduce(vcat, f, [f*p^j for j in 1:e]) end return length(f) == 1 ? 
[one(n), n] : sort!(f) end Output: julia> factors(45) 6-element Array{Int64,1}: 1 3 5 9 15 45 ## K f:{i:{y[&x=y*x div y]}[x;1+!_sqrt x];?i,x div|i} equivalent to: q)f:{i:{y where x=y*x div y}[x ; 1+ til floor sqrt x]; distinct i,x div reverse i} f 120 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 f 1024 1 2 4 8 16 32 64 128 256 512 1024 f 600851475143 1 71 839 1471 6857 59569 104441 486847 1234169 5753023 10086647 87625999 408464633 716151937 8462696833 600851475143 #f 3491888400 / has 1920 factors 1920 / Number of factors for 3491888400 .. 3491888409 #:'f' 3491888400+!10 1920 16 4 4 12 16 32 16 8 24 ## Kotlin // version 1.0.5-2 fun printFactors(n: Int) { if (n < 1) return print("$n => ") for (i in 1 .. n/2) if (n % i == 0) print("$i ") println(n) } fun main(args: Array<String>) { val numbers = intArrayOf(11, 21, 32, 45, 67, 96) for (number in numbers) printFactors(number) } Output: 11 => 1 11 21 => 1 3 7 21 32 => 1 2 4 8 16 32 45 => 1 3 5 9 15 45 67 => 1 67 96 => 1 2 3 4 6 8 12 16 24 32 48 96 ## LFE ### Using List Comprehensions This following function is elegant looking and concise. However, it will not handle large numbers well: it will consume a great deal of memory (on one large number, the function consumed 4.3GB of memory on my desktop machine): (defun factors (n) (list-comp ((<- i (when (== 0 (rem n i))) (lists:seq 1 (trunc (/ n 2))))) i)) ### Non-Stack-Consuming This version will not consume the stack (this function only used 18MB of memory on my machine with a ridiculously large number): (defun factors (n) "Tail-recursive prime factors function." (factors n 2 '())) (defun factors ((1 _ acc) (++ acc '(1))) ((n _ acc) (when (=< n 0)) #(error undefined)) ((n k acc) (when (== 0 (rem n k))) (factors (div n k) k (cons k acc))) ((n k acc) (factors n (+ k 1) acc))) Output: > (factors 10677106534462215678539721403561279) (104729 104729 104729 98731 98731 32579 29269 1) ## Liberty BASIC num = 10677106534462215678539721403561279 maxnFactors = 1000 dim primeFactors(maxnFactors), nPrimeFactors(maxnFactors) global nDifferentPrimeNumbersFound, nFactors, iFactor print "Start finding all factors of ";num; ":" nDifferentPrimeNumbersFound=0 dummy = factorize(num,2) nFactors = showPrimeFactors(num) dim factors(nFactors) dummy = generateFactors(1,1) sort factors(), 0, nFactors-1 for i=1 to nFactors print i;" ";factors(i-1) next i print "done" wait function factorize(iNum,offset) factorFound=0 i = offset do if (iNum MOD i)=0 _ then if primeFactors(nDifferentPrimeNumbersFound) = i _ then nPrimeFactors(nDifferentPrimeNumbersFound) = nPrimeFactors(nDifferentPrimeNumbersFound) + 1 else nDifferentPrimeNumbersFound = nDifferentPrimeNumbersFound + 1 primeFactors(nDifferentPrimeNumbersFound) = i nPrimeFactors(nDifferentPrimeNumbersFound) = 1 end if if iNum/i<>1 then dummy = factorize(iNum/i,i) factorFound=1 end if i=i+1 loop while factorFound=0 and i<=sqr(iNum) if factorFound=0 _ then nDifferentPrimeNumbersFound = nDifferentPrimeNumbersFound + 1 primeFactors(nDifferentPrimeNumbersFound) = iNum nPrimeFactors(nDifferentPrimeNumbersFound) = 1 end if end function function showPrimeFactors(iNum) showPrimeFactors=1 print iNum;" = "; for i=1 to nDifferentPrimeNumbersFound print primeFactors(i);"^";nPrimeFactors(i); if i<nDifferentPrimeNumbersFound then print " * "; else print "" showPrimeFactors = showPrimeFactors*(nPrimeFactors(i)+1) next i end function function generateFactors(product,pIndex) if pIndex>nDifferentPrimeNumbersFound _ then factors(iFactor) = product iFactor=iFactor+1 else for i=0 to nPrimeFactors(pIndex) 
dummy = generateFactors(product*primeFactors(pIndex)^i,pIndex+1) next i end if end function Output: Start finding all factors of 10677106534462215678539721403561279: 10677106534462215678539721403561279 = 29269^1 * 32579^1 * 98731^2 * 104729^3 1 1 2 29269 3 32579 4 98731 5 104729 6 953554751 7 2889757639 8 3065313101 9 3216557249 10 3411966091 11 9747810361 12 10339998899 13 10968163441 14 94145414120981 15 99864835517479 16 285308661456109 17 302641427774831 18 317573913751019 19 321027175754629 20 336866824130521 21 357331796744339 22 1020878431297169 23 1082897744693371 24 1148684789012489 25 9295070881578575111 26 9859755075476219149 27 10458744358910058191 28 29880090805636839461 29 31695334089430275799 30 33259198413230468851 31 33620855089606540541 32 35279725624365333809 33 37423001741237879131 34 106915577231321212201 35 113410797903992051459 36 973463478356842592799919 37 1032602289299548955255621 38 1095333837964291484285239 39 3129312029983540559911069 40 3319420643851943354153471 41 3483202590619213772296379 42 3694810384914157044482761 43 11197161487859039232598529 44 101949856624833767901342716951 45 108143405156052462534965931709 46 327729719588146219298926345301 47 364792324112959639158827476291 48 10677106534462215678539721403561279 done ### A Simpler Approach This is a somewhat simpler approach for finding the factors of smaller numbers (less than one million). print "ROSETTA CODE - Factors of an integer" 'A simpler approach for smaller numbers [Start] print input "Enter an integer (< 1,000,000): "; n n=abs(int(n)): if n=0 then goto [Quit] if n>999999 then goto [Start] FactorCount=FactorCount(n) select case FactorCount case 1: print "The factor of 1 is: 1" case else print "The "; FactorCount; " factors of "; n; " are: "; for x=1 to FactorCount print " "; Factor(x); next x if FactorCount=2 then print " (Prime)" else print end select goto [Start] [Quit] print "Program complete." end function FactorCount(n) dim Factor(100) for y=1 to n if y>sqr(n) and FactorCount=1 then 'If no second factor is found by the square root of n, then n is prime. FactorCount=2: Factor(FactorCount)=n: exit function end if if (n mod y)=0 then FactorCount=FactorCount+1 Factor(FactorCount)=y end if next y end function Output: ROSETTA CODE - Factors of an integer Enter an integer (< 1,000,000): 1 The factor of 1 is: 1 Enter an integer (< 1,000,000): 2 The 2 factors of 2 are: 1 2 (Prime) Enter an integer (< 1,000,000): 4 The 3 factors of 4 are: 1 2 4 Enter an integer (< 1,000,000): 6 The 4 factors of 6 are: 1 2 3 6 Enter an integer (< 1,000,000): 999999 The 64 factors of 999999 are: 1 3 7 9 11 13 21 27 33 37 39 63 77 91 99 111 117 143 189 231 259 273 297 333 351 407 429 481 693 777 819 999 1001 1221 1287 1443 2079 2331 2457 2849 3003 3367 3663 3861 4329 5291 6993 8547 9009 10101 10989 129 87 15873 25641 27027 30303 37037 47619 76923 90909 111111 142857 333333 999999 Enter an integer (< 1,000,000): Program complete. ## Lingo on factors(n) res = [1] repeat with i = 2 to n/2 if n mod i = 0 then res.add(i) end repeat res.add(n) return res end put factors(45) -- [1, 3, 5, 9, 15, 45] put factors(53) -- [1, 53] put factors(64) -- [1, 2, 4, 8, 16, 32, 64] ## Logo to factors :n output filter [equal? 0 modulo :n ?] 
iseq 1 :n end show factors 28 ; [1 2 4 7 14 28] ## Lua function Factors( n ) local f = {} for i = 1, n/2 do if n % i == 0 then f[#f+1] = i end end f[#f+1] = n return f end ## Maple numtheory:-divisors(n); ## Mathematica / Wolfram Language Factorize[n_Integer] := Divisors[n] ## MATLAB / Octave function fact(n); f = factor(n); % prime decomposition K = dec2bin(0:2^length(f)-1)-'0'; % generate all possible permutations F = ones(1,2^length(f)); for k = 1:size(K) F(k) = prod(f(~K(k,:))); % and compute products end; F = unique(F); % eliminate duplicates printf('There are %i factors for %i.\n',length(F),n); disp(F); end; Output: >> fact(12) There are 6 factors for 12. 1 2 3 4 6 12 >> fact(28) There are 6 factors for 28. 1 2 4 7 14 28 >> fact(64) There are 7 factors for 64. 1 2 4 8 16 32 64 >>fact(53) There are 2 factors for 53. 1 53 ## Maxima The builtin divisors function does this. (%i96) divisors(100); (%o96) {1,2,4,5,10,20,25,50,100} Such a function could be implemented like so: divisors2(n) := map( lambda([l], lreduce("*", l)), apply( cartesian_product, map( lambda([fac], setify(makelist(fac[1]^i, i, 0, fac[2]))), ifactors(n)))); ## MAXScript fn factors n = ( return (for i = 1 to n+1 where mod n i == 0 collect i) ) Output: factors 3 #(1, 3) factors 7 #(1, 7) factors 14 #(1, 2, 7, 14) factors 60 #(1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60) factors 54 #(1, 2, 3, 6, 9, 18, 27, 54) ## Mercury Mercury is both a logic language and a functional language. As such there are two possible interfaces for calculating the factors of an integer. This code shows both styles of implementation. Note that much of the code here is ceremony put in place to have this be something which can actually compile. The actual factoring is contained in the predicate factor/2 and in the function factor/1. The function form is implemented in terms of the predicate form rather than duplicating all of the predicate code. The predicates main/2 and factor/2 are shown with the combined type and mode statement (e.g. int::in) as is the usual case for simple predicates with only one mode. This makes the code more immediately understandable. The predicate factor/5, however, has its mode broken out onto a separate line both to show Mercury's mode statement (useful for predicates which can have varying instantiation of parameters) and to stop the code from extending too far to the right. Finally the function factor/1 has its mode statements removed (shown underneath in a comment for illustration purposes) because good coding style (and the default of the compiler!) has all parameters "in"-moded and the return value "out"-moded. This implementation of factoring works as follows: 1. The input number itself and 1 are both considered factors. 2. The numbers between 2 and the square root of the input number are checked for even division. 3. If the incremental number divides evenly into the input number, both the incremental number and the quotient are added to the list of factors. This implementation makes use of Mercury's "state variable notation" to keep a pair of variables for accumulation, thus allowing the implementation to be tail recursive. !Accumulator is syntax sugar for a *pair* of variables. One of them is an "in"-moded variable and the other is an "out"-moded variable. !:Accumulator is the "out" portion and !.Accumulator is the "in" portion in the ensuing code. Using the state variable notation avoids having to keep track of strings of variables unified in the code named things like Acc0, Acc1, Acc2, Acc3, etc. 
### fac.m :- module fac. :- interface. :- import_module io. :- pred main(io::di, io::uo) is det. :- implementation. :- import_module float, int, list, math, string. main(!IO) :- io.command_line_arguments(Args, !IO), list.filter_map(string.to_int, Args, CleanArgs), list.foldl((pred(Arg::in, !.IO::di, !:IO::uo) is det :- factor(Arg, X), io.format("factor(%d, [", [i(Arg)], !IO), io.write_list(X, ",", io.write_int, !IO), io.write_string("])\n", !IO) ), CleanArgs, !IO). :- pred factor(int::in, list(int)::out) is det. factor(N, Factors) :- Limit = float.truncate_to_int(math.sqrt(float(N))), factor(N, 2, Limit, [], Unsorted), list.sort_and_remove_dups([1, N | Unsorted], Factors). :- pred factor(int, int, int, list(int), list(int)). :- mode factor(in, in, in, in, out) is det. factor(N, X, Limit, !Accumulator) :- ( if X > Limit then true else ( if 0 = N mod X then !:Accumulator = [X, N / X | !.Accumulator] else true ), factor(N, X + 1, Limit, !Accumulator) ). :- func factor(int) = list(int). %:- mode factor(in) = out is det. factor(N) = Factors :- factor(N, Factors). :- end_module fac. ### Use and output Use of the code looks like this:$ mmc fac.m && ./fac 100 999 12345678 booger factor(100, [1,2,4,5,10,20,25,50,100]) factor(999, [1,3,9,27,37,111,333,999]) factor(12345678, [1,2,3,6,9,18,47,94,141,282,423,846,14593,29186,43779,87558,131337,262674,685871,1371742,2057613,4115226,6172839,12345678]) ## МК-61/52 П9 1 П6 КИП6 ИП9 ИП6 / П8 ^ [x] x#0 21 - x=0 03 ИП6 С/П ИП8 П9 БП 04 1 С/П БП 21 ## MUMPS factors(num) New fctr,list,sep,sqrt If num<1 Quit "Too small a number" If num["." Quit "Not an integer" Set sqrt=num**0.5\1 For fctr=1:1:sqrt Set:num/fctr'["." list(fctr)=1,list(num/fctr)=1 Set (list,fctr)="",sep="[" For Set fctr=$Order(list(fctr)) Quit:fctr="" Set list=list_sep_fctr,sep="," Quit list_"]" w $$factors(45) ; [1,3,5,9,15,45] w$$factors(53) ; [1,53] w$$factors(64) ; [1,2,4,8,16,32,64] ## NetRexx Translation of: REXX /* NetRexx *********************************************************** * 21.04.2013 Walter Pachl * 21.04.2013 add method main to accept argument(s) *********************************************************************/ options replace format comments java crossref symbols nobinary class divl method main(argwords=String[]) static arg=Rexx(argwords) Parse arg a b Say a b If a='' Then Do help='java divl low [high] shows' help=help||' divisors of all numbers between low and high' Say help Return End If b='' Then b=a loop x=a To b say x '->' divs(x) End method divs(x) public static returns Rexx if x==1 then return 1 /*handle special case of 1 */ lo=1 hi=x odd=x//2 /* 1 if x is odd */ loop j=2+odd By 1+odd While j*j<x /*divide by numbers<sqrt(x) */ if x//j==0 then Do /*Divisible? Add two divisors:*/ lo=lo j /* list low divisors */ hi=x%j hi /* list high divisors */ End End If j*j=x Then /*for a square number as input */ lo=lo j /* add its square root */ return lo hi /* return both lists */ Output: java divl 1 10 1 -> 1 2 -> 1 2 3 -> 1 3 4 -> 1 2 4 5 -> 1 5 6 -> 1 2 3 6 7 -> 1 7 8 -> 1 2 4 8 9 -> 1 3 9 10 -> 1 2 5 10 ## Nim import intsets, math, algorithm proc factors(n): seq[int] = var fs = initIntSet() for x in 1 .. 
int(sqrt(float(n))): if n mod x == 0: fs.incl(x) fs.incl(n div x) result = @[] for x in fs: result.add(x) sort(result, system.cmp[int]) echo factors(45) ## Niue [ 'n ; [ negative-or-zero [ , ] if [ n not-factor [ , ] when ] else ] n times n ] 'factors ; [ dup 0 <= ] 'negative-or-zero ; [ swap dup rot swap mod 0 = not ] 'not-factor ; ( tests ) 100 factors .s .clr ( => 1 2 4 5 10 20 25 50 100 ) newline 53 factors .s .clr ( => 1 53 ) newline 64 factors .s .clr ( => 1 2 4 8 16 32 64 ) newline 12 factors .s .clr ( => 1 2 3 4 6 12 ) ## Oberon-2 Oxford Oberon-2 MODULE Factors; IMPORT Out,SYSTEM; TYPE LIPool = POINTER TO ARRAY OF LONGINT; LIVector= POINTER TO LIVectorDesc; LIVectorDesc = RECORD cap: INTEGER; len: INTEGER; LIPool: LIPool; END; PROCEDURE New(cap: INTEGER): LIVector; VAR v: LIVector; BEGIN NEW(v); v.cap := cap; v.len := 0; NEW(v.LIPool,cap); RETURN v END New; PROCEDURE (v: LIVector) Add(x: LONGINT); VAR newLIPool: LIPool; BEGIN IF v.len = LEN(v.LIPool^) THEN (* run out of space *) v.cap := v.cap + (v.cap DIV 2); NEW(newLIPool,v.cap); SYSTEM.MOVE(SYSTEM.ADR(v.LIPool^),SYSTEM.ADR(newLIPool^),v.cap * SIZE(LONGINT)); v.LIPool := newLIPool END; v.LIPool[v.len] := x; INC(v.len) END Add; PROCEDURE (v: LIVector) At(idx: INTEGER): LONGINT; BEGIN RETURN v.LIPool[idx]; END At; PROCEDURE Factors(n:LONGINT): LIVector; VAR j: LONGINT; v: LIVector; BEGIN v := New(16); FOR j := 1 TO n DO IF (n MOD j) = 0 THEN v.Add(j) END; END; RETURN v END Factors; VAR v: LIVector; j: INTEGER; BEGIN v := Factors(123); FOR j := 0 TO v.len - 1 DO Out.LongInt(v.At(j),4);Out.Ln END; Out.Int(v.len,6);Out.String(" factors");Out.Ln END Factors. Output: 1 3 41 123 4 factors ## Objeck use IO; use Structure; bundle Default { class Basic { function : native : GenerateFactors(n : Int) ~ IntVector { factors := IntVector->New(); factors-> AddBack(1); factors->AddBack(n); for(i := 2; i * i <= n; i += 1;) { if(n % i = 0) { factors->AddBack(i); if(i * i <> n) { factors->AddBack(n / i); }; }; }; factors->Sort(); return factors; } function : Main(args : String[]) ~ Nil { numbers := [3135, 45, 60, 81]; for(i := 0; i < numbers->Size(); i += 1;) { factors := GenerateFactors(numbers[i]); Console->GetInstance()->Print("Factors of ")->Print(numbers[i])->PrintLine(" are:"); each(i : factors) { Console->GetInstance()->Print(factors->Get(i))->Print(", "); }; "\n\n"->Print(); }; } } } ## OCaml let rec range = function 0 -> [] | n -> range(n-1) @ [n] let factors n = List.filter (fun v -> (n mod v) = 0) (range n) ## Oforth Integer method: factors self seq filter(#[ self isMultiple ]) ; 120 factors println Output: [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120] ## Oz declare fun {Factors N} Sqr = {Float.toInt {Sqrt {Int.toFloat N}}} Fs = for X in 1..Sqr append:App do if N mod X == 0 then CoFactor = N div X in if CoFactor == X then %% avoid duplicate factor {App [X]} %% when N is a square number else {App [X CoFactor]} end end end in {Sort Fs Value.'<'} end in {Show {Factors 53}} ## PARI/GP divisors(n) ## Panda Panda has a factor function already, it's defined as: fun factor(n) type integer->integer f where n.mod(1..n=>f)==0 45.factor ## Pascal Translation of: Fortran Works with: Free Pascal version 2.6.2 program Factors; var i, number: integer; begin write('Enter a number between 1 and 2147483647: '); readln(number); for i := 1 to round(sqrt(number)) - 1 do if number mod i = 0 then write (i, ' ', number div i, ' '); // Check to see if number is a square i := round(sqrt(number)); if i*i = number then write(i) else if number mod i = 0 
then write(i, number/i); writeln; end. Output: Enter a number between 1 and 2147483647: 49 1 49 7 Enter a number between 1 and 2147483647: 353435 1 25755 3 8585 5 5151 15 1717 17 1515 51 505 85 303 101 255 ### small improvement the factors are in ascending order. Works with: Free Pascal program factors; {Looking for extreme composite numbers: http://wwwhomes.uni-bielefeld.de/achim/highly.txt} const MAXFACTORCNT = 1920; //number := 3491888400; var FaktorList : array[0..MAXFACTORCNT] of LongWord; i, number,quot,cnt: LongWord; begin writeln('Enter a number between 1 and 4294967295: '); write('3491888400 is a nice choice '); readln(number); cnt := 0; i := 1; repeat quot := number div i; if quot *i-number = 0 then begin FaktorList[cnt] := i; FaktorList[MAXFACTORCNT-cnt] := quot; inc(cnt); end; inc(i); until i> quot; writeln(number,' has ',2*cnt,' factors'); dec(cnt); For i := 0 to cnt do write(FaktorList[i],' ,'); For i := cnt downto 1 do write(FaktorList[MAXFACTORCNT-i],' ,'); { the last without ','} writeln(FaktorList[MAXFACTORCNT]); end. Output: Enter a number between 1 and 4294967295: 3491888400 is a nice choice 120 120 has 16 factors 1 ,2 ,3 ,4 ,5 ,6 ,8 ,10 ,12 ,15 ,20 ,24 ,30 ,40 ,60 ,120 ## Perl sub factors { my($n) = @_; return grep { $n %$_ == 0 }(1 .. $n); } print join ' ',factors(64), "\n"; Or more intelligently: sub factors { my$n = shift; $n = -$n if $n < 0; my @divisors; for (1 .. int(sqrt($n))) { # faster and less memory than map/grep push @divisors, $_ unless$n % $_; } # Return divisors including top half, without duplicating a square @divisors, map {$_*$_ ==$n ? () : int($n/$_) } reverse @divisors; } print join " ", factors(64), "\n"; One could also use a module, e.g.: Library: ntheory use ntheory qw/divisors/; print join " ", divisors(12345678), "\n"; # Alternately something like: fordivisors { say } 12345678; ## Perl 6 Works with: Rakudo version 2015.12 sub factors (Int $n) { squish sort ($_, $n div$_ if $n %%$_ for 1 .. sqrt $n) } ## Phix There is a builtin factors(n), which takes an optional second parameter to include 1 and n, so eg ?factors(12345,1) displays Output: {1,3,5,15,823,2469,4115,12345} You can find the implementation of factors() and prime_factors() in builtins\pfactors.e ## PHP function GetFactors($n){ $factors = array(1,$n); for($i = 2;$i * $i <=$n; $i++){ if($n % $i == 0){$factors[] = $i; if($i * $i !=$n) $factors[] =$n/$i; } } sort($factors); 1..$a | Where-Object {$a % $_ -eq 0 } } This one uses a range of integers up to the target number and just filters it using the Where-Object cmdlet. It's very slow though, so it is not very usable for larger numbers. ### A little more clever function Get-Factor ($a) { 1..[Math]::Sqrt($a) | Where-Object {$a % $_ -eq 0 } | ForEach-Object {$_; $a /$_ } | Sort-Object -Unique } Here the range of integers is only taken up to the square root of the number, the same filtering applies. Afterwards the corresponding larger factors are calculated and sent down the pipeline along with the small ones found earlier. ## ProDOS Uses the math module: editvar /newvar /value=a /userinput=1 /title=Enter an integer: do /delimspaces %% -a- >b printline Factors of -a-: -b- ## Prolog Simple Brute Force Implementation brute_force_factors( N , Fs ) :- integer(N) , N > 0 , setof( F , ( between(1,N,F) , N mod F =:= 0 ) , Fs ) . A Slightly Smarter Implementation smart_factors(N,Fs) :- integer(N) , N > 0 , setof( F , factor(N,F) , Fs ) . factor(N,F) :- L is floor(sqrt(N)) , between(1,L,X) , 0 =:= N mod X , ( F = X ; F is N // X ) . 
Not every Prolog has between/3: you might need this: between(X,Y,Z) :- integer(X) , integer(Y) , X =< Z , between1(X,Y,Z) . between1(X,Y,X) :- X =< Y . between1(X,Y,Z) :- X < Y , X1 is X+1 , between1(X1,Y,Z) . Output: ?- N=36 ,( brute_force_factors(N,Factors) ; smart_factors(N,Factors) ). N = 36, Factors = [1, 2, 3, 4, 6, 9, 12, 18, 36] ; N = 36, Factors = [1, 2, 3, 4, 6, 9, 12, 18, 36] . ?- N=53,( brute_force_factors(N,Factors) ; smart_factors(N,Factors) ). N = 53, Factors = [1, 53] ; N = 53, Factors = [1, 53] . ?- N=100,( brute_force_factors(N,Factors);smart_factors(N,Factors) ). N = 100, Factors = [1, 2, 4, 5, 10, 20, 25, 50, 100] ; N = 100, Factors = [1, 2, 4, 5, 10, 20, 25, 50, 100] . ?- N=144,( brute_force_factors(N,Factors);smart_factors(N,Factors) ). N = 144, Factors = [1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 36, 48, 72, 144] ; N = 144, Factors = [1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 36, 48, 72, 144] . ?- N=32765,( brute_force_factors(N,Factors);smart_factors(N,Factors) ). N = 32765, Factors = [1, 5, 6553, 32765] ; N = 32765, Factors = [1, 5, 6553, 32765] . ?- N=32766,( brute_force_factors(N,Factors);smart_factors(N,Factors) ). N = 32766, Factors = [1, 2, 3, 6, 43, 86, 127, 129, 254, 258, 381, 762, 5461, 10922, 16383, 32766] ; N = 32766, Factors = [1, 2, 3, 6, 43, 86, 127, 129, 254, 258, 381, 762, 5461, 10922, 16383, 32766] . 38 ?- N=32767,( brute_force_factors(N,Factors);smart_factors(N,Factors) ). N = 32767, Factors = [1, 7, 31, 151, 217, 1057, 4681, 32767] ; N = 32767, Factors = [1, 7, 31, 151, 217, 1057, 4681, 32767] . ## PureBasic Procedure PrintFactors(n) Protected i, lim=Round(sqr(n),#PB_Round_Up) NewList F.i() For i=1 To lim If n%i=0 EndIf Next ;- Present the result SortList(F(),#PB_Sort_Ascending) ForEach F() Print(str(F())+" ") Next EndProcedure If OpenConsole() Print("Enter integer to factorize: ") PrintFactors(Val(Input())) Print(#CRLF$+#CRLF$+"Press ENTER to quit."): Input() EndIf Output: Enter integer to factorize: 96 1 2 3 4 6 8 12 16 24 32 48 96 ## Python Naive and slow but simplest (check all numbers from 1 to n): >>> def factors(n): return [i for i in range(1, n + 1) if not n%i] Slightly better (realize that there are no factors between n/2 and n): >>> def factors(n): return [i for i in range(1, n//2 + 1) if not n%i] + [n] >>> factors(45) [1, 3, 5, 9, 15, 45] Much better (realize that factors come in pairs, the smaller of which is no bigger than sqrt(n)): >>> from math import sqrt >>> def factor(n): factors = set() for x in range(1, int(sqrt(n)) + 1): if n % x == 0: return sorted(factors) >>> for i in (45, 53, 64): print( "%i: factors: %s" % (i, factor(i)) ) 45: factors: [1, 3, 5, 9, 15, 45] 53: factors: [1, 53] 64: factors: [1, 2, 4, 8, 16, 32, 64] More efficient when factoring many numbers: from itertools import chain, cycle, accumulate # last of which is Python 3 only def factors(n): def prime_powers(n): # c goes through 2, 3, 5, then the infinite (6n+1, 6n+5) series for c in accumulate(chain([2, 1, 2], cycle([2,4]))): if c*c > n: break if n%c: continue d,p = (), c while not n%c: n,p,d = n//c, p*c, d + (p,) yield(d) if n > 1: yield((n,)) r = [1] for e in prime_powers(n): r += [a*b for a in r for b in e] return r ## R factors <- function(n) { if(length(n) > 1) { lapply(as.list(n), factors) } else { one.to.n <- seq_len(n) one.to.n[(n %% one.to.n) == 0] } } factors(60) 1 2 3 4 5 6 10 12 15 20 30 60 factors(c(45, 53, 64)) [[1]] [1] 1 3 5 9 15 45 [[2]] [1] 1 53 [[3]] [1] 1 2 4 8 16 32 64 ## Racket #lang racket ;; a naive version (define (naive-factors n) (for/list ([i 
(in-range 1 (add1 n))] #:when (zero? (modulo n i))) i)) (naive-factors 120) ; -> '(1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120) ;; much better: use factorize' to get prime factors and construct the ;; list of results from that (require math) (define (factors n) (sort (for/fold ([l '(1)]) ([p (factorize n)]) (* x (expt (car p) e))) l)) <)) (naive-factors 120) ; -> same ;; to see how fast it is: (define huge 1200034005600070000008900000000000000000) (time (length (factors huge))) ;; I get 42ms for getting a list of 7776 numbers ;; but actually the math library comes with a divisors' function that ;; does the same, except even faster (divisors 120) ; -> same (time (length (divisors huge))) ;; And this one clocks at 17ms ## REALbasic Function factors(num As UInt64) As UInt64() 'This function accepts an unsigned 64 bit integer as input and returns an array of unsigned 64 bit integers Dim result() As UInt64 Dim iFactor As UInt64 = 1 While iFactor <= num/2 'Since a factor will never be larger than half of the number If num Mod iFactor = 0 Then result.Append(iFactor) End If iFactor = iFactor + 1 Wend result.Append(num) 'Since a given number is always a factor of itself Return result End Function ## REXX ### optimized version This REXX version has no effective limits on the number of decimal digits in the number to be factored   [by adjusting the number of digits (precision)]. This REXX version also supports negative integers and zero. It also indicates   primes   in the output listing as well as the number of factors. It also displays a final count of the number of primes found. /*REXX program displays divisors of any [negative/zero/positive] integer or a range.*/ parse arg LO HI inc . /*obtain the optional args*/ HI=word(HI LO 20, 1); LO=word(LO 1, 1); inc=word(inc 1, 1) /*define the range options*/ w=length(high)+2; numeric digits max(9, w-2); $='∞' /*decimal digits for // */ @.=left('',7); @.1="{unity}"; @.2='[prime]'; @.$=" {"$'} ' /*define some literals. */ say center('n', w) "#divisors" center('divisors', 60) /*display the header. */ say copies('═', w) "═════════" copies('═' , 60) /* " " separator. */ p#=0 /*count of prime numbers. */ do n=LO to HI by inc; divs=divisors(n); #=words(divs) /*get list of divs; # divs*/ if divs==$ then do; #=$; divs= ' (infinite)'; end /*handle case for infinity*/ p=@.#; if n<0 then if n\==-1 then p=@.. /* " " " negative*/ if p==@.2 then p#=p#+1 /*Prime? Then bump counter*/ say center(n, w) center('['#"]", 9) "──► " p ' ' divs end /*n*/ /* [↑] process a range of integers. */ say say left('', 17) p# ' primes were found.' /*display the number of primes found. */ exit /*stick a fork in it, we're all done. */ /*──────────────────────────────────────────────────────────────────────────────────────*/ divisors: procedure; parse arg x 1 b; a=1 /*set X and B to the 1st argument. */ if x<2 then do; x=abs(x); if x==1 then return 1; if x==0 then return '∞'; b=x; end odd=x//2 /* [↓] process EVEN or ODD ints. ___*/ do j=2+odd by 1+odd while j*j<x /*divide by all the integers up to √ x */ if x//j==0 then do; a=a j; b=x%j b; end /*÷? Add factors to α and ß lists.*/ end /*j*/ /* [↑] % ≡ integer division. ___*/ if j*j==x then return a j b /*Was X a square? Then insert √ x */ return a b /*return the divisors of both lists. 
*/ output when the input used is: -6 200 n #divisors divisors ══════ ═════════ ════════════════════════════════════════════════════════════ -6 [4] ──► 1 2 3 6 -5 [2] ──► 1 5 -4 [3] ──► 1 2 4 -3 [2] ──► 1 3 -2 [2] ──► 1 2 -1 [1] ──► {unity} 1 0 [∞] ──► {∞} (infinite) 1 [1] ──► {unity} 1 2 [2] ──► [prime] 1 2 3 [2] ──► [prime] 1 3 4 [3] ──► 1 2 4 5 [2] ──► [prime] 1 5 6 [4] ──► 1 2 3 6 7 [2] ──► [prime] 1 7 8 [4] ──► 1 2 4 8 9 [3] ──► 1 3 9 10 [4] ──► 1 2 5 10 11 [2] ──► [prime] 1 11 12 [6] ──► 1 2 3 4 6 12 13 [2] ──► [prime] 1 13 14 [4] ──► 1 2 7 14 15 [4] ──► 1 3 5 15 16 [5] ──► 1 2 4 8 16 17 [2] ──► [prime] 1 17 18 [6] ──► 1 2 3 6 9 18 19 [2] ──► [prime] 1 19 20 [6] ──► 1 2 4 5 10 20 21 [4] ──► 1 3 7 21 22 [4] ──► 1 2 11 22 23 [2] ──► [prime] 1 23 24 [8] ──► 1 2 3 4 6 8 12 24 25 [3] ──► 1 5 25 26 [4] ──► 1 2 13 26 27 [4] ──► 1 3 9 27 28 [6] ──► 1 2 4 7 14 28 29 [2] ──► [prime] 1 29 30 [8] ──► 1 2 3 5 6 10 15 30 31 [2] ──► [prime] 1 31 32 [6] ──► 1 2 4 8 16 32 33 [4] ──► 1 3 11 33 34 [4] ──► 1 2 17 34 35 [4] ──► 1 5 7 35 36 [9] ──► 1 2 3 4 6 9 12 18 36 37 [2] ──► [prime] 1 37 38 [4] ──► 1 2 19 38 39 [4] ──► 1 3 13 39 40 [8] ──► 1 2 4 5 8 10 20 40 41 [2] ──► [prime] 1 41 42 [8] ──► 1 2 3 6 7 14 21 42 43 [2] ──► [prime] 1 43 44 [6] ──► 1 2 4 11 22 44 45 [6] ──► 1 3 5 9 15 45 46 [4] ──► 1 2 23 46 47 [2] ──► [prime] 1 47 48 [10] ──► 1 2 3 4 6 8 12 16 24 48 49 [3] ──► 1 7 49 50 [6] ──► 1 2 5 10 25 50 51 [4] ──► 1 3 17 51 52 [6] ──► 1 2 4 13 26 52 53 [2] ──► [prime] 1 53 54 [8] ──► 1 2 3 6 9 18 27 54 55 [4] ──► 1 5 11 55 56 [8] ──► 1 2 4 7 8 14 28 56 57 [4] ──► 1 3 19 57 58 [4] ──► 1 2 29 58 59 [2] ──► [prime] 1 59 60 [12] ──► 1 2 3 4 5 6 10 12 15 20 30 60 61 [2] ──► [prime] 1 61 62 [4] ──► 1 2 31 62 63 [6] ──► 1 3 7 9 21 63 64 [7] ──► 1 2 4 8 16 32 64 65 [4] ──► 1 5 13 65 66 [8] ──► 1 2 3 6 11 22 33 66 67 [2] ──► [prime] 1 67 68 [6] ──► 1 2 4 17 34 68 69 [4] ──► 1 3 23 69 70 [8] ──► 1 2 5 7 10 14 35 70 71 [2] ──► [prime] 1 71 72 [12] ──► 1 2 3 4 6 8 9 12 18 24 36 72 73 [2] ──► [prime] 1 73 74 [4] ──► 1 2 37 74 75 [6] ──► 1 3 5 15 25 75 76 [6] ──► 1 2 4 19 38 76 77 [4] ──► 1 7 11 77 78 [8] ──► 1 2 3 6 13 26 39 78 79 [2] ──► [prime] 1 79 80 [10] ──► 1 2 4 5 8 10 16 20 40 80 81 [5] ──► 1 3 9 27 81 82 [4] ──► 1 2 41 82 83 [2] ──► [prime] 1 83 84 [12] ──► 1 2 3 4 6 7 12 14 21 28 42 84 85 [4] ──► 1 5 17 85 86 [4] ──► 1 2 43 86 87 [4] ──► 1 3 29 87 88 [8] ──► 1 2 4 8 11 22 44 88 89 [2] ──► [prime] 1 89 90 [12] ──► 1 2 3 5 6 9 10 15 18 30 45 90 91 [4] ──► 1 7 13 91 92 [6] ──► 1 2 4 23 46 92 93 [4] ──► 1 3 31 93 94 [4] ──► 1 2 47 94 95 [4] ──► 1 5 19 95 96 [12] ──► 1 2 3 4 6 8 12 16 24 32 48 96 97 [2] ──► [prime] 1 97 98 [6] ──► 1 2 7 14 49 98 99 [6] ──► 1 3 9 11 33 99 100 [9] ──► 1 2 4 5 10 20 25 50 100 101 [2] ──► [prime] 1 101 102 [8] ──► 1 2 3 6 17 34 51 102 103 [2] ──► [prime] 1 103 104 [8] ──► 1 2 4 8 13 26 52 104 105 [8] ──► 1 3 5 7 15 21 35 105 106 [4] ──► 1 2 53 106 107 [2] ──► [prime] 1 107 108 [12] ──► 1 2 3 4 6 9 12 18 27 36 54 108 109 [2] ──► [prime] 1 109 110 [8] ──► 1 2 5 10 11 22 55 110 111 [4] ──► 1 3 37 111 112 [10] ──► 1 2 4 7 8 14 16 28 56 112 113 [2] ──► [prime] 1 113 114 [8] ──► 1 2 3 6 19 38 57 114 115 [4] ──► 1 5 23 115 116 [6] ──► 1 2 4 29 58 116 117 [6] ──► 1 3 9 13 39 117 118 [4] ──► 1 2 59 118 119 [4] ──► 1 7 17 119 120 [16] ──► 1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120 121 [3] ──► 1 11 121 122 [4] ──► 1 2 61 122 123 [4] ──► 1 3 41 123 124 [6] ──► 1 2 4 31 62 124 125 [4] ──► 1 5 25 125 126 [12] ──► 1 2 3 6 7 9 14 18 21 42 63 126 127 [2] ──► [prime] 1 127 128 [8] ──► 1 2 4 8 
16 32 64 128 129 [4] ──► 1 3 43 129 130 [8] ──► 1 2 5 10 13 26 65 130 131 [2] ──► [prime] 1 131 132 [12] ──► 1 2 3 4 6 11 12 22 33 44 66 132 133 [4] ──► 1 7 19 133 134 [4] ──► 1 2 67 134 135 [8] ──► 1 3 5 9 15 27 45 135 136 [8] ──► 1 2 4 8 17 34 68 136 137 [2] ──► [prime] 1 137 138 [8] ──► 1 2 3 6 23 46 69 138 139 [2] ──► [prime] 1 139 140 [12] ──► 1 2 4 5 7 10 14 20 28 35 70 140 141 [4] ──► 1 3 47 141 142 [4] ──► 1 2 71 142 143 [4] ──► 1 11 13 143 144 [15] ──► 1 2 3 4 6 8 9 12 16 18 24 36 48 72 144 145 [4] ──► 1 5 29 145 146 [4] ──► 1 2 73 146 147 [6] ──► 1 3 7 21 49 147 148 [6] ──► 1 2 4 37 74 148 149 [2] ──► [prime] 1 149 150 [12] ──► 1 2 3 5 6 10 15 25 30 50 75 150 151 [2] ──► [prime] 1 151 152 [8] ──► 1 2 4 8 19 38 76 152 153 [6] ──► 1 3 9 17 51 153 154 [8] ──► 1 2 7 11 14 22 77 154 155 [4] ──► 1 5 31 155 156 [12] ──► 1 2 3 4 6 12 13 26 39 52 78 156 157 [2] ──► [prime] 1 157 158 [4] ──► 1 2 79 158 159 [4] ──► 1 3 53 159 160 [12] ──► 1 2 4 5 8 10 16 20 32 40 80 160 161 [4] ──► 1 7 23 161 162 [10] ──► 1 2 3 6 9 18 27 54 81 162 163 [2] ──► [prime] 1 163 164 [6] ──► 1 2 4 41 82 164 165 [8] ──► 1 3 5 11 15 33 55 165 166 [4] ──► 1 2 83 166 167 [2] ──► [prime] 1 167 168 [16] ──► 1 2 3 4 6 7 8 12 14 21 24 28 42 56 84 168 169 [3] ──► 1 13 169 170 [8] ──► 1 2 5 10 17 34 85 170 171 [6] ──► 1 3 9 19 57 171 172 [6] ──► 1 2 4 43 86 172 173 [2] ──► [prime] 1 173 174 [8] ──► 1 2 3 6 29 58 87 174 175 [6] ──► 1 5 7 25 35 175 176 [10] ──► 1 2 4 8 11 16 22 44 88 176 177 [4] ──► 1 3 59 177 178 [4] ──► 1 2 89 178 179 [2] ──► [prime] 1 179 180 [18] ──► 1 2 3 4 5 6 9 10 12 15 18 20 30 36 45 60 90 180 181 [2] ──► [prime] 1 181 182 [8] ──► 1 2 7 13 14 26 91 182 183 [4] ──► 1 3 61 183 184 [8] ──► 1 2 4 8 23 46 92 184 185 [4] ──► 1 5 37 185 186 [8] ──► 1 2 3 6 31 62 93 186 187 [4] ──► 1 11 17 187 188 [6] ──► 1 2 4 47 94 188 189 [8] ──► 1 3 7 9 21 27 63 189 190 [8] ──► 1 2 5 10 19 38 95 190 191 [2] ──► [prime] 1 191 192 [14] ──► 1 2 3 4 6 8 12 16 24 32 48 64 96 192 193 [2] ──► [prime] 1 193 194 [4] ──► 1 2 97 194 195 [8] ──► 1 3 5 13 15 39 65 195 196 [9] ──► 1 2 4 7 14 28 49 98 196 197 [2] ──► [prime] 1 197 198 [12] ──► 1 2 3 6 9 11 18 22 33 66 99 198 199 [2] ──► [prime] 1 199 200 [12] ──► 1 2 4 5 8 10 20 25 40 50 100 200 Primes that were found: 46 ### Alternate Version /* REXX *************************************************************** * Program to calculate and show divisors of positive integer(s). * 03.08.2012 Walter Pachl simplified the above somewhat * in particular I see no benefit from divAdd procedure * 04.08.2012 the reference to 'above' is no longer valid since that * was meanwhile changed for the better. * 04.08.2012 took over some improvements from new above **********************************************************************/ Parse arg low high . Select When low='' Then Parse Value '1 200' with low high When high='' Then high=low Otherwise Nop End do j=low to high say ' n = ' right(j,6) " divisors = " divs(j) end exit divs: procedure; parse arg x if x==1 then return 1 /*handle special case of 1 */ Parse Value '1' x With lo hi /*initialize lists: lo=1 hi=x */ odd=x//2 /* 1 if x is odd */ Do j=2+odd By 1+odd While j*j<x /*divide by numbers<sqrt(x) */ if x//j==0 then Do /*Divisible? 
Add two divisors:*/ lo=lo j /* list low divisors */ hi=x%j hi /* list high divisors */ End End If j*j=x Then /*for a square number as input */ lo=lo j /* add its square root */ return lo hi /* return both lists */ ## Ring nArray = list(100) n = 45 j = 0 for i = 1 to n if n % i = 0 j = j + 1 nArray[j] = i ok next see "Factors of " + n + " = " for i = 1 to j see "" + nArray[i] + " " next ## Ruby class Integer def factors() (1..self).select { |n| (self % n).zero? } end end p 45.factors [1, 3, 5, 9, 15, 45] As we only have to loop up to ${\displaystyle {\sqrt {n}}}$, we can write class Integer def factors 1.upto(Math.sqrt(self)).select {|i| (self % i).zero?}.inject([]) do |f, i| f << self/i unless i == self/i f << i end.sort end end [45, 53, 64].each {|n| puts "#{n} : #{n.factors}"} Output: 45 : [1, 3, 5, 9, 15, 45] 53 : [1, 53] 64 : [1, 2, 4, 8, 16, 32, 64] ## Run BASIC PRINT "Factors of 45 are ";factorlist$(45) PRINT "Factors of 12345 are "; factorlist$(12345) END function factorlist$(f) DIM L(100) FOR i = 1 TO SQR(f) IF (f MOD i) = 0 THEN L(c) = i c = c + 1 IF (f <> i^2) THEN L(c) = (f / i) c = c + 1 END IF END IF NEXT i s = 1 while s = 1 s = 0 for i = 0 to c-1 if L(i) > L(i+1) and L(i+1) <> 0 then t = L(i) L(i) = L(i+1) L(i+1) = t s = 1 end if next i wend FOR i = 0 TO c-1 factorlist$= factorlist$ + STR$(L(i)) + ", " NEXT end function Output: Factors of 45 are 1, 3, 5, 9, 15, 45, Factors of 12345 are 1, 3, 5, 15, 823, 2469, 4115, 12345, ## Rust fn main() { assert_eq!(vec![1, 2, 4, 5, 10, 10, 20, 25, 50, 100], factor(100)); // asserts that two expressions are equal to each other assert_eq!(vec![1, 101], factor(101)); } fn factor(num: i32) -> Vec<i32> { let mut factors: Vec<i32> = Vec::new(); // creates a new vector for the factors of the number for i in 1..((num as f32).sqrt() as i32 + 1) { if num % i == 0 { factors.push(i); // pushes smallest factor to factors factors.push(num/i); // pushes largest factor to factors } } factors.sort(); // sorts the factors into numerical order for viewing purposes factors // returns the factors } ## Sather Translation of: C++ class MAIN is factors(n :INT):ARRAY{INT} is f:ARRAY{INT}; f := #; f := f.append(|1|); f := f.append(|n|); loop i ::= 2.upto!( n.flt.sqrt.int ); if n%i = 0 then f := f.append(|i|); if (i*i) /= n then f := f.append(|n / i|); end; end; end; f.sort; return f; end; main is a :ARRAY{INT} := |3135, 45, 64, 53, 45, 81|; loop l ::= a.elt!; #OUT + "factors of " + l + ": "; r ::= factors(l); loop ri ::= r.elt!; #OUT + ri + " "; end; #OUT + "\n"; end; end; end; ## Scala Brute force approach: def factors(num: Int) = { (1 to num).filter { divisor => num % divisor == 0 } } Since factors can't be higher than sqrt(num), the code above can be edited as follows def factors(num: Int) = { (1 to sqrt(num)).filter { divisor => num % divisor == 0 } } ## Scheme This implementation uses a naive trial division algorithm. 
(define (factors n) (define (*factors d) (cond ((> d n) (list)) ((= (modulo n d) 0) (cons d (*factors (+ d 1)))) (else (*factors (+ d 1))))) (*factors 1)) (display (factors 1111111)) (newline) Output: (1 239 4649 1111111) ## Seed7$ include "seed7_05.s7i"; const proc: writeFactors (in integer: number) is func local var integer: testNum is 0; begin write("Factors of " <& number <& ": "); for testNum range 1 to sqrt(number) do if number rem testNum = 0 then if testNum <> 1 then write(", "); end if; write(testNum); if testNum <> number div testNum then write(", " <& number div testNum); end if; end if; end for; writeln; end func; const proc: main is func local const array integer: numsToFactor is [] (45, 53, 64); var integer: number is 0; begin for number range numsToFactor do writeFactors(number); end for; end func; Output: Factors of 45: 1, 45, 3, 15, 5, 9 Factors of 53: 1, 53 Factors of 64: 1, 64, 2, 32, 4, 16, 8 ## SequenceL Brute Force Method A simple brute force method using an indexed partial function as a filter. Factors(num(0))[i] := i when num mod i = 0 foreach i within 1 ... num; Slightly More Efficient Method A slightly more efficient method, only going up to the sqrt(n). Factors(num(0)) := let factorPairs[i] := [i] when i = sqrt(num) else [i, num/i] when num mod i = 0 foreach i within 1 ... floor(sqrt(num)); in join(factorPairs); ## Sidef func factors(n) { var divs = [] range(1, n.sqrt.int).each { |d| divs << d if n%%d } divs + [divs[-1]**2 == n ? divs.pop : ()] + divs.reverse.map{|d| n/d } } [53, 64, 32766].each { |n| say "factors(#{n}): #{factors(n)}" } Output: factors(53): 1 53 factors(64): 1 2 4 8 16 32 64 factors(32766): 1 2 3 6 43 86 127 129 254 258 381 762 5461 10922 16383 32766 ## Slate n@(Integer traits) primeFactors [ [| :result | result nextPut: 1. n primesDo: [| :prime | result nextPut: prime]] writingAs: {} ]. where primesDo: is a part of the standard numerics library: n@(Integer traits) primesDo: block "Decomposes the Integer into primes, applying the block to each (in increasing order)." [| div next remaining | div: 2. next: 3. remaining: n. [[(remaining \\ div) isZero] whileTrue: [block applyTo: {div}. remaining: remaining // div]. remaining = 1] whileFalse: [div: next. next: next + 2] "Just looks at the next odd integer." ]. ## Smalltalk Copied from the Python example, but code added to the Integer built in class: Integer>>factors | a | a := OrderedCollection new. 1 to: (self / 2) do: [ :i | ((self \\ i) = 0) ifTrue: [ a add: i ] ]. 
^a Then use as follows: 59 factors -> an OrderedCollection(1 59) 120 factors -> an OrderedCollection(1 2 3 4 5 6 8 10 12 15 20 24 30 40 60 120) ## Swift Simple implementation: func factors(n: Int) -> [Int] { return filter(1...n) { n % $0 == 0 } } More efficient implementation: import func Darwin.sqrt func sqrt(x:Int) -> Int { return Int(sqrt(Double(x))) } func factors(n: Int) -> [Int] { var result = [Int]() for factor in filter (1...sqrt(n), { n %$0 == 0 }) { result.append(factor) if n/factor != factor { result.append(n/factor) } } return sorted(result) } Call: println(factors(4)) println(factors(1)) println(factors(25)) println(factors(63)) println(factors(19)) println(factors(768)) Output: [1, 2, 4] [1] [1, 5, 25] [1, 3, 7, 9, 21, 63] [1, 19] [1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384, 768] ## Tcl proc factors {n} { set factors {} for {set i 1} {$i <= sqrt($n)} {incr i} { if {$n %$i == 0} { lappend factors $i [expr {$n / $i}] } } return [lsort -unique -integer$factors] } puts [factors 64] puts [factors 45] puts [factors 53] Output: 1 2 4 8 16 32 64 1 3 5 9 15 45 1 53 ## UNIX Shell This should work in all Bourne-compatible shells, assuming the system has both sort and at least one of bc or dc. factor() { r=echo "sqrt($1)" | bc # or echo$1 v p | dc i=1 while [ $i -lt$r ]; do if [ expr $1 %$i -eq 0 ]; then echo $i expr$1 / $i fi i=expr$i + 1 done | sort -nu } ## Ursa This program takes an integer from the command line and outputs its factors. decl int n set n (int args<1>) decl int i for (set i 1) (< i (+ (/ n 2) 1)) (inc i) if (= (mod n i) 0) out i " " console end if end for out n endl console ## Ursala The simple way: #import std #import nat factors "n" = (filter not remainder/"n") nrange(1,"n") The complicated way: factors "n" = nleq-<&@s <.~&r,quotient>*= "n"-* (not remainder/"n")*~ nrange(1,root("n",2)) Another idea would be to approximate an upper bound for the square root of "n" with some bit twiddling such as &!*K31 "n", which evaluates to a binary number of all 1's half the width of "n" rounded up, and another would be to use the division function to get the quotient and remainder at the same time. Combining these ideas, losing the dummy variable, and cleaning up some other cruft, we have factors = nleq-<&@rrZPFLs+ ^(~&r,division)^*D/~& nrange/1+ &!*K31 where nleq-<& isn't strictly necessary unless an ordered list is required. 
#cast %nL example = factors 100 Output: <1,2,4,5,10,20,25,50,100> ## VBA Function Factors(x As Integer) As String Application.Volatile Dim i As Integer Dim cooresponding_factors As String Factors = 1 corresponding_factors = x For i = 2 To Sqr(x) If x Mod i = 0 Then Factors = Factors & ", " & i If i <> x / i Then corresponding_factors = x / i & ", " & corresponding_factors End If Next i If x <> 1 Then Factors = Factors & ", " & corresponding_factors End Function Output: cell formula is "=Factors(840)" resultant value is "1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 15, 20, 21, 24, 28, 30, 35, 40, 42, 56, 60, 70, 84, 105, 120, 140, 168, 210, 280, 420, 840" ## Wortel @let { factors1 &n !-\%%n @to n factors_tacit @(\\%% !- @to) [[ !factors1 10 !factors_tacit 100 !factors1 720 ]] } Returns: [ [1 2 5 10] [1 2 4 5 10 20 25 50 100] [1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 30 36 40 45 48 60 72 80 90 120 144 180 240 360 720] ] ## XPL0 include c:\cxpl\codes; int N0, N, F; [N0:= 1; repeat IntOut(0, N0); Text(0, " = "); F:= 2; N:= N0; repeat if rem(N/F) = 0 then [if N # N0 then Text(0, " * "); IntOut(0, F); N:= N/F; ] else F:= F+1; until F>N; if N0=1 then IntOut(0, 1); \1 = 1 CrLf(0); N0:= N0+1; until KeyHit; ] Output: 1 = 1 2 = 2 3 = 3 4 = 2 * 2 5 = 5 6 = 2 * 3 7 = 7 8 = 2 * 2 * 2 9 = 3 * 3 10 = 2 * 5 11 = 11 12 = 2 * 2 * 3 13 = 13 14 = 2 * 7 15 = 3 * 5 16 = 2 * 2 * 2 * 2 17 = 17 18 = 2 * 3 * 3 . . . 57086 = 2 * 17 * 23 * 73 57087 = 3 * 3 * 6343 57088 = 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 223 57089 = 57089 57090 = 2 * 3 * 5 * 11 * 173 57091 = 37 * 1543 57092 = 2 * 2 * 7 * 2039 57093 = 3 * 19031 57094 = 2 * 28547 57095 = 5 * 19 * 601 57096 = 2 * 2 * 2 * 3 * 3 * 13 * 61 57097 = 57097 ## zkl Translation of: Chapel fcn f(n){ (1).pump(n.toFloat().sqrt(), List, 'wrap(m){((n % m)==0) and T(m,n/m) or Void.Skip}) } fcn g(n){ [[(m); [1..n.toFloat().sqrt()],'{n%m==0}; '{T(m,n/m)} ]] } // list comprehension Output: zkl: f(45) L(L(1,45),L(3,15),L(5,9)) zkl: g(45) L(L(1,45),L(3,15),L(5,9)) ## ZX Spectrum Basic Translation of: AWK 10 INPUT "Enter a number or 0 to exit: ";n 20 IF n=0 THEN STOP 30 PRINT "Factors of ";n;": "; 40 FOR i=1 TO n 50 IF FN m(n,i)=0 THEN PRINT i;" "; 60 NEXT i 70 DEF FN m(a,b)=a-INT (a/b)*b
2017-03-24 06:09:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3914181590080261, "perplexity": 7361.0527381339625}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187717.19/warc/CC-MAIN-20170322212947-00378-ip-10-233-31-227.ec2.internal.warc.gz"}
https://squircleart.github.io/animation-nodes/the-essence-of-animation-nodes-scripts.html
# Scripts ### Prerequisites: The following is a list of all prerequisites in order to make the most out of the following tutorial. In this tutorial, I am going to introduce you to script nodes in Animation Nodes and Blender Python API. You don’t have to know python in order to understand this tutorial, however, it is recommended that you know the basics like I do. # Script Node Animation Nodes contains a lot of nodes, some nodes process data like Math nodes and some others communicates with blender like Object Transform Output which tells blender objects are located. More sophisticated node trees can be created using those fundamental nodes that AN provides. However, you may encounter a situation where you need a feature(node) that is not in Animation Nodes already and you can’t create it using provided nodes (Such situations is usually when you try to communicate with blender), you have couple of options: • You can contact the developer to see if it can be added in a future release. You can do that through Github issues. • You may create a new node, the process of creating new node is described in the documentation. • You may use the script node, which enables you to write python scripts and execute it as a subprogram. ## Example 6.1 ### Demonstration Just like any subprogram, script node has inputs and outputs which are defined on the same node. When I add an input, a variable is created with its value, so in this example, it is like we wrote this script: x= 1 y= 2 Consequently, input names follow the same rules as Python, that is, the name should start with a letter or an underscore, it can’t start with numbers or you will get a fatal error. You can’t name it like python operators, for instance, it can’t be named if or while. The script node will then return the values of the variable result which I defined as x+y, so the script basically add two numbers and return the result. # Python API There exist a python API (Application Programming Interface) that lets you communicate with blender, it can be found here. To be able to understand the API, lets open the console and see what the bpy module contains. Using auto complete, blender lists possible choices. bpy includes: bpy. app context data ops path props types utils When it comes to Animation Nodes, we are only interested in data and may be context for some limited cases: • data include all the data in the blend file, you can use it to access and write data to blender. For instance, you can write and read pixel data of an image. • context include data that is represented in current working area. It is access only, you can’t write using it. For instance, selected objects are data that is defined in the 3D viewport. Those data are subjected to context limitations, so sometimes you won’t be able to access this data unless you are in the right area. Also, it should only be used when you don’t plan to run this script during rendering as this can lead to problems. Having chosen data, lets see what it contains: py.data. actions armatures brushes cameras curves fonts grease_pencil groups images objects ... As you may see, we have all the data types, but we are mostly interested in objects as it include all the data of our objects, objects lists all objects in the blend file, you can choose or sample an object by its name or index as follows: bpy.data.objects[0] bpy.data.objects['Cube'] Both of them sample the first object which is named “Cube”. The object includes: bpy.data.objects['Cube']. 
color active_material constraints dimensions location matrix_world ... And if I want to sample the color of the object and store it in a variable, I could just write the following code: color = bpy.data.objects['Cube'].color #Since "color" include RGBA channels: color[0] #Red value color[1] #Green value color[2] #Blue value color[3] #Alpha value You probably get it by now, to access some data, we carefully follow its location in the data section of the API. ## Example 6.2 ### Demonstration Scripts node are not always needed, if your script can be written inline, then use the expression node directly. Note! Notice that the output of the node is not actually a color data type, so I disabled auto type correction and output it as if it was a color after the output was converted to a color. This is a bad practice, but it works sometimes. ## Example 6.3a Image = bpy.data.images["AN_Image"] Image.pixels = Pixels ### Demonstration In this example, I am writing pixel info to an image. Blender accepts the pixel info as a list of floats with the pattern RGBARGBARGBA…. it means the first float is the red value if the first pixel, second is green for the first pixel, third is blue for the first pixel, fourth is the alpha for the first pixel, fifth is the red value for second pixel, and so on. So, the node expects a float list with length $W \times H \times 4$ where $W,H$ are the width and height of the image respectively. I generated a list of floats that goes from zero to one with 40000 float because my image is $100 \times 100$. This will produce some kind of vertical-horizontal gradient, but lets do something more useful. Note! Notice that we didn't have to import the bpy, that's because it is imported already along with some other modules. ## Example 6.3b Image = bpy.data.images["AN_Image"] Image.generated_width = Width Image.generated_height = Height Image.pixels = Pixels ### Demonstration In this example, I made it possible to change the width and height of the image. I also created a loop that compute a spherical gradient, which is created as follows: • Compute the coordinates of each pixel using what we learned from the procedural modeling tutorial. • Remape the domain into the range of $[-1,1]$ in both axis. • Use pythagoras theorem to compute the gradient. • Create a list with the distance in first three elements and 1 in the forth. Because we want to keep alpha at 1 and the result to be a greyscale image for the gradient. Challenge! Can you get the pixel info from another image, edit those info somehow and then write pixels back to an image?
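For readers who want to see Example 6.3b as one plain Python script rather than a node setup, below is a minimal sketch of the spherical-gradient idea described above. It only relies on the image attributes already shown in this tutorial (`generated_width`, `generated_height`, `pixels`) and the image name "AN_Image" used in the examples; the explicit loop structure is an assumption made for illustration, not the exact node tree.

```python
import bpy

# Image name as used in Examples 6.3a/6.3b above (assumed to exist already).
image = bpy.data.images["AN_Image"]

width, height = 100, 100          # 100 x 100, matching the example
image.generated_width = width
image.generated_height = height

pixels = []
for y in range(height):
    for x in range(width):
        # Remap pixel coordinates from [0, width) x [0, height) to [-1, 1].
        u = 2.0 * x / (width - 1) - 1.0
        v = 2.0 * y / (height - 1) - 1.0
        # Pythagoras: distance from the centre gives the spherical gradient.
        d = min((u * u + v * v) ** 0.5, 1.0)
        # Greyscale RGB plus alpha = 1, following the RGBARGBA... layout.
        pixels.extend([d, d, d, 1.0])

# Blender expects a flat float list of length width * height * 4 (here 40000).
image.pixels = pixels
```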
2018-11-17 13:23:10
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33469241857528687, "perplexity": 970.3306391311484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743521.59/warc/CC-MAIN-20181117123417-20181117145417-00471.warc.gz"}
https://cstheory.stackexchange.com/questions/33825/upper-bound-on-number-of-n-times-n-boolean-matrices-of-boolean-rank-at-most
Upper Bound on Number of $n \times n$ Boolean matrices of Boolean rank at most $k$ An $n \times n$ Boolean matrix $B$ has Boolean rank $k$ if there exist matrices $L \in \{0,1\}^{n \times k}$ and $R \in \{0,1\}^{k \times n}$, s.t. $B = L \circ R$. Here $\circ$ denotes the Boolean matrix product (i.e. $1+1 = 1$). What is the best upper bound we know for $|\{B \in \{0,1\}^{n \times n} : B \text{ has Boolean rank } \leq k\}|$? An equivalent question in terms of bipartite graphs is as follows: How many bipartite graphs $G = (V \cup U, E)$ with $|U| = |V| = n$ which can be covered with at most $k$ bicliques exist? • A trivial upper bound is $2^{2nk}$ because $B$ is determined by $L$ and $R$. A trivial lower bound is around $\binom{n}{k}2^{nk}=2^{(n+\Theta(\log n))k}$ because any matrix with $k$ non-zero column vectors has rank at most $k$. – Thatchaphol Feb 15 '16 at 20:21 • @Turbo: No, it's Boolean rank, i.e. + is given by the bitwise OR and we get 1+1=1. In $\mathbb{F}_2$ you have 1+1=0 (which makes $\mathbb{F}_2$ a field). – tranisstor Feb 16 '16 at 9:23 • @tranisstor is there a relation between this rank and $\Bbb R$ rank and $\Bbb F_2$ rank? – T.... Feb 16 '16 at 10:40 • @Turbo: Yes and no. Yes: They all define a rank on binary matrices. No: Boolean Rank is NP-hard even to approximate, the other two can be computed in poly time using e.g. Gaussian Elimination (as they are fields; the Boolean algebra does not give field as you lack an inverse for +). Otherwise, Boolean Rank and Standard Rank cannot be compared (i.e., none is upper bound to the other); see Monson, Pullman, Rees: "A survey of clique and biclique coverings and factorizations of (0; 1)-matrices". For Boolean rank and $\mathbb{F}_2$ I'm not sure, but I'd be surprised if there was a clear relation. – tranisstor Feb 16 '16 at 14:20 • @Turbo We also cannot compare $\mathbb{F}_2$ and Boolean rank: Consider the matrix [1,0,1; 1,1,1; 0,1,1]. It has Boolean rank 2, but $\mathbb{F}_2$ rank 3. On the other hand, the matrix [1,1,0; 0,1,1; 1,0,1] has Boolean rank 3, but $\mathbb{F}_2$ rank 2. – tranisstor Feb 16 '16 at 15:13
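Since Boolean rank is defined through the Boolean matrix product, it may help to see the definition computationally. The sketch below only illustrates the definitions in the question; it brute-forces tiny matrices (exponential time, so it says nothing about the counting problem itself), and the function names `bool_mult` and `has_bool_rank_at_most` are made up for this example.

```python
from itertools import product

def bool_mult(L, R):
    """Boolean matrix product: entry (i, j) is OR over t of L[i][t] AND R[t][j]."""
    n, k, m = len(L), len(L[0]), len(R[0])
    return [[int(any(L[i][t] and R[t][j] for t in range(k)))
             for j in range(m)] for i in range(n)]

def has_bool_rank_at_most(B, k):
    """Brute force: is there an n x k matrix L and k x n matrix R with B = L o R?"""
    n = len(B)
    rows_of_L = list(product([0, 1], repeat=k))
    for L in product(rows_of_L, repeat=n):
        for R in product(product([0, 1], repeat=n), repeat=k):
            if bool_mult([list(r) for r in L], [list(r) for r in R]) == B:
                return True
    return False

# The matrix from the comments: Boolean rank 2, but F_2 rank 3.
B = [[1, 0, 1],
     [1, 1, 1],
     [0, 1, 1]]
print(has_bool_rank_at_most(B, 1), has_bool_rank_at_most(B, 2))  # False True
```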
2020-02-24 03:24:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313621878623962, "perplexity": 605.2194170920403}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00020.warc.gz"}
https://socratic.org/questions/what-are-the-important-points-to-graph-y-sin-x-1-1
# What are the important points to graph y=sin(x+1)? Nov 30, 2016 High points, low points, x-intercepts and such. #### Explanation: Basically any point you can calculate: numbers like the sine of pi/3 or pi/4, things like that. Depending on how detailed you want to make your graph, you can choose to ignore some points. In this case it is most convenient when $x = - 1$, since $\sin \left(- 1 + 1\right) = \sin \left(0\right) = 0$. Similarly you can take $x = \frac{\pi}{3} - 1$, and now you have $\sin \left(\frac{\pi}{3} - 1 + 1\right) = \sin \left(\frac{\pi}{3}\right) = \frac{\sqrt{3}}{2}$. Other good x values are $x = \frac{\pi}{4} - 1$, $x = \frac{\pi}{6} - 1$, $x = \frac{2 \pi}{3} - 1$, $x = \frac{5 \pi}{6} - 1$ and so on. The simplest graph would just contain the high points, low points and the x-intercepts. If you wish to do that, just plot the values $x = - 1 , x = \frac{\pi}{2} - 1 , x = \pi - 1$
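If you prefer to generate those key points numerically instead of by hand, a short script like the one below (a sketch, not part of the original answer) tabulates the x-intercepts, high point and low point: intercepts and extrema of y = sin(x + 1) occur where x + 1 is a multiple of pi/2.

```python
import math

# Key x values for y = sin(x + 1): x = k*pi/2 - 1 for k = 0, 1, 2, ...
for k in range(0, 5):
    x = k * math.pi / 2 - 1
    y = math.sin(x + 1)
    label = {0: "intercept", 1: "high point", 2: "intercept", 3: "low point"}[k % 4]
    print(f"x = {x:7.4f}  y = {y:5.2f}  ({label})")
```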
2021-09-25 03:50:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6536969542503357, "perplexity": 718.8712054072794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00394.warc.gz"}
https://en.wikipedia.org/wiki/Standard_flag
Flag (linear algebra) In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration): ${\displaystyle \{0\}=V_{0}\subset V_{1}\subset V_{2}\subset \cdots \subset V_{k}=V.}$ If we write dim Vi = di then we have ${\displaystyle 0=d_{0}<d_{1}<d_{2}<\cdots <d_{k}=n,}$ where n is the dimension of V (assumed to be finite-dimensional). Hence, we must have k ≤ n. A flag is called a complete flag if di = i, otherwise it is called a partial flag. A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces. The signature of the flag is the sequence (d1, … dk). Under certain conditions the resulting sequence resembles a flag with a point connected to a line connected to a surface. Bases An ordered basis for V is said to be adapted to a flag if the first di basis vectors form a basis for Vi for each 0 ≤ i ≤ k. Standard arguments from linear algebra can show that any flag has an adapted basis. Any ordered basis gives rise to a complete flag by letting the Vi be the span of the first i basis vectors. For example, the standard flag in Rn is induced from the standard basis (e1, ..., en) where ei denotes the vector with a 1 in the ith slot and 0's elsewhere. Concretely, the standard flag consists of the subspaces: ${\displaystyle 0<\left\langle e_{1}\right\rangle <\left\langle e_{1},e_{2}\right\rangle <\cdots <\left\langle e_{1},\ldots ,e_{n}\right\rangle =K^{n}.}$ An adapted basis is almost never unique (trivial counterexamples); see below. A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (scalar of unit length, like 1, -1, i). This is easiest to prove inductively, by noting that ${\displaystyle v_{i}\in V_{i-1}^{\perp }\cap V_{i},}$ which defines it uniquely up to unit. More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel group, and the inner product corresponds to the maximal compact subgroup.[1] Stabilizer The stabilizer subgroup of the standard flag is the group of invertible upper triangular matrices. More generally, the stabilizer of a flag (the linear operators on V such that ${\displaystyle T(V_{i})\subset V_{i}}$ for all i) is, in matrix terms, the algebra of block upper triangular matrices (with respect to an adapted basis), where the block sizes are ${\displaystyle d_{i}-d_{i-1}}$. The stabilizer subgroup of a complete flag is the set of invertible upper triangular matrices with respect to any basis adapted to the flag. The subgroup of lower triangular matrices with respect to such a basis depends on that basis, and can therefore not be characterized in terms of the flag only. The stabilizer subgroup of any complete flag is a Borel subgroup (of the general linear group), and the stabilizer of any partial flag is a parabolic subgroup. The stabilizer subgroup of a flag acts simply transitively on adapted bases for the flag, and thus these are not unique unless the stabilizer is trivial. That is a very exceptional circumstance: it happens only for a vector space of dimension 0, or for a vector space over ${\displaystyle \mathbf {F} _{2}}$ of dimension 1 (precisely the cases where only one basis exists, independently of any flag).
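As a concrete illustration of the stabilizer description above, here is a small numerical sketch (illustration only, using numpy; the helper name `stabilizes` is made up): it builds the standard complete flag in R^3 and checks that an upper triangular matrix maps each V_i into itself, while a generic matrix does not.

```python
import numpy as np

# Standard complete flag in R^3: V_i = span(e_1, ..., e_i), stored as column bases.
e = np.eye(3)
flag = [e[:, :i] for i in range(1, 4)]

def stabilizes(T, flag, tol=1e-12):
    """True if T(V_i) is contained in V_i for every subspace V_i in the flag."""
    for V in flag:
        image = T @ V
        # Project T(V_i) onto V_i and check that nothing is left over.
        proj = V @ np.linalg.lstsq(V, image, rcond=None)[0]
        if np.linalg.norm(image - proj) > tol:
            return False
    return True

upper = np.array([[1.0, 2.0, 3.0],
                  [0.0, 4.0, 5.0],
                  [0.0, 0.0, 6.0]])
generic = np.arange(1.0, 10.0).reshape(3, 3)

print(stabilizes(upper, flag))    # True: upper triangular matrices preserve the flag
print(stabilizes(generic, flag))  # False for this particular full matrix
```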
Subspace nest In an infinite-dimensional space V, as used in functional analysis, the flag idea generalises to a subspace nest, namely a collection of subspaces of V that is a total order for inclusion and which further is closed under arbitrary intersections and closed linear spans. See nest algebra. Set-theoretic analogs Further information: Field with one element From the point of view of the field with one element, a set can be seen as a vector space over the field with one element: this formalizes various analogies between Coxeter groups and algebraic groups. Under this correspondence, an ordering on a set corresponds to a maximal flag: an ordering is equivalent to a maximal filtration of a set. For instance, the filtration (flag) ${\displaystyle \{0\}\subset \{0,1\}\subset \{0,1,2\}}$ corresponds to the ordering ${\displaystyle (0,1,2)}$.
2017-04-30 21:12:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132667183876038, "perplexity": 1981.1613716482748}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00078-ip-10-145-167-34.ec2.internal.warc.gz"}
https://zenodo.org/record/3228359/export/json
Book Open Access # Computing and the common. Learning from Participatory Design in the age of platform capitalism. Maurizio Teli; Linda Tonolli; Angela Di Fiore; Vincenzo D'Andrea ### JSON Export { "files": [ { "self": "https://zenodo.org/api/files/378b7a17-a187-41a8-80ba-23f865236eee/CBS4_Compuring_FINAL.pdf" }, "checksum": "md5:15e0ae930992e98843ea014afaa80789", "bucket": "378b7a17-a187-41a8-80ba-23f865236eee", "key": "CBS4_Compuring_FINAL.pdf", "type": "pdf", "size": 2134105 } ], "owners": [ 26095 ], "doi": "10.5281/zenodo.3228359", "stats": { "unique_views": 328.0, "views": 359.0, "version_views": 361.0, "version_unique_views": 330.0, "volume": 625292765.0, "version_volume": 625292765.0 }, "doi": "https://doi.org/10.5281/zenodo.3228359", "conceptdoi": "https://doi.org/10.5281/zenodo.3228358", "bucket": "https://zenodo.org/api/files/378b7a17-a187-41a8-80ba-23f865236eee", "html": "https://zenodo.org/record/3228359", "latest_html": "https://zenodo.org/record/3228359", "latest": "https://zenodo.org/api/records/3228359" }, "conceptdoi": "10.5281/zenodo.3228358", "created": "2019-05-24T14:05:25.518811+00:00", "updated": "2020-01-20T17:27:49.051949+00:00", "conceptrecid": "3228358", "revision": 4, "id": 3228359, "access_right_category": "success", "doi": "10.5281/zenodo.3228359", "description": "<p>Digital technologies have an increasing, often debated, role in our world: in this book we are concerned with the relation between technologies and the common, the ensemble of elements connecting human beings. Our motivation lies in the observation that the common is often dispossessed by platform capitalism. Can we, as scholars, help to identify and build digital technologies that nourish the common rather than dispossessing it?<br>\nTo answer this question, we look at Participatory Design (PD) as an inspiring example for other scholarship. In the light of designing viable alternatives, in this book we review and discuss the actual status of PD research taking into account a reinvigorated political perspective. Our goal is to understand, from the most recent literature in PD, how such field can contribute to socio-technical alternatives to platform capitalism. We also point to the limitations of actual PD, in terms of missing elements when looking at the political agenda on nourishing the common that we propose. More specifically, we look at PD literature trying to answer the following research question: &ldquo;how could PD research contribute to a renewed political research practice in the age of platform capitalism?&rdquo;.<br>\nTo answer this question, we engaged in a narrative literature review of the last years of activity in the field. This literature review is grounded on the framework, developed by us as a contribution to PD itself, of a Participatory Design promoting commoning practices, or nourishing the common, the ensemble of the material and symbolic elements tying together human beings. 
Such framework identifies four practical strategies for scholars, professionals, and activists in the field of PD interested in building a contemporary activist agenda: 1) to identify an arena of action that is potentially socially transformative; 2) to clarify how the social groups involved in a specific technological process can connect to commoning; 3) to promote and enact an open ended design process that is facilitated but not strongly lead by the designers themselves; and 4) to discuss and evaluate how people participating in a design project see their material conditions changed by the project itself (four themes we referred to, in our review, with the four labels Transformative; Agency; Open Ended; Gains).<br>\nStarting from our four strategies framework we approached the literature review, searching for those works that adhere to one or more strategies. We complete the review with a discussion, based on the reviewed literature, on the strategies that can dialogue with other researchers engaging in an activist agenda aimed at social transformations that supports nourishing the common.</p>", "language": "eng", "title": "Computing and the common. Learning from Participatory Design in the age of platform capitalism.", "id": "CC-BY-4.0" }, "relations": { "version": [ { "count": 1, "index": 0, "parent": { "pid_type": "recid", "pid_value": "3228358" }, "is_last": true, "last_child": { "pid_type": "recid", "pid_value": "3228359" } } ] }, "imprint": { "publisher": "Universit\u00e0 degli Studi di Trento", "place": "Trento, Italy" }, "grants": [ { "code": "687922", "self": "https://zenodo.org/api/grants/10.13039/501100000780::687922" }, "title": "POVERTY, INCOME, AND EMPLOYMENT NEWS", "acronym": "PIE News", "program": "H2020", "funder": { "doi": "10.13039/501100000780", "acronyms": [], "name": "European Commission", "self": "https://zenodo.org/api/funders/10.13039/501100000780" } } } ], "publication_date": "2019-05-24", "creators": [ { "affiliation": "Aalborg University", "name": "Maurizio Teli" }, { "affiliation": "University of Trento", "name": "Linda Tonolli" }, { "affiliation": "University of Trento", "name": "Angela Di Fiore" }, { "affiliation": "University of Trento", "name": "Vincenzo D'Andrea" } ], "access_right": "open", "resource_type": { "subtype": "book", "type": "publication", "title": "Book" }, "related_identifiers": [ { "scheme": "doi", "identifier": "10.5281/zenodo.3228358", "relation": "isVersionOf" } ] } } 361 293 views
2020-02-20 07:19:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23551912605762482, "perplexity": 11935.961171177785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144708.87/warc/CC-MAIN-20200220070221-20200220100221-00138.warc.gz"}
https://math.stackexchange.com/questions/2460474/finding-integral-solutions-of-xy-x2-xyy2
# Finding integral solutions of $x+y=x^2-xy+y^2$ Find integral solutions of $$x+y=x^2-xy+y^2$$ I simplified the equation down to $$(x+y)^2 = x^3 + y^3$$ And hence found out solutions $(0,1), (1,0), (1,2), (2,1), (2,2)$ but I dont think my approach is correct . Is further simplification required? Is there any other method to solve this? I am thankful to those who answer! • How did you powers of 3 in your simplification, when there don't exist such in the given expression? Oct 6, 2017 at 14:45 • You forgot at minimum $(0,0)$. Oct 6, 2017 at 14:51 • when you bring your equation in standard polynomial form, you can observe that the Discriminant is negative, hence the conic section is an ellipse. It is always sensible to graph a closed curve as it gives you a finite amount of candidates you can check Oct 6, 2017 at 14:54 Write $\Delta\geq0$. It must help! $$x^2-(y+1)x+y^2-y=0,$$ which gives $$(y+1)^2-4(y^2-y)\geq0$$ or $$3y^2-6y-1\leq0$$ or $$1-\frac{2}{\sqrt3}\leq y\leq1+\frac{2}{\sqrt3},$$ which gives $$0\leq y\leq2,$$ which gives all solutions: For $y=0$ we get $x^2-x=0$, which gives $(0,0)$ and $(1,0)$. For $y=1$ we get $x^2-2x=0$, which gives $(0,1)$ and $(2,1)$. For $y=2$ we get $x^2-3x+2=0$, which gives $(2,2)$ and $(1,2).$ $x+y=x^2-xy+y^2\to y^2-(x+1) y+x^2-x=0$ $y=\dfrac{1}{2} \left(1+x\pm\sqrt{-3 x^2+6 x+1}\right)$ discriminant must be positive $\Delta=-3 x^2+6 x+1\geq 0\to \dfrac{1}{3} \left(3-2 \sqrt{3}\right)\leq x\leq \dfrac{1}{3} \left(3+2 \sqrt{3}\right)$ which for integer $x$ means, $0\leq x \leq 2$ For $x=0$ we get $y=0;\;1$ solutions are $\color{red}{(0,0)\;(0;\;1)}$ for $x=1$ we have $y=0;\;y=2$ so $\color{red}{(1,0)\;(1;\;2)}$ for $x=2$ finally $y=1;\;y=2$ so $\color{red}{(2,1)\;(2;\;2)}$ hope this helps $x+y=x^2-xy+y^2$. Since this is symmetrical in $x$ and $y$, we can assume that $x \le y$. If $x=y$, this becomes $2x = x^2$, so $x=0$ or $x=2$. If $x < y$, then $2y \gt x+y =(x-y/2)^2+3y^2/4 \ge 3y^2/4$. There is no solution if $y \le 0$. If $> 0$, $2 \gt 3y/4$ or $y \lt 8/3$ so $y \le 2$. If $y=2$, then $x=0$ or $x=1$; of there, only $x=1$ works. If $y=1$, then $x=0$ works.
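Since the discriminant arguments in the answers bound both variables to a small range (the equation describes an ellipse), all integer solutions can also be confirmed with a tiny brute-force search. The sketch below simply re-checks the six points found above; the scan window is a bit larger than necessary just to be safe.

```python
# Brute-force check; the discriminant bound gives 0 <= x, y <= 2.
solutions = [(x, y) for x in range(-5, 6) for y in range(-5, 6)
             if x + y == x * x - x * y + y * y]
print(solutions)
# [(0, 0), (0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]
```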
2022-08-15 04:03:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617412090301514, "perplexity": 183.9744142440152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00042.warc.gz"}
https://mathcsr.org/articles/problemsolving/Vol1_No1/conditional-probability
# Introduction Have you ever been told that the probability of two events occurring is simply the product of the probabilities of each individual event occurring? It sounds reasonable. The probability of flipping a coin twice and getting two heads is $$\frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$$ because there is a $$\frac{1}{2}$$ chance of flipping heads on the first flip, and a $$\frac{1}{2}$$ chance of flipping heads on the second flip. However, it turns out that this statement is not always true. # Conditional Probability and Independence What if I wanted to know the probability of flipping three coins and getting at least one tails and at least one heads? By the logic above, the answer would be $$\frac{7}{8} \cdot \frac{7}{8} = \frac{49}{64}$$ because there is a $$(1 - \frac{1}{8})$$ chance of getting not all heads (aka at least one tails) and a $$(1 - \frac{1}{8})$$ chance of getting not all tails (aka at least one heads). But does $$\frac{49}{64}$$ make sense? After all, there are only 8 possible outcomes for flipping 3 coins. The problem is, knowing that there is at least one tails affects the chances of getting at least one heads. You know now that the result could not have been HHH, so your 8 possible outcomes has shrunk to 7. In other words, your sample space (the set of all possible outcomes) has been altered. Now, instead of having a $$\frac{7}{8}$$ chance of getting at least one heads, you have a $$\frac{6}{7}$$ chance, because there are 7 possible outcomes and 6 of them include a heads. We call this new probability a conditional probability, in this case the probability of flipping at least one heads given that you have flipped at least one tails. We can denote this as $$P(B|A)$$, where B is the event “at least one heads” and A is the event “at least one tails”. Now that we know why we failed, let’s see if we can get the correct answer. What we are looking for is $$P(A \cap B)$$ (the probability of A and B). First, we need A to happen, which occurs with probability $$P(A) = \frac{7}{8}$$. After we know that A has happened, we need B to also happen, which occurs with probability $$P(B|A) = \frac{6}{7}$$. These are the two probabilities that must be multiplied (see if you can explain why). Thus, we can write $\label{two events} P(A \cap B) = P(A) \cdot P(B|A)$ In our case, this is $$\frac{7}{8} \cdot \frac{6}{7} = \frac{3}{4}$$. This makes sense because as long as I don’t flip all heads or all tails, I have at least one head and at least one tails. There are $$8 - 2 = 6$$ ways to not get those two bad outcomes, so our answer should be $$\frac{6}{8}$$, and it is. What if we have three events? Or four? In general, for events $$A_1,A_2,...A_n$$, we can write $\label{multiple events} P(A_1\cap ... \cap A_n) = P(A_1) \cdot P(A_2|A_1) \cdot P(A_3|A_1 \cap A_2) \cdot ... \cdot P(A_n|A_1 \cap ... \cap A_{n-1})$ which comes from successive applications of [two events]. So, when is the statement that $$P(A \cap B) = P(A) \cdot P(B)$$ true? By comparison with [two events], this would mean that $$P(B|A) = P(B)$$. In words, this means that knowing that the event A has occurred does not affect the chances of event B occurring. We call this independence. So, the probability of multiple events occurring is the product of the probabilities of each individual event occurring if and only if the events are independent. 
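The three-coin example above is easy to sanity-check by simulation. The following sketch (illustrative only) estimates P(A ∩ B), the naive product P(A)·P(B), and the conditional probability P(B|A), showing that "at least one tails" and "at least one heads" are not independent.

```python
import random

random.seed(0)
trials = 200_000
count_A = count_B = count_AB = 0

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(3)]
    A = "T" in flips            # at least one tails
    B = "H" in flips            # at least one heads
    count_A += A
    count_B += B
    count_AB += A and B

print("P(A and B) ~", count_AB / trials)                         # about 3/4  = 0.750
print("P(A)*P(B)  ~", (count_A / trials) * (count_B / trials))   # about 49/64 = 0.766
print("P(B | A)   ~", count_AB / count_A)                        # about 6/7  = 0.857
```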
We now know the relationship between conditional probability and the probability of multiple events occurring, but how do we calculate these conditional probabilities in the first place? Going back to [two events], we can divide both sides by P(A) (as long as P(A) is not 0) to obtain $\label{conditional prob} \frac{P(A \cap B)}{P(A)} = P(B|A)$ # Total Probability Theorem Let’s look at another example problem where conditional probability is useful. I flip a coin and roll a die, and the die roll determines how much money I will spend on Purell this week. If the flip is heads, I add 95 to my die roll. If it is tails, I multiply my die roll by 50. What is the probability of me spending at least $100 on Purell this week? We can split the problem up into two cases: Case 1: I rolled heads In this case, there are 2 die rolls (5 and 6) that result in me spending at least$100 out of 6 possible die rolls. Thus, there is a $$\frac{1}{3}$$ chance for this case. Case 2: I rolled tails In this case, there are 5 die rolls (2 through 6) that result in me spending at least $100 out of 6 possible die rolls. Thus, there is a $$\frac{5}{6}$$ chance for this case. Since case 1 and case 2 each occur with probability $$\frac{1}{2}$$, my total probability is $$\frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{5}{6} = \frac{7}{12}$$. I said conditional probability was useful in this problem. So where did I use it? Let’s call the event of rolling heads H, the event of rolling tails T, and the event of me spending at least$100 on Purell this week A. The $$\frac{1}{3}$$ represents the probability of A given that I rolled heads, $$P(A|H)$$. The $$\frac{5}{6}$$ represents the probability of A given that I rolled tails, $$P(A|T)$$. The two $$\frac{1}{2}$$’s represent the probabilities of rolling heads and tails, P(H) and P(T), respectively. To compute P(A), I computed $$P(H) \cdot P(A|H) + P(T) \cdot P(A|T)$$. More generally, for disjoint events $$B_1,B_2,...B_n$$ such that $$B_1 \cup B_2 \cup ... \cup B_n$$ is the sample space, $\label{total prob} P(A) = \sum_{i=1}^{n}P(B_i) \cdot P(A|B_i) = \sum_{i=1}^{n}P(A \cap B_i)$ Where the second equality comes from [two events]. This is called the Total Probability Theorem. This equation is simply a justification of what you do every time you use casework in a probability problem, but if it is not intuitive, then try to understand this “proof by picture”: For a formal proof, we must introduce the following axiom (all of probability theory is based on three axioms proposed by Andrey Kolmogorov, and this is one of them): \label{axiom} \begin{aligned} P(A_0 \cup A_1 \cup ... \cup A_n) = \sum_{i=1}^{n}P(A_i) && \text{if A_0, A_1, ... A_n are disjoint events} \end{aligned} Since $$B_1,B_2,...B_n$$ are disjoint, $$(B_1 \cap A), (B_2 \cap A), ..., (B_n \cap A)$$ are also disjoint. This means that we can apply [axiom]: $P((B_1 \cap A) \cup (B_2 \cap A) \cup ... \cup (B_n \cap A)) = \sum_{i=1}^{n}P(A \cap B_i)$ Also, since $$B_1 \cup B_2 \cup ... \cup B_n$$ is the sample space, then for an event $$A$$ in the sample space, $$A \subset (B_1 \cup B_2 \cup ... \cup B_n)$$. Thus, $$(B_1 \cap A) \cup (B_2 \cap A) \cup ... \cup (B_n \cap A) = A$$ (which is what the picture shows). Substituting $$A$$ for $$(B_1 \cap A) \cup (B_2 \cap A) \cup ... 
\cup (B_n \cap A)$$ in the equation above: \begin{aligned} P(A) = \sum_{i=1}^{n}P(A \cap B_i) \\= \sum_{i=1}^{n}P(B_i) \cdot P(A|B_i) && \text{(from \ref{two events})} \end{aligned} There is also an analogous formula for computing expectations with conditional probability, where $$A_1,A_2,...A_n$$ are disjoint events such that $$A_1 \cup A_2 \cup ... \cup A_n$$ is the sample space of a random variable X which will not be proven here: $\label{total expect} E(X) = \sum_{i=1}^{n}E(X|A_i) \cdot P(A_i)$ # Bayes’ Theorem There is another very important application of conditional probability that comes up a lot in everyday life: inference. For example, when we go to the doctor and get tested for COVID-19, we want to know how likely it is that we have the disease given that we tested positive. The question also gives you that $$10\%$$ of the population has the virus, that the test is $$99\%$$ accurate when the patient has the virus, and $$90\%$$ accurate when the patient does not have the virus. First let’s understand what the question gives us. There are two events we are concerned with: you having the virus, and you testing positive. We will call these events A and B, respectively. You are given that $$P(A) = .1$$, $$P(B|A) = .99$$, and $$P(B|\neg A) = 1-.9 =.1$$ (if the test is 90% accurate when you don’t have the virus, that means that 10% of the time it will be wrong and say you do have it). We want to find $$P(A|B)$$. Using the definition of conditional probability and the formula for the probability of two events occurring, we can write: $P(A \cap B) = P(A) \cdot P(B|A) \tag{\ref{two events}}$ $\frac{P(A \cap B)}{P(B)} = P(A|B) \tag{\ref{conditional prob}}$ Substituting [two events] into [conditional prob], we obtain Bayes’ Theorem: $\label{bayes} P(B|A) \cdot \frac{P(A)}{P(B)} = P(A|B)$ However, we still don’t know P(B). This can be easily computed with the total probability theorem: $$P(B) = P(B|A) \cdot P(A) + P(B|\neg A) \cdot P(\neg A) = .99 \cdot .1 + .1 \cdot (1 -.1) = .189$$ This yields an answer of $$.99 \cdot .1 / .189 \approx .52$$ You may be surprised by how low this number is compared to the accuracy of the test, which is $$.99 \cdot .1 + .9 \cdot .9 = .909$$. This means that just because a test has a high accuracy does not necessarily mean that you should panic if you test positive (but please stay home if you have the COVID-19). Bayes’ theorem is a simple formula, but it is very powerful since it relates an observed event (B) to the conditions that caused that event. This is the type of thinking you go through whenever you do a science experiment: you run tests and record data, and then use that data to draw conclusions about the system you tested. This is the backbone of the theory of Bayesian Statistics, which has many applications in signal processing, science research, game theory, and more. (In the problem set, look out for problems labeled "inference," as these problems relate to Bayesian statistics.) # Problems Now that you are an expert on conditional probability, try out these problems! 1. Mario has two children. Assume that children are equally likely to be born as a boy or a girl and are equally likely to be born on any day of the week. I ask Mario if he has a daughter, and he says yes. What is the probability the other child is a son? What if instead I ask Jerry, who also has two kids, if he has a daughter that was born on a Tuesday, and he says yes; what is the probability the other child is a son in this scenario? 2. 
Inference #1: Reimu has 2019 coins $$C_0,C_1,...,C_{2018}$$, one of which is fake, though they look identical to each other (so each of them is equally likely to be fake). She has a machine that takes any two coins and picks one that is not fake. If both coins are not fake, the machine picks one uniformly at random. For each $$i = 1,2,...,1009$$, she puts $$C_0$$ and $$C_i$$ into the machine once, and the machine picks $$C_i$$. What is the probability that $$C_0$$ is fake? (HMMT Feb 2019 Guts #13) 3. Prediction #1: A bag contains nine blue marbles, ten ugly marbles, and one special marble. Ryan picks marbles randomly from this bag with replacement until he draws the special marble. He notices that none of the marbles he drew were ugly. Given this information, what is the expected value of the number of total marbles he drew? (HMMT Feb 2018 Combo #5) 4. Prediction #2: Noted magician Casimir the Conjurer has an infinite chest full of weighted coins. For each $$p \in [0,1]$$, there is exactly one coin with probability p of turning up heads. Kapil the Kingly draws a coin at random from Casimir the Conjurer’s chest, and flips it 10 times. To the amazement of both, the coin lands heads up each time! On his next flip, if the expected probability that Kapil the Kingly flips a head is written in simplest form as $$\frac{p}{q}$$, then compute $$p + q$$. (PUMaC 2018 Live Round Calculus #1) 5. Inference #2: Johnny has a deck of 100 cards, all of which are initially black. An integer n is picked at random between 0 and 100, inclusive, and Johnny paints n of the 100 cards red. Johnny shuffles the cards and starts drawing them from the deck. What is the least number of red cards Johnny has to draw before the probability that all the remaining cards are red is greater than .5? 6. Inference #3: Yannick picks a number N randomly from the set of positive integers such that the probability that n is selected is $$2^{-n}$$ for each positive integer n. He then puts N identical slips of paper numbered 1 through N into a hat and gives the hat to Annie. Annie does not know the value of N, but she draws one of the slips uniformly at random and discovers that it is the number 2. What is the expected value of N given Annie’s information? (HMMT Feb 2019 Guts #29) (Note: Uses Calculus) # Citations 1. Foundation, CK-12. “12 Foundation.” CK, www.ck12.org/book/cbse-maths-book-class-12/section/14.5/.
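As a closing numeric check of the COVID-19 testing example from the Bayes' Theorem section above, the computation can be written out directly. This is only an illustrative sketch; the 10% prevalence, 99% sensitivity, and 90% specificity figures are the ones stated in that example.

```python
# Figures from the Bayes' Theorem example above.
p_A = 0.10             # P(have the virus)
p_B_given_A = 0.99     # P(test positive | have the virus)
p_B_given_notA = 0.10  # P(test positive | do not have the virus)

# Total probability theorem: P(B) = P(B|A) P(A) + P(B|~A) P(~A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B

print(p_B)          # 0.189
print(p_A_given_B)  # about 0.52
```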
2021-07-29 03:17:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481506705284119, "perplexity": 292.1434819464333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00221.warc.gz"}
https://gmatclub.com/forum/if-p-3-is-divisible-by-80-then-the-positive-integer-p-must-have-at-le-187530.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 19 Jul 2018, 04:57 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # If p^3 is divisible by 80, then the positive integer p must have at le Author Message TAGS: ### Hide Tags Math Expert Joined: 02 Sep 2009 Posts: 47109 If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 27 Oct 2014, 07:51 2 28 00:00 Difficulty: 75% (hard) Question Stats: 44% (01:11) correct 56% (01:04) wrong based on 489 sessions ### HideShow timer Statistics Tough and Tricky questions: Factors. If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 _________________ GMAT Club Legend Joined: 16 Oct 2010 Posts: 8128 Location: Pune, India Re: If p ^3 is divisible by 80, then the positive integer p must [#permalink] ### Show Tags 01 Jun 2015, 20:06 10 8 ggarr wrote: If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? 2 3 6 8 10 Prime factorize $$80 = 2^4 * 5$$ If $$p^3$$ has at least four 2s and a 5, it must have at least six 2s and three 5s (Every prime factor of $$p^3$$ must have a power which is a multiple of 3). So p must have at least two 2s and a 5 as factors. Minimum value of $$p = 2^2 * 5$$ This gives us $$(2+1)*(1+1) = 6$$ distinct factors (at least) _________________ Karishma Private Tutor for GMAT Contact: bansal.karishma@gmail.com SVP Status: The Best Or Nothing Joined: 27 Dec 2012 Posts: 1837 Location: India Concentration: General Management, Technology WE: Information Technology (Computer Software) Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 27 Oct 2014, 18:45 5 3 Bunuel wrote: Tough and Tricky questions: Factors. If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 Let say p = 10, checking divisibility by 80 $$\frac{10 * 10 * 10}{80} = \frac{25}{2}$$ Numerator falling short of 2 So, lets say p = 20, again checking divisibility by 80 $$\frac{20*20*20}{80} = 100$$ 20 is the least value of p for which $$p^3$$ can be completely divided by 80 There are 6 distinct factors of 20 >> 1, 2, 4, 5, 10, 20 One more way: $$20 = 2^2 * 5^1$$ Distinct factors = (2+1)*(1+1) = 3*2 = 6 _________________ Kindly press "+1 Kudos" to appreciate ##### General Discussion Manager Joined: 13 Mar 2013 Posts: 173 Location: United States GPA: 3.5 WE: Engineering (Telecommunications) ### Show Tags 28 Jun 2015, 20:25 If p3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 Please someone explain this question with solution .Thanks _________________ Regards , Math Expert Joined: 02 Aug 2009 Posts: 6246 If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 28 Jun 2015, 20:57 3 abhisheknandy08 wrote: If p3 is divisible by 80, then the positive integer p must have at least how many distinct factors? 
(A) 2 (B) 3 (C) 6 (D) 8 (E) 10 Please someone explain this question with solution .Thanks hi, the method to find distinct factors is.. step 1.. break down the integer in its basic form with prime numbers.. 80=2^4*5... step 2.. formula is$$a^x*b^y... (x+1)(y+1)$$... so here the answer will be (4+1)(1+1)=5*2=10 ans E.. hope it helped but it seems you mean p3 as $$p^3$$... so p^3 will have atleast$$2^4*5$$as its factor, and therefore, p will have atleast $$2^2*5$$ as factors.. ans 3*2=6 ans C _________________ 1) Absolute modulus : http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372 2)Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html 3) effects of arithmetic operations : https://gmatclub.com/forum/effects-of-arithmetic-operations-on-fractions-269413.html GMAT online Tutor SVP Joined: 08 Jul 2010 Posts: 2120 Location: India GMAT: INSIGHT WE: Education (Education) ### Show Tags 28 Jun 2015, 22:18 5 4 abhisheknandy08 wrote: If p3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 Please someone explain this question with solution .Thanks Since p is an Integer, therefore p^3 must a perfect cube Perfect Cube is a number that has all the powers of its Prime factors a multiple of 3 when the Number is written in Prime factorized form But $$p^3 = 80x = 2^4*5*x$$ i.e. The value of $$x$$ must be a smallest number which can make p^3 a Perfect cube and keep the number smallest for Minimum number of factors of p i.e. $$x_{min} = 2^2*5^2$$ such that $$(p^3)_{min} = 2^4*5*2^2*5^2 = 2^6*5^3$$ i.e. $$p_{min} = 2^2*5$$ Number of Factors of $$2^2*5 = (2+1)*(1+1) = 3*2 = 6$$ _________________ Prosper!!! GMATinsight Bhoopendra Singh and Dr.Sushma Jha e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772 Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi http://www.GMATinsight.com/testimonials.html 22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION Intern Joined: 06 Mar 2015 Posts: 27 Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 30 May 2016, 08:05 VeritasPrepKarishma wrote: ggarr wrote: If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? 2 3 6 8 10 Prime factorize $$80 = 2^4 * 5$$ If $$p^3$$ has at least four 2s and a 5, it must have at least six 2s and three 5s (Every prime factor of $$p^3$$ must have a power which is a multiple of 3). So p must have at least two 2s and a 5 as factors. Minimum value of $$p = 2^2 * 5$$ This gives us $$(2+1)*(1+1) = 6$$ distinct factors (at least) I fail to understand what you mean by "every prime factor of p must have a power which is a multiple of 3". My guess is that as there are 3 p's, they must all have the same factors with powers and hence 2^4 and 5, have been considered as 2^6 and 5^3. so it can be evenly divided between 3 p's and their total of 8000 is divisible by P. Could you shared some light on the same. Board of Directors Status: QA & VA Forum Moderator Joined: 11 Jun 2011 Posts: 3647 Location: India GPA: 3.5 Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 30 May 2016, 08:19 Bunuel wrote: Tough and Tricky questions: Factors. If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? 
(A) 2 (B) 3 (C) 6 (D) 8 (E) 10 80 = $$2^4$$ x $$5^1$$ Since p^3 is divisible by $$2^4$$ x $$5^1$$ the least value of p will be $$2^6$$ x $$5^3$$ ; where $$p$$ = $$5^1$$ x $$2^2$$ So, p must have (1+1) ( 2 + 1 ) => 6 factors, answer will be (C) _________________ Thanks and Regards Abhishek.... PLEASE FOLLOW THE RULES FOR POSTING IN QA AND VA FORUM AND USE SEARCH FUNCTION BEFORE POSTING NEW QUESTIONS How to use Search Function in GMAT Club | Rules for Posting in QA forum | Writing Mathematical Formulas |Rules for Posting in VA forum | Request Expert's Reply ( VA Forum Only ) Senior Manager Joined: 18 Jan 2010 Posts: 254 Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 30 May 2016, 08:40 1 1 Bunuel wrote: Tough and Tricky questions: Factors. If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 $$p^3$$ = 80m, where m is any integer. Now here key point is that 80m is a perfect cube. [ We know this because it is given that p is a positive integer] Now the question says "at least". let us see how 80m can be a perfect cube. 80m = 8 * 2 *5 *m = $$2^3$$ * 2 * 5 * m So we need to multiply "2" by $$2^2$$, so that we get $$2^3$$ We also need to multiply "5" by $$5^2$$, so that we get $$5^3$$ so m is $$2^2$$ * $$5^2$$ With above value of m, p becomes (at least) 2*2*5 = $$2^2$$ * 5 Distinct factors (Power of First term +1) (Power of Second term +1) [ You need to know this formula] (2+1)(1+1) = 6 Director Joined: 26 Oct 2016 Posts: 664 Location: United States Schools: HBS '19 GMAT 1: 770 Q51 V44 GPA: 4 WE: Education (Education) Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 27 Dec 2016, 04:31 1 Let's start by breaking 80 down into its prime factorization: 80 = 2 × 2 × 2 × 2 × 5. If p^3 is divisible by 80, p^3 must have 2, 2, 2, 2, and 5 in its prime factorization. Since p^3 is actually p × p × p, we can conclude that the prime factorization of p × p × p must include 2, 2, 2, 2, and 5. Let's assign the prime factors to our p's. Since we have a 5 on our list of prime factors, we can give the 5 to one of our p's: p: 5 p: p: Since we have four 2's on our list, we can give each p a 2: p: 5 × 2 p: 2 p: 2 But notice that we still have one 2 leftover. This 2 must be assigned to one of the p's: p: 5 × 2 × 2 p: 2 p: 2 We must keep in mind that each p is equal in value to any other p. Therefore, all the p's must have exactly the same prime factorization (i.e. if one p has 5 as a prime factor, all p's must have 5 as a prime factor). We must add a 5 and a 2 to the 2nd and 3rd p's: p: 5 × 2 × 2 = 20 p: 5 × 2 × 2 = 20 p: 5 × 2 × 2 = 20 We conclude that p must be at least 20 for p^3 to be divisible by 80. So, let's count how many factors 20, or p, has: 1 × 20 2 × 10 4 × 5 20 has 6 factors. If p must be at least 20, p has at least 6 distinct factors. _________________ Thanks & Regards, Anaira Mitch GMAT Club Legend Joined: 16 Oct 2010 Posts: 8128 Location: Pune, India Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 02 Feb 2017, 21:59 1 VeritasPrepKarishma wrote: ggarr wrote: If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? 2 3 6 8 10 Prime factorize $$80 = 2^4 * 5$$ If $$p^3$$ has at least four 2s and a 5, it must have at least six 2s and three 5s (Every prime factor of $$p^3$$ must have a power which is a multiple of 3). 
So p must have at least two 2s and a 5 as factors. Minimum value of $$p = 2^2 * 5$$ This gives us $$(2+1)*(1+1) = 6$$ distinct factors (at least) Quote: Can you explain the line in the bracket (Every prime factor of p^3 must have a power which is a multiple of 3) Take any positive integer N. Say $$N = 6 = 2*3$$ $$N^3 = 6^3 = (2^3 * 3^3)$$ Say $$N = 18 = 2 * 3^2$$ $$N^3 = 18^3 = (2^3 * 3^6)$$ Similarly, since p is a positive integer, it will be made up of some prime factors. When you cube it, every prime factor of p^3 will have a power of 3 or a multiple of 3. _________________ Karishma Private Tutor for GMAT Contact: bansal.karishma@gmail.com Senior SC Moderator Joined: 14 Nov 2016 Posts: 1321 Location: Malaysia If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 23 Mar 2017, 17:43 Bunuel wrote: Tough and Tricky questions: Factors. If $$p^3$$ is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 OFFICIAL SOLUTION The prime factorization of 80 is (2)(2)(2)(2)(5) = 2^4*5^1. Thus, $$p^3 = 2^4*5^1*x$$, where x is some integer. Assigning the factors of p 3 to the prime boxes of p will help us see what the factors of p could be. The prime factors in ( ) above are factors not explicitly given for $$p^3$$, but which must exist. We know that $$p^3$$ is the cube of an integer, and must have “triples” of the prime factors of p. Since $$p^3$$ has a factor of $$2^3$$, p must have a factor of 2. The fact that $$p^3$$ has an “extra” 2 and a 5 among its factors indicates that p has additional factors of 2 and 5. If p is a multiple of (2)(2)(5) = 20, then at the very least p has 1, 2, 4, 5, 10, and 20 as factors. So we can conclude that p has at least 6 distinct factors. Alternatively, we can use this shortcut for computing the number of factors: (2’s exponent + 1)(5’s exponent + 1) = (2 + 1)(1 + 1) = (3)(2) = 6. Attachments Untitled.jpg [ 6.9 KiB | Viewed 3935 times ] _________________ "Be challenged at EVERY MOMENT." “Strength doesn’t come from what you can do. It comes from overcoming the things you once thought you couldn’t.” "Each stage of the journey is crucial to attaining new heights of knowledge." Target Test Prep Representative Affiliations: Target Test Prep Joined: 04 Mar 2011 Posts: 2679 Re: If p^3 is divisible by 80, then the positive integer p must have at le [#permalink] ### Show Tags 27 Mar 2017, 17:30 3 Quote: If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10 Since p^3/80 = integer, we can say that the product of 80 and some integer n is equal to a perfect cube. In other words, 80n = p^3. We must remember that all perfect cubes break down to unique prime factors, each of which has an exponent that is a multiple of 3. So let’s break down 80 into primes to help determine what extra prime factors we need to make 80n a perfect cube. 80 = 10 x 8 = 5 x 2 x 2 x 2 x 2 = 5^1 x 2^4 In order to make 80n a perfect cube, we need two more 2s, and two more 5s. Thus, the smallest perfect cube that is a multiple of 80 is 2^6 x 5^3. To determine the least possible value of p, we can take the cube root of 2^6 x 5^3 and we have: 2^2 x 5^1 To determine the total number of factors, we add 1 to each exponent attached to each base and multiply those values together. (2 + 1)(1 + 1) = 3 x 2 = 6 total factors. 
Intern (joined 20 Jun 2017), 28 Jul 2017, 12:29:

VeritasPrepKarishma wrote: (Every prime factor of $$p^3$$ must have a power which is a multiple of 3.) So p must have at least two 2s and a 5 as factors.

So why isn't it 2^12?

Senior Manager (joined 29 Jun 2017), 12 Sep 2017, 00:08:

Bunuel wrote: Tough and Tricky questions: Factors. If p^3 is divisible by 80, then the positive integer p must have at least how many distinct factors? (A) 2 (B) 3 (C) 6 (D) 8 (E) 10

SOLUTION

p^3 is divisible by 80. Factorising 80 and writing the primes in the layout below, x = 2 and y = 5 complete the cubes, and then p = 2 x 2 x 5 = 20:

2 - 2 - 2
2 - x - x
5 - y - y

20 = 2^2 . 5^1, so total factors = (2+1)(1+1) = 3 x 2 = 6. Option C.

Senior Manager (joined 02 Apr 2014), 10 Jan 2018, 13:29:

Given $$P^3$$ divisible by 80 => $$P^3$$ = $$2^4 * 5 * m$$ (where m is any integer) => to make P an integer, the minimum value of m = $$2^2 * 5^2$$ => minimum value of $$P^3$$ = $$2^4 * 5 * 2^2 * 5^2$$ => minimum value of P = $$2^2 * 5$$. Minimum number of factors of P = (2+1)(1+1) = 6 => (C)
2018-07-19 11:57:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.664149820804596, "perplexity": 1288.6095986194093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00162.warc.gz"}
http://mathhelpforum.com/calculus/22501-differentiation-question.html
# Math Help - Differentiation Question

1. ## Differentiation Question

Differentiate lnx/x^5. Using the quotient rule I got a final answer of x^4-lnx5x^4/x^10. First of all, is this correct? If so, is it in its simplest form?

2. Originally Posted by haku
Differentiate lnx/x^5. Using the quotient rule I got a final answer of x^4-lnx5x^4/x^10. First of all, is this correct? If so, is it in its simplest form?
It is correct, but it is not the simplest form. Factor out the x^4 in the numerator and then cancel it into the denominator. By the way, learn to use parentheses. You should have typed: (x^4 - 5x^4 * lnx)/(x^10)

3. Thanks for that help. Sorry for not using parentheses. I now realise what I typed makes no sense!

4. For this example here: (x^4 + 3)^3 cos2x. When differentiating, would you use a combination of the chain rule and the product rule, giving an answer of: (12x^2(x^4+3)^2(cos2x)) + ((x^4+3)^3(sin2x))? Does this look okay?

5. Originally Posted by haku
For this example here: (x^4 + 3)^3 cos2x. When differentiating, would you use a combination of the chain rule and the product rule, giving an answer of: (12x^2(x^4+3)^2(cos2x)) + ((x^4+3)^3(sin2x))? Does this look okay?
That's not correct.

7. Originally Posted by haku

Recall that by the product rule we have that $\frac d{dx}f(x)g(x) = f'(x)g(x) + f(x)g'(x)$. Here we have $f(x) = \left( x^4 + 3 \right)^3 \implies f'(x) = 12x^3 \left( x^4 + 3 \right)^2$ by the chain rule, and $g(x) = \cos 2x \implies g'(x) = -2 \sin 2x$, also by the chain rule.
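Both derivatives discussed in this thread can also be checked symbolically. The following is a minimal sketch (an illustration, not part of the original posts), assuming the SymPy library is available:

```python
# Illustrative sketch: verify both derivatives from the thread with SymPy.
import sympy as sp

x = sp.symbols('x', positive=True)

# 1) d/dx [ln(x)/x^5] should equal (1 - 5*ln(x))/x^6,
#    i.e. (x^4 - 5*x^4*ln(x))/x^10 before cancelling x^4.
d1 = sp.diff(sp.log(x) / x**5, x)
print(sp.simplify(d1 - (1 - 5*sp.log(x)) / x**6))  # 0

# 2) d/dx [(x^4 + 3)^3 * cos(2x)] should equal
#    12*x^3*(x^4 + 3)^2*cos(2x) - 2*(x^4 + 3)^3*sin(2x).
d2 = sp.diff((x**4 + 3)**3 * sp.cos(2*x), x)
expected = 12*x**3*(x**4 + 3)**2*sp.cos(2*x) - 2*(x**4 + 3)**3*sp.sin(2*x)
print(sp.simplify(d2 - expected))  # 0
```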
2014-08-30 09:33:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8903669118881226, "perplexity": 830.9974283603299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834663.62/warc/CC-MAIN-20140820021354-00322-ip-10-180-136-8.ec2.internal.warc.gz"}
https://newbedev.com/why-don-t-we-prove-that-functions-used-in-physics-are-continuous-and-differentiable
# Why don't we prove that functions used in physics are continuous and differentiable?

Short answer: we don't know, but it works.

As the comments on the question point out, we still don't know whether the world can be assumed to be smooth and differentiable everywhere. It may just as well be discrete. We really don't have an answer for that (yet). So what do physicists do when they don't have a theoretical answer for something? They use Newton's flaming laser sword, a philosophical razor that says that "if it works, it's right enough". You can perform experiments on waves and harmonic oscillators, and the equation you wrote works. As one learns more physics, there are other equations, and by now we can perform experiments on pretty much all kinds of things; until you get to really extreme regimes such as black holes or scales smaller than electrons, the equations that we have give us the correct answer, so we keep using them.

Bonus question: suppose that, next year, we have a Theory of Everything that says the universe is discrete and non-differentiable. Do you think the applicability of the wave equation would change? And what about the results, would they be less right?

A lot of physicists would tell you that it doesn't matter if solutions to physical equations are smooth, as long as you can get meaningful predictions from them. Such a view is overly simplistic. There are circumstances where non-smooth features crop up in solutions to physical equations and are themselves very meaningful. The reason why high school physics classes don't worry about such matters is simply that they are typically beyond the scope of what can be taught in such a class.

A classic example of a meaningful discontinuity in a physical system is a shock wave. In certain (nonlinear) wave equations, you can have a solution that starts out smooth but eventually becomes discontinuous in finite time. These discontinuities tell you something useful: they can show up in real life as rogue waves in fluid dynamics or as traffic jams in models of traffic. Burgers' equation is a standard example. Discontinuities can also form in many other systems, especially condensed matter systems, where they indicate the presence of defects. Examples include vortices in superfluids and dislocations in crystals. The way these defects behave often plays a dominant role in the overall behavior (i.e. thermodynamics) of the material.

One of the major reasons why it is useful to examine what happens when equations of physics break down is that these are precisely the circumstances where we can learn about new physics. For example, the behavior near discontinuities in nonlinear wave equations can be either diffusive (where the discontinuity gets smeared out in time) or dispersive (where the discontinuity radiates away as smaller waves), and knowing which it is tells you something about the microscopic structure of the fluid. For this reason, identifying where physical equations fail to be well-posed or self-consistent is really important. There is a famous open problem in mathematics known as Navier-Stokes existence and smoothness, whose importance can be thought of in this way. If the Navier-Stokes equations turn out to generate discontinuities in finite time, it could have profound implications for understanding turbulent phenomena.

One physical theory where mathematical rigor is especially far from established is quantum field theory. QFT famously has lots of calculations that spit out $$\infty$$ if done naively.
The reasons for this are not fully understood, but we think it has something to do with the fact that there are more fundamental, as yet unknown theories that kick in at very small length scales. Another historical problem related to mathematical nonsense in QFT has to do with the Higgs boson: In absence of a Higgs boson, certain calculations in QFT give probabilities which are greater than 1, which is of course impossible. The energy scale at which these calculations started to break down not only told us that there was some physics we didn't understand yet--namely, there existed a new particle to be discovered--but also told us roughly what the particle's mass had to be. So understanding the well-posedness of mathematical theories of physics is important. Why then don't people worry about this in high school physics? The answer is simply that our current theories of physics have been so well refined that our models for most everyday phenomena are totally consistent and produce no discontinuities. And the reason they never ask you to check that your solutions are sensible is just that they don't want you to get bored, because the answer is always yes. In fact, there are some very general results in the mathematical fields of dynamical systems and partial differential equations which guarantee that most physics equations have unique, smooth solutions. Once you know some of these theorems, you don't even need to check that most solutions are smooth--you are guaranteed this by the structure of the equations themselves. (For example, the Picard-Lindelof theorem accomplishes this for most problems in Newtonian particle dynamics.) Generally speaking, you can assume that the functions you deal with in high school physics are suitably well behaved. This is taken as given and most students will never question it, or even realise that there is anything to question - so well done to you for thinking about this issue. Even in more advanced physics, there is a tendency not to worry about the finer points of mathematical models as long as they produce physically realistic outcomes that match experimental results. Most physicists will not question the fundamental assumptions of a model until and unless it predicts a singularity or a paradox or some other "pathological" outcome. And even then the short-term solution is often to avoid pathological results by restricting the domain in which the model is applied. Mathematicians, by inclination and training, tend to be more careful. What the physicist sees as a focus on reality, the mathematician perceives as a lack of rigour. What is rigorous to the mathematician is overly fussy and pedantic to the physicist. As an example, engineers and physicists will happily use the Dirac delta function, whereas a mathematician will point out that $$\delta(x)$$ is not actually a function (technically, it is a distribution) and treating it as if it were a function can lead to incorrect results. The mathematician says "if $$\delta(x)$$ is a function then what is the value of $$\displaystyle \int_{-1}^{1} \delta(x)^2 dx$$ ?". The physicist says "in what physical situation would I ever need to use such a bizarre integral ?".
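The mathematician's point about $$\int \delta(x)^2 dx$$ can be made concrete numerically. In the sketch below (a minimal illustration, not part of the original answers), the delta function is approximated by box functions of width eps and height 1/eps: the area stays at 1 as eps shrinks, while the integral of the square grows like 1/eps and so has no finite limit.

```python
# Sketch: approximate delta(x) by box functions of width eps and height 1/eps.
# The area stays 1, but the integral of the square diverges as eps -> 0.

def box_integrals(eps, n=100_000):
    # Integrate over [-1, 1] with a simple Riemann sum.
    dx = 2.0 / n
    area = 0.0
    area_sq = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * dx
        d = 1.0 / eps if abs(x) < eps / 2 else 0.0
        area += d * dx
        area_sq += d * d * dx
    return area, area_sq

for eps in (0.1, 0.01, 0.001):
    a, a2 = box_integrals(eps)
    print(eps, round(a, 3), round(a2, 1))
# area stays ~1, while the integral of the square grows like 1/eps
```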
2022-08-09 17:17:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7314860820770264, "perplexity": 310.1195923259723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00285.warc.gz"}
https://socratic.org/questions/how-do-i-find-the-vertex-axis-of-symmetry-y-intercept-x-intercept-domain-and-ran-16
How do I find the vertex, axis of symmetry, y-intercept, x-intercept, domain and range of y = -x^2 + 6x -9? Oct 15, 2015 This one is easy to factor to find the vertex to be the point $\left(x , y\right) = \left(3 , 0\right)$ so that $x = 3$ is the axis of symmetry. The $y$-intercept is $- 9$, the $x$-intercept is $x = 3$, the domain is all of $\mathbb{R}$, and the range is $\left(- \infty , 0\right]$. Explanation: You can factor $y = f \left(x\right) = - {x}^{2} + 6 x - 9$ as the opposite of a perfect square (there's no need to "complete the square" here): $y = - {x}^{2} + 6 x - 9 = - \left({x}^{2} - 6 x + 9\right) = - {\left(x - 3\right)}^{2}$. This means the parabola opens downward with a vertex (high point in this case) at $\left(x , y\right) = \left(3 , 0\right)$ (all other values of the function are negative). This also gives $x = 3$ as the axis of symmetry and the (one) $x$-intercept. For the $y$-intercept, compute $f \left(0\right) = - {0}^{2} + 6 \cdot 0 - 9 = - 9$. The domain is the entire real number system $\mathbb{R}$ because you can plug in anything you want for $x$. The range is the set of non-positive numbers $\left(- \infty , 0\right]$ because that's the set of possible outputs. The graph is shown below. Make sure you think about all these answers in relation to the graph. graph{-x^2+6x-9 [-40, 40, -20, 20]}
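For readers who like to cross-check, here is a small numeric sketch in Python (an illustration, not part of the original Socratic answer) confirming the vertex, intercepts, and maximum value:

```python
# Sketch: confirm the vertex, intercepts, and maximum of y = -x^2 + 6x - 9.

def f(x):
    return -x**2 + 6*x - 9

# Vertex of ax^2 + bx + c lies at x = -b / (2a).
a, b, c = -1, 6, -9
xv = -b / (2 * a)
print(xv, f(xv))        # 3.0 0.0  -> vertex (3, 0), axis of symmetry x = 3

print(f(0))             # -9      -> y-intercept

# Discriminant b^2 - 4ac = 0, so the single x-intercept is the vertex itself.
print(b**2 - 4*a*c)     # 0

# The parabola opens downward (a < 0), so the range is (-inf, 0].
print(max(f(x / 10) for x in range(-100, 100)))  # 0.0, attained at x = 3; never positive
```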
2021-09-19 09:02:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7862153649330139, "perplexity": 212.31896114444075}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00139.warc.gz"}
http://getmath.ca/courses/geometry/converting-dms-to-decimal-degrees
Converting DMS to Decimal Degrees

1. Our Training Videos

Skills Application: Converting Degrees, Minutes and Seconds to Decimal Degrees
Step-by-Step: Converting Degrees, Minutes and Seconds to Decimal Degrees

2. Running with Pi

When carpenters are planning to build a circular stair they need to think in terms of time! How does time relate to cutting stairs? Well, math with time and math with angles are both based on the number 60. To produce a high level of precision for each cut, tread angles are carefully calculated in degrees, minutes and seconds. In the course of subsequent calculations the tread angles are converted from degrees, minutes and seconds to decimal degrees. The decimal degrees are then used in chord calculations. The tread's edge and its chords are easily laid out for those precise cuts. Ta-da! The perfect start for the perfect stairs.

3. Our Training Worksheets

Sometimes in your work you will need to convert an angle measured in degrees and minutes to its equivalent in decimal degrees. To convert the angle 4°30′ to decimal form, follow these two steps:

1. Write the angle as a sum of degrees and minutes: 4°30′ = 4° + 30′
2. Use a 60-base fraction to convert minutes to degrees: $4°30′ = 4° + \frac{30′}{60′}$ = 4° + 0.5° = 4.5°

Get a $20-or-so dual-display scientific calculator, even if you can't take it with you into the exam hall. A calculator is also a learning tool; Technology Use is one of the 9 Essential Skills you can't do without. Trying out different functions on a calculator will further your understanding and help with building a bigger mental picture. It builds links between numbers and algebra. It is important to try alternative calculations or approaches, to sketch and label angles, to track changes to units of measure, and to visualize, so as to cross-link the concepts of angles and numbers.

A Short Review

$35°56′37″ = 35 + \frac{56}{60} + \frac{37}{3600}$ = 35.94361111°

Worksheet: Level 1

The operations used are clearly specified. Only one type of mathematical operation is used in a task.

Worksheet: Level 1 Sample Questions

Write the following amounts in DMS notation as decimal numbers. Remember the following key numbers (degree amounts remain unchanged and copy over as whole numbers, followed by some decimal digits):

30 angular minutes = 0.500 decimal degrees
15 angular minutes = 0.250 decimal degrees
45 angular minutes = 0.750 decimal degrees
20 angular minutes = 0.333 decimal degrees
40 angular minutes = 0.666 decimal degrees

Example: 1. 25°15′00″ = ______°
Further example: 1. Convert 27″ to decimal degrees.

Worksheet: Level 1 Answer Key

Answer: 25°15′00″ = 25.250°

Worksheet: Level 2

Tasks involve one or two types of mathematical operation. Few steps of calculation are required.

Worksheet: Level 2 Sample Questions

Convert to decimal degrees. Answer to 6 decimal places.

Example: 1. Convert 20°30′30″ into decimal degrees.
Further example: 1. Convert 28°35′29.03″ into decimal degrees.

Worksheet: Level 2 Sample Answer Key

Answer:
20° → 20.000000°
30′ ÷ 60 = 0.500000°
30″ ÷ 3600 = 0.008333°
20.00° + 0.50° + 0.008333° = 20.508333°

Infosheet: Advanced

Tasks involve multiple steps of calculation. Advanced mathematical techniques may be required.

Additional Information: Another Way to Get the Answer

Task: Convert 37°15′08″ to decimal degrees.
Another Way to Get the Answer

Lay out your calculations neatly, so you can review them, track changes, correct them, or learn from them.

Additional Information: Another way to get the answer, using a scientific calculator

Task: Convert 37°15′08″ to decimal degrees.

1. Enter 37 DMS 15 DMS 8 DMS
2. Press 2ndF
3. Press DMS
4. The number on the display changes to 37.25222222
5. Take a pencil and write down 37.252222, then write the ° sign after it. Now you're done with the math.
6. The last step is to check your work: make sure everything was copied and written correctly, then determine that the correct way to write the answer is 37.252222°

Additional Information: Another way to get the answer, using a scientific calculator and bracketing

Task: Convert 37°15′08″ to decimal degrees.

It is possible to enter everything in the calculator in one line. Here are the steps with some explanations:

1. Identify that 37°15′08″ is an angle in DMS.
2. Identify that the angle 37°15′08″ needs to be converted to decimal degrees.
3. Recognize that ° means degrees, ′ means minutes and ″ means seconds.
4. Recognize that the degrees amount in the DMS number is the exact degrees amount in decimal degrees. There's no math with it; it stays unchanged, 37.
5. Recognize that the minutes amount needs converting to degrees. The conversion factor from minutes to degrees is 60, since there are 60 minutes in a degree.
6. Recognize that to convert from DMS to decimal degrees the conversion factor will be a divisor, because many of the smaller units are to be shared equally among groups of the bigger unit. (Look at the minutes amount: 15 minutes is not 15 degrees; 15 minutes is smaller than 1 degree, so it's going to be zero point something degrees.)
7. Recognize that the seconds amount also needs converting to degrees. The conversion factor from seconds to degrees is 3600, since there are 3600 seconds in a degree (60 seconds × 60 minutes = 3600 seconds in a degree).
8. Recognize that to convert from DMS to decimal degrees this conversion factor will again be a divisor, for the same reason.
9. Set up the problem: degrees + minutes as decimal degrees + seconds as decimal degrees = angle in decimal degrees.
10. Calculate: 37 + (15 ÷ 60) + (8 ÷ 3600) = 37.252222
11. Take a pencil and write down 37.252222.
12. Write the ° sign after the 37.252222. Now you're done with the math.
13. The last step is to check your work: make sure everything was copied and written correctly, then determine that the correct way to write the answer is 37.252222°.
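The same divide-by-60 and divide-by-3600 steps can also be captured in a short program. The following is a minimal Python sketch (an illustration, not part of the original worksheets), applied to the worked examples above:

```python
# Sketch: convert degrees-minutes-seconds to decimal degrees.

def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    # 60 minutes per degree, 3600 seconds per degree.
    return degrees + minutes / 60.0 + seconds / 3600.0

print(round(dms_to_decimal(4, 30), 6))        # 4.5
print(round(dms_to_decimal(35, 56, 37), 6))   # 35.943611
print(round(dms_to_decimal(20, 30, 30), 6))   # 20.508333
print(round(dms_to_decimal(37, 15, 8), 6))    # 37.252222
```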
2020-06-04 06:52:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7701741456985474, "perplexity": 1697.0812926817475}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439213.69/warc/CC-MAIN-20200604063532-20200604093532-00036.warc.gz"}
https://homework.cpm.org/category/MN/textbook/cc2mn/chapter/3/lesson/3.2.5/problem/3-85
### Home > CC2MN > Chapter 3 > Lesson 3.2.5 > Problem3-85 3-85. Write at least three expressions that use each of the numbers $2$, $3$, $6$, and $8$ only once and any operations and grouping symbols (addition, subtraction, multiplication, division, and parentheses). Each expression should have a different value, with one being equal to $28$. One expression that is equal to $28$ follows the following format: $(?)(?)\ +\ ?\ -\ ?=28$ Each ? represents one of the four given numbers. Use the above expression as an example for constructing the remaining two expressions. Remember, each expression needs to have a different value!
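If you want to test candidate expressions quickly, the template above lends itself to a brute-force check. The following is a purely illustrative Python sketch (not part of the lesson) that tries every ordering of 2, 3, 6, and 8 in that template and reports which orderings evaluate to 28:

```python
# Sketch: try the template (a)(b) + c - d for every ordering of 2, 3, 6, 8
# and report which orderings give 28.
from itertools import permutations

for a, b, c, d in permutations([2, 3, 6, 8]):
    if a * b + c - d == 28:
        print(f"({a})({b}) + {c} - {d} = 28")
```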
2022-08-17 08:24:37
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186716437339783, "perplexity": 607.118010144802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572870.85/warc/CC-MAIN-20220817062258-20220817092258-00544.warc.gz"}
https://en.wikipedia.org/wiki/Talk:Spacetime
# Talk:Spacetime WikiProject Mathematics (Rated B-class, High-priority) This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. Mathematics rating: B Class High Priority Field:  Mathematical physics One of the 500 most frequently viewed mathematics articles. ## Expressing math notation, units of measure, buzzwords, and linking Overview Wikipedia’s Manual of Style: Dates and Numbers (WP:MOSNUM) has some wonderful guidance on expressing units of measure. I was tempted to transplant a small bit of cherry-picked examples from its guidance here, but realized it is so well written and succinct, it’s best just to provide the link. So… The following are my thoughts on how WP:MOSNUM’s principles apply to some specifics of our Spacetime article. Avoid “sciencey” math notation Notwithstanding that experts in a field will understand what 1.2 m·s−2 means, instead writing 1.2 meters/second2 makes the measure fully accessible to a larger segment of our visiting readership. Wikipedia is a general-interest encyclopedia; Ph.D.s don’t come to Wikipedia to learn something within their field of expertise. When doing technical writing, one must always consider the gamut of the target audience and try to make the article accessible to the lowest level of that gamut. Where practical, spell out unit names unless the unit symbol is ubiquitous in everyday life Take this example of of two unit symbols (km and s): nearly 300,000 km in space being equivalent to 1 s in time nearly 300,000 km in space being equivalent to 1 second in time. Note how “s” became “second” but “km” remained as a unit symbol. Why? Except for in America, both “km” and “km/hr” are seen so often in real life, they become spelled-out units of measure in its own right even though they are technically unit symbols. We also take into consideration the assumed expertise of the readership of the article; writing for our Dog article is different from our Year article, which is different from our Spacetime article. As for Americans and their Customary units of measure in normal life, we can safely assume Americans visiting this particular article are familiar with “km” and “km/hr”. However, “km/s” is not on traffic signs. This principle of spelling out units of measure should generally be adhered to unless doing so makes a paragraph or article section tedious and cumbersome for our readership. On that subject… Properly introduce “sciencey” unit symbols before using them If a cumbersome (lengthy and multi-syllabic) scientific unit of measure (one not frequently encountered in real life) repeatedly appears in the same article section and is getting tedious to read, then writers should parenthetically introduce the unit symbol before employing it. Thus, the first occurrence in the section that looks like this: 30 MeV 30 million electron volts (MeV) After a proper parenthetical introduction, authors may then use the unit symbol (30 MeV) throughout the rest of the passage. The point of using unit symbols where they are frequently encountered isn’t to save ourselves time when writing, but to save our readers mental energy and make the text faster to read. 
Authors should also consider allowing greater repetition of a spelled-out unit of measure (avoiding unit symbols) if it is short or monosyllabic, like meter, volt, watt, hertz, lux, year, hour, and joule. A paragraph containing three or four instances similar to the following… A total surge energy of 30 joules caused the expected damage. is more natural to read than… A total surge energy of 30 J caused the expected damage. …without undue tediousness. Avoid Click To Learn & Return©™®. Don’t require readers to click links to understand the material at hand It is good to avoid the overused Wikipedia practice of “We have links, so let’s embrace Click To Learn & Return©™®.” If a noun, term, or phrase is specialty lingo within an art, we shouldn’t use it in an “Oh… didn’t-cha know?”-fashion and expect the mere presence of a link to be sufficiently informative without the courtesy of also providing a simple explanatory parenthetical. Take, for instance, our article on the Space Needle; verbiage like this: An imitation carillon was installed in the Space Needle. An imitation carillon (a multi-bell musical device) was installed in the Space Needle. As an alternative to parentheticals, Click to Learn & Return can be avoided fluidly and naturally by explaining the concept first and then adding a clause to introduce the specialty lingo, like the following example with the term “manifold”: By combining space and time into a single manifold called Minkowski space in 1908, physicists significantly simplified a [Minkowski] fused time and the three spatial dimensions of space into a single four-dimensional continuum now known as Minkowski space, what mathematicians refer to as a type of 4‑dimensional manifold. Italicizing to set off lingo-speak is an option to employ depending upon nuances such as whether it is a compound noun or the lingo-speak is especially obscure. In all cases when doing technical writing, one carefully considers the sophistication level of the target readership. Clearly, if the vast majority of the target readership can be expected to be familiar with buzzwords common to an art, we wouldn’t speak down to the audience by pulling out the Ernie & Bert puppets to explain the obvious. For instance, an article on a particular kind of musical instrument requires no explanatory verbiage for terms like frequency or musical note; the link alone is sufficient. Greg L (talk) 18:27, 23 June 2017 (UTC) ### Protest I want to express my strong objection, firstly to placing the above essay-like variations on the theme of WP:MOS on the talk page of the article entitled Spacetime, because it has no relation to its specific content, and secondly to the content of this comment, trying to prescribe a style, partly inappropriate for an encyclopedia, and not being fully covered by the established policies of WP. Purgy (talk) 13:10, 26 June 2017 (UTC) What part of “The following are my thoughts”… did you find confusing, Purgy? Methinks thou dost protest too much. If you have a good idea on how we can improve the article, ample digital whitespace is available for constructive help, such as writing your own WP:Essay; they exist for a reason. Greg L (talk) 20:50, 27 June 2017 (UTC) The problem is not me, in finding things confusing, but you, in repeatedly(!) violating established WP policies, e.g. in editing my intended and coherent layout. I will not elaborate on your efforts to commandeer this whole article, including this talk page, even when being rather clueless on any stringent background of the topic. 
I cite: "We don’t need any of that here." Meanwhile, others try to discuss the matter. Roma locuta? Purgy (talk) 06:15, 28 June 2017 (UTC) I see. As regards my above WP:essay on how to keep things as easy as possible to read on an already exceedingly technical article, you are just wrong; an essay on such a topic is perfectly OK. I suggest you read the link I provided. As regards me and others “violating established WP policies”, where I edited text of yours you felt was “intended” and “coherent”, try not to take the collaborative writing process so personally; it is inherent to the Wikipedia experience for others to discuss and change something you wrote. With specific regard to that edit you seem to be chaffing about, User:Stigmatella aurantiaca responded to you personally as follows (∆ edit here): You've made some good suggestions in the past, but English does not appear to be your native language. So I've taken your suggested changes here where you and User:Greg L can work on them together. Hope you don't mind too much?. On a final note, you made a comment here, (∆ edit) where you wrote… I did not quit cooperation on it for the behaviour of one single disruptive IP-editor, but I explicitly declared four contributors as causing me troubles in cooperating. Going forward, collaboration usually works better if you don’t bitterly and explicitly complain about the behavior of virtually the entire population of editors who are active on this article. Regards. Greg L (talk) 19:01, 28 June 2017 (UTC) Evidently, I lack the required professionalism to formulate text in a way that is sufficiently accessible to you in its fundamental details (layout vs content, placement vs existence of essays, talk pages vs articles, ... ). Therefore, I walk away from the carcass of this discussion like from beating a dead horse. Purgy (talk) 06:01, 29 June 2017 (UTC) I think it is important to keep in mind and coordinate on who our target readership is as we go forward on this article. Material I write for guys with slide rules is very different from that for managers. Furthermore, content written for a diverse crowd (like a general interest encyclopedia) must have a much greater range to its difficulty level; the initial portions must be informative and interesting for beginners while allowing advanced readers to wade further to material they find informative. It’s also crucial to consider the likely (middle of the bell curve) level of familiarity the target readership has with the lingo and background of a given art. Someone who is visiting our Trombone article is usually already familiar with musical concepts like “pitch” and “notes.” Because Wikipedia is a general interest encyclopedia directed to a diverse readership, here are some principles I believe we should keep in mind as it relates to an exceedingly technical subject like spacetime: 1. The average age of a Wikipedia reader is 36 years, ranging from 14 to 92. 2. Wikipedia's pithy and succinct ledes (the opening paragraphs before the index) are one of the features that makes Wikipedia so popular. Readers driving to the library and digging out an Encyclopedia Britannica can be expected to wade a long ways into the article whereas an Internet-based readership has a large portion of an attention deficit crowd who wade no further than our ledes. 3. Ledes should clearly explain the subject so that the reader is prepared for the greater level of detail that follows. 
Ledes should establish significance and be written in a way that makes readers want to know more. Ledes should answer two questions for the nonspecialist reader: “What is the subject?” and “Why is this subject distinctive and notable?” 4. Because our readership is so diverse, it is important that not only should the entire article start out simply (made as accessible as possible), but—particularly in the early sections—individual sections should be as accessible as practicable in the early paragraphs, carefully transitioning to increasing difficulty within the section. 5. Keep it simple. We are writing only for non-experts; true experts in any given subject matter don’t come to Wikipedia to learn anything. I’ve corresponded with Ph.D.s who published papers I had cited in various Wikipedia articles and quickly found they had zero interest in Wikipedia; in fact, they were incredulous that someone would waste their time contributing to a project where 16-year-old kids could revert your material. With two-thirds of my patents in PEM fuel cells, I can attest that a long time ago, I once scrolled through our article on that subject… just to see how long it was; I didn’t read a word of it and have zero interest in it. 6. As regards the Spacetime article, we can assume that the middle-of-the-bell-curve reader (the mode, mean, and median) is already familiar with Einstein and the basics of his Theory of Special Relativity, and has come here to understand what is distinctive about spacetime. Greg L (talk) 18:52, 8 July 2017 (UTC) ### Dissenting opinion I do not want to let the above stand as if its content as a whole were based on general consensus. I do neither agree to the semi-educated, unsourced pseudo-statistics, nor to the claims about the "non-expert, potential readers" of Wikipedia. I consider this to be just another (text-) wall, intended to prohibit contributions from outside the editor's narrow scenery. Purgy (talk) 07:52, 9 July 2017 (UTC) Hmmm. With language like “semi-educated”, you’re obviously just trying to ‘mix it up’ and be nothing but provocative. Before trying to be inflammatory on the project, I suggest you think twice before weighing in when you don’t have any idea what you’re talking about. I linked the statistic about the age of our readership; it’s from Mani Pande of the Wikimedia Foundation. You can complain to him about his “pseudo-statistics.” Tell him Purgy knows better. As for what ledes are supposed to accomplish, that came right out of Wikipedia:Writing better articles. And if you think published Ph.D. experts come to Wikipedia to read up on the latest & greatest truths of their art, you go find one who does that. Finally, my suggestion that we ‘write for the target readership’ is common-sense stuff that comes from Technical Writing 101. If you want to be constructive to the project, you can consider User:Stigmatella aurantiaca’s offer (∆ edit here): You've made some good suggestions in the past, but English does not appear to be your native language. So I've taken your suggested changes here where you and User:Greg L can work on them together. Hope you don't mind too much?. That was a perfectly reasonable and generous offer for us to devote extra amounts of our all-volunteer time to assist you in contributing. But all we received from you was yet another of your protests where you complained (∆ edit here) that you have problems with four editors here, which amounts to pretty much every single contributor who was active on this page! 
The other passengers on a plane tend to be disinclined to see your point of view when you’re trying to open the cabin door mid-flight while complaining that everyone is out to get you. Your persistent sniping here is bordering on being nothing but disruptive. I suggest you take this English idiom under advisement: you best let it go. You wrote earlier that you had done so (“Therefore, I walk away from the carcass of this discussion like from beating a dead horse”) but temptation seems to have gotten the better of you. Greg L (talk) 15:43, 9 July 2017 (UTC) ## Genesis of the spacetime concept Schlafly is correct in his latest edit. The history of special relativity and the history of spacetime take a branch-and-merge somewhere around Lorentz and Poincare. Minkowski actually began work on his ideas about spacetime before 1905, and was stunned by Einstein's 1905 publication, since it expressed various conclusions that Minkowski had already (privately) arrived at concerning such things as the meaning of local time etc. Although he felt scooped, Minkowski never attempted to claim priority, and was always generous in giving Einstein credit. He did, however, think that Einstein's kinematic presentation was klunky. I'm working to re-introduce some of the historical background that Geoffrey deleted since he wished to give more emphasis to spacetime as opposed to special relativity. This weekend has been busy, however... Stigmatella aurantiaca (talk) 12:38, 9 July 2017 (UTC) I saw the edit and assumed it was correct. Thanks for confirming. Greg L (talk) 15:33, 9 July 2017 (UTC) P.S. Historical tidbits such as what you wrote above about Minkowski easily passes a simple Technical Writing 101 grin test as being informative and interesting for the target readership. That clearly belongs in the historical section and should be restored. The previous version of the historical section may have been a bit long, but if someone is getting tired of reading that section, they can skip it. Geoffrey’s edit, which pruned the History section down to its roots in order to “give more emphasis to spacetime as opposed to special relativity,” makes zero sense to me and is far from an improvement. Almost everyone who is interested in reading the History section will already have a general familiarity (or want to be familiar) with the basic principles of special relativity (maybe general relativity too), and will be interested in learning how the various theories paralleled each other and pollinated each other. That Minkowski had been one of the profs for a 16-year-old Einstein was an interesting tidbit I added to the lede specifically because it was germane to the objective for all of Wikipedia's ledes: Ledes should establish significance and be written in a way that makes readers want to know more. Going forward, the historical relationship of how Minkowski fits relative to other significant Lorentz/Einstein/spacetime events begs to be fleshed out in an interesting way in the History section. My most general advise would be to restore roughly two-thirds of what had previously been there, and arrange it in a semi-chronological fashion (you’ve seen movies and documentaries adopt that kind of flow presentation) so that important “elevator ride” nuggets like what you wrote here on this thread appear early in the section, and really ‘detaily’ stuff that is harder to remember appears later. 
Greg L (talk) 22:50, 9 July 2017 (UTC) PPS: I'd be happy to collaborate in detail on a History section within a green-div here on this page. That is, unless you’ve had an epiphany on a sweet way to do it. Greg L (talk) 00:19, 10 July 2017 (UTC) If you could work on this, I'd much appreciate it. My wife fell down and is currently nearly bedridden, so the amount of time that I can spend on this subject is somewhat limited, between trips to the doctor, taking her to physical therapy, going to my regular job, etc. I've been somewhat overwhelmed these last few weeks. Disentangling the numerous threads of "who believed what and when" is very complex. The reason, of course, is that many researchers were hot on the trail of what was ultimately to be special relativity, and most historians agree that if Einstein hadn't been first, someone else would have arrived at the same conclusions within a very few years at most. Two highly useful sources on Minkowski's contributions are One should be very careful of how one interpret's Minkowski's statement to Born that I had quoted to you before, because (1) local time is only one of a number of issues dealt with by Einstein in his 1905 paper, and (2) Poincare had already made statements to that effect, so I'm puzzled because, so far as I can tell (beware of wp:OR on my part!) Minkowski should have stated that he had been scooped by Poincare on this point, not Einstein(!!!) Then there is the matter of Sommerfeld's redaction of Minkowski's 1907 article for its posthumous 1915 publication, which muddies the waters a lot. In addition, Minkowski continued to use the word "ether" until shortly before he died. There are contradictory viewpoints on how much Minkowski truly understood of the import of Einstein's contributions, etc etc. Stigmatella aurantiaca (talk) 03:12, 10 July 2017 (UTC) I’m very sorry to hear about your wife. I hope she recovers quickly. I could help you with the French-like presentation of the meal, but on material of this nature, I’d screw up (big time) if I tried my hand at an original recipe. This is supposed to be a hobby that’s enjoyable for you. Anything and everything here can wait until real life for you gets a tad monotonous and you actually look forward to getting back into the kitchen to mix up a little something special. If someone else here doesn’t step in and do some magic with the History section, what I might do is look over the previous version of it and see if I can jigger it. Greg L (talk) 04:59, 10 July 2017 (UTC) It is a mistake to have so much on Einstein in the introductory sections. He had very little to do with the concept of spacetime. There are separate articles on the history of relativity. Some of what is there is not quite right. Transformations mixing space and time predate the 20th century and Einstein. So did the constant speed of light. Saying that Einstein's 1905 paper was a breakthrough over Lorentz and Poincare is debatable, and not really relevant to spacetime. It would be more relevant to say whether Minkowski's work was a breakthrough, but there is no need to insert historical opinions here anyway. Roger (talk) 05:03, 12 July 2017 (UTC) ## Draft of revised history section for discussion There is very little solid evidence indicating the path whereby Minkowski developed his geometric concept of spacetime. 
Much of what is written is based on arcane deconstruction of what was crossed out in drafts of Minkowski's lectures, extensive speculation on why Minkowski had omitted mentioning Poincare in his 1908 lecture, etc. etc. I've tried to avoid such speculative analysis. Stigmatella aurantiaca (talk) 09:20, 11 July 2017 (UTC) Figure 1-2. Michelson and Morley expected that motion through the aether would cause a differential phase shift between light traversing the two arms of their apparatus. The most logical explanation of their negative result, aether dragging, was in conflict with the observation of stellar aberration. {Text struck through to denote that it is no longer in service. Greg L (talk) 21:52, 18 July 2017 (UTC) By the mid-1800s, various experiments such as the observation of the Arago spot and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to corpuscular theory.[1] Waves implied the existence of a medium which waved, but attempts to measure the properties of the hypothetical luminiferous aether implied by these experiments provided contradictory results. For example, the Fizeau experiment of 1851 demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction. Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light.[2] The famous Michelson–Morley experiment of 1887 (Fig. 1‑2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration.[3] George Francis FitzGerald in 1889 and Hendrik Lorentz in 1892 independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson-Morley experiment. (No length changes occur in directions transverse to the direction of motion.) By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein were to derive later (i.e. the Lorentz transform), but with a fundamentally different interpretation. As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter.[4]:163–174 Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena. However, Lorentz considered local time to be only an auxiliary mathematical tool, a trick as it were, to simplify the transformation from one system into another. Other physicists and mathematicians at the turn of the century came close to arriving at what is currently known as spacetime. 
Einstein himself noted, that with so many people unraveling separate pieces of the puzzle, "the special theory of relativity, if we regard its development in retrospect, was ripe for discovery in 1905."[5] An important example is Henri Poincaré,[6][7] who in 1898 argued that the simultaneity of two events is a matter of convenience. In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by proposing an explicitly operational definition of clock synchronization assuming constant light speed. In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity, and in 1905/1906[8] he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity. While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional space-time by defining various four vectors, namely four-position, four-velocity, and four-force.[9][10] He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world".[10] Furthermore, even as late as 1909, Poincaré continued to believe in the dynamical interpretation of the Lorentz transform.[4]:163–174. For these and other reasons, most historians of science argue that Poincaré did not invent what is now called special relativity.[7][4] In 1905, Einstein introduced special relativity (even though without using the techniques of the spacetime formalism) in its modern understanding as a theory of space and time.[7][4] While his results are mathematically equivalent to those of Lorentz and Poincaré, it was Einstein who showed that the Lorentz transformations are not the result of interactions between matter and aether, but rather concern the nature of space and time itself. Einstein performed his analyses in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. He obtained all of his results by recognizing that the entire theory can be built upon two postulates: The principle of relativity and the principle of the constancy of light speed. In addition, Einstein in 1905 superseded previous attempts of an electromagnetic mass-energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass-energy equivalence, Einstein showed, in addition, that the gravitational mass of a body is proportional to its energy content, which was one of early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime,[11]:219 in the further development of general relativity Einstein fully incorporated the spacetime formalism. When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had made with Minkowski, seeking to be Minkowski's student/collaborator: "[…] I went to Cologne, met Minkowski and heard his celebrated lecture 'Space and Time' delivered on 2 September 1908. 
[…] He told me later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently but did not publish them because he wished first to work out the mathematical structure in all its splendor. He never made a priority claim and always gave Einstein his full share in the great discovery."[12] Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincare et al. However, it is not at all clear when Minkowski began to formulate the geometric formulation of special relativity that was to bear his name, or to which extent he was influenced by Poincaré's four-dimensional interpretation of the Lorentz transformation. Nor is it clear if he ever fully appreciated Einstein's critical contribution to the understanding of the Lorentz transformations, thinking of Einstein's work as being an extension of Lorentz's work.[13] Figure 1-3. Hand-drawn transparency presented by Minkowski in his 1908 Raum und Zeit lecture. Minkowski introduced his geometric interpretation of spacetime to the public on November 5, 1907 in a lecture to the Göttingen Mathematical society with the title, The Relativity Principle (Das Relativitatsprinzip). In the original version of this lecture, Minkowski continued to use such obsolescent terms as the ether, but the posthumous publication of this lecture in the Annalen der Physik (1915) was edited by Sommerfeld to remove this term. Sommerfeld also edited the published form of this lecture to revise Minkowski's judgement of Einstein from being a mere clarifier of the principle of relativity, to being its chief expositor.[12] On December 21, 1907, Minkowski spoke again to the Göttingen scientific society, and on September 21, 1908, Minkowski presented his famous talk, Space and Time (Raum und Zeit),[14] to the German Society of Scientists and Physicians.[note 1] The opening words of Space and Time include Minkowski's famous statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1‑3), and included a remarkable demonstration that the concept of the invariant interval (discussed shortly), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.[note 2] Einstein, for his part, was initially dismissive of the Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital, and in 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity.[4]:151–152 Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime. Brief summary • To mid-1800s scientists, the wave nature of light implied a medium that waved. 
Much research was directed to elucidate the properties of this hypothetical medium, called the "luminiferous aether". Experiments provided contradictory results. For example, stellar aberration implied no coupling between matter and the aether, while the Michelson–Morley experiment demanded complete coupling between matter and the aether. • FitzGerald and Lorentz independently proposed the length contraction hypothesis, a desperate ad hoc proposal that particles of matter, when traveling through the aether, are physically compressed in their direction of travel. • Henri Poincaré was to come closer than any other of Einstein's predecessors to arriving at what is currently known as the special theory of relativity. • "The special theory of relativity ... was ripe for discovery in 1905." • Einstein's theory of special relativity (1905), which was based on kinematics and a careful examination of the meaning of measurement, was the first to completely explain the experimental difficulties associated with measurements of light. It represented not merely a theory of electrodynamics, but a fundamental re-conception of the nature of space and time. • Having been scooped by Einstein, Hermann Minkowski spent several years developing his own interpretation of relativity. Between 1907 to 1908, he presented his geometric interpretation of special relativity, which has come to be known as Minkowski space, or spacetime. References 1. ^ Hughes, Stefan (2013). Catchers of the Light: Catching Space: Origins, Lunar, Solar, Solar System and Deep Space. Paphos, Cyprus: ArtDeCiel Publishing. pp. 202–233. ISBN 9781467579926. Retrieved 7 April 2017. 2. ^ Stachel, John (2005). "Fresnel’s (Dragging) Coefficient as a Challenge to 19th Century Optics of Moving Bodies.". In Kox, A. J.; Eisenstaedt, Jean. The Universe of General Relativity. Boston: Birkhäuser. pp. 1–13. ISBN 081764380X. Archived from the original (PDF) on 2017-04-13. 3. ^ French, A.P. (1968). Special Relativity. Boca Raton, Florida: CRC Press. pp. 35–60. ISBN 0748764224. 4. Pais, Abraham (1982). ""Subtle is the Lord-- ": The Science and the Life of Albert Einstein (11th ed.). Oxford: Oxford University Press. ISBN 019853907X. 5. ^ Born, Max (1956). Physics in My Generation. London & New York: Pergamon Press. p. 194. Retrieved 10 July 2017. 6. ^ Darrigol, O. (2005), "The Genesis of the theory of relativity" (PDF), Séminaire Poincaré, 1: 1–22, Bibcode:2006eins.book....1D, ISBN 978-3-7643-7435-8, doi:10.1007/3-7643-7436-5_1 7. ^ a b c Cite error: The named reference Miller was invoked but never defined (see the help page). 8. ^ Poincare, Henri (1906). "On the Dynamics of the Electron (Sur la dynamique de l’électron)". Rendiconti del Circolo matematico di Palermo. 21: 129–176. Retrieved 15 July 2017. 9. ^ Zahar, Elie (1989) [1983], "Poincaré's Independent Discovery of the relativity principle", Einstein's Revolution: A Study in Heuristic, Chicago: Open Court Publishing Company, ISBN 0-8126-9067-2 10. ^ a b Walter, Scott A. (2007). "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910". In Renn, Jürgen; Schemmel, Matthias. The Genesis of General Relativity, Volume 3. Berlin: Springer. pp. 193–252. Archived from the original on July 14, 2017. Retrieved 15 July 2017. 11. ^ Schutz, Bernard (2004). Gravity from the Ground Up: An Introductory Guide to Gravity and General Relativity (Reprint ed.). Cambridge: Cambridge University Press. ISBN 0521455065. Retrieved 24 May 2017. 12. ^ a b Weinstein, Galina. 
"Max Born, Albert Einstein and Hermann Minkowski's Space-Time Formalism of Special Relativity". arXiv. Cornell University Library. Retrieved 11 July 2017. 13. ^ Galison, Peter Louis (1979). "Minkowski's space-time: From visual thinking to the absolute world". Historical Studies in the Physical Sciences. 10: 85–121. doi:10.2307/27757388. Retrieved 11 July 2017. 14. ^ Minkowski, Hermann (1909). "Raum und Zeit" [Space and Time]. Jahresberichte der Deutschen Mathematiker-Vereinigung. B.G. Teubner: 1–14. References 1. ^ The geometry of Minkowski spacetime is closely connected to certain variants of sphere geometry (such as Lie sphere geometry or Conformal geometry) developed in the 19th century. For instance, the Lorentz transformation is a special case of spherical wave transformations. In particular, as pointed out by Poincaré (1912) and others, it is simply isomorphic to the Laguerre group which transforms spheres into spheres and planes into planes. The isomorphism between the Möbius group (which is isomorphic to the group of isometries in hyperbolic R3) and the Lorentz group is also well known. 2. ^ (In the following, the group G is the Galilean group and the group Gc the Lorentz group.) "With respect to this it is clear that the group Gc in the limit for c = ∞, i.e. as group G, exactly becomes the full group belonging to Newtonian Mechanics. In this state of affairs, and since Gc is mathematically more intelligible than G, a mathematician may, by a free play of imagination, hit upon the thought that natural phenomena actually possess an invariance, not for the group G, but rather for a group Gc, where c is definitely finite, and only exceedingly large using the ordinary measuring units." Minkowski (1909), op cit. ### Discussion of Green-div On a side note, it is clear that Minkowski way gritting his teeth, perhaps on part of all of the mathematical community. (It wasn't the first or last time such things happened.) From Space and time]: With respect to this it is clear that the group ${\displaystyle \displaystyle G_{c}}$ in the limit for ${\displaystyle \displaystyle c=\infty }$, i.e. as group ${\displaystyle \displaystyle G_{\infty }}$, exactly becomes the full group belonging to Newtonian Mechanics. In this state of affairs, and since ${\displaystyle \displaystyle G_{c}}$ is mathematically more intelligible than ${\displaystyle \displaystyle G_{\infty }}$, a mathematician may, by a free play of imagination, hit upon the thought that natural phenomena actually possess an invariance, not for the group ${\displaystyle \displaystyle G_{\infty }}$, but rather for a group ${\displaystyle \displaystyle G_{c}}$, where c is definitely finite, and only exceedingly large using the ordinary measuring units. Such a preconception would have been an extraordinary triumph for pure mathematics. Now, although mathematics only shows irony at this place, still the satisfaction remains for it, that thanks to its fortunate antecedents by its senses sharpened in free remote-view, it is instantly able to grasp the deep consequences of such a modification of our view of nature. (Extra emphasis mine.) (The group ${\displaystyle \displaystyle G_{\infty }}$ is the (homogeneous) Galilean group and the group ${\displaystyle \displaystyle G_{c}}$ the Lorentz group.) YohanN7 (talk) 10:34, 11 July 2017 (UTC) Then the history of GR is, from a competitive perspective (who comes up with it first?), even more interesting. E. 
had more or less promised a new theory for a lecture, but with months, weeks, and then days remaining until the scheduled lecture, he had no theory. And — Hilbert was on the chase (tipped off by E himself a couple of years earlier in a letter). YohanN7 (talk) 10:44, 11 July 2017 (UTC) @YohanN7: I confess that I had only skimmed Minkowski's lecture, relying mostly on secondary sources, but that is a great quote! The bulk of it needs to go into a note, of course, but I will definitely find a place to reference it in the text. I haven't tested [itex] \displaystyle on Microsoft Edge, only [itex] but I may need to change it to HTML if MediaWiki goofs up in the small font. Stigmatella aurantiaca (talk) 14:27, 11 July 2017 (UTC) Some suggestions: I’ve got some ‘real life’ to attend to, but a speed-read of this seemed very nicely written and interesting… quite enjoyable. I’ll be back to this later this evening. Greg L (talk) 17:11, 11 July 2017 (UTC) @D.H: Thanks for the references! I had the impression that something very strange was going on between Minkowski and Poincaré, but I didn't have a diverse enough selection of sources to feel comfortable about making any sort of summary statement. Some of the sources that I had available to me had a very opinionated point of view, far from neutral. And the relationships that you point out between the Lorentz transformation and spherical wave transformation, and with spherical geometry in general, are also quite interesting. Stigmatella aurantiaca (talk) 18:20, 11 July 2017 (UTC) I must leave it up to you guys to ensure the material is factually correct, but I thoroughly enjoyed reading that. I think this section is world-class informative and very useful. I made three relatively minor edits (∆ here) for clarity. There is only one sentence I find confusing or ambiguous: The spacetime of special relativity has since come to be known as Minkowski spacetime. Apparently, there are other types of spacetime whereby Minkowski's is but one type (the “spacetime of special relativity”). This seems to be an equivocation out of the blue. Perhaps the fine distinction is buttressing (prophylactic textus fineprintus) in anticipation of editorial drive-by shootings by proponents of other types of spacetime with different dimensions… I don’t know. But if the entire History section is discussing nothing but the spacetime of special relativity, then the distinction should be explained with either a preceding caveat (e.g. Whereas there are alternative theories where space has other than three dimensions…), or there should be a pithy parenthetical explaining the prophylactic textus fineprintus. Greg L (talk) 19:33, 11 July 2017 (UTC) Well, for starters there is the curved spacetime of general relativity. Hmmm... Do I want to discuss that in the Introduction??? Stigmatella aurantiaca (talk) 20:35, 11 July 2017 (UTC) Even if other types of spacetime can be covered elsewhere, such as in the Introduction, important concepts on arcane technical matters can benefit from some repetition. I think such an abrupt hyperfine distinction at the end of the History section should have its foundation smoothly established; something like this(?): Though there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime. I’ll add it up in the green-div as a straw man. 
Greg L (talk) 21:27, 11 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── The green-div receives two thumbs up from me. Whenever you guys are done with the factual stuff and citations, I think it’s ready for primetime. Greg L (talk) 21:36, 11 July 2017 (UTC)
### Arbitrary section break #1
@YohanN7: Please check my wording. Thanks! Stigmatella aurantiaca (talk) 06:02, 12 July 2017 (UTC) Minkowski's statement there doesn't make much rigorous sense. In order to frame it rigorously, I believe one has to resort to group contraction (see example two there). But this doesn't matter much. I also don't remember (and don't have the time to find out right now) whether M. there is including or excluding translations. Just strike out the parenthetical "homogeneous" and our naming of the groups is correct (by abuse of terminology) whether translations are included or not. It can always be made more precise later. YohanN7 (talk) 08:17, 12 July 2017 (UTC) Done. Stigmatella aurantiaca (talk) 09:45, 12 July 2017 (UTC) @D.H: Added note to text. Haven't gotten to your references yet. Stigmatella aurantiaca (talk) 06:25, 12 July 2017 (UTC) This history is pretty good, but it suffers from some unnecessary opinions. This article is on spacetime, so we do not need opinions on the credit for special relativity. For that, refer the reader to Relativity priority dispute. "proposed that material bodies traveling through the fixed aether were physically affected" - more precisely, they proposed that the lengths of material bodies traveling through the fixed aether were affected. "but with a fundamentally different interpretation" - I would strike this, as Einstein denied that he had a different interpretation. "Lorentz considered local time to be only an auxiliary mathematical tool" - this is disputed, and is an unnecessary slam on Lorentz. "continued to believe in the dynamical interpretation" - this is dubious, and should be omitted. You are documenting the history, not blaming people for beliefs. Since the article is on spacetime, it is much more important to point out that Poincare was unequivocally the first to publish a description of spacetime as a 4D space, with metric and symmetry group. Furthermore, Minkowski's first paper references Poincare's spacetime paper. Those are facts. Maybe Minkowski got spacetime from Poincare or maybe figured it out for himself, he doesn't say. There is no need to speculate. Roger (talk) 06:37, 12 July 2017 (UTC) @Roger I disagree with your disagreements with the following statements • proposed that material bodies traveling through the fixed aether were physically affected • but with a fundamentally different interpretation Followers of Lorentz fully expected, in such experiments as the Trouton–Noble experiment, the Trouton–Rankine experiment, the Experiments of Rayleigh and Brace, and so forth, to be able to observe secondary effects due to compressive strains and so forth, whereas SR makes clear from the very start why such experiments would inevitably fail to achieve positive results. While it is true that, after such experiments as the above provided negative results, Lorentz was able to revise his theory so as to explain the results as being logical consequences of his theory, he did not predict their negative results ahead of time. • Lorentz considered local time to be only an auxiliary mathematical tool We have Lorentz's own words on this point.
"If I had to write the last chapter now, I should certainly have given a more prominent place to Einstein's theory of relativity by which the theory of electromagnetic phenomena in moving systems gains a simplicity that I had not been able to attain. The chief cause of my failure was my clinging to the idea that the variable t only can be considered as the true time, and that my local time t' must be regarded as no more than an auxiliary mathematical quantity." Lorentz in a note that he added to the second edition of his "Theory of Electrons" (1916). • continued to believe in the dynamical interpretation It's late and I need to get to bed. I'll get to the matter of what Poincare believed later. Stigmatella aurantiaca (talk) 07:31, 12 July 2017 (UTC) Your counterclaim goes strictly against the views of the great majority of historians of science. For instance, Scott Walter, in Poincaré on clocks in motion writes, "The concept of time deformation employed by Poincaré in Göttingen and thereafter was quite distinct from that of Einstein and Minkowski. For the latter theorists, time dilation and length contraction were kinematic effects or consequences of the four-dimensional (3+1) metric of spacetime, respectively. According to Poincaré, the velocity dependence of measured lengths and durations was best understood as a result of compensating deformations of meter sticks and timekeepers. In the wake of Minkowski spacetime, the absolute space and time of Newtonian mechanics took on a conventional nature for Poincaré, for whom the concept of Galilei spacetime had not lived out its utility for science, and would not do so for some time." Stigmatella aurantiaca (talk) 12:00, 12 July 2017 (UTC) In addition to the 1916 paper cited by Stigmatella aurantiaca, there are additional occasions at which Lorentz described local time as an auxiliary variable. He did it in his 1914 paper on Poincaré ("auxiliary quantities whose introduction is only a mathematical artifice"), or in this 1928 paper ("heuristic working hypothesis"). Also Einstein remarked in this 1907 paper: "One had only to realize that an auxiliary quantity introduced by H. A. Lorentz and named by him "local time" could be defined as "time" in general. --D.H (talk) 14:44, 12 July 2017 (UTC) If you want to say that followers of Lorentz had an expectation in 1900 that Einstein did not have in 1905, then that would be more accurate. But still not correct, as Poincare was a follower of Lorentz without that expectation. And this is of questionable relevance to a history of spacetime. If you are trying to make the point that later work had the benefit of knowledge gained by later experiments, isn't that obvious? You could say that Lorentz credited Einstein with achieving a simplicity, but the explanation given is inaccurate, misleading, and inappropriate. The above text implies that Lorentz was just using a mathematical trick/transformation with no physical significance. That is not true, as Lorentz was using that transformation to explain Michelson-Morley. For a history of spacetime, it is better to just explain what Lorentz did, and not give dubious explanations for why he did not achieve the simplicity of later scholars. You refer me to criticism of Poincare's conventionalist philosophy. That might be interesting in an article on the history of the philosophy of physics, but in an article on the history of spacetime, the pertinent fact is that Poincare was the first to publish a mathematically and physically correct account of spacetime. 
Roger (talk) 16:14, 12 July 2017 (UTC) This is again a side note, not intended to reply to anything in particular. One sharp point that is tacitly, but not explicitly, there is that foundational physics was in a state of utter confusion until Einstein's publication, whether practiced by mathematicians or physicists. It is probably not true that partial physical results (and this includes physical interpretation of mathematical formulas) were obtained before E. It is rather true that correct formulas, and correct observations about e.g. the Maxwell equations were there, but based either on incorrect physical assumptions or no physical assumptions. I have read the above very sharply (and reliably) formulated somewhere, just can't remember exactly where. I'll try to. The point is this: It doesn't matter much what Minkowski, Poincaré et al. actually thought before Einstein, because what they thought was physically wrong, or physically void. (What they thought after E. needs to be backed up with references of course, see next paragraph. Not all saw the light right away.) It is also true (by the same source, but surely others as well) that SR only enjoyed gradual acceptance. So, when referring to 1906-20 papers in turn referring to pre-1905 papers, though it is interesting that bits and pieces (without much physics) of SR were there before Einstein, some (most!) of these bits and pieces were purely coincidental. Interestingly, since Sommerfeld is involved, he himself had the corresponding piece of luck with his formula for the hydrogen energy spectrum. In summary, a description of the historical emergence of SR should, in my opinion, not include even the possibility that it emerged as a collaborative effort, even though E would perhaps not have discovered SR without the efforts of his predecessors and competitors. Am I obscure? An excellent summary. I had never really thought things through as well as you put them, but I agree (almost) completely! Stigmatella aurantiaca (talk) 14:36, 12 July 2017 (UTC) Regarding the "collaborative effort", here is an Einstein quote about Lorentz: "The enormous significance of his work consisted therein, that it forms the basis for the theory of atoms and for the general and special theories of relativity. The special theory was a more detailed exposé of those concepts which are found in Lorentz's research of 1895." (Pais, Subtle is the Lord, p. 169). --D.H (talk) 14:44, 12 July 2017 (UTC) Do you have an original Lorentz reference? According to Scientific publications of H.A. Lorentz, there are no published works from 1895. I am curious whether the 1895 results would stand up to modern day scrutiny. This, of course, I am probably not able to decide by myself, but I'd still like to read it. YohanN7 (talk) 07:14, 13 July 2017 (UTC) Check the complete bibliography, year "1895b". The English translation is available at Here is another quote of Einstein (as quoted in Born (1955), Physics in my Generation, p. 101 and Keswani (1965), Origin and Concept of Relativity, p. 33, see also history of special relativity for sources): "There is no doubt, that the special theory of relativity, if we regard its development in retrospect, was ripe for discovery in 1905. Lorentz had already recognized that the transformations named after him are essential for the analysis of Maxwell's equations, and Poincaré deepened this insight still further.
Concerning myself, I knew only Lorentz's important work of 1895 – "La theorie electromagnetique de MAXWELL" and "Versuch einer Theorie der elektrischen und optischen Erscheinungen in bewegten Körpern" – but not Lorentz's later work, nor the consecutive investigations by Poincaré. In this sense my work of 1905 was independent. [..] The new feature of it was the realization of the fact that the bearing of the Lorentz transformation transcended its connection with Maxwell's equations and was concerned with the nature of space and time in general. A further new result was that the "Lorentz invariance" is a general condition for any physical theory. This was for me of particular importance because I had already previously found that Maxwell's theory did not account for the micro-structure of radiation and could therefore have no general validity.". Note that the first paper cited by Einstein was actually published in 1892. Btw, also Poincaré clearly noted that the Lorentz transformations applies not only to Maxwell's equations. --D.H (talk) 09:09, 13 July 2017 (UTC) Thanks. I look forward to reading. Your "btw" is definitely interesting: "Btw, also Poincaré clearly noted that the Lorentz transformations applies not only to Maxwell's equations.". SR is no less and not much more than Lorentz invariance. If "not only Maxwell's equations" were replaced by "all equations" (including those describing mechanics), then full SR emerges. YohanN7 (talk) 09:27, 13 July 2017 (UTC) Wow, that's a big paper... YohanN7 (talk) 09:30, 13 July 2017 (UTC) An Einstein quote: I do not recognize my theory since the mathematicians got hold of it. (Or something like that...) I'll try to locate a source. YohanN7 (talk) 13:46, 12 July 2017 (UTC) Found it: Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore. Quoted in P A Schilpp, Albert Einstein, Philosopher-Scientist (Evanston 1949). YohanN7 (talk) 13:56, 12 July 2017 (UTC) ...and an excellent quote, as well. I had actually been debating whether to include it or not (in a slightly different translation). I guess I should. Stigmatella aurantiaca (talk) 14:36, 12 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── I partially disagree with one opinion stated by Roger: This article is on spacetime, so we do not need opinions on the credit for special relativity. For that, refer the reader to Relativity priority dispute. Here's why: 1. Since Minkowski spacetime is irrevocably intertwined with the theory of special relativity, it would be impossible to have a useful History section here without touching factually—and in an interesting way—on SR and who was responsible for its development. 2. God does too play dice with the universe—at least when it comes to who gets notable public recognition for famous ideas. Turbo code vs. Gallager codes is just one example of the dust being blow off of long-forgotten papers written by damned smart people who only received recognition for their insights after they died. 3. As mere wikipedians, we can’t fix #2, above, by practicing the art of reading deeply into primary sources such as the physicists’ published papers. And we must be terribly cautious when we look up their famous quotes and start performing a rain dance looking for inspiration as to the significance and truthfulness of the quote. 
For the most part, we must look towards what the reliable sources are saying insofar as who deserves credit for what on special relativity. 4. Referring readers to Relativity priority dispute is no option at all for a fluid, interesting, and informative reading experience because A) doing so would merely be practicing Click to Learn & Return©™®, and B) that article sucks because it’s as interesting to read as the Warren Commission’s report on the assassination of president Kennedy. 5. To save time, we can merely refer to the Undisputed and well-known facts section of Relativity priority dispute as we craft our green-div here. 6. The lede of Relativity priority dispute states this: Subsequently, claims have been put forward about both theories, asserting that they were formulated, either wholly or in part, by others before Einstein. I note the operative word “Subsequently,” and submit that for the purposes of the History section here, it is beyond scope to be blowing the dust off of papers that failed to receive due recognition at the time. 7. We may fairly limit ourselves—to a large extent—to what was notable at the time (circa 1887–1916), and add a clause or parenthetical akin to while modern historians believe other individuals should be credited for the development of spacetime and special relativity-like formulas and theories. It’s not our concern to future-proof the History section of this article against the possibility that an archivist at Princeton might dig up some guy’s masters dissertation from 1882. Greg L (talk) 17:49, 12 July 2017 (UTC) I disagree with the above side note that says "foundational physics was in a state of utter confusion until Einstein's publication" and "SR only enjoyed gradual acceptance". This is an article about spacetime, not Einstein or SR. Historians mostly agree that Einstein's 1905 paper did not have a big impact at the time, but Minkowski's version of spacetime and SR enjoyed extremely rapid acceptance after Minkowski's famous 1908 paper. The latter point is worth making. Roger (talk) 18:13, 12 July 2017 (UTC) Roger. The article is about spacetime. But any proper treatment in a History section must necessarily discuss Einstein and a variety of other physicists and theories that impact the history of spacetime; ergo, my point #1, above. Greg L (talk) 19:02, 12 July 2017 (UTC) Fine, mention Einstein and SR all you want, but please don't skip essential facts in the history of spacetime. The above history skips the generally-accepted facts that (1) spacetime was first published by Poincare, based only on Lorentz, and (2) Minkowski got the idea from Poincare, expanded on it, and then wrote the essay that sold everyone on it. However important Einstein was to SR, he really was not important to the development of spacetime. See for example this quote from the Galison paper cited above (and behind a paywall, unfortunately). Roger (talk) 01:45, 13 July 2017 (UTC) In sum Minkowski still hoped for the completion of the Electromagnetic World Picture through relativity theory. Moreover, he saw his own work as completing the program of Lorentz, Einstein, Planck, and Poincare. Of these it was Poincare who most directly influenced the mathematics of Minkowski's space-time. 
As Minkowski acknowledges many times in "The Principle of Relativity," his concept of space-time owes a great deal to Poincare's work.35 [Galison,1979] ──────────────────────────────────────────────────────────────────────────────────────────────────── @Roger: I think all our objectives are aligned. The above green-div is the ultimate in collaborative writing tools whenever the going gets double-tough. Everyone can contribute to a green-div and tinker and fidget with much reduced pressure. Do you want Stigmatella aurantiaca to incorporate your suggested material into the green-div? Or do you see a fine place to shoehorn it in? Greg L (talk) 02:08, 13 July 2017 (UTC) I did not realize that the green box was open for editing. There was a call for comments, and I commented. So go ahead and let the editor do a revision. Roger (talk) 04:27, 13 July 2017 (UTC) OK. Thanks! Greg L (talk) 04:51, 13 July 2017 (UTC)
### Arbitrary section break #2
I think it is worth asking the question whether any of Minkowski, Poincaré, and Lorentz published mathematically correct formulas with the correct (according to modern formulations of SR) physical premises and interpretations before 1905. I have been taught that this is not the case. But references where someone quotes someone (that sometimes again quote someone else), and the fact that all decent scientists are eager to pay their dues to their predecessors (sometimes perhaps too generously), suggest otherwise. YohanN7 (talk) 07:07, 13 July 2017 (UTC) That is, would these papers stand up to present-day scrutiny by physicists and mathematicians as opposed to historians of science? Einstein's 1905 work does stand up to scrutiny (which is why SR has the status it has still today), though, perhaps, his and (many other physicists') mathematical presentations on occasion bring tears to the eyes of the mathematically inclined reader. YohanN7 (talk) 07:22, 13 July 2017 (UTC) @Schlafly: @Greg L: I'll do my best to come up with an appropriate addition that properly credits Poincare and his importance to Minkowski. I had thought it was obvious in the section as written that Einstein's role in the development of the spacetime of special relativity was minor at best. Give me a day or two. I've just finished purchasing Space and Time: Minkowski's papers on relativity ($5) so that I can read The Relativity Principle in translation, to supplement my use of secondary sources. My German is unfortunately very flaky. Stigmatella aurantiaca (talk) 08:15, 13 July 2017 (UTC) Spacetime of special relativity is fixed by special relativity, which is fixed by Einstein. Development of various equivalent formalisms can be credited to others, but, please, don't say that "Einstein's role in the development of the spacetime of special relativity was minor at best". YohanN7 (talk) 08:52, 13 July 2017 (UTC) I'll find a more accurate manner of expressing what I was trying to say. But first I need to do more research on Poincare. Stigmatella aurantiaca (talk) 10:10, 13 July 2017 (UTC) @YohanN7: When it comes to part 2 of Einstein's 1905 paper (what Einstein considered to be the important part, otherwise why give his paper the title that he did?), I find myself much preferring Jackson. Stigmatella aurantiaca (talk) 08:15, 13 July 2017 (UTC) I don't understand what you mean. Possibly you mean that old papers are almost invariably painstaking to read. I'd agree with that. Not uncommonly, some people take pride in reading only historical references, so that they can obtain a "superior view".
Almost without exception, those who do get it all wrong, which has been a source of substantial disputes and frustration. YohanN7 (talk) 08:52, 13 July 2017 (UTC) painstaking to read - Yes, that is pretty much what I meant. Despite Wikipedia's wp:NOR policy, one does need to refer to primary sources to establish "ground truth". off-topic side note - An amusing recent example of where I had to do a bit of primary source digging to resolve an issue where different secondary sources were copy-pasting each other was Talk:Fizeau–Foucault_apparatus#313000 or 315000?. The title of the Fizeau–Foucault_apparatus article, incidentally, was a neologism coined by a currently blocked Wikipedia editor who was ignorant of the difference. There are no references in the literature to this non-existent experimental mishmash prior to 2002 (i.e. the creation date of the Wikipedia article). Stigmatella aurantiaca (talk) 10:00, 13 July 2017 (UTC) @Schlafly: @Greg L: I'll also need to visit the university library for some materials that are behind a paywall. Stigmatella aurantiaca (talk) 08:45, 13 July 2017 (UTC) For your research on Poincaré, you may find the list of secondary sources on Poincaré and relativity, which I compiled some years ago, useful. --D.H (talk) 11:15, 13 July 2017 (UTC) That's a long list! I sorted through the available abstracts for the articles that you listed, and tonight or tomorrow, I'll be visiting the university library to follow up on the papers with interesting abstracts that are behind paywalls. It will depend on the state of my wife's health. Stigmatella aurantiaca (talk) 18:19, 13 July 2017 (UTC) Quoting YohanN7: Spacetime of special relativity is fixed by special relativity, which is fixed by Einstein. I second that motion. I think it’s obvious from the many (and extensive) biographies and documentaries on Einstein that he was an original thinker who had zero need to read every scientific paper in existence at the time. Merely because modern historians can point to the existence of other papers where people were thinking along similar lines doesn’t diminish the fact that Einstein had intellectual prowess, nor the importance of his mental feat in connecting the dots between electrodynamics and the implications of the Michelson–Morley experiment, and it certainly doesn’t take away from the historical perception that “Einstein = special relativity.” It’s clear (to me anyway) that Einstein had a big take-away with the Michelson–Morley experiment and gave much thought to what the universe would look like from the point of view of someone riding a light beam. It’s also beyond clear from his paper On the Electrodynamics of Moving Bodies that he was merely trying to say that “moving a conductor through a stationary magnetic field” is the same as “moving a magnetic field over a stationary conductor.” Ergo, he mentions Maxwell prominently in his paper because electrodynamics viewed in a new light was the point of his paper. To Einstein, the Michelson–Morley experiment was merely a historical experiment (Einstein was eight years old at the time) that established one simple fact: light has only one speed for all observers irrespective of the source’s relative motion. Though Michelson & Morley’s raw observations of light’s speed were exacting and correct, Einstein jettisoned Michelson & Morley’s presumed existence of an ether as a carrier for electromagnetic propagation. Einstein perceived no need to cite their work; they merely made a measurement.
Finally, Einstein mentioned only Planck and Lorentz in the footnote of his paper because those physicists’ principles and math were merely ‘cute mathematical window dressing’ attempting to explain the natural world. When one properly viewed electrodynamics in terms of the profound underlying principle of the constancy of the speed of light, these three fundamentally deep truths underlying the natural world became evident: 1. “The same laws of electrodynamics and optics will be valid for all frames of reference.” 2. There is no “luminiferous ether.” 3. There is no such thing as an “absolutely stationary space.” From these universal truths sprang math—featuring the same transformations as those from Lorentz—describing still other truths such as how different observers will measure where and when events occur differently. Einstein's insight was a classic case of the phenomenon the famous computer researcher Alan Kay spoke of: “A change in perspective is worth 80 IQ points.” Einstein deserves full credit for special relativity. Efforts to diminish the implications of his insight, in my opinion, are just historical revisionism by authors and others who are anxious to make a name for themselves because they purport to be able to see truths that elude the mere masses. Greg L (talk) 20:43, 14 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── @Schlafly: I've added a long paragraph on Poincaré to the history, but I find little justification for your attempts to claim that he had not only scooped Einstein on special relativity, but had also scooped Minkowski in developing four-dimensional spacetime. Technically speaking, yes, Poincaré was first to demonstrate the possibility of expressing physics in four-space, but he did not pursue the effort. Stigmatella aurantiaca (talk) 05:13, 15 July 2017 (UTC) I can only hope that the balance you are striking here is in accordance with WP:WEIGHT, which starts out as follows: Neutrality requires that each article or other page in the mainspace fairly represent all significant viewpoints that have been published by reliable sources, in proportion to the prominence of each viewpoint in the published, reliable sources. My takeaway from that principle, as it applies to the present debate, is that the extent to which we dwell on Poincaré should mirror the coverage that the best RSs have to say on the guy. Greg L (talk) 05:37, 15 July 2017 (UTC) I believe that Schlafly will agree to trimming the paragraph on Poincaré in view of what I have to say concerning his non-importance in the development of the spacetime concept. Spending several sentences on his non-role indeed involves a WP:WEIGHT issue. I only put as much up there as I did to show Schlafly what he was getting into. Stigmatella aurantiaca (talk) 05:53, 15 July 2017 (UTC) @D.H: I came away from the library with lots of great material on Poincaré. He was an amazing figure. After I digest this stuff, I might see if I can contribute to his Wiki article. Thanks again for all of your research! I'm coming away from this with a new favorite scientist to stand on my shelf alongside a long list including, in no particular order, Feynman (I once stood beside him perched on top of the same umbrella table. Long story...), Max Delbrück (I once babysat his children), Barbara McClintock (great lady!), Einstein, Faraday, Lucy Shapiro, Dale Kaiser, Maxwell, etc. etc. 
Stigmatella aurantiaca (talk) 05:53, 15 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── The sentences marked off in yellow are my attempt to accommodate Roger's insistence that we include mention of Poincaré's pioneering use of 4-dimensional spacetime a couple of years ahead of Minkowski's public lectures introducing the concept. It is my belief that, since Poincaré never really took the concept anywhere, that cramming this material in violates WP:WEIGHT. I propose that it be deleted. Stigmatella aurantiaca (talk) 06:07, 15 July 2017 (UTC) With regard to your suggestion that the crammed material violates WP:WEIGHT, I would say our weight should mirror how modern and particularly good RSs give weight (or not) to Poincaré. It is not the duty of mere wikipedians to pretend that we’re the cigar-chomping editor of the Washington Post and we’re going to decided whether to give Nixon a break or not. You’ve obviously read this stuff. I encourage you to be bold and abide by the weight of the good RSs. Damn the dissenting wikipedians; full RS ahead. Greg L (talk) 06:27, 15 July 2017 (UTC) P.S. Reading the paragraph on Poincaré—without any knowledge of the extent to which the RSs balance any discussion of him—it seems informative, balanced, topical, and (importantly) interesting and as if it belongs in a proper historical treatment of the subject. Having said that, it still seems to me to give the guy a tad too much credit. Here's why: The history of special relativity and how Einstein and others like Poincaré were thinking along similar lines around the same time reminds me of history of atomic fission and the atomic bomb. Upon the late-1938 discovery of the true extent of the physics underlying nuclear fission (the release of 200 MeV of kinetic energy and the release of two free neutrons), pretty much every atomic physicist had the same epiphany at the same instant. The extent to which one executes on that epiphany is an altogether different matter. Alas, it seems to me that Poincaré’s suggestion regarding the “inherent undetectability of the aether” comes miles short of Einstein’s conclusion that there is no ether because light’s speed is intertwined with the fundamental physical nature of the universe. I submit that this bit The logical consequences of these two propositions can be said to encompass the entirety of special relativity might be editorializing too much unless it is cited to good R.S. Since it’s only a green-div, I took the liberty of deleting that bit. Greg L (talk) 06:19, 15 July 2017 (UTC) On Mondays, Wednesdays and Fridays, it seems, Poincaré still believed in the existence of the aether, while on Tuesdays, Thursday, and Saturdays he didn't. Who knows what he believed on Sundays? I'm still trying to figure out what "other reasons" to include as regards most science historians not crediting Poincaré with actually discovering special relativity. The aether issue was one of them. Stigmatella aurantiaca (talk) 06:25, 15 July 2017 (UTC) We had wikipedians out trying to change the world with the use of “16 gibibytes” of RAM because “gibibyte” was a suggested standard. Though virtually no one in the computer industry observed the practice, we had editors who shoehorned the terminology literally overnight into hundreds of articles. It’s not our role to champion change, right wrongs, or remedy historical injustice unless, on balance, the RSs do. As wikipedians, we follow, not lead. WP:WEIGHT is clear. 
I don’t know what the RSs are saying but I have every confidence you do. Greg L (talk) 06:32, 15 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── On second thought, this bit For these and other reasons, most historians of science argue that Poincaré did not invent what is now called special relativity, nor did he play any major role in the development of the spacetime concept, makes the paragraph look like a case of unadulterated wikipedian conflictus textitis. I’m going to go trim that paragraph. Stand back for a bit. Greg L (talk) 06:37, 15 July 2017 (UTC) Done. It now seems to place proper weight on Poincaré given how the RSs view his contributions. Greg L (talk) 07:01, 15 July 2017 (UTC) ### Arbitrary section break #3 • Ummm. No. Will fix later. Stigmatella aurantiaca (talk) 07:37, 15 July 2017 (UTC) • Roger does need a chance to see what the paragraph looked like with reference to Poincare's work on four-dimensional spacetime, since that was Roger's big thing. Stigmatella aurantiaca (talk) 07:33, 15 July 2017 (UTC) Henri Poincaré was to come closer than any other of Einstein's predecessors to arriving at what is currently known as the special theory of relativity. In 1900, Poincaré recognized that Lorentz's "local time" is actually what is indicated by moving clocks, and he proposed an explicitly operational definition of clock synchronization.[1] In 1904, Poincaré suggested the inherent undetectability of the aether. The logical consequences of these two propositions can be said to encompass the entirety of special relativity. Moreover, in his 1906 paper On the Dynamics of the Electron,[2] Poincaré introduced the innovative concept of a 4-dimensional space-time while discussing various hypotheses on gravitation. He did not pursue the 4-dimensional formalism very far, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding that "that three-dimensional language seems the best suited to the description of our world".[3] Furthermore, However, even as late as 1909, Poincaré continued to believe in the dynamical interpretation of the Lorentz transform.[4]:163–174 For these and other reasons, most historians of science argue that Poincaré did not invent what is now called special relativity, nor did he play any major role in the development of the spacetime concept. References 1. ^ Cite error: The named reference Miller was invoked but never defined (see the help page). 2. ^ Poincare, Henri (1906). "On the Dynamics of the Electron (Sur la dynamique de l’électron)". Rendiconti del Circolo matematico di Palermo. 21: 129–176. Retrieved 15 July 2017. 3. ^ Walter, Scott A. (2007). Renn, Jürgen; Schemmel, Matthias, eds. The Genesis of General Relativity, Volume 3. Berlin: Springer. pp. 193–252. Archived from the original on July 14, 2017. Retrieved 15 July 2017. 4. ^ Cite error: The named reference Pais was invoked but never defined (see the help page). I don't know why you guys are arguing about what Poincare believed about the aether, or what a great genius Einstein was, or who should get credit for special relativity. There are other WP pages for those topics. This article is on spacetime. It does not matter if Poincare was an astrologer or if he believed the Earth was flat. A history of spacetime should state the objective fact that Poincare first created the concept. 
"since Poincaré never really took the concept anywhere" - Do you realize that Minkowski died soon after his famous paper, and so he never went any further with it either? "Who knows what he believed on Sundays?" - Do you realize that many great scientists had peculiar religious beliefs, and thus believed odd things on Sundays? Many also have odd beliefs on philosophical and political matters. Why are you trying to pass judgment on his beliefs? Why do you care? Some of Poincare's statements do seem contradictory if you do not understand his conventionalist philosophy. Same with Einstein, for that matter, as he sometimes said there was an aether and sometimes that there was not. And you might also be confused by what modern physicists say about the aether. But if this issue really interests you, then please move it over to one of the WP articles on the aether. This article is on spacetime, not the aether. "I find little justification for your attempts to claim that he [Poincare] had not only scooped Einstein on special relativity, but had also scooped Minkowski in developing four-dimensional spacetime." - I am not really claiming that Poincare scooped anyone. Poincare wrote a paper on spacetime. Minkowski did some subsequent research that made use of Poincare's results and developed them further. There is some suggestion that Minkowski got some of the ideas independently, but neither Minkowski nor anyone else has made this claim with any specificity. From what we know, Minkowski developed his spacetime from Poincare, and Poincare developed his from Lorentz. "nor did he [Poincare] play any major role in the development of the spacetime concept" - What historian says that? I just quoted Galison saying "Of these it was Poincare who most directly influenced the mathematics of Minkowski's space-time." Spacetime is clearly defined in Poincare's paper. Maybe not as clearly as Minkowski's paper, but it is there and no one can dispute that. "Henri Poincaré was to come closer than any other of Einstein's predecessors to arriving at what is currently known as the special theory of relativity." - This is a very strange thing to say in an article on spacetime. What is currently known as special relativity (according to the textbooks I have) includes the spacetime concept, and Einstein never had that until he got it from Minkowski. It would be more accurate to say that Poincaré came closer than any other of Minkowski's predecessors to arriving at what is currently known as the special theory of relativity. Whether you agree with that or not, there is a separate article on History of special relativity. The concern is whether he published the spacetime concept. You guys are getting way off-topic here. Just give the history of spacetime in the spacetime article. Roger (talk) 08:21, 15 July 2017 (UTC) @Schlafly: You wrote, quote: (1) spacetime was first published by Poincare, based only on Lorentz, and Technically, yes, in "On the Dynamics of the Electron" However, Poincare specifically stated that this line of research seemed to "entail great pain for limited profit", and his ultimate conclusion was "that three-dimensional language seems the best suited to the description of our world". (2) Minkowski got the idea from Poincare, expanded on it, and then wrote the essay that sold everyone on it. Minkowski's mathematical inspirations were primarily from Lorentz and Poincare. 
He took Poincare's abandoned idea of four-dimensional spacetime, expanded on it, and presented a sales pitch for spacetime that made him famous. Given Poincare's negative writings on the probable utility of the spacetime concept that he originated, I am very puzzled as to why you insist on wishing to exaggerate Poincare's importance in its development. (3) However important Einstein was to SR, he really was not important to the development of spacetime. We all here are perfectly aware of your belief that Einstein ruined physics, thank you. I am doing my best to separate your statements of fact from your statements of interpretation, and to assign proper weight to your views, accommodating as many of your factual observations as will fit with the rest of the section. Now, let me correct the main green div area. As it stands, it does not reflect the consensus of reliable sources. I am quite sure that you do not want me to insert the statements marked yellow in the small green div area. Stigmatella aurantiaca (talk) 12:45, 15 July 2017 (UTC) @Schlafly: Your edit summary for your above post states that you object to the “spurious slams on Poincare”, which translates to “harsh criticisms of Poincaré that aren’t what they purport to be.” You see a prescient genius in what Poincaré accomplished in his time. However, the only thing that matters is what the historians are saying today. As User:YohanN7 wrote, Spacetime of special relativity is fixed by special relativity, which is fixed by Einstein. It's utterly impossible to have a proper historical treatment of spacetime without discussing special relativity; the RSs thoroughly cover the development of SR in a historical discussion of spacetime, and so too must we. All Stigmatella, YohanN7, and I are trying to accomplish with our history section is to mirror, per WP:WEIGHT, how modern, good RSs give credit and weight to everyone’s contributions to the understanding of spacetime, Poincaré included. It would be very wrong for mere wikipedians to vary from the RSs; anything else would be a major disservice to our readership. IF what is currently in the small green-div is true, that most modern historians feel Poincaré had no major role in the development of spacetime, then I would argue that the amount of airtime given to Poincaré in the big green-div is appropriate, as is its general tone (though it is in need of some factual correction). My take-away from our treatment of Poincaré is that he was a notable example of one of the many physicists standing at the party, wine glass in hand, talking about what was on a lot of others’ minds, but failed to go back to his study and do the hard work. If my take-away mirrors what the RSs are saying, then I’d say our current treatment in the big green-div strikes the proper gist and weight. Greg L (talk) 16:26, 15 July 2017 (UTC) We seem to have a general consensus as to how our History section should touch upon the contributions of Poincaré. I suggest you revise the big green-div in a way that best reflects the weight given by the RSs and we move on. After your edit, we should have a short pause for final comments to discern if we truly do have a general consensus. Greg L (talk) 15:38, 15 July 2017 (UTC) These are spurious slams on Poincare: "Poincare's abandoned idea", "Poincare's negative writings on the probable utility of the spacetime concept", "most modern historians feel Poincaré had no major role in the development of spacetime", "failed to go back to his study and do the hard work".
If anyone, it was Einstein who was negative about the utility of spacetime. Poincare did predict that 3D language would be preferred for most purposes, and he was completely correct about that. As I thumb thru some physics journals today, most of the articles still use the 3D language. And I say that Poincare should be credited for his work regardless of whether his predictions about future utility were correct or not. Can you show me even one modern historian who says "Poincaré had no major role in the development of spacetime"? I pointed out that Galison, in the article already cited in the green-div, says that Poincare did have a major role. The article text should accurately reflect its sources. Also, no one says that spacetime was "fixed by Einstein". Everyone agrees that the spacetime concept, spelled out by Poincare and Minkowski, was news to Einstein. There are no historians that say that only Einstein did the hard work, or that Poincare abandoned spacetime, or any of that nonsense. I am the one here who is sticking to the facts that are generally accepted. There are historians who credit Einstein for relativity, as explained in Relativity priority dispute. Their reasons are somewhat peculiar and off-topic here, as they usually define special relativity to exclude 4D spacetime. You can refer the reader to that article for details, but this article is about spacetime. Roger (talk) 17:40, 15 July 2017 (UTC) Mmmm. Quoting you: no one says that spacetime was "fixed by Einstein". Indeed. No one here said that. As regards this statement you made: And I say that Poincare should be credited for his work regardless of whether his predictions about future utility were correct or not, many here would like to play Perry White and make policy decisions on how history ought to be treating this individual or that. Alas, we place great weight on what the reliable secondary sources say. It's clear to me that Stigmatella aurantiaca has no axe to grind here one way or another and is exceedingly well versed in both the primary and secondary sources. I propose that he do his best to adhere to the best interpretation of the truth (mostly secondary sources with a careful dash of his own interpretation of the primary sources), declare it ready and hoist it as a straw man to see if we have a consensus. “Consensus,” by the way, does not mean “unanimous” and it never did. Greg L (talk) 21:14, 15 July 2017 (UTC) YohanN7 wrote "Spacetime of special relativity is fixed by special relativity, which is fixed by Einstein." No historian or reliable source says this. Let's get the facts right, and then discuss how "history ought to be treating this individual or that". Roger (talk) 00:41, 16 July 2017 (UTC) I don't know about historians, but mathematicians and physicists know, and write things like "despite many efforts of Lorentz, Poincaré and many others, the situation in theoretical physics was generally murky before Einstein in one blow clarified matters". (V. S. Varadarajan in Supersymmetry for mathematicians) In the same paragraph it is, if I recall correctly, also mentioned that the mathematical structure of spacetime was then fixed as well. Regardless of what sources by historians say, it is a fairly easy (but non-trivial) matter to derive only from the fact that there is a universal velocity that the symmetry group of spacetime is a subgroup of the conformal group.
(Ruling out purely conformal transformations (and landing at the Poincaré group) requires either, in addition, the second postulate, or the (weaker) observation that the Maxwell equations with sources aren't conformally invariant.) Saying that special relativity does not imply the mathematical structure of spacetime is, forgive me, no rudeness intended, pure nonsense. YohanN7 (talk) 06:57, 17 July 2017 (UTC) @Schlafly: It appears to me that you were confused by Yohan's use of the term "fixed". He used "fixed" in the sense of "is uniquely determined by". You were apparently reading "fixed" in the sense of "was repaired by", which doesn't make too much sense. Stigmatella aurantiaca (talk) 10:17, 17 July 2017 (UTC)
### (Purgy's) Section break #4
I have no problem with pondering the diverging educated and sourced opinions about topics wrt their belongings to a spacetime article or not, nor with the weighing of contributions by Poincare to this or that, not even to him standing with wine glass in hand (per se a perfidious insinuation to this important scientist), and I too trust in Stigmatella aurantiaca being fair and open to contributions, even to those he currently opposes. However, I do have strong reservations against professional writers, firstly proclaiming consensus on just their own behalf about themata they are absolutely clueless, and secondly, transferring this usurped right to one editor of his selection; and all this before there was any substantial discussion at all. I do hope that the repeatedly obtruded dogmata, only rudimentarily founded in WP directives (I opposed already), and the fully causeless arrogated competence, will not detriment the development of this article. Purgy (talk) 07:24, 16 July 2017 (UTC) (*Yoda accent*): ‘Brushing up on your wiki-insults have you?’ It appears you devoted much time to researching wiki-smack and learning elegant new English words in a dictionary. But please desist with your highbrow name-calling and whining. I count a half-dozen multisyllabic words in your above rant that Koko, the sign-language gorilla, could easily parse and translate into “Greg L stinky bad; Purgy nice smelling smart.” Yeah… we get your message; doesn’t impress. You complained earlier (∆ edit) that you have problems with four editors here, which amounts to pretty much every single contributor who was active on this page! Tilting at four windmills makes it perfectly clear where the problem lies. Regrouping and coming back to single out an individual windmill and start tilting at that one doesn’t help your cause. If you want to be constructive to the project, you can consider User:Stigmatella aurantiaca’s offer (∆ edit here): You've made some good suggestions in the past, but English does not appear to be your native language. So I've taken your suggested changes here where you and User:Greg L can work on them together. Hope you don't mind too much?. OK. Maybe that sort of comment stings for someone who believes himself to possess exemplary English-language, encyclopedic technical writing skills. But consider what the famous astronomer Neil deGrasse Tyson once wrote: “Imagine a world in which we are enlightened by objective truths rather than offended by them.” Stigmatella aurantiaca’s offer was a perfectly reasonable and generous offer for us to devote extra amounts of our all-volunteer time to assist you in contributing. You clearly weren’t interested in taking him up on his offer.
You wrote earlier as follows: “Therefore, I walk away from the carcass of this discussion like from beating a dead horse.” But you didn’t. You came back again… twice… with protests that accomplish nothing. Curious. Now, I invite you to improve this article with prose that is A) factual, B) compliant with all of Wikipedia’s rules, C) well crafted English prose with an encyclopedic tone, and which D) makes an already complex and abstruse article easier to understand. That would be a big improvement over your incessant harping and raging at those you blame for the slings and arrows of your outrageous misfortune. If the prospect of that isn’t your cup of tea, then try acting like an adult for once and do the right thing. Greg L (talk) 00:04, 17 July 2017 (UTC) The above history section contains entire passages on aether experiments and Lorentz, while Poincaré's contributions and even Einstein's are hardly mentioned. Therefore, I propose the following additions (there is certainly some room left for a short history of curved spacetime). --D.H (talk) 08:31, 16 July 2017 (UTC): @D.H: Excellent suggestions, as usual, many of your points being expressed better than what I could have come up with! I will put together a merge of your proposal with the rest of the history above. I do not foresee that I will need to make any changes, except in minor points of grammar. You will need to wait a few hours. My wife and I are celebrating our 34th anniversary, so this weekend has been busy. Stigmatella aurantiaca (talk) 12:32, 16 July 2017 (UTC) Wow, happy anniversary!! PS: The last hours I spent watching TV to see Federer win Wimbledon 2017 -;). Greetings, --D.H (talk) 15:31, 16 July 2017 (UTC) {{{Unchanged: Three passages on aether and Lorentz}}} {Text struck through to denote that it is no longer in service. Greg L (talk) 19:11, 18 July 2017 (UTC)} Other physicists and mathematicians at the turn of the century came close to arriving at what is currently known as spacetime. Einstein himself noted that, with so many people unraveling separate pieces of the puzzle, "the special theory of relativity, if we regard its development in retrospect, was ripe for discovery in 1905."[1] An important example is Henri Poincaré:[2][3] in 1898 he argued that the simultaneity of two events is a matter of convenience; in 1900 he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by proposing an explicitly operational definition of clock synchronization assuming constant light speed; in 1900 and 1904 he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity; and in 1905/1906[4] he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity. While discussing various hypotheses on Lorentz-invariant gravitation, he introduced the innovative concept of a 4-dimensional space-time by defining various four vectors, namely four-position, four-velocity, and four-force.[5][6] He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world".[6] Furthermore, even as late as 1909, Poincaré continued to believe in the dynamical interpretation of the Lorentz transform.[7]:163–174.
For these and other reasons, most historians of science argue that Poincaré did not invent what is now called special relativity.[3][7] In 1905, Einstein introduced special relativity (though without using the techniques of the spacetime formalism) in its modern understanding as a theory of space and time.[3][7] While his results are mathematically equivalent to those of Lorentz and Poincaré, it was Einstein who showed that the Lorentz transformations are not the result of interactions between matter and aether, but rather concern the nature of space and time itself. Einstein performed his analyses in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. He obtained all of his results by recognizing that the entire theory can be built upon two postulates: the principle of relativity and the principle of the constancy of light speed. In particular, Einstein in 1905 superseded previous attempts at an electromagnetic mass-energy relation by introducing the general equivalence of mass and energy, which was instrumental in his subsequent formulation of the equivalence principle in 1907. Namely, this principle states the equivalence of inertial and gravitational mass, and by using the mass-energy equivalence, Einstein showed that the gravitational mass of a body is also proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime,[8]:219 in the further development of general relativity Einstein fully incorporated the spacetime formalism. When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had with Minkowski, seeking to be Minkowski's student/collaborator: "[…] I went to Cologne, met Minkowski and heard his celebrated lecture 'Space and Time' delivered on 2 September 1908. […] He told me later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently but did not publish them because he wished first to work out the mathematical structure in all its splendor. He never made a priority claim and always gave Einstein his full share in the great discovery."[9] Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. However, it is not at all clear when Minkowski began to develop the geometric formulation of special relativity that was to bear his name, or to what extent he was influenced by Poincaré's four-dimensional interpretation of the Lorentz transformation. Nor is it clear if he ever fully appreciated Einstein's critical contribution to the understanding of the Lorentz transformations, thinking of Einstein's work as being an extension of Lorentz's work.[10] {{{Unchanged: Image and two passages on Minkowski}}} Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). 
However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital, and in 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity.[7]:151–152 Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime. 1. ^ Born, Max (1956). Physics in My Generation. London & New York: Pergamon Press. p. 194. Retrieved 10 July 2017. 2. ^ Darrigol, O. (2005), "The Genesis of the theory of relativity" (PDF), Séminaire Poincaré, 1: 1–22, Bibcode:2006eins.book....1D, ISBN 978-3-7643-7435-8, doi:10.1007/3-7643-7436-5_1 3. ^ a b c Cite error: The named reference Miller was invoked but never defined (see the help page). 4. ^ Poincare, Henri (1906). "On the Dynamics of the Electron (Sur la dynamique de l’électron)". Rendiconti del Circolo matematico di Palermo. 21: 129–176. Retrieved 15 July 2017. 5. ^ Zahar, Elie (1989) [1983], "Poincaré's Independent Discovery of the relativity principle", Einstein's Revolution: A Study in Heuristic, Chicago: Open Court Publishing Company, ISBN 0-8126-9067-2 6. ^ a b Walter, Scott A. (2007). "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910". In Renn, Jürgen; Schemmel, Matthias. The Genesis of General Relativity, Volume 3. Berlin: Springer. pp. 193–252. Archived from the original on July 14, 2017. Retrieved 15 July 2017. 7. ^ a b c d Cite error: The named reference Pais was invoked but never defined (see the help page). 8. ^ Schutz, Bernard (2004). Gravity from the Ground Up: An Introductory Guide to Gravity and General Relativity (Reprint ed.). Cambridge: Cambridge University Press. ISBN 0521455065. Retrieved 24 May 2017. 9. ^ Weinstein, Galina. "Max Born, Albert Einstein and Hermann Minkowski's Space-Time Formalism of Special Relativity". arXiv. Cornell University Library. Retrieved 11 July 2017. 10. ^ Galison, Peter Louis (1979). "Minkowski's space-time: From visual thinking to the absolute world". Historical Studies in the Physical Sciences. 10: 85–121. doi:10.2307/27757388. Retrieved 11 July 2017. ## I believe the section to be ready for transfer to article space. D.H's suggested revisions correct what I felt to be deficiencies in the narrative that I had developed with help from all of the rest of you, but which I wasn't quite able to pinpoint. At this point, I believe that the history is ready to transfer to article space. I will begin the process of transfer immediately after this post. Thanks to everybody who helped out! Stigmatella aurantiaca (talk) 02:30, 17 July 2017 (UTC) I'm reading through and it so far is quite natural to read, interesting, and logical. I am confused about this sentence: In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by proposing an explicitly operational definition of clock synchronization assuming constant light speed. What does “explicitly operational definition of clock synchronization”mean? Can you flesh that out within the sentence? Greg L (talk) 04:39, 17 July 2017 (UTC) P.S. Very good! I went all the way through it and made only a few minor edits for clarity. I found it very easy to understand with the exception of what I wrote in the preceding paragraph. Greg L (talk) 04:52, 17 July 2017 (UTC) "fleshing out" is on my to-do list. Should get to it in a day or so. 
Stigmatella aurantiaca (talk) 11:08, 17 July 2017 (UTC) Geoffrey left a trail of typos and careless edits, not just here but in Special relativity. I didn't go around following him closely to proofread his work, although I obviously should have. I had assumed that a working physicist would be more careful than he actually was. So I'm taking care of those items first. Mea culpa. Stigmatella aurantiaca (talk) 18:04, 17 July 2017 (UTC) Your culpa runneth over with goofs to fix. With regard to “…an explicitly operational definition of clock synchronization assuming constant light speed,” an alternative way to address insider-speak like that would be to give it a more general treatment, e.g. In 1900, he recognized that Lorentz's "local time" was actually more akin to what Einstein would posit five years later. Alternatively, a more detailed but plain-speak treatment. For instance (though I have no idea it is technically correct): In 1900, he recognized that Lorentz's "local time" is actually as Einstein would describe five years later, where ‘time’ (when events occur on a clock) must necessarily vary for different observers in a universe where the speed of light has one fixed value regardless of the two observers’ relative velocities to an observed light source. Greg L (talk) 18:50, 17 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── Clearer? Stigmatella aurantiaca (talk) 06:51, 18 July 2017 (UTC) Very. I enjoyed the learning process, so it's good stuff. Greg L (talk) 17:18, 18 July 2017 (UTC) You're welcome. Galison's book is a great read, by the way. Stigmatella aurantiaca (talk) 18:00, 18 July 2017 (UTC) Translations of Poincaré frequently have him using the word "convenience", which was used in D.H's text. I suspect it may be a case of one or another of Poincaré's translators settling on the wrong English word to express some subtle nuance of the French. "Convention" makes more sense in most instances. Stigmatella aurantiaca (talk) 18:00, 18 July 2017 (UTC) I know someone who speaks fluent French. She's a French citizen, born in France, and married to an American. Would you like to share the precise passage with me and I'll have her look at it? Greg L (talk) 19:07, 18 July 2017 (UTC) I'll need to track down the original French for these passages that I know only in translation. This may take a while. Stigmatella aurantiaca (talk) 22:39, 18 July 2017 (UTC) By the way, she's a language genius. She and her husband play Scrabble in any language so long as it is one of the Romance languages. She and her husband (my boss at the time, though we really had more of a Darren Stevens / Larry Tate relationship) came to me requesting a ruling: she wanted to play Scrabble in Korean, which wasn't originally a Romance language but only recently has received government-endorsed "official" translations into the English alphabet. I ruled in his favor (I'm no fool). Whatever verdict you obtain regarding her native French, well, you can take it to the bank. Greg L (talk) 00:52, 19 July 2017 (UTC) ## Sections too long Sections longer than, say, half a screen, are possibly intimidating. An easy way around this is to create sub- (subsub-) sections. If you worry that the TOC becomes too long, it can be made to display only down to a chosen depth. YohanN7 (talk) 08:57, 18 July 2017 (UTC) I'm worried that the fonts of level 4 and deeper subsection headers are indistinguishable. 
I sort of got around this for • Active, passive, and inertial mass, • Pressure as a gravitational source, and • Gravitomagnetism by adding dots, but it was a kludge solution that I really didn't like. Is there some more elegant way of customizing subheader fonts? Stigmatella aurantiaca (talk) 11:38, 18 July 2017 (UTC) Don't know. Maybe have a look here. YohanN7 (talk) 12:04, 18 July 2017 (UTC) TOC could stand a bit of abbreviation. I'll try limiting its depth. Stigmatella aurantiaca (talk) 15:30, 18 July 2017 (UTC) Trying to figure out visual cues that would work to break up a wall of text, not necessarily subsections. Stigmatella aurantiaca (talk) 15:44, 18 July 2017 (UTC) Sometimes very simple page layout elements can be employed to break up long stretches of text to alleviate the mind's energy devoted to eye tracking. Sidebars are a common way in print; though I think it's possible, I haven't personally seen them used on Wikipedia. Bullets, enumerated lists, and cquotes are other examples that break up visual monotony, can easily be used on Wikipedia, and require minimal-to-zero truncation of text material. In the History section, as an example of what I'm talking about, I converted an embedded quote in an already tortuously long paragraph into a cquote, which also effectively bifurcated that long paragraph. All in all, that single giant paragraph became three elements, which I find to be much easier to read because the mind doesn't have to struggle so hard to direct the eye. Before and after. Greg L (talk) 17:40, 18 July 2017 (UTC) Thanks! Stigmatella aurantiaca (talk) 17:48, 18 July 2017 (UTC) The {{TOC limit|3}} moved the lede to below the table of contents (funkiness here). I repositioned the tag (∆ edit) to restore the wiki-standard convention of having the lede before the TOC. Greg L (talk) 21:44, 18 July 2017 (UTC) whoops! Stigmatella aurantiaca (talk) 00:16, 19 July 2017 (UTC) This version doesn't look too shabby. Then for sectioning without sectioning, one could just play around with standard emphasis. This topic can roughly be broken up into (one), (two), ... Sure, why not? I just reverted your self-revert! Stigmatella aurantiaca (talk) 07:39, 19 July 2017 (UTC) How about this version, where I've lumped all of the legacy sections under "Technical topics"? Stigmatella aurantiaca (talk) 07:52, 19 July 2017 (UTC) Probably a good idea. The separate topics don't really each warrant a top level section. YohanN7 (talk) 08:20, 19 July 2017 (UTC) That's what I thought, too. So I reverted my self-revert. Stigmatella aurantiaca (talk) 08:29, 19 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── My two cents? It's a really bad idea. It certainly looks better (there's no doubting that bit) but the tradeoff in functionality is overly steep. As wikipedians, we tend to get used to what is what and where things are located after working on a single article for a long while. However, the experience for long-time visitors, who are accustomed to Wikipedia conventions but are new to this article, will be VERY different. I can guarantee you that many readers will have experiences like this: We don't make a city smaller and easier to drive through by tearing pages out of its telephone book. I think you two just suffered from an industrial-grade case of groupthink. It takes, what, a half-second longer to scroll through a two-level TOC with a modern trackpad or mouse? 
Greg L (talk) 15:54, 19 July 2017 (UTC) I've set limit=3. Let's see how this works. I'm at work right now, so can't check its appearance/functionality on a tablet. Could somebody with a tablet and someone with a minitab check how the full site and the mobile site look on a tablet and minitab? Changes won't make any difference to a phone. Wikipedia mobile on a phone size screen doesn't show TOC at all. Stigmatella aurantiaca (talk) 17:29, 19 July 2017 (UTC) TOC left|limit=2 looked fine on minitablets in Desktop view. TOC left|limit=3 looks bad on minitablets in Desktop view. TOC left|limit=2 looked fine on full size tablets in Desktop view. TOC left|limit=3 looks marginal on full size tablets in Desktop view. Stigmatella aurantiaca (talk) 21:26, 19 July 2017 (UTC) On an iPad running iOS 10.3.2, four hierarchical levels show in Safari in desktop view and they aren’t numbered. On a laptop running MacOS 10.12.5, two levels show (no more than 3.1, for instance) in both Chrome and Safari. Greg L (talk) 21:37, 19 July 2017 (UTC) And I thought I had near-OCD levels of perfectionism. Two levels on a desktop is fine. Four levels for some tablets isn’t a deal breaker because just a flick of the finger is plenty fast. Greg L (talk) 21:40, 19 July 2017 (UTC) I was using an online simulator, trying out different virtual tablets and minitablets. The simulator is not always accurate, so I trust your report more than I trust the simulator. The question was how the LEFT option looks. I can tell that the online simulator goofs up in how it handles TOC in Mobile view. The only accurate way to judge definitively is with actual tablets. It looks like we have to go with a regular display, if we want limit=3. LEFT with limit=2 looked nice and clean, though, on all displays. Stigmatella aurantiaca (talk) 21:57, 19 July 2017 (UTC) "On an iPad running iOS 10.3.2, four hierarchical levels show in Safari in desktop view and they aren’t numbered." - AAARGH!!!! Stigmatella aurantiaca (talk) 22:00, 19 July 2017 (UTC) I conceived of and designed the {{val}} and {{xt}} templates we enjoy today and take for granted; I sweated atomic-level details like this during their development. But the TOC is established business that isn’t going to be changing. These huge TOC differences are all due to vast differences on the consumption end of things. Optimizing the experience for desktop at two levels showing (e.g. “3.1” via TOC3) is best, IMHO, even though other platforms are compromised in different ways. Look at the bright side: There are no “errors” in the TOC regardless of what platform is being used; the only difference is hierarchical depth, which is just a grayscale judgement call anyway. Greg L (talk) 22:08, 19 July 2017 (UTC) Actually, that was iPad running iOS 10.3.2 in mobile view. One way or another, my reaction is still AAARGH!!! Stigmatella aurantiaca (talk) 21:28, 21 July 2017 (UTC) Here's a thought. I’ve never seen intra-page hyperlinks denoted by bolding before. Is that something relatively new to Wikipedia, or is it a home-brewed thing? 
If home brew, I propose that 1) to better clue readers to the availability of these convenient hyperlinks, 2) and to better embrace the principle of least astonishment, and 3) to avoid such links being essentially an Easter egg, we try something like the following: We were discussing the genesis of spacetime in early July when… Now that I think about it, I might have encountered this situation before in a previous article (it might have been our Kilogram article) and ended up with this sort of solution. It works quite well and best serves our readership because it’s perfectly clear and unambiguous. Greg L (talk) 22:57, 19 July 2017 (UTC) Bold intra-page hyperlinks are already very common in mathematics articles for linking to a numbered formula. For example: Acceleration (special relativity). Various of the numbered formulas are referenced within the text by a bold formula number. I am merely extending the use slightly. Stigmatella aurantiaca (talk) 23:49, 19 July 2017 (UTC) Since it is a convention that isn't exactly ubiquitous on Wikipedia, would you mind if I changed the non-math ones to what I am proposing above? I think it will be helpful to non-expert users of Wikipedia. Greg L (talk) 01:46, 20 July 2017 (UTC) The links are already very prominently blue because of the boldness, and it is not necessary for readers to click on them to understand the article. You will notice that there is usually a gentle hint of some sort associated with the links that the reader is encouraged to explore their use: "Click here for a brief section summary", "invariant interval (discussed shortly)", "(see Fig. 1‑1)", "Fig. 2‑9 illustrates that...", "As we have discussed in the previous section on four-momentum", etc. There is also the issue that many of the links don't necessarily work the first time for phone users, and even more importantly, phone users don't have a good way to return to where they were, especially since they do not have the benefit of a table of contents. The "back" < button often leaves them stranded after a javascript-assisted leap. The lack of TOC for phone users is an exceptionally irritating point to me, since it makes navigation through the article extremely difficult. I would like to be able to have a "click here to return to TOC" or some sort, but it would only function for Desktop users, who don't need such a feature. What the MediaWiki "jeniouses" should have done was provide an initially collapsed TOC for phone users. <sarcasm> But no, that was too hard for them to implement. </sarcasm> So no, don't add the extra verbiage, since except for "Click here for a brief section summary" where I have explicitly provided a "back" mechanism, I don't want to rub phone user's faces into the fact that desktop users have available to them a feature that they may be very hesitant to use. Re the math intra-wiki link templates, they are incompatible with "collapsing" formulas that compress on a narrow screen, otherwise I would have used them in a couple of points in the Spacetime article. Instead, I rearranged the text so that discussion about a formula was always immediately adjacent to the formulas being discussed, which were hence not necessarily located in the positions where their display may have been most natural. I also completely omitted discussion of one formula (of lesser importance, admittedly) where I couldn't resolve the placement issue. 
Stigmatella aurantiaca (talk) 05:37, 20 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── I tried putting a note at the beginning that the Section Summaries can be read as a stand-alone "Introduction to Spacetime", but I had to undo because of anomalies that phone users experience if they make the Section Summaries their first point of entry. Stigmatella aurantiaca (talk) 13:04, 20 July 2017 (UTC) OK. It’s not a big deal anyway. I should think though that what the mathematics editors are up to insofar as using bold to indicate intra-article hyperlinks is one thing; using the convention in this more general fashion is an uncommon interface element that doesn’t appear in WP:LINKS and runs an increased risk of suffering frustrating drive-by shootings in the future. I think you are setting yourself up for needless hassle in the future. With regard to your mention of a lack of an explicitly provided "back" mechanism, that issue doesn’t appear to be relevant since providing an intra-article hyperlink imbedded in the phrase is functionally identical to imbedding the link a few words adjacent in a parenthetical. I’ll change just one of those links as an example to consider. I hope you ruminate on my example link (near the bottom of the section, here) for a good while before acting. I think you’ll soon conclude that it reads exceedingly close to what you had before, is typographically elegant, is fully compliant with WP:LINKS (and what our users are accustomed to), makes the experience no different for users on mobil devices, and—very importantly—fully and unambiguously communicates to all users what they can expect if they click the link, which is a fundamental imperative of all good man-machine interface guidelines. Greg L (talk) 16:23, 20 July 2017 (UTC) With regard to your mention of a lack of an explicitly provided "back" mechanism, that issue doesn’t appear to be relevant It's far, far worse than that. Your linking to a top-level section works fine on a phone, because the top level section exists even when the section has not been expanded, and the "back" arrow works fine. But when you experiment with wikilinks to destinations at deeper levels, you will find that the implementation of Wikipedia mobile has a lot of flaky elements that leave you absolutely exasperated, and the back arrow can take you to completely unexpected destinations. Stigmatella aurantiaca (talk) 16:40, 20 July 2017 (UTC) Agreed. But we’re still talking past each other. My point is only as follows: This example of trying to understand our target readership (discussed above) …is functionally identical (mobil users included) to this: This example of trying to understand our target readership (discussed above) The blue color in both methods makes it clear that there’s a link. The only differences between the two examples are that 1) the second example fully complies with WP:LINKS, and 2) the second example makes it perfectly clear the reader will be taken to an intra-article location and won’t be taken to another article. ## Current deficiencies in article coverage Purgy's latest revisions and my (mostly favorable) reaction to them made me think again about current deficiencies in article coverage that stand out immediately to anybody giving this article even a casual read. 
Remembering that the goal is to present the material in a form accessible to high school students (or at most, first-year college physics/calculus students): • There is no discussion of strong fields. • There is no discussion of the left-hand side of the Einstein field equations. To keep the article length from spiraling out of control, there are a number of items that could easily be tossed out. The Stella and Terence examples, for instance, were introduced when the article was half its current size, and are disposable. If an adequate presentation of the left side of the Einstein field equations could be developed, we could toss out the Riemannian geometry and Curved manifolds sections, which I currently have parked in the "Technical topics" storage attic for legacy material that I couldn't bring myself to throw away. The problem is, try as I may, I can't really think of a unique high-school-level take on either topic that 1. Does the topic justice 2. Is not already covered, more or less (in)adequately, in the General relativity and Introduction to general relativity articles (which still deserve their Featured Article status, years after their award). Any suggestions? Stigmatella aurantiaca (talk) 18:13, 21 July 2017 (UTC) The intent would not be to provide a sweeping overview of the missing topics. Providing a general, sweeping overview is the job of the General relativity and Introduction to general relativity articles. Those two articles provide exemplary "Wow" and "Gee-whiz" presentations of their subject matter. I have nothing against "Wow" and "Gee-whiz" presentations. It's just that what those articles are doing is not what this article is attempting to do. "Instead, the focus [of the Introduction to curved spacetime section has been] to explore a handful of elementary scenarios that serve to give somewhat of the flavor of general relativity." Stigmatella aurantiaca (talk) 23:35, 21 July 2017 (UTC) ──────────────────────────────────────────────────────────────────────────────────────────────────── I can offer thoughts one step at a time. I'm happy with the lede, Definitions, and History. Next on the list is Spacetime interval. I note that it quickly launches into math-speak, which I suggest should be done very gently so early in the article. We should ease readers into this complex material, step by step. Math terms which seem blindingly obvious to us, such as delta, need to be explained as we ease readers into this material. As an example, here's what's currently there: In three dimensions, the distance between two points can be defined using the Pythagorean theorem: ${\displaystyle d^{2}=\Delta x^{2}+\Delta y^{2}+\Delta z^{2}}$ Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both (assuming that they are measuring using the same units). The distance is "invariant". In special relativity, however, the distance between two points is no longer the same… I'm still not sure of what I'm doing because the current verbiage isn't sufficiently clear for me to be sure I have a full grasp of the subject matter before trying to simplify the text. But (suspecting I understand it), I propose something like this: Pythagorean theorem With space represented along side a, and time along side b, a spacetime interval is measured along the hypotenuse, c. Note the graph at right showing a right triangle. 
Spacetime diagrams typically represent three-dimensional space by reducing it to a single dimension, which is charted along the horizontal X axis. The single dimension of time is then charted along the vertical Y axis. Generally, the two axes are scaled proportionally to each other so one light second of distance along the X axis is the same distance on the graph as one second of time on the Y axis (or a light year paired to a year, et cetera). Thus, something traveling at the speed of light makes a path at a 45-degree angle. When measuring distances between any two points in all three dimensions of space (x, y, and z axis), the mathematical relationship is governed by a 3-D version of the Pythagorean theorem: ${\displaystyle d^{2}=\Delta x^{2}+\Delta y^{2}+\Delta z^{2}}$ …where ${\displaystyle \Delta }$ (delta) represents the distance traveled along the given dimension. The distance between two points in 3-D space is merely the square root of ${\displaystyle d^{2}}$. Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both (assuming that they are measuring using the same units). The distance is "invariant". In special relativity, however, the distance between two points is no longer the same! Did I understand my material right? How'd I do? If not, please bring out the Ernie & Bert puppets and help me here by correcting this. Greg L (talk) 23:07, 22 July 2017 (UTC) We have to assume a certain amount of math literacy. The main target is high school students going up through first year college physics/math. We are not aiming for seventh graders. Look, I'm going bonkers trying to figure out whether there is a feasible means of explaining the basic ideas behind Christoffel symbols, Bianchi identities, Ricci tensors etc. without inundating the reader with the actual math, and you are focused on the Pythagorean theorem??? Stigmatella aurantiaca (talk) 23:22, 22 July 2017 (UTC) This article is insanely too God-damned big and complex; over and over, every editor in talk-space and behind the scenes has been reinforcing that consistent message, yet you kept shoveling technical coal into the boiler. And now you're worrying at this late stage about how to make advanced concepts like Christoffel symbols and Bianchi identities more accessible before ensuring that basic concepts are smoothly and properly introduced in the earliest of sections. You are going to learn soon enough (weeks, months) that with this article in this state of affairs, as soon as you take a two-week-long break, it will be subjected to a series of drive-by shootings such that you'll scarcely recognize it. Only there's a 90% chance that those doing the drive-by shootings won't be as knowledgeable as you. I warned you the first time around that your weird crap like forking the background information to another article entirely was going to receive pushback and you wouldn't listen until you learned the hard way. Your tendency towards WP:OWN is disheartening. So you just do what you fucking want then. Greg L (talk) 03:48, 23 July 2017 (UTC) Who is expressing OWNERSHIP? I have a strong VISION of how the article ought to be. I have expressed it repeatedly: • This should be an article aimed at high school to first year college students that explains spacetime, primarily within the context of relativity theory. 
• This article does not take a "Wow" and "Gee whiz" approach to explaining the subject, but instead focuses on expressing relativistic topics using a consistent spacetime approach rather than the kinematic approach adopted by most popular treatments. (Unfortunately, to keep ramp-up to a minimum, I decided early that a Taylor-Wheeler approach to explaining spacetime was inappropriate. However much I love their way of explaining things, it requires early introduction to the concept of rapidity.) • There are already two great articles on Wikipedia taking a "Wow" and "Gee whiz" approach to explaining general relativity. This article should not copy them. • Unlike most technical articles on Wikipedia, this article should not be allowed to grow "like Topsy" resulting in the undecipherable mishmash with which I started back in March. Within the context of my vision, I have accepted MAJOR CHANGES to the article. You come in and expand the lede from its original one paragraph to its current five paragraphs. My reaction: Sure, it was overly terse. Let Greg take over here. Yohan adds a bit of technical material to "Rapidity" that is a bit of a stretch for the target audience. My reaction: Great stuff, but let's move some of it to a note. Geoffrey comes in and moves the "Measurement versus visual appearance" section to Special relativity. My reaction: Hey, you're right! He rewords a significant fraction of my writing. My reaction: Most of it is somewhat better than my writing, some of it is a bit worse, but I wish he could have been more careful in copy-editing. He deletes "Maxwell's contributions". My reaction: yes, I understand. He chops out most of the History section. My reaction: Geoffrey has a point. Let's all get together and put together a History section that emphasizes spacetime. Purgy decides to rewrite "Light cone". My reaction: Sure. Let's just not toss in stuff about spooky action at a distance. The article has, as you pointed out, grown to be quite long. A great deal of my effort has been to make it effectively shorter by providing summaries and navigation aids. Much of my effort has been hampered by technical issues having to do with how Wikipedia mobile works on phones. The "Section summaries" can be read as an integrated "Introduction to spacetime".
2017-07-24 13:23:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6540880799293518, "perplexity": 1721.1502027339507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424876.76/warc/CC-MAIN-20170724122255-20170724142255-00445.warc.gz"}
https://biz.libretexts.org/Bookshelves/Finance/Book%3A_International_Finance__Theory_and_Policy/04%3A_Foreign_Exchange_Markets_and_Rates_of_Return/4.05%3A_Applying_the_Rate_of_Return_Formulas
# 4.5: Applying the Rate of Return Formulas

• Contributed by No Attribution by request • Anonymous by request

learning objectives

1. Learn how to apply numerical values for exchange rates and interest rates to the rate of return formulas to determine the best international investment.

Use the data in the tables below to calculate in which country it would have been best to purchase a one-year interest-bearing asset. These numbers were taken from the Economist, Weekly Indicators, December 17, 2005, p. 90, http://www.economist.com.

## Example 1

Consider the following data for interest rates and exchange rates in the United States and Britain:

• U.S. interest rate ($i_{\$}$): 2.37% per year
• British interest rate ($i_{£}$): 4.83% per year
• Spot exchange rate in 2004 ($E_{\$/£}$): 1.96 $/£
• Exchange rate in 2005 ($E^{e}_{\$/£}$): 1.75 $/£

We imagine that the decision is to be made in 2004, looking forward into 2005. However, we calculate this in hindsight after we know what the 2005 exchange rate is. Thus we plug in the 2005 rate for the expected exchange rate and use the 2004 rate as the current spot rate. Thus the ex-post (i.e., after the fact) rate of return on British deposits is given by

$RoR_{£} = i_{£} + (1 + i_{£})\frac{E^{e}_{\$/£} - E_{\$/£}}{E_{\$/£}},$

which simplifies to

$RoR_{£} = 0.0483 + (1 + 0.0483)(−0.1071) = −0.064$, or −6.4%.

A negative rate of return means that the investor would have lost money (in dollar terms) by purchasing the British asset. Since $RoR_{\$} = 2.37\% > RoR_{£} = −6.4\%$, the investor seeking the highest rate of return should have deposited her money in the U.S. account.

## Example 2

Consider the following data for interest rates and exchange rates in the United States and Japan:

• U.S. interest rate ($i_{\$}$): 2.37% per year
• Japanese interest rate ($i_{¥}$): 0.02% per year
• Spot exchange rate in 2004 ($E_{¥/\$}$): 104 ¥/$
• Exchange rate in 2005 ($E^{e}_{¥/\$}$): 120 ¥/$

Again, imagine that the decision is to be made in 2004, looking forward into 2005. However, we calculate this in hindsight after we know what the 2005 exchange rate is. Thus we plug in the 2005 rate for the expected exchange rate and use the 2004 rate as the current spot rate. Note also that the interest rate in Japan really was 0.02 percent. It was virtually zero. Before calculating the rate of return, it is necessary to convert the exchange rate quotes (given above in ¥/$) into their dollar-per-yen equivalents, $E_{\$/¥} = 1/E_{¥/\$}$, so that they match the form of the rate of return formula. Now, the ex-post (i.e., after the fact) rate of return on Japanese deposits is given by

$RoR_{¥} = i_{¥} + (1 + i_{¥})\frac{E^{e}_{\$/¥} - E_{\$/¥}}{E_{\$/¥}},$

which simplifies to

$RoR_{¥} = 0.0002 + (1 + 0.0002)(−0.1354) = −0.1352$, or −13.52%.

A negative rate of return means that the investor would have lost money (in dollar terms) by purchasing the Japanese asset. Since $RoR_{\$} = 2.37\% > RoR_{¥} = −13.52\%$, the investor seeking the highest rate of return should have deposited his money in the U.S. account.

## Example 3

Consider the following data for interest rates and exchange rates in the United States and South Korea. Note that South Korean currency is in won (W):

• U.S. interest rate ($i_{\$}$): 2.37% per year
• South Korean interest rate ($i_{W}$): 4.04% per year
• Spot exchange rate in 2004 ($E_{W/\$}$): 1,059 W/$
• Exchange rate in 2005 ($E^{e}_{W/\$}$): 1,026 W/$

As in the preceding examples, the decision is to be made in 2004, looking forward to 2005. However, since the previous year interest rate is not listed, we use the current short-term interest rate. Before calculating the rate of return, it is necessary to convert the exchange rate quotes (given above in W/$) into their dollar-per-won equivalents, $E_{\$/W} = 1/E_{W/\$}$. Now, the ex-post (i.e., after the fact) rate of return on South Korean deposits is given by

$RoR_{W} = i_{W} + (1 + i_{W})\frac{E^{e}_{\$/W} - E_{\$/W}}{E_{\$/W}},$

which simplifies to

$RoR_{W} = 0.0404 + (1 + 0.0404)(0.0328) = 0.0746$, or +7.46%.

In this case, the positive rate of return means an investor would have made money (in dollar terms) by purchasing the South Korean asset.
Also, since $RoR_{W} = 7.46\% > RoR_{\$} = 2.37\%$, the investor seeking the highest rate of return should have deposited his money in the South Korean account.

key takeaway

• An investor should choose the deposit or asset that promises the highest expected rate of return assuming equivalent risk and liquidity characteristics.

exercises

1. Consider the following data collected on February 9, 2004. The interest rate given is for a one-year money market deposit. The spot exchange rate is the rate for February 9. The expected exchange rate is the one-year forward rate. Express each answer as a percentage.

• Canadian interest rate: 2.5%
• Spot exchange rate: 0.7541 US$/C$
• Expected (one-year forward) exchange rate: 0.7468 US$/C$

• Use both RoR formulas (one from Chapter 4 "Foreign Exchange Markets and Rates of Return", Section 4.3 "Calculating Rate of Returns on International Investments", the other from Chapter 4 "Foreign Exchange Markets and Rates of Return", Section 4.4 "Interpretation of the Rate of Return Formula", Step 5) to calculate the expected rate of return on the Canadian money market deposit and show that both formulas generate the same answer.
• What part of the rate of return arises only due to the interest earned on the deposit?
• What part of the rate of return arises from the percentage change in the value of the principal due to the change in the exchange rate?
• What component of the rate of return arises from the percentage change in the value of the interest payments due to the change in the exchange rate?

2. Consider the following data collected on February 9, 2004. The interest rate given is for a one-year money market deposit. The spot exchange rate is the rate for February 9. The expected exchange rate is the one-year forward rate. Express each answer as a percentage.

• British interest rate: 4.5%
• Spot exchange rate: 1.8574 $/£
• Expected (one-year forward) exchange rate: 1.7956 $/£

• Use both RoR formulas (one from Chapter 4 "Foreign Exchange Markets and Rates of Return", Section 4.3 "Calculating Rate of Returns on International Investments", the other from Chapter 4 "Foreign Exchange Markets and Rates of Return", Section 4.4 "Interpretation of the Rate of Return Formula", Step 5) to calculate the expected rate of return on the British money market deposit and show that both formulas generate the same answer.
• What part of the rate of return arises only due to the interest earned on the deposit?
• What part of the rate of return arises from the percentage change in the value of the principal due to the change in the exchange rate?
• What component of the rate of return arises from the percentage change in the value of the interest payments due to the change in the exchange rate?
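The arithmetic above is easy to verify mechanically. The following is a small illustrative sketch, not part of the original page: it assumes the rate of return formula used throughout this section, RoR = i* + (1 + i*)(E^e − E)/E with both exchange rates quoted in dollars per unit of foreign currency, and the function and variable names are made up for the example.

```python
# Hypothetical helper to check the worked examples; assumes the formula
# RoR = i_f + (1 + i_f) * (Ee - E) / E with E, Ee in $ per foreign unit.

def rate_of_return(i_foreign, spot, expected):
    """Ex-post (or expected) dollar rate of return on a foreign deposit."""
    pct_change = (expected - spot) / spot        # % change in the currency's dollar value
    return i_foreign + (1 + i_foreign) * pct_change

# Example 1: British deposit, quotes already in $/£.
print(rate_of_return(0.0483, 1.96, 1.75))        # about -0.064, i.e. -6.4%

# Example 2: Japanese deposit, quotes are ¥/$, so invert to $/¥ first.
print(rate_of_return(0.0002, 1 / 104, 1 / 120))  # about -0.133 with these rounded quotes (text: -13.52%)

# Example 3: South Korean deposit, quotes are W/$, so invert to $/W.
print(rate_of_return(0.0404, 1 / 1059, 1 / 1026))  # about +0.074 with these rounded quotes (text: +7.46%)
```

The same function handles the exercises; for instance, rate_of_return(0.025, 0.7541, 0.7468) gives roughly +1.5% for the Canadian money market deposit.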
2021-07-24 03:36:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6441645622253418, "perplexity": 1051.2539159974249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00431.warc.gz"}
https://stats.stackexchange.com/questions/76350/goodness-of-fit-for-continuous-variables
# Goodness-of-Fit for continuous variables What are some Goodness-of-Fit tests or indices for the case of continuous variables? For example, I am looking at the Kolmogorov–Smirnov test. What I don't get is how one gets the empirical CDF in the first place? What I mean is, let's say I do a regression analysis with Gaussian errors. I have the maximum-likelihood estimate of the parameters. Now I also need to do a density estimation for the empirical CDF? Aren't they the same thing? Isn't my likelihood already giving me a goodness of fit? Why do I need K–S? What are some Goodness of Fit tests or indices for the continuous case? Most goodness of fit tests are for the continuous case. There are, quite literally, hundreds of them. Besides the Kolmogorov-Smirnov test (for a fully specified distribution, based on the maximum difference in ECDF), some commonly used ones include the Anderson-Darling test (also fully specified and ECDF based; a variance-weighted version of the Cramer-von Mises test) and the Shapiro-Wilk test (parameters unspecified, for testing normality only). For example, I am looking at the Kolmogorov-Smirnov test. Okay, but why? That is, why are you testing goodness of fit? What I don't get is how one gets the empirical CDF in the first place? It's simply the sample version of the cdf. The cdf is $P(X\leq x)$; the ECDF is the same thing, with 'probability' (for the random variable) replaced with 'proportion' (of the data). That is, you compute the proportion of the data that is less than or equal to every value $x$ in the range (ECDFs only change at data values, but are still defined between them - you really only need to identify their value at each data point and to the left of the entire sample, since they're constant from each data point until the next data point). Take a small set of numbers and try it. Here we go, a sample of three data values: 13.2 15.8 17.5 Now, for the following $x$ values, what is the proportion of the data $\leq x$? x = 10, 13.2-$\varepsilon$, 13.2, 13.2+$\varepsilon$, 15, 15.8, 17.5-$\varepsilon$, 19 (where $\varepsilon$ is some very small number) Can you see how it works? (Hint: the first five answers are 0, 0, 1/3, 1/3, 1/3 and the last one is 1; the full ECDF is plotted at the end of my answer.) What I mean is, let's say I do a regression analysis with Gaussian errors. What prompts you to use this example? Did something (a book, say, or a website) lead you to think you ought to use a goodness of fit test in this situation? I have the maximum likelihood estimate of the parameters. Now I also need to do a density estimation for the empirical CDF? Empirical cdf of what? Note that the KS is a test, not an estimate. What hypothesis are you testing and why? Aren't they the same thing? No, they're quite different, as discussed below. Isn't my likelihood already giving me a goodness of fit? The likelihood for the regression tells you about the fit of the line; in the case below, how close the red line is to the data. You could replace the data with another set of values with the same summary statistics but a different distribution, and the likelihood would be identical. See the Anscombe quartet for a good example of how very different data could have the same likelihood surface. 
By contrast, with a goodness of fit test, you're checking whether the shape of some distribution, like a normal distribution with some mean and variance, fits the data (the KS measures the discrepancy from the hypothesized distribution by looking at the ECDF, giving a test that doesn't change when you transform both halves of the comparison - making it nonparametric): So how does this relate to linear regression? Some people try to test whether the assumption of normality around the line holds (such as the distribution in the green strip in the first plot), as a check on the assumption about the error distribution: • but this check is done across all x, not just some particular x (I showed values near a particular $x$ to emphasize it's the conditional distribution of $y$ - or equivalently, the distribution of the errors - that is relevant). However: 1) formally testing goodness of fit as a check on assumptions isn't necessarily suitable; (i) it answers the wrong question (the relevant question is 'what is the impact on my inference of the degree of non-normality we have?'), and (ii) it only tells you anything when it's of almost no use to you to know it (goodness of fit tests tend to show significance in medium to large samples, where it usually doesn't matter much, and tend not to be significant in small samples where it matters most), and (iii) changing what you do based on the outcome is usually less appropriate than simply assuming you'd reject the null in the first place (your regression inference doesn't have the desired properties). 2) even without all that, the KS is a test for a fully specified distribution. You have to specify the mean and standard deviation for each data point before you see any data. If you're estimating the mean (say by fitting a line) and a standard deviation (say by the standard error of the residuals, s), then you simply shouldn't be using the KS test. There are tests for the situation where you estimate the mean and variance (the equivalent of the KS test is called the Lilliefors test), but for normality the standard is the Shapiro-Wilk test (though the simpler Shapiro-Francia test is almost as powerful, most stats software implements the full Shapiro-Wilk test). Why do I need KS? Well, basically you don't. There is almost never a circumstance when that's a good choice for the situation you describe. My suggestion is to either use some procedure that doesn't assume normality (e.g. some robust approach, or perhaps least squares but with inference based on resampling), or if you're in a position to reasonably assume normality, double-check the reasonableness of the assumption with a diagnostic display (like a Q-Q plot; incidentally the Shapiro-Francia test is effectively based on the $R^2$ in that plot). In large samples, normality is less important to your inference (for everything but prediction intervals), so you can tolerate larger deviations from normality (equal variance and independence assumptions matter much more). In small samples, you're more dependent on the assumption for your testing and confidence intervals, but you simply can't be sure how bad the degree of non-normality you have is. With small samples, you're better off simply working as if your data were non-normal. (There are a number of good robust options, but you should usually also consider the potential impact of influential points, not just of potential y-outliers.) ECDF for the small example data set earlier in the answer: • Thanks Glen for the long and detailed explanation. 
Indeed, I was trying to perform goodness of fit on the residuals of a regression. What confused me, I guess, is that in some data mining software, in a two-class classification example, after the algorithm ran the software reported the KS distance between the distributions of the two classes, plotted the two empirical CDFs, and marked the maximum-distance point. If KS requires all parameters to be known in advance, what exactly is going on here? Maybe I misinterpreted the plot, I don't know. Nov 14 '13 at 7:23 • I'm not sure I follow what is happening in the situation you describe, but if they're not testing it - just computing the distance - then there's no problem. Nov 14 '13 at 11:10 • en.wikipedia.org/wiki/Goodness_of_fit suggests that KS can be used to show that two samples come from identical distributions. I think that argument supports the data mining software as it tries to make the two classes as distinguishable as possible. Nov 14 '13 at 14:57 • The two-sample KS cannot "show that two samples come from identical distributions" -- it can test the null that the distributions are identical, but failure to reject doesn't imply the null is actually true. The data mining software might well be trying to maximize the statistic, but that's simply using it as a measure of discrepancy, it's not actually a test. Dec 31 '14 at 3:40 The empirical CDF simply assigns a probability of $\frac{1}{N}$ to each sample point. Then you construct a CDF like you would for the discrete case. Not sure why you are using KS in regression. If you assumed Gaussian errors and did MLE, then you have effectively fitted a normal distribution to your residuals. You could estimate the density of your residuals using your fitted values (simple approach) or increasingly more sophisticated approaches. BTW: Likelihood does not give goodness of fit; it merely says how likely the sample would have been had it been drawn from your fitted distribution. It says nothing about how likely the actual distribution is. The KS test is meant to determine if it is likely that a given, specific distribution generated the results. It is different from the likelihood of the data, given a distribution. There is a wrinkle to this as well: if you first fit parameters via MLE and then run the KS test on that distribution, you need to adjust for the fact that you used the sample to generate the parameters. • I started learning statistics from a different angle, which caused a big gap in basic knowledge of stats. On the other hand, I tend to look at things differently. For example, from my point of view MLE is a goodness of fit. It just needs to be normalized by the entropy of the data. Unfortunately that is not available to us beforehand. Nov 14 '13 at 7:28 • I guess it is in a way… best fit for a given parametric family. – user31668 Nov 14 '13 at 21:11
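As a supplement to the three-point worked example in the answer above, here is a small sketch that evaluates the empirical CDF at the same query points. It is purely illustrative (Python, not taken from the thread) and implements nothing more than "proportion of the sample less than or equal to x".

```python
import numpy as np

sample = np.array([13.2, 15.8, 17.5])

def ecdf(x, data):
    """Proportion of the data less than or equal to x."""
    return np.mean(data <= x)

eps = 1e-9  # stands in for the 'very small number' epsilon used in the answer
for x in [10, 13.2 - eps, 13.2, 13.2 + eps, 15, 15.8, 17.5 - eps, 19]:
    print(f"F_hat({x:9.4f}) = {ecdf(x, sample):.3f}")

# Prints 0, 0, 1/3, 1/3, 1/3, 2/3, 2/3, 1 -- the step function the answer describes.
```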
2021-09-17 02:05:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7888105511665344, "perplexity": 531.6565740112211}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053918.46/warc/CC-MAIN-20210916234514-20210917024514-00568.warc.gz"}
http://rsta.royalsocietypublishing.org/content/284/1320/179
# The Breakdown of Superfluidity in Liquid $^{4}$He: An Experimental Test of Landau's Theory D. R. Allum, P. V. E. McClintock, A. Phillips ## Abstract A single-pulse time-of-flight technique has been used to determine the drift velocity $\overline{v}$ of the negative ions injected into liquid $^{4}$He from a field emission source. Measurements of $\overline{v}$ as a function of temperature T, pressure P and electric field E are presented within the range $0.29 \leq T \leq 0.5$ K; $21 \times 10^{5} \leq P \leq 25 \times 10^{5}$ Pa; $1 \leq E \leq 300$ kV m$^{-1}$. The experimental results are in good agreement with Landau's theory of superfluidity. The data are used to demonstrate the inapplicability of two theories of supercritical dissipation: by Takken, based on an assumption of coherent roton emission; and by Bowley & Sheard, based on the assumption of incoherent single-roton emission. The results are, however, shown to be in excellent agreement with Bowley & Sheard's incoherent two-roton theory, and the data are used to derive a numerical value of the matrix element characterizing two-roton emission. The surprising absence of the single-roton emission process is discussed, and an upper bound is placed on the relevant matrix element.
2015-11-26 09:40:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3820345401763916, "perplexity": 1398.0220103515057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446997.59/warc/CC-MAIN-20151124205406-00140-ip-10-71-132-137.ec2.internal.warc.gz"}
https://support.bioconductor.org/p/9144225/#9144229
different sample size from each individual LW @296aca7f Last seen 5 days ago United States Hi, in my dataset the number of samples from each individual is different, something like this: I want to compare active and inactive status, but as you can see, patients 1, 2, and 3 have 1 sample in each status, whereas patients 4-8 contribute multiple samples in each status. In this scenario, can I design my DESeq2 model as ~ status + Patient.ID to eliminate the unequal variance issue? Thanks! Leran DESeq2 • 175 views This one is a question for DESeq, that one is for statistics. Thank you. @mikelove Last seen 1 day ago United States In DESeq2 we assume a dispersion per gene, so in any case we never model unequal variance/dispersion conditional on X. You can just use our standard design of ~patient + status.
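To make the suggested design concrete, here is a small illustrative sketch of what a formula like ~ patient + status expands to as a design matrix. It is not DESeq2 code; the sample sheet and column names are invented, and the dummy coding is shown with generic pandas calls purely to illustrate that the patient terms absorb between-patient differences so the status effect is estimated within patients.

```python
import pandas as pd

# Invented sample sheet: some patients contribute one sample per status,
# others several, mirroring the situation described in the question.
samples = pd.DataFrame({
    "patient": ["P1", "P1", "P4", "P4", "P4", "P4"],
    "status":  ["active", "inactive", "active", "active", "inactive", "inactive"],
})

# Dummy-code both factors, dropping the first level of each; these are the
# indicator columns a formula like ~ patient + status contributes (the model
# itself adds an intercept on top of them).
design = pd.get_dummies(samples, columns=["patient", "status"], drop_first=True)
print(design)
```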
2022-05-29 05:38:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28932222723960876, "perplexity": 10589.485368696714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00634.warc.gz"}
https://www.math-forums.com/threads/sound-level-measurement-with-matlab.124462/
# Sound-Level-Measurement with Matlab? Discussion in 'MATLAB' started by Jan Fuhrmann, Dec 7, 2005. 1. ### Jan Fuhrmann (Guest) Hi, I am very much a beginner with MATLAB and I want to ask whether it is possible to use MATLAB as a "sound level meter". A colleague suggested it to me, and if somebody could help me, I would be very grateful. I'm using 3 ICP microphones with signal-conditioning modules and an 8-channel RME Hammerfall DSP Multiface as an A/D converter. Is there any - perhaps very easy - possibility of using MATLAB for displaying A-weighted or non-weighted dB SPL levels? The levels I'm going to measure will be between approximately 60 dB(A) and 110 dB(A). Thanks a lot! Best regards, Jan Jan Fuhrmann, Dec 7, 2005 2. ### Jan Fuhrmann (Guest) I nearly forgot: it would also be necessary to do a spectral analysis. Thanks a lot! Jan Fuhrmann, Dec 7, 2005 3. ### David Mackenzie (Guest) Jan Fuhrmann wrote: <snip> MATLAB as a "sound level meter" <snip> *** COMMERCIAL *** Dear Jan, we do have such a program which runs on top of our SINUS Measurement Toolbox (SMT) for MATLAB - however, this is only available on our hardware. Please take a look at our website and contact me via "[email protected]" if you're interested. David Mackenzie International Sales Coordinator SINUS Messtechnik GmbH Leipzig Germany www.sinusmess.de - Sound & Vibration Instrumentation - PCB Services - Electronics Design & Production David Mackenzie, Dec 7, 2005 4. ### Matthew Crema (Guest) Hi Jan, I think MATLAB is an excellent choice for such a task, but you'll need the Data Acquisition toolbox and you'll need to make sure your hardware is supported. http://www.mathworks.com/products/supportedio.html?prodCode=DA I would also recommend equipment from a company called Tucker-Davis Technologies, which I used a few years ago. I assume they are still around, but $. -Matt Matthew Crema, Dec 7, 2005
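Since none of the replies spell out the actual level computation, here is a rough sketch of the unweighted SPL calculation from an already calibrated block of samples. It is shown in Python purely for illustration (the same few lines translate directly to MATLAB); the sample rate, the synthetic signal, and the calibration to pascals are all assumptions, and A-weighting is deliberately omitted because it requires the standardized filter defined in IEC 61672.

```python
import numpy as np

# Assumed setup: one second of pressure samples already calibrated to pascals
# (e.g. ADC counts multiplied by the microphone/conditioner sensitivity).
fs = 48_000                                   # assumed sample rate in Hz
t = np.arange(fs) / fs
p = 0.5 * np.sin(2 * np.pi * 1000 * t)        # synthetic 1 kHz tone standing in for a measurement

p_ref = 20e-6                                 # reference pressure: 20 micropascals
rms = np.sqrt(np.mean(p ** 2))                # RMS pressure over the block
spl = 20 * np.log10(rms / p_ref)              # unweighted sound pressure level in dB SPL

print(f"{spl:.1f} dB SPL")                    # about 85 dB for this synthetic tone
```

For A-weighted figures one would filter p with the standard A-weighting response before the RMS step, and the spectral analysis mentioned in the follow-up post would start from an FFT of the same calibrated signal.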
2022-08-14 15:37:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36148443818092346, "perplexity": 9727.122270118873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00398.warc.gz"}
https://www.techwhiff.com/learn/suppose-t-m22-r3-is-a-linear-transformation-whose/245980
# Suppose T: M2,2 → R3 is a linear transformation whose action on a basis for M2,2 is as...

###### Question:

Suppose T: M2,2 → R3 is a linear transformation whose action on a basis for M2,2 is as follows: 6 1 -3 -3 0 1 1 1 T T T T -3 -3 0 1 1 2 1 Give a basis for the kernel of T and the image of T by choosing which of the original vector spaces each is a subset of, and then giving a set of appropriate vectors.
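Although the particular basis matrices and their images are not legible above, the general recipe for this kind of problem can be stated. Write the image of each basis matrix $B_i$ of $M_{2,2}$ as a column of

$$A=\bigl[\,T(B_1)\;\;T(B_2)\;\;T(B_3)\;\;T(B_4)\,\bigr]\in\mathbb{R}^{3\times 4},\qquad\text{so that}\qquad T\Bigl(\sum_{i=1}^{4}c_iB_i\Bigr)=Ac.$$

Then

$$\ker T=\Bigl\{\textstyle\sum_i c_iB_i : Ac=0\Bigr\}\subseteq M_{2,2},\qquad \operatorname{im}T=\operatorname{col}(A)\subseteq\mathbb{R}^3,\qquad \dim\ker T+\dim\operatorname{im}T=4,$$

so a kernel basis consists of matrices in $M_{2,2}$ built from null-space vectors of $A$, while an image basis consists of vectors in $\mathbb{R}^3$ (the pivot columns of $A$).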
2022-08-09 07:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36026427149772644, "perplexity": 1894.1497975059592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00527.warc.gz"}
https://forum.effectivealtruism.org/posts/mMEBty3W3WkK7rgEH/your-time-might-be-more-valuable-than-you-think
# Your Time Might Be More Valuable Than You Think

by Mark Xu · 8 min read · 18th Oct 2021 · 1 comment

This is a linkpost for https://markxu.com/value-time

# Summary

• People often seem to implicitly value their time at the amount they can convert hours to dollars given their current skills.
• However, the value of saving the marginal hour today is to increase the total number of one's working hours by one, resulting in a new hour at the end of one's career, not a new hour at their current skill level.
• This suggests that people who expect their time to become valuable in the future must think their time is approximately just as valuable now, because saving time now gets them to the point where their time is valuable faster and gives them more of such time.
• This analysis is complicated by various temporal dependencies (e.g. time discounting) that push the value of the current hour up or down compared to the value of the marginal hour at the end of one's career.
• Under such a view, finding promising young altruists and speeding up their careers represents a significant value add.

Many people in my social circles have an amount they "value their time." Roughly speaking, if someone values their time at $50/hr, they should be willing to pay $50 to save an hour of time, or be paid $50 to do work that has negligible non-monetary value. Knowing this value can provide a simple decision rule for deciding which opportunities to trade money for time it's efficient for you to take. I will argue that a naive perspective on time evaluations generally results in an underestimate. This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar. I will analyze the situation mostly in terms of wages as expressed in dollars; however, readers might want to substitute "altruistic impact" instead. I will begin by analyzing a simplified situation, adding more nuance later.

# The value of your time is the value of the marginal hour at the end of your career

If I currently have a job that lets me convert one hour of time into $50, then it's clear that I should take all time-saving opportunities costing less than $50. (Note that this doesn't mean that I should pay $50 to save an hour of furniture assembly. Furniture assembly might be enjoyable, teach me valuable skills, etc.) However, this assumes that the benefits I receive from my job are entirely monetary. For most jobs, this will not be the case. If one is a software engineer, then much of the benefit of 1 hour of working as a software engineer will be the skills and experience gained during that hour. To be more specific, the hourly rate that a software engineer commands depends on their skill, which depends on training/experience in turn. Thus an hour of software engineering might increase expected future compensation by more than $50 (in fact, under plausible assumptions, this will be the primary benefit of the early part of most careers).

To be more quantitative, let $w(h)$ be the wage an employee with $h$ hours of experience can earn per hour. Suppose that you currently have $h_0$ hours of experience and your career in total will be $H$ hours long. The amount of dollars you expect to earn in the future is $\int_{h_0}^{H} w(h)\,dh$. (Note that a more precise analysis would have included a discount rate. Money now is worth more than money later because of investment possibilities.)
A naive model of saving an hour of present time calculates the total earnings of your career as $w(h_0) + \int_{h_0}^{H} w(h)\,dh$, meaning you should take an opportunity to save an hour at cost $c$ if and only if $c < w(h_0)$. However, as stated above, this suggests that the marginal hour at the present is worth $w(h_0)$, your current wage. This is not what actually happens when you save one hour at the present. What actually happens is that your total earnings of your career will now be $\int_{h_0}^{H+1} w(h)\,dh$, for a difference of approximately $w(H)$ instead of $w(h_0)$. Since one's expected wage at the end of a career is likely substantially higher than one's current wage (especially for people at the beginning of their careers), treating the value of one's time as $w(h_0)$ instead of $w(H)$ leads to an underestimate by roughly $w(H) - w(h_0)$.

For example, suppose that one is a quantitative trader. They currently earn $100/hr. However, with 20,000 hours (10 years, assuming 2000 working hours a year) of experience, they expect to earn $1000/hr. If they have no time-discount rate on money, then they should be willing to pay up to $1000 to save an hour of time presently, despite the fact that they will be net down $900 if they use that time to do work. Another way of seeing this is that saving an hour of time for your present self is in some sense the same thing as saving an hour of time for your future self, because it causes the future to arrive one hour earlier and be one hour longer. Thus, if you would be willing to trade an hour for $1000 in the future, you should also be willing to do so now.

This also suggests that working twice as much produces much more than twice the value. Naively, a 160,000 hour career produces the same value as two 80,000 hour careers. However, in reality, one of those careers is going to start with 80,000 hours of experience! This doesn't account for a lot of relative factors (being faster than competitors can produce much higher amounts of value) or aging-out effects like getting worse at working as you work more. A corollary is that burning out for a year is a disaster, because it's equivalent to losing a final career year. Similarly, vacations and other such leisure activities have larger costs than one might have naively expected, since they delay career growth and shorten careers. For example, if someone who could have had a 40 year career burns out for a year, their career is now 39 years and is missing the year where they would have had 39 years of experience.

# Temporal Dependence

One key factor missing in the above analysis is a temporal dependence on the value of wages. (The substitution of wages for altruistic impact is going to break down slightly and depend on complicated factors like the flow-through effects of altruism and whether standard investment returns are higher than altruistic flow-through effects. See Flow-Through Effects of Innovation Through the Ages and Giving Now vs. Later for a more nuanced discussion.) The most obvious form of temporal dependence is a monetary discount rate controlled by the ability to turn money now into more money later via standard investments. Such a discount rate suggests that our theoretical quantitative trader discussed above should not be willing to spend $1000 to save an hour of time at the present day, but rather spend an amount that would be equivalent to $1000 after 10 years of investment (approximately $500 at 7% yearly returns). I could write an equation expressing this, but I don't think it would lend much clarity.
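For readers who do want the arithmetic behind that parenthetical figure: at a 7% annual return over 10 years, the present payment $x$ whose invested value matches $1000 at the end of the decade satisfies

$$x\,(1.07)^{10} = 1000 \quad\Longrightarrow\quad x = \frac{1000}{1.07^{10}} \approx \frac{1000}{1.97} \approx 508,$$

which is roughly the $500 mentioned above.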
Less standard but more accurate analyses would incorporate the relative differences in the value of money over time for your particular goals. For instance, it might be that the altruistic discount rate on dollars is much higher than the standard discount rate because there are altruistic opportunities available now that won't be available later, even if you had double the money. Another salient example is effective altruism movement building (meta-EA), which might get most of its value early on. One way to model this is that instead of producing value directly, people in meta-EA save other people's time (by getting them into the movement earlier), enabling them to produce more value later. If you think, for example, that this century is particularly important, then saving an early career altruistic professional 1 year of time in 2090 is going to get you the marginal year of someone with ~10 years of experience, compared to saving such a person 1 year in 2080, which gets the marginal year with ~20 years of experience. Depending on how quickly you think the value of someone's work goes up with respect to experience, then this might suggest large discount rates. As another example, people working in AI Alignment (like me) might think that most valuable alignment work is going to be done in the ~10 years preceding transformative AI (TAI). If you think this date is about 2055 (see Holden's summary of Ajeya's Forecasting TAI from Biological Anchors), then the most important thing is to maximize your abilities as a researcher from 2045-2055. (It's possible that you should be making different bets, e.g. if you think you have more influence in worlds where TAI is sooner.) Since I'll probably still be working in 2055, saving a marginal year of time today gives me one extra year of research experience during the decade preceding TAI, but not any extra marginal years during that decade. (This does suggest that saving time during that decade is very valuable, though.) Of course, I am not modeling various effects that current research has on things like field building, which potentially dominates the value of my current work. # Actionables This analysis suggests that people with the potential to earn high salaries/have high altruistic impact have high time value, not because they can produce useful work currently, but because it will get them to where they eventually will end up faster. Provided this holds qualitatively, it suggests a couple of things: • Care about the value of your time more and try to aggressively take opportunities to save it or spend it more effectively, even if this doesn't make that much sense in terms of the value you think you can currently generate. • This might mean that you should take out loans and such, so you have resources. If your expected future earnings are high, then things like hiring tutors to graduate school faster are likely worth the interest on the loan. • What you spend your free time doing actually kind of matters. Developing some skill one year faster increases the amount of value you produce on the margin by quite a bit. • Standard advice saying that young people have time to explore potential career options should be balanced against the cost of becoming less awesome in that particular career option because too much time was spent exploring. 
• For example, if someone is potentially a promising AI Aligner, and they take a year off college to travel the world and see the sights, this decreases the amount of research experience they have during the period around TAI by a year.
• Exploration is still probably a good idea, but it should be traded off not against the value one would have produced directly, but rather the marginal increase in value that would have resulted from the increased growth/experience if that time wasn't spent exploring.
• Finding promising young people and using large amounts of resources to speed up their careers probably has pretty good altruistic returns.
• (If you think you're such a person and could benefit from additional resources, feel free to send me an email and I'll see what I can do.)

1 comment:

> This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar.

A suspicious conclusion coming from a young altruist! (sarcasm)
2022-10-04 13:24:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4124588072299957, "perplexity": 1260.0319939541193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00623.warc.gz"}
https://discourse.julialang.org/t/maximum-likelihood-optimization-problem-through-optim-jl/84096
# Maximum likelihood optimization problem through Optim.jl

Hi, I am trying to solve a likelihood function in Optim as follows: I have some increments which are gamma-distributed (Ga(a*t, β)): det_x = [0.0175, 0.0055, 0.0059] # increments det_t = [185, 163, 167] # corresponding time I want to estimate parameters a and β from the above data. I used the following program: using SpecialFunctions using Distributions, LinearAlgebra, Statistics using Optim, NLSolversBase det_R = [0.0175, 0.0055, 0.0059] # increments det_w = [185, 163, 167] # corresponding time ## density function for Gamma(a*t, β) gamma_dens(x, a, β, t) = (x^(a*t-1)*exp(-x/β))/(β^(a*t)*gamma(a*t)) function L(x) log(gamma_dens(det_R[1], x[1], x[2], det_w[1])) + log(gamma_dens(det_R[2], x[1], x[2], det_w[2])) + log(gamma_dens(det_R[3], x[1], x[2], det_w[3])) end x0 = [1e-3, 1e-3] opt = optimize(L, x0) But I got error message saying: DomainError with -0.0245: Exponentiation yielding a complex result requires a complex argument. Replace x^y with (x+0im)^y, Complex(x)^y, or similar. How can I solve this optimization problem? Thank you very much!

DomainError with -0.0245: Exponentiation yielding a complex result requires a complex argument. Replace x^y Your x values are going negative, giving you complex values. You can define some bounds on your parameter values (remembering that \alpha, \beta > 0): https://julianlsolvers.github.io/Optim.jl/stable/#user/minimization/#box-constrained-optimization. Another nice way would be, in your L(x) function, use exp(x[1]) and exp(x[2]) so that you don't have to use any bounds, taking \alpha = \exp(x_1) and \beta = \exp(x_2) at the end. See also Optim.jl for an actual Optim example for finding MLEs too that might be of interest to you. 2 Likes

Thank you very much for the reply. I have changed my solution based on the example in Optim, still not working: using SpecialFunctions using Distributions, LinearAlgebra, Statistics using Optim, NLSolversBase det_R = [0.0175, 0.0055, 0.0059] # increments det_w = [185, 163, 167] # corresponding time ## density function for Gamma(a*t, β) gamma_dens(x, a, β, t) = (x^(a*t-1)*exp(-x/β))/(β^(a*t)*gamma(a*t)) function L(x, y) c = exp(x) d = exp(y) log(gamma_dens(det_R[1], c, d, det_w[1])) + log(gamma_dens(det_R[2], c, d, det_w[2])) + log(gamma_dens(det_R[3], c, d, det_w[3])) end func = TwiceDifferentiable((x, y)->L(x, y), [0.1, 0.1]; autodiff=:forward); opt = optimize(func, [0.1, 0.1])

See the line func = TwiceDifferentiable((x, y)->L(x, y), [0.1, 0.1]; autodiff=:forward) Optim requires that your functions have a single input, so you should write this as func = TwiceDifferentiable(x->L(x[1], x[2]), [0.1, 0.1]; autodiff=:forward) I believe it should work then. I believe you got this from the line func = TwiceDifferentiable(vars -> Log_Likelihood(x, y, vars[1:nvar], vars[nvar + 1]), ones(nvar+1); autodiff=:forward); in the example I linked - notice that x and y are just data in their example, but the actual parameters being optimised are those in vars - the function just takes in a single input. 1 Like

Or just x -> L(x[1],x[2]) ? 1 Like

Oops, missed that - you're right.

Thanks for the reply and explanation. It works now. But I got an output of the objective as -Inf and Optim.minimizer(opt) is the same as the initial values I have set.
I need to check the objective function further.

Does it work if you multiply by -1 first? Try it with func = TwiceDifferentiable(x->-L(x[1], x[2]), [0.1, 0.1]; autodiff=:forward) (Optim minimises by default, so this will make the maximum of L be given by the minimum of -L.)

No. Same as before. I mean after adding the -, the objective value after optimization is Inf. The optimized results are still the same as the initial values.

I think this is a numerical issue. If we consider c = exp(0.1) and d = exp(0.1) in your code (as given by your initial estimate), then your first call to gamma_dens is (approximately) gamma_dens(0.0175, 1.1, 1.1, 185). The numbers that you get out of this are extremely small. The issue then comes from x^(a*t-1) - the exponent here is 203! So x^(a*t-1) = 0.0175^203 which may as well be zero (and Julia calls it zero, actually: julia> x^(a*t-1) 0.0). So all your code is working, it's just the numbers that need some work. You should use logpdf from Distributions so that you get stable results. Here's a working version. using SpecialFunctions using Distributions, LinearAlgebra, Statistics using Optim, NLSolversBase det_R = [0.0175, 0.0055, 0.0059] # increments det_w = [185, 163, 167] # corresponding time function L(x, y) c = exp(x) d = exp(y) logpdf(Gamma(det_w[1] * c, d), det_R[1]) + logpdf(Gamma(det_w[2] * c, d), det_R[2]) + logpdf(Gamma(det_w[3] * c, d), det_R[3]) end func = TwiceDifferentiable(x->-L(x[1], x[2]), [0.1, 0.1]; autodiff=:forward); opt = optimize(func, [0.1, 0.1]) julia> opt.minimum -12.159601199591297 julia> exp.(opt.minimizer) 2-element Vector{Float64}: 0.02452153013876933 0.0022884585316169555 2 Likes

I don't know the theory behind your problem, but some adjustments to your original formulation give a result: using SpecialFunctions using Distributions, LinearAlgebra, Statistics using Optim, NLSolversBase det_R = [0.0175, 0.0055, 0.0059] # increments det_w = [185, 163, 167] # corresponding time ## density function for Gamma(a*t, β) gamma_dens(x, a, β, t) = (x^(a*t-1)*exp(-x/β))/(β^(a*t)*gamma(a*t)) function L(x) log(gamma_dens(det_R[1], x[1], x[2], det_w[1])) + log(gamma_dens(det_R[2], x[1], x[2], det_w[2])) + log(gamma_dens(det_R[3], x[1], x[2], det_w[3])) end x0 = [1e-3, 1e-3] func = TwiceDifferentiable(L, x0; autodiff=:forward); lower_bounds = [0.0, 0.0]; upper_bounds = [Inf, Inf]; inner_optimizer = GradientDescent(); opt = optimize(func.f, func.df, lower_bounds, upper_bounds, x0, Fminbox(inner_optimizer)) x_sol = opt.minimizer But it's hard to see the result as meaningful since it seems to highly depend on your initial point x0, as is characteristic of (nonconvex) nonlinear optimisation problems. 2 Likes

Thank you very much for providing this solution. 1 Like

Thank you very much for checking this. Your remarks are really important for my problem. For my problem, I just try to fit some observations (data) to a Gamma distribution to estimate the shape and scale parameters. I will carefully check the solution. I checked the solution offered by @legola18 , it gives a stable solution around (0.039, 0.0014). I tested different initial values. I also ran your solution, and it gives different values for different x_0 as you said. It is weird. I see in your solution you used a box-constrained solver.
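For reference, the objective all of these attempts maximize is the log-likelihood of three independent Gamma observations with shape a·t_i and scale β (the same parameterization as gamma_dens above):

$$\ell(a,\beta)=\sum_{i=1}^{3}\Bigl[(a\,t_i-1)\ln x_i-\frac{x_i}{\beta}-a\,t_i\ln\beta-\ln\Gamma(a\,t_i)\Bigr],$$

with x_i the increments det_R and t_i the times det_w. Maximizing ℓ over a, β > 0 (equivalently, minimizing -ℓ) is exactly what the logpdf-based version does, with the logarithm taken analytically so that the tiny density values never underflow.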
2022-08-16 10:10:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7726194858551025, "perplexity": 6940.835694373152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00098.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/22789
## Files in this item

Files | Description | Format
application/pdf 9712393.pdf (6MB) | (no description provided) | PDF

## Description

Title: Fabrication and characterization of compound semiconductor nanostructures
Author(s): Panepucci, Roberto Ricardo
Doctoral Committee Chair(s): Adesida, Ilesanmi
Department / Program: Electrical and Computer Engineering
Discipline: Electrical and Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Engineering, Electronics and Electrical; Physics, Condensed Matter; Engineering, Materials Science
Abstract: Semiconductor optoelectronic devices are expected to have their performance improved by the use of quantum confinement in the active region with sizes in the range of tens of nanometers. The decrease in volume of the active region and the modification in the density of states in quantum structures is predicted to improve the threshold current, increase the modulation bandwidth, yield narrower spectral linewidths, and reduce temperature sensitivity in semiconductor lasers. This thesis reports the development and characterization of several fabrication techniques of compound semiconductor nanostructures on $\mathrm{In_{0.53}Ga_{0.47}As/InP}$ and $\mathrm{In_xGa_{1-x}As/GaAs}$, and the optical properties of the fabricated structures. Requirements on the electron beam lithography for each fabrication technique are presented, with emphasis on the capabilities of the lithography tool and parameters of the resist material, in particular, ZEP-520 and bilayers of PMMA. Photoluminescence measurements at 5 K were used to characterize the optical quality of the samples. Fabrication of quantum wires and dots with 40 nm lateral sizes using highly anisotropic reactive ion etching of $\mathrm{In_{0.53}Ga_{0.47}As/InP}$ with $\mathrm{CH_4{:}H_2}$ plasmas is presented. The fabrication of shallow- and deep-etched quantum wires by selective crystallographic wet etching, resulting in very narrow wires as small as 15 nm in width, is presented. The free $\mathrm{Cl_2}$ thermal etching of $\mathrm{In_{0.53}Ga_{0.47}As/InP}$ was developed, and its applications to quantum wire and quantum dot fabrication are presented. The fabricated structures showed good quality sidewalls comparable to wet etching techniques. Regrowth of InP was investigated on as-etched structures with and without $\mathrm{SiO_2}$ masks. Finally, several processes of sample preparation for the selective area epitaxy of $\mathrm{In_xGa_{1-x}As/GaAs}$ on submicron openings in $\mathrm{SiO_2}$ masks for quantum wire fabrication were investigated. The inhomogeneity of the growth across an array of wires was investigated by spatially resolved luminescence and compared to a diffusion-limited growth model.
Issue Date: 1996
Type: Text
Language: English
URI: http://hdl.handle.net/2142/22789
ISBN: 9780591198515
Rights Information: Copyright 1996 Panepucci, Roberto Ricardo
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9712393
OCLC Identifier: (UMI)AAI9712393
2015-11-26 10:49:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21042034029960632, "perplexity": 4863.213600507746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447043.45/warc/CC-MAIN-20151124205407-00001-ip-10-71-132-137.ec2.internal.warc.gz"}
https://academic.oup.com/bioscience/article/60/9/698/237972/Opportunities-and-Constraints-for-Forest-Climate
Reversing forest losses through restoration, improvement, and conservation is a critical goal for greenhouse gas mitigation. Here, we examine some ecological, demographic, and economic opportunities and constraints on forest-loss mitigation activities. Reduced deforestation and forest degradation could cut global deforestation rates in half by 2030, preserving 1.5 billion to 3 billion metric tons of carbon dioxide—equivalent (tCO2e) emissions yearly. Our new economic modeling for the United States suggests that greenhouse gas payments of up to $50 per tCO2e could reduce greenhouse gas emissions by more than 700 million tCO2e per year through afforestation, forest management, and bioelectricity generation. However, simulated carbon payments also imply the reduction of agricultural land area in the United States by 10% or more, decreasing agricultural exports and raising commodity food prices, imports, and leakage. Using novel transgenic eucalypts as our example, we predict selective breeding and genetic engineering can improve productivity per area, but maximizing productivity and biomass could make maintaining water supply, biodiversity, and other ecosystem services a challenge in a carbon-constrained world. Deforestation and other land-use changes have released approximately 150 billion metric tons (1 billion metric tons = 1 gigaton = 1 petagram) of carbon to the atmosphere since 1850, roughly one-fifth of the current atmospheric total (Houghton 2003). Most of this carbon loss from plants and soils occurred as a result of the conversion of forests to croplands and pasture. Reversing or halting this loss, as has occurred in parts of the United States and Europe that are recovering from earlier cycles of deforestation, is a critical policy goal for greenhouse gas mitigation today. We have abundant opportunities to restore and protect forests around the world. At the same time that climate change policy is creating incentives to preserve and restore forests, population growth and rising per capita consumption are increasing demands for food and fiber around the world. Global food demand is projected to grow 59% to 99% from 2000 to 2050, depending on actual population and economic growth rates (Southgate et al. 2007). Greater consumption of meat and grain is raising commodity prices and concerns about deforestation (Trostle 2008). National policies supporting bioenergy expansion further amplify deforestation concerns. For instance, recent studies suggest that direct and indirect land-use changes for bioenergy expansion produce net carbon losses from ecosystems, not net gains (Fargione et al. 2008, Gibbs et al. 2008, Searchinger et al. 2008, Piñeiro et al. 2009). Despite existing policies and increased agricultural yields per area, deforestation is still occurring in the tropics and elsewhere. The challenge we face today is how to use forests proactively—to save and restore forests and manage forests for the benefit of climate—while delivering more food and fiber and preserving biodiversity and other ecosystem services. That challenge is the heart of this Special Section examining the role of forests as a climate change mitigation tool. Given the real potential of forests for carbon mitigation in the coming decades, we were asked to examine the opportunity that forests present, along with some of the constraints on that opportunity—ecological, biophysical, demographic, and economic. 
Specifically, we examine three questions for forest mitigation activities: (1) Where can forests help slow the buildup of greenhouse gases in the atmosphere? (2) To what extent can future management, including genetic engineering, extend productivity beyond what native and managed forests provide today? (3) How many extra resources, including water and nutrients, will be needed to achieve this productivity, and what will be the consequences for other ecosystem services? These questions might be expressed as a single goal: to increase forest productivity globally while maintaining as many other ecosystem services as possible. We approach that goal using the questions above as an outline for the article. We first examine opportunities and conflicting demands on lands globally, using an economic model to explore competition for land use through climate policy levers in the United States. After considering the opportunities for, and constraints on, increased land area for forestry, we then examine the opportunities for greater forest productivity per acre, focusing briefly on the opportunities for genetic engineering to improve yields. Because Jansson and colleagues (2010) cover genetic engineering in detail in this Special Section, we instead explore the effects that increased productivity and carbon management may have on ecosystems, using the ideas of Odum (1969) as a foundation. These topics include potential invasions, water provisioning, risk of fires, and gene transfer. Forests that are unmanaged or managed for multiple purposes are likely to be very different in structure and species composition than forests managed or engineered solely for maximum carbon uptake. ## Land demands and policy levers Preserving today's forests and increasing forest regrowth and productivity are critical goals for greenhouse gas mitigation. To accomplish the policy targets of greater forest protection, restoration, and productivity, economic incentives must alter the market pressures driving land-use trends, particularly as the human population continues to grow. The human population, projected to surge beyond 9 billion by 2050 (Southgate 2009), will inevitably place new pressures on tropical forests and on the urban-rural fringe in countries such as the United States. Given the projected demands for food, fiber, and energy over the next 50 years, policy incentives are needed if forest carbon sinks are to be increased. Such incentives include payments for avoided deforestation, improved carbon storage through forest management, afforestation on lands currently put to other uses, and the use of forest biomass for bioenergy production. In a climate change context, these incentives should compensate landowners for net carbon gained above business-as-usual scenarios. As a specific example, reducing emissions from deforestation and degradation (REDD) pays countries or landowners for carbon retained in trees that would otherwise have been lost through land clearing. For bioenergy, a price for greenhouse gas emissions boosts the demand for fossil-fuel substitutes, creating a new market for logging residues and for short-rotation woody crops grown specifically for cellulosic ethanol production or cofired bioelectricity generation. 
## Reducing deforestation and increasing forest stocks Deforestation is a significant driver of anthropogenic greenhouse gas emissions, accounting for at least 12% of global carbon dioxide (CO2) emissions and comparable in size to the emissions from the global transportation sector (e.g., Van der Werf 2009). Deforestation accounts for an overwhelming portion of total emissions of Brazil and Indonesia, the world's third- and fourth-largest emitters by volume (Gullison et al. 2007). Reducing deforestation rates and improving sustainable forest management may be difficult in a time of continuing population growth and agricultural expansion (Walker and Salt 2006, Ryan et al. 2010). Nevertheless, financial incentives and policy levers can help in this important task. REDD is a policy incentive that pays countries or landowners to preserve forests (Miles and Kapos 2008, Olander et al. 2008). Several recent studies have evaluated REDD incentives by comparing baseline land-use trajectories to trajectories in which carbon payments compensate landowners for keeping forests intact. Recent modeling efforts suggest that approximately 1.8 billion tons of CO2 equivalent (tCO2e) of global emissions per year, approximately one-third of the amount attributable to deforestation, can be eliminated for approximately$10 per tCO2e. At $20 and$30 per tCO2e, mitigation estimates rise to 2.5 billion and 2.9 billion tCO2e per year, respectively (Gullison et al. 2007, Kindermann et al. 2008). These greenhouse gas benefits from REDD activities would be accompanied by a halving of global deforestation rates by 2030 (Kindermann et al. 2008). Avoided deforestation is therefore a feasible, relatively cheap alternative for greenhouse gas mitigation that could produce many ecological benefits, including biodiversity conservation and additional net cooling from water recycling and biophysical effects (Fearnside 2005, Jackson et al. 2008, Keith et al. 2009). For these and other reasons, countries such as Brazil are increasingly interested in reducing deforestation under the rubric of REDD. The challenges of implementing REDD protocols include the method for distributing payments, uncertainties in land ownership and control, the means of establishing a proper deforestation baseline, and, as discussed below, leakage—shifts in the location of deforestation to places that are not currently monitored. In addition to REDD and forest conservation in general, afforestation and forest restoration provide additional pathways for greenhouse gas mitigation. Afforestation is defined as the planting of forests in areas that have been without trees for at least 50 years (or some other arbitrary length of time). In the United States, afforestation has the potential to sequester approximately 370 million tCO2e per year, depending on the price of carbon (Jackson and Schlesinger 2004, USEPA 2005, SOCCR 2007). Similarly, forest regrowth in the United States since 1940 has recovered about a third of US carbon lost to the atmosphere through deforestation and harvesting between the start of the Industrial Revolution and 1930. Globally, the combination of reforestation and afforestation activities could reduce atmospheric CO2 concentrations by as much as 30 parts per million (ppm) this century (House et al. 2002). However, this potential mitigation is limited by many factors. One is the vulnerability of forests to increased disturbances, including those caused by pathogens, droughts, fires, and storms (Galik and Jackson 2009). 
For example, the mountain pine beetle is projected to convert 374,000 square kilometers (km2) of pine forest from a small net carbon sink to a large carbon source in Alberta alone, liberating 1 billion tCO2e to the atmosphere (Kurz et al. 2008). Climate change is another factor that could limit the potential for carbon sequestration in forests. The mountain pine beetle in Alberta is thriving in part because of warmer minimum temperatures in the winter and warmer and drier summers. A third potential limitation is landowner behavior in private-sector forestry, including decisions on what species to plant and how intensely to manage forests. Private forestry competes economically with agriculture, urban development, and other land uses. Landowner decisions will therefore dictate the success of some climate policy efforts, a topic we explore next. ## Competition between forestry and other land uses Economic factors and human behavior, coupled with biological and physical factors, will help determine the role that forestry plays in greenhouse gas policy. Economic modeling is one approach for projecting how incentives can build the global forest carbon stock. Such models capture market behavior, land-use competition, and comprehensive greenhouse accounting, with different land-use types competing in a full economic system. Economic modeling is particularly helpful in mitigation analyses because it explicitly accounts for land-use competition between alternative uses. In the real world, afforestation and other forest practices must compete with food and biofuel production as well as with other possible uses of the same land; large mitigation “potentials” based solely on total land area will inevitably overestimate what is attainable in the marketplace. To illustrate the effects of such competition for land, we use the US forest and agricultural sector optimization model with greenhouse gases (FASOMGHG) to simulate land-use trajectories and forest-based potential for mitigation scenarios. This model is a partial-equilibrium economic model of the US agricultural and forestry sectors. For our use, it simulates market responses to carbon price signals, including incentives for bioenergy and management practices that improve carbon sequestration or reduce greenhouse gas emissions. The model has been used in many previous studies of renewable energy and greenhouse gas mitigation policy (e.g., McCarl and Schneider 2001, Jackson et al. 2005, Murray et al. 2005, Baker et al. 2010). The recently updated FASOMGHG includes a broader range of land-use categories to depict competition between privately owned cropland, forest, pasture, conservation lands, and development (Baker et al. 2010). The model also now contains more than 20 alternative biofuel feedstocks for producing starch- or sugar-based ethanol, cellulosic ethanol, and biodiesel. In addition, biomass from a variety of agricultural and forestry sources can be used for bioelectricity production. Commodity demand, energy market, and input-cost growth assumptions have also been updated to accurately represent current and future technology and market conditions. Forest productivity in the FASOMGHG is characterized by a number of physical and economic factors, including region, species, land suitability class, management intensity class, and age cohort. 
Forest carbon is tracked in soils, the forest floor, understory, and trees (including final products) using a methodology similar to the forest carbon model used by the US Forest Service (Birdsey 1996) that varies with the aforementioned factors. This formulation allows the model to simulate management responses to carbon price signals that, for instance, boost forest carbon stocks. We began our analysis with a baseline trajectory of agricultural and forest production and land use that includes the bioenergy mandates imposed by the US Energy Independence and Security Act of 2007. To simulate the effects of greenhouse gas mitigation efforts, we imposed CO2-equivalent prices on all emission sources and sinks from agricultural and forestry activities. We then compared the results from policy incentives for greenhouse gas mitigation with the baseline scenario. Economic incentives, such as greenhouse gas offset payments to landowners, could substantially alter the balance of forestry in the United States. Across mitigation price scenarios of $15, $30, and $50 per tCO2e, privately owned timberland in the United States in 2030 is projected to increase between 11.2 million and 23.5 million hectares above a baseline that declines over time because of development pressures. The forest expansion is caused by cropland and pasture afforestation, avoided deforestation, and longer rotation times. For our three carbon-price scenarios, approximately 3.4 million, 8.7 million, and 15.8 million hectares, respectively, are projected to convert from cropland to forestry by 2030 under the influence of carbon sequestration payments (figure 1a), although there are uncertainties in such estimates. For the $50-per-tCO2e scenario, potential afforestation by 2030 represents approximately 10% of the total US cropland stock currently in production. Such a shift in land resources would store substantial amounts of carbon in the next few decades. Additional mitigation options for forest management include lengthening rotation times, changing the species grown, reducing management intensity, and stand thinning. Figure 1. Forest mitigation potential and commodity prices in the United States for three carbon price scenarios ($15, $30, and $50 per ton of carbon dioxide equivalent; tCO2e). (a) Estimated cumulative afforestation potential in 2030 across mitigation scenarios in million hectares. (b) Greenhouse gas (GHG) mitigation potential in the United States for afforestation, forest management, and bioelectricity from forest products; values represent annualized deviations from the baseline, with net present value of emissions beyond 2010 converted to an annuity using a 4% discount rate for 70 years. (c) US agricultural commodity price index values across mitigation scenarios. (d) Index values for US agricultural exports across mitigation scenarios.
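The annuity conversion mentioned in the figure caption can be made concrete with the standard annuity factor (the exact accounting convention used in the FASOMGHG analysis may differ in detail):

$$A \;=\; \mathrm{NPV}\times\frac{r}{1-(1+r)^{-n}}, \qquad r = 0.04,\; n = 70\ \text{years},$$

so the per-year mitigation figures reported below are constant annual flows whose discounted sum matches the simulated net present value of mitigation.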
For bioenergy production, we imposed the requirements of the renewable fuels standard (from the Energy Independence and Security Act of 2007) on the use of forest biomass for cellulosic ethanol. Bioelectricity production from forest biomass, however, is then allowed to respond to a given carbon price signal. The FASOMGHG can produce bioelectricity from logging and pulp-and-paper residues, as well as from dedicated short-rotation woody crops such as hybrid poplar and willow. Recent analyses suggest that use of forest biomass for electricity production should not be treated as carbon neutral as a result of land-use change emissions unless the biomass harvested for energy had sequestered "additional" carbon while growing (e.g., Searchinger 2009). Our analysis addresses this concern by imposing a carbon price on land-use change emissions above baseline levels, thus internalizing the cost of clearing land solely for bioenergy production. Overall, we find significant net mitigation potential for the United States that ranges from 325 million to 730 million tCO2e per year on an annuity basis, including activities that improve carbon sequestration in forests (160 million to 315 million tCO2e), afforest dedicated agricultural lands (152 million to 390 million tCO2e), or promote bioelectricity from forest biomass (13 million to 26 million tCO2e; figure 1b). Some activities, such as afforestation, need to be maintained for long time horizons, presenting a number of institutional complications and potentially leading to some adverse market impacts that we discuss below. ## Uncertainties and barriers to forest mitigation Reduced deforestation rates, forest regrowth, improved forest productivity, and afforestation are all possible options for increasing the global forest carbon stock. Implementing policies that promote these practices can be difficult, however. Given the promise of forest mitigation for climate policy, what are some of the barriers or impediments to such efforts? Recent work has described numerous potential barriers to forest policy improvements. One is the extent to which reducing agricultural land area—10% or more in our simulations for the United States—would increase commodity prices nationally and globally (figure 1c, 1d). Another barrier to forest credits is the transaction cost, including aggregating, verifying, and enforcing greenhouse gas offset activities (Galik et al. 2009). A third barrier is the issue of additionality, the notion that greenhouse gas savings must be "additional" to what would have happened without a policy incentive. Finally, a lack of permanence—the loss of forest carbon back to the atmosphere within a given time period—may also reduce the effectiveness of some forest activities (e.g., Canadell and Raupach 2008, Galik and Jackson 2009). In fact, some authors have proposed that temporary carbon "rentals," rather than permanent credits, are a better model for greenhouse gas mitigation activities, perhaps as an extension of the Conservation Reserve Program for forests (e.g., Marland et al. 2001). One particular barrier that we highlight here for more detailed discussion is the potential for leakage. To what extent will forest mitigation activities drive agricultural development elsewhere, particularly into tropical forests and marginal agricultural lands?
The global marketplace connects such effects in an increasingly direct way, as recent biofuels analyses have suggested (Fargione et al. 2008, Searchinger et al. 2008). Characterizing leakage requires that a marginal increase in land-clearing activities in one region can be at least partly attributed to a market price response brought on by production decisions in another. Hence, by allocating land in one area away from conventional commodity production and toward bioenergy production or carbon sequestration, altered market conditions may induce land-use change in another region. In a worst-case scenario, the resulting emissions from land clearing can be large enough to offset the carbon benefits of the original mitigation activity. Leakage is often thought of in a biofuels context (e.g., Searchinger et al. 2008), but pure climate change mitigation activities can also lead to indirect land-use changes. Several studies have evaluated leakage from forest conservation, altered forest management practices, and afforestation efforts (Murray et al. 2004, Gan and McCarl 2007, Sun and Sohngen 2009); upward pressure on commodity markets from forest carbon sequestration can shift timber production elsewhere, leading to diminished greenhouse gas gains for the mitigation effort. This effect can vary tremendously, with leakage estimated to range from less than 10% to more than 90% of total mitigation, depending on the region and activity undertaken (Murray et al. 2004). Although the FASOMGHG is not a global timber supply model and does not represent agricultural production and land use outside the United States, our results reveal important information about domestic forest offsets, bioenergy, and potential international leakage. For instance, as soon as the greenhouse gas mitigation policy is put into effect in 2010, simulated agricultural prices and imports rise for a given CO2e price (figure 1c). More broadly, US agricultural exports decline substantially, as well (figure 1d). Reduced US agricultural exports can lead to international leakage as production expands elsewhere to satisfy global demand for food and fiber. Hence, by afforesting US cropland, boosting bioenergy production, and decreasing transfers of land into agriculture, a lower food and fiber supply induces higher commodity prices and lower agricultural exports. Note, however, that a number of factors contribute to this shift in commodity prices, including additional production of dedicated bioenergy on productive agricultural lands, and shifts in livestock management patterns. Further research on the topic of leakage is desperately needed to encourage policy that maximizes terrestrial greenhouse gas mitigation potential while reducing emissions from land-clearing activities outside the United States. ## Resource demands and potential trade-offs for intensively managed forests Now that we have examined issues surrounding the additional land area needed for forest mitigation, we turn to the consequences of the second mitigation opportunity: greater productivity per area. Jansson and colleagues (2010; this issue) describe the potential for increased forest productivity through genetic engineering and improvement; we instead focus on the demands that the intensification of forestry will place on ecosystems. We use Gene Odum's framework to begin examining this issue. In a seminal paper in ecology, Odum (1969) characterized the opportunities and trade-offs that accompany attempts to maximize net primary productivity. 
Describing the bioenergetics of ecosystem development, he defined the goal of intensive forestry and agriculture as achieving "high rates of production of readily harvestable products with little standing crop left to accumulate on the landscape." Often in land management, these high rates of productivity are obtained by managing for early successional species in monocultures. For the ecosystem pictured in figure 2, maximum productivity (and potential harvest) occurs after about 30 years. Figure 2. Carbon fluxes and biomass as a function of time since planting or disturbance (in years). For this ecosystem, maximum ecosystem productivity occurs at about 30 years (Odum 1969). Abbreviations: B, biomass; PG, gross photosynthesis; PN, net photosynthesis; R, respiration. Odum also recognized that managing lands for maximum productivity sometimes reduced the amount or quality of other services that ecosystems provide. His list of trade-offs included invasions (as a subset of biodiversity loss), clean water, and climate feedbacks (Odum 1969). We examine Odum's trade-offs for the intensification of forestry, including increased invasion and fire risk, water use, and transgene spread. We do not discuss in detail—but do acknowledge—the loss of biodiversity that occurs when native ecosystems are replaced with monocultures. A diverse native forest and a monoculture plantation may have the same carbon storage but differ substantially in other services that people value. This trade-off was apparent to Odum and others many decades ago. In reality, many managed systems have lower productivity than their natural counterparts and require fertilizers and other high-intensity inputs to boost growth. If croplands and forest ecosystems are fertilized, some of the applied nitrogen is inevitably lost as nitrous oxide and other greenhouse gases, offsetting at least some of the ecosystem gains from carbon sequestration (e.g., Magill et al. 1997). Odum's classic perspective bears a fresh look for carbon mitigation policy today. ## Risks of invasion The United States alone has more than 50,000 invasive species that cost, collectively, about $100 billion each year (though the figure is hard to estimate accurately). These invaders are also the primary threat to 42% of threatened and endangered species in the United States (Pimentel et al. 2005). Although tree species increasingly suffer the effects of invasive insects and pathogens (Chornesky et al. 2005), forestry species themselves invade native landscapes throughout the world (Thompson et al. 2009). Of the more than 110 pine species found worldwide, only one, Pinus merkusii, is native to the Southern Hemisphere. Nevertheless, pines have become a mainstay of forestry in much of South America, Africa, Australia, and New Zealand (Richardson and Petit 2006). At least 18 pine species in the Southern Hemisphere are currently invasive in four or more countries per species, including Pinus pinaster, Pinus elliottii, Pinus patula, Pinus taeda, Pinus halepensis, and Pinus radiata (Richardson and Petit 2006). In South Africa, where 80 or more pine species have been planted, P. pinaster has invaded thousands of square kilometers of native fynbos, a biodiversity hot spot, with P. radiata and P.
patula also expanding in area (Richardson et al. 1994, Rouget et al. 2004). Pines have similarly invaded native eucalypt forests in Australia (e.g., Williams MC and Wardle 2005). High-productivity eucalypts can also be invasive. Based on their history in California, two eucalypt species, Eucalyptus globulus (Tasmanian blue gum) and Eucalyptus camaldulensis (red gum), are already classified as invasive in California (Bossard et al. 2000, Cal-IPC 2006). Data from eucalypts also illustrate a more general result in invasion biology: The more often a species is planted, the more likely it is to become invasive. In southern Africa, the number of records of spontaneous occurrences of 57 Eucalyptus species correlates strongly with the number of plantations on which each species is grown (Rejmánek 2000). An increase in the plantings of exotic plantation species worldwide will almost certainly increase invasions into native habitats. Predicting the potential ranges of invasive species is difficult because hybridization and adaptation to novel environments can expand species' ranges beyond the conditions in which they were originally found (Bossdorf et al. 2005). Rhododendron ponticum, native to the Mediterranean and Black Sea regions, is one of the most problematic invasive plants in the British Isles, where landowners have spent millions of dollars to control it (Dehnen-Schmutz et al. 2004). Analyses of chloroplast and nuclear ribosomal DNA suggest that the invasive individuals of R. ponticum came from the relatively warm Iberian Peninsula (Milne and Abbott 2000). Molecular data also suggest that hybridization of invasive R. ponticum with Rhododendron catawbiense, a horticultural species from the Appalachian Mountains, most likely provided the cold tolerance that allows R. ponticum to survive in Scotland and other habitats that are colder than its native range (Milne and Abbott 2000). This potential for hybridization and adaptation shows that invasive species will not always stay within their predicted climate envelope after they establish. Invasive trees are problematic not just for their effects on native species but for the water and other resources they consume. Versfeld and colleagues (1998) found that about 10.1 million hectares (ha), or 6.8% of South Africa, had been invaded by woody aliens. These alien plants consume approximately 6.7% of the country's runoff and would cost$860 million to clear over 20 years (Le Maitre et al. 2000). Studies in four South African catchments show that partial invasion by pines, eucalypts, acacias, and other woody species have reduced annual stream flow by 7% to 22%; if invasives were allowed to reach 100% of canopy cover in the catchments, the estimated decreases in stream flow would be 22% to 96% (Le Maitre et al. 2002). In response to this problem, South Africa launched the Working for Water program, which has cleared invasives from more than a million hectares of land since 1995. From the pioneering work of Arnold Engler (1919) and Charles Hursch (e.g., Hursch and Brater 1941), research has clearly shown that trees consume more water—sometimes much more water—than do grasslands, croplands, and shrublands (Holmes and Sinclair 1986, Vertessy 1999, Jackson et al. 2005, 2009). Forests and tree plantations tend to have greater leaf surface areas and greater transpiration than other ecosystems (e.g., Calder 1986). 
They also typically have more extensive root systems, allowing water uptake from deeper underground, and their canopies intercept more water, keeping precipitation from reaching the soil (Schenk and Jackson 2002). How extensive are these reductions in stream flow caused by plantations, and where may problems be the greatest? A global analysis of more than 500 years of catchment data showed that plantations reduced total annual stream flow by 180 millimeters (mm), or 38% on average, compared with native grasslands and shrublands; 10- to 20-year-old stands showed the largest losses of 227 mm, or 52% (Jackson et al. 2005). Stream flow losses are also positively related to the net primary production (NPP) of the planted stands, with eucalypts increasing evapotranspiration more than most other trees because of their early rapid growth and canopy closure (Dye 1996). One trade-off highlighted by Odum (1969) is that maximizing forest productivity will inevitably increase water use. If plantation forestry is pushed into marginal, drier habitats by competition with other land uses (see above), relative water losses are likely to be even greater (Farley et al. 2005). Although absolute losses in stream flow were larger for plantations at wetter sites (> 1500 mm annual precipitation), relative losses were greater at drier sites (< 1000 mm mean annual precipitation), with annual stream flow declining by two-thirds (figure 3). Relative losses in low or base flow were even bigger than losses in annual flow. Dry-season losses may therefore be even more severe, leading to shifts from perennial to intermittent flow in some cases. Figure 3. Mean change in runoff globally (± standard error) following afforestation as a function of mean annual precipitation for sites that were originally grasslands. For millimeters, p < 0.01. For percentage, p < 0.001. Adapted from Farley and colleagues (2005). One way to reduce losses in water yield is to increase forest life spans and plantation rotation times. Catchment data from two sites in South Africa with P. radiata and Eucalyptus grandis show that 20 to 25 years after planting, losses in water yield dropped by half (Scott and Prinsloo 2008). At the Tierkloof catchment, where 36% of the catchment was planted in P. radiata, the loss in annual stream flow for the plantation decreased from approximately 50% to 20% compared with native fynbos (Scott and Prinsloo 2008). Thus, land managers could lengthen rotation times and maintain older forests to reduce water losses; however, such a strategy is sometimes incompatible with the goal of maximizing productivity on the landscape (figure 2). What else can be done to lessen carbon and water trade-offs? Planting only part of a catchment with trees would reduce the likelihood of complete stream loss, particularly in drier areas. Using deciduous instead of evergreen species can reduce water losses in some cases and help to maintain a productive understory for livestock and wildlife. Breeding and genetic engineering for drought tolerance may also increase water-use efficiency for commercial trees in the coming decades. 
All of the approaches mentioned here reduce the amount of extra water needed but are likely to decrease the maximum storage that forests can provide per area. If maximizing forest net primary productivity remains the goal, then growing more wood will require more water. ## Transgene flow At least a half dozen important forestry genera are currently targets of genetic engineering, including Populus, Pinus, and Eucalyptus. These efforts can substantially improve productivity per acre, insect resistance, and heavy metal and freezing tolerances, while also reducing lignin content and increasing cellulose concentrations where desirable (e.g., Grace et al. 2005, Grattapaglia and Kirst 2008, Nelson and Johnsen 2008). Decreasing lignin content by as much as 30% (e.g., Kawaoka et al. 2006) can reduce bleaching costs for the pulp and paper industry, whereas increased cellulose concentrations can improve ethanol yields for biofuel production. In Europe, the first field trial of genetically modified trees was a planting of herbicide-resistant poplars in Belgium in 1988; at least six more European countries have had additional trials. In China, genetically modified poplars have been commercially available since 2002. Because Jansson and colleagues (2010, this issue) discuss the potential for genetically engineered trees to increase primary productivity and to be useful as fuel sources, we examine some of the associated risks. One potential risk of plantation crops and trees, particularly transgenic ones, is gene flow or “contamination” to surrounding individuals, species, and areas. Examples include invasion of the species outside of the planted area, transgene flow to compatible wild species, and potential transgene introgression. Such establishments and gene flow have been documented in transgenic crop and forage species. For instance, glyphosate-resistant creeping bentgrass, Agrostis stolonifera, has established and persisted in abundance beyond where it was originally planted; gene flow to surrounding plants of the same and different Agrostis species from windborne pollen has also been clearly documented (Watrud et al. 2004, Zapiola et al. 2008). For trees, the risks of transgene flow depend on many factors, including the distance and direction pollen and seeds travel, synchronies in the timing of flowering and pollination, and various reproductive barriers (e.g., Williams CG et al. 2006, Barbour et al. 2008). Because trees are taller than grasses, wind-dispersed tree pollen will likely travel farther—many miles downwind from its site of origin (e.g., Williams CG 2008). The potential for transgene flow in trees may also be greater than in crops because most crops have been isolated by selective breeding for hundreds or thousands of years, whereas selective breeding in forestry is more recent. On the other hand, genetic engineering of sterility can reduce the likelihood of transgene flow in all species (Richardson and Petit 2006), although complete sterility in any large population is unlikely (Strauss et al. 2004, Van Frankenhuyzen and Beardmore 2004). ## Genetically modified eucalypts as a case study Research is under way in the United States on many genetically modified horticultural and forestry species, including the hybrid of E. grandis and Eucalyptus urophylla recently approved in May of 2010 for field trials in seven states. 
The hybrid eucalypt has been engineered with gene constructs that provide three benefits: (1) cold or freezing tolerance, reducing vulnerability to cold temperatures in the southern United States; (2) reduced lignin biosynthesis for improved pulp and paper applications and possibly for biofuels; and (3) reduced tree fertility, making them less likely to be invasive or to contaminate neighboring plants. The fast-growing hybrid is likely to be released commercially in the United States to be planted in a belt from east Texas to Georgia and Florida. The opportunity to grow eucalypts in the southeastern United States presents some economic advantages for foresters. Foresters, both public and private, face growing challenges in the region (e.g., Wear and Greis 2002). They compete in an increasingly global marketplace, particularly with tropical forestry that can complete a rotation in half the time with lower labor costs. Southeastern foresters also face economic pressures from urbanization and population growth in the region. The chance to grow high-productivity eucalypts to complement traditional pine forestry in the Southeast could bring important benefits (e.g., Fenning and Gershenzon 2002). The large-scale planting of genetically modified eucalypts across the southeastern United States also raises some questions about sustainability and risk management. Switching land use from pasture and agriculture to eucalypts will almost certainly decrease local stream flow, as described above, and possibly affect aquatic diversity. Almost half of all animals listed as federally endangered in the United States are aquatic species (e.g., Dobson et al. 1997, Jackson et al. 2001). As described above, eucalypts also have a history of invasion, including in the United States (Bossard et al. 2000, Cal-IPC 2006); in general, the larger an area of planting for an exotic species, the greater the likelihood of invasion (Rejmánek 2000). The risk of invasion can compound with another attribute of eucalypts—their propensity to burn. Eucalypts are generally well adapted to frequent fires and are more fire prone than most temperate forest species, which poses potential risks that should be minimized through management. In Canberra, Australia, a combination of drought, high winds, and suburban encroachment contributed to the catastrophic eucalypt bushfires in 2003 that burned 500 homes and caused half a billion dollars of economic damage (e.g., Fromm et al. 2006). Eucalypts are also generally high emitters of volatile organic compounds that can lower the flashpoint for fires (e.g., Maleknia et al. 2009, Wilkinson et al. 2009). The propensity of eucalypts to burn and their ability to regenerate quickly after fire suggest a possible risk of postfire invasion into surrounding landscapes. Eucalypts often increase in dominance relative to other less-fire-tolerant species in postfire environments (e.g., Pekin et al. 2009). However, invasive transgenic eucalypts planted in the southeastern United States would likely pose a lower risk for gene transfer than would genetically modified pine species such as P. taeda. Pinus taeda is common in the region, and the potential for hybridization among pine species is fairly high. A eucalypt planted among native eucalypt species in a different region would have a similar potential for hybridization (e.g., Barbour et al. 2008). Overall, transgenic eucalypts are likely to be commercialized in the United States. 
Because of this, we believe that the US Department of Agriculture should establish a eucalypt monitoring program for the southern United States immediately, ideally establishing baseline data before commercial plantings. There would be many advantages to this approach, which would use the species as a study system not just for productivity measurements but for also additional research to quantify the risks described above, including water losses, invasion potential, and transgene flow. For instance, catchment studies should be established now in at least a few sites to provide baseline data on water flow for later comparisons with paired eucalypt stands. Commercialization will likely help growers compete in an increasingly global marketplace. The fast growth rate of the trees may also help maximize productivity and could potentially offset some fossil-fuel use. However, the risks that accompany eucalypt introductions are difficult to quantify without long-term data, and are hard to correct should they occur. Finally, commercial forestry in the United States has historically been based solely on native species. With the introduction of transgenic eucalypts, that forestry era is about to end. ## Conclusions At the start of this article we posed three questions for forest climate mitigation. Specifically, we examined some opportunities and constraints associated with forests' ability to maximize climate benefits while maintaining as many other ecosystem services as possible. Reduced emissions from deforestation and degradation is an immediate opportunity that could offset about 10% of current fossil-fuel emissions. Reforestation and afforestation together could reduce atmospheric CO2 concentrations by as much as 30 ppm this century. However, forestry must compete economically with other land uses, including agriculture, recreation, and urban and suburban development, which will limit the opportunity for forest mitigation without economic incentives. In our simulations, replacing cropland or boosting bioenergy production provides substantial greenhouse gas benefits in the United States but could reduce the food and fiber supply along with agricultural exports, while inducing higher commodity prices and agricultural imports, particularly from Brazil and other tropical countries where deforestation emissions are of paramount concern. The intensification of forest management brings substantial opportunities to produce more biomass per unit of land, reducing some of the pressures on land conversion globally. Genetic engineering offers the promise of improved yields, resistance to pests, and many other benefits. The intensification of forestry also carries some potential trade-offs, however, including increased water use, fertilizer applications that could raise trace-gas emissions, and a greater likelihood of species invasions. As Odum (1969) described, “Man has generally been preoccupied with obtaining as much ‘production’ from the landscape as possible, by developing and maintaining early successional types of ecosystems, usually monocultures. But, of course, man does not live by food and fiber alone; he also needs a balanced CO2-O2 atmosphere, the climatic buffer provided by oceans and masses of vegetation, and clean (that is, unproductive) water for cultural and industrial uses…. In other words, the landscape is not just a supply depot but is also the oikos—the home—in which we must live” (p. 266). 
Our goal should be to conserve, restore, and improve forest productivity, while preserving the quality of life for people and other species on Earth. ## Acknowledgments We thank the Jackson lab and three anonymous reviewers for helpful comments on the manuscript. This research was supported by the National Science Foundation (DEB-0717191 and IOS-0920355). ## References cited Baker JS McCarl BA Murray BC Rose SK Alig RJ DM Latta G Beach RH Daigneault A . 2010 . Net-farm income and land use under a US greenhouse gas capandtrade . Policy Issues (April): 17. (8 September 2010; www.aaea.org/publications/policyissues.pdf) Barbour RC Otahal Y Vaillancourt RE Potts BM . 2008 . Assessing the risk of pollen-mediated gene flow from exotic Eucalyptus globulus plantations into native eucalypt populations of Australia . Biological Conservation 141 : 896 907 . Birdsey RA . 1996 . Regional estimates of timber volume and forest carbon for fully stocked timberland, average management after cropland and pasture revision to forest . Pages 309–333 in Hair D, Sampson NR, eds. Forests and Global Change, vol. 2: Forest Management Opportunities for Mitigating Carbon Emissions . American Forests . Bossard CC Randall JM Hoshovsky MC . 2000 . Invasive Plants of California's Wildlands . University of California Press . Bossdorf O Auge H Lafuma L Rogers WE Siemann E Prati D . 2005 . Phenotypic and genetic differentiation between native and introduced plant populations . Oecologia 144 : 1 11 . Calder IR . 1986 . Water use of eucalypts: A review with special reference to south India . Agricultural Water Management 11 : 333 342 . [Cal-IPC] California Invasive Plant Inventory . Cal-IPC Publication 2006-02. California Invasive Plant Council . JG Raupach MR . 2008 . Managing forests for climate change mitigation . Science 320 : 1456 1457 . Chornesky EA et al . 2005 . Science priorities for reducing the threat of invasive species to sustainable forestry . BioScience 55 : 335 348 . Dehnen-Schmutz K Perrings C Williamson M . 2004 . Controlling Rhododendron ponticum in the British Isles: An economic analysis . Journal of Environmental Management 70 : 323 332 . Dobson AP Rodriguez JP Roberts WM Wilcove DS . 1997 . Geographic distribution of endangered species in the United States . Science 275 : 550 553 . Dye PJ . 1996 . Climate, forest, and stream?ow relationships in South African afforested catchments . Commonwealth Forestry Review 75 : 31 38 . Engler A . 1919 . Untersuchungen ber den Ein?uss des Waldes auf den Stand der Gewsser . Mitteilungen der Schweizerischen Zentralanstalt fr das forstliche Versuchswesen 12 : 1 626 . Fargione J Hill J Tilman D S Hawthorne P . 2008 . Land clearing and the biofuel carbon debt . Science 319 : 1235 1238 . Farley KA Jobbgy EG Jackson RB . 2005 . Effects of afforestation on water yield: A global synthesis with implications for policy . Global Change Biology 11 : 1565 1576 . Fearnside PM . 2005 . Deforestation in Brazilian Amazonia: History, rates, and consequences . Conservation Biology 19 : 660 668 . Fenning TM Gershenzon J . 2002 . Where will the wood come from? Plantation forests and the role of biotechnology . Trends in Biotechnology 20 : 291 296 . Fromm M Tupper A Rosenfeld D Servranckx R McRae R . 2006 . Violent pyro-convective storm devastates Australias capital and pollutes the stratosphere . Geophysical Research Letters 33 : L05815 .doi:10.1029/2005GL025161 Galik CS Jackson RB . 2009 . Risks to forest carbon offset projects in a changing climate . 
Forest Ecology and Management 257 : 2209 2216 . Galik CS Baker JS Grinnell J . 2009 Transaction Costs and Forest Carbon Offset Potential. Climate Change Policy Partnership. Duke University . Gan J McCarl BA . 2007 . Measuring transnational leakage of forest conservation . Ecological Economics 64 : 423 432 . Gibbs HK Johnston M Foley JA Holloway T Monfreda C Ramankutty N Zaks D . 2008 . Carbon payback times for crop-based biofuel expansion in the tropics: The effects of changing yield and technology . Environmental Research Letters 3 : 034001 . Grace LJ Charity JA Gresham B Kay N Walter C . 2005 . . Plant Cell Reports 24 : 103 111 . Grattapaglia D Kirst M . 2008 . Eucalyptus applied genomics: From gene sequences to breeding tools . New Phytologist 179 : 911 929 . Gullison RE et al . 2007 . Tropical forests and climate policy . Science 316 : 985 986 . Holmes JW Sinclair JA . 1986 . Water yield from some afforested catchments in Victoria . Hydrology and Water Resources Symposium. River Basin Management, Grif?th University, Brisbane, 25–27 November 1986. Institution of Engineers, Australia . Houghton RA . 2003 . Revised estimates of the annual net flux of carbon to the atmosphere from changes in land use and land management 1850–2000 . Tellus 55 : 378 390 . House JI Prentice IC Le Quere C . 2002 . Maximum impacts of future reforestation or deforestation on atmospheric CO2 . Global Change Biology 8 : 1047 1052 . Hursch CR Brater EF . 1941 . Separating storm-hydrographs from small drainage-areas into surface- and subsurface-flow . Transactions of the American Geophysical Union, Part 3 : 863 871 . Jackson RB Schlesinger WH . 2004 . Curbing the U.S . carbon deficit. Proceedings of the National Academy of Sciences 101 : 15827 15829 . Jackson RB Carpenter SR Dahm CN McKnight DM Naiman RJ Postel SL Running SW . 2001 . Water in a changing world . Ecological Applications 11 : 1027 1045 . Jackson RB Jobbgy EG Avissar R Baidya Roy S Barrett DJ Cook CW Farley KA Le Maitre DC McCarl BA Murray BC . 2005 . Trading water for carbon with biological carbon sequestration . Science 310 : 1944 1947 . Jackson RB et al . 2008 . Protecting climate with forests . Environmental Research Letters 3 : 044006 .doi:10.1088/1748-9326/3/4/044006 Jackson RB Jobbgy EG Nosetto MD . 2009 . Ecohydrology in a human-dominated landscape . Ecohydrology 2 : 383 389 . Jansson C Wullschleger SD Kalluri UC Tuskan GA . 2010 . Phytosequestration: Carbon biosequestration by plants and the prospects of genetic engineering . BioScience 60 : 685 696 . Kawaoka A Nanto K Ishii K Ebinuma H . 2006 . Reduction of lignin content by suppression of expression of the LIM domain transcription factor in Eucalyptus camaldulensis . Silvae Genetica 55 : 269 277 . Keith H Mackey BG Lindenmayer DB . 2009 . Re-evaluation of forest biomass carbon stocks and lessons from the worlds most carbon-dense forests . Proceedings of the National Academy of Sciences 106 : 11635 11640 . Kindermann G Obersteiner M Sohngen B Sathaye J K Ramesteiner E B Wunder S Beach R . 2008 . Global cost estimates of reducing carbon emissions through avoided deforestation . Proceedings of the National Academy of Sciences 105 : 10302 10307 . Kurz WA Dymond CC Stinson G Rampley GJ Neilson ET Carroll AL Ebata T Safranyik L . 2008 . Mountain pine beetle and forest carbon feedback to climate change . Nature 452 : 987 990 . Le Maitre DC Versfeld DB Chapman RA . 2000 . The impact of invading alien plants on surface water resources in South Africa: A preliminary assessment . Water SA 26 : 397 408 . 
Le Maitre DC van Wilgen BW Gelderblom CM Bailey C Chapman RA Nel JA . 2002 . Invasive alien trees and water resources in South Africa: Case studies of the costs and bene?ts of management . Forest Ecology and Management 160 : 143 159 . Magill AH Aber JD Hendricks JJ Bowden RD Melillo JM Steudler PA . 1997 . Biogeochemical response of forest ecosystems to simulated chronic nitrogen deposition . Ecological Applications 7 : 402 415 . Maleknia SD Bell TL MA . 2009 . Eucalypt smoke and wildfires: Temperature dependent emissions of biogenic volatile organic compounds . International Journal of Mass Spectrometry 279 : 126 133 . Marland G Fruit K Sedjo R . 2001 . Accounting for sequestered carbon: The question of permanence . Environmental Science and Policy 4 : 259 268 . McCarl BA Schneider UA . 2001 . Greenhouse gas mitigation in U.S . agriculture and forestry. Science 294 : 2481 2482 . Miles L Kapos V . 2008 . Reducing greenhouse gas emissions from deforestation and forest degradation: Global land-use implications . Science 320 : 1454 1455 . Milne RI Abbott RJ . 2000 . Origin and evolution of invasive naturalized material of Rhododendron ponticum L . in the British isles. Molecular Ecology 9 : 541 556 . Murray BC McCarl BA Lee HC . 2004 . Estimating leakage from forest carbon sequestration programs . Land Economics 80 : 109 124 . Murray BC Sohngen B Sommer AJ Depro B Jones K McCarl B Gillig D DeAngelo B K . 2005 . Greenhouse Gas Mitigation Potential in U.S. Forestry and Agriculture. Environmental Protection Agency. EPA 430-R-05-006 . Nelson CD Johnsen KH . 2008 . Genomic and physiological approaches to advancing forest tree improvement . Tree Physiology 28 : 1135 1143 . Odum EP . 1969 . The strategy of ecosystem development . Science 164 : 262 270 . Olander LP Gibbs HK Steininger M Swenson JJ Murray BC . 2008 . Reference scenarios for deforestation and forest degradation in support of REDD: A review of data and methods . Environmental Research Letters 3 : 025011 . Pekin BK Boer MM Macfarlane C Grierson PF . 2009 . Impacts of increased fire frequency and aridity on eucalypt forest structure, biomass and composition in southwest Australia . Forest Ecology and Management 258 : 2136 2142 . Pimentel D Zuniga R Morrison D . 2005 . Update on the environmental and economic costs associated with alien-invasive species in the United States . Ecological Economics 52 : 273 288 . Pieiro G Jobbgy EG Baker J Murray BC Jackson RB . 2009 . Set-asides can be better climate investment than corn ethanol . Ecological Applications 19 : 277 282 . Rejmnek M . 2000 . Invasive plants: Approaches and predictions . Austral Ecology 25 : 497 506 . Richardson DM Petit RJ . 2006 . Pines as invasive aliens: Outlook on transgenic pine plantations in the Southern Hemisphere . Pages 169–188 in Williams CG, ed. Landscapes, Genomics and Transgenic Conifers . Springer . Richardson DM Williams PA Hobbs RJ . 1994 . . Journal of Biogeography 21 : 511 527 . Rouget M Richardson DM Milton SJ Polakow D . 2004 . Predicting invasion dynamics of four alien Pinus species in a highly fragmented semi-arid shrubland in South Africa . Plant Ecology 152 : 79 92 . Ryan MG et al . 2010 . A Synthesis of the Science on Forests and Carbon for U.S . forests. Issues in Ecology 13 : 1 16 . Schenk HJ Jackson RB . 2002 . Rooting depths, lateral root spreads, and belowground/aboveground allometries of plants in water limited ecosystems . Journal of Ecology 90 : 480 494 . Scott DF Prinsloo FW . 2008 . 
Longer-term effects of pine and eucalypt plantations on streamflow . Water Resources Research 44: W00A08 . doi:10.1029/2007WR006781 Searchinger T Heimlich R Houghton RA Dong F Elobeid A Fabiosa J Tokgoz S Hayes D Yu TH . 2008 . Use of U.S . croplands for biofuels increases greenhouse gases through emissions from land-use change. Science 319 : 1238 1240 . Searchinger TD et al . 2009 . Fixing a critical climate accounting error . Science 326 : 527 528 . [SOCCR] The First State of Carbon Cycle Report . 2007 . The North American Carbon Budget and Implications for the Global Carbon Cycle . A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research. NOAA, National Climatic Data Center . Southgate D . 2009 . Population growth, increases in agricultural production and trends in food prices . Electronic Journal of Sustainable Development 1 : 29 35 . Southgate DD Graham D Tweeten D . 2007 . The World Food Economy . Wiley . Strauss SH Brunner AM Busov VB Ma C Meilan R . 2004 . Ten lessons from 15 years of transgenic Populus research . Forestry 77 : 455 465 . Sun B Sohngen B . 2009 . Set-asides for carbon sequestration: Implications for permanence and leakage . Climatic Change 96 : 409 419 . Thompson I Mackey B McNulty S Mosseler A . 2009 . Forest Resilience, Biodiversity, and Climate Change. Secretariat of the Convention on Biological Diversity, Montreal. Technical Series no. 43 . Trostle R . 2008 . Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices . US Department of Agriculture Economic Research Service. Report WRS-0801 . [USEPA] US Environmental Protection Agency . 2005 . Greenhouse gas mitigation potential in U.S. Forestry and Agriculture. EPA. EPA 430-R-05-006 . Van Frankenhuyzen K Beardmore T . 2004 . Current status and environmental impact of transgenic forest trees . 34 : 1163 1180 . Versfeld DB Le Maitre DC Chapman RA . 1998 . Alien Invading Plants and Water Resources in South Africa: A Preliminary Assessment . Water Research Commission. Report TT99/98 . Vertessy RA . 1999 . The impacts of forestry on stream?ows: A review . Pages 93–109 in Croke J, Lane P, eds. Forest Management for Water Quality and Quantity, Proceedings of the Second Forest Erosion Workshop, May 1999, Warburton, Australia. Report 99/6. Cooperative Research Centre for Catchment Hydrology, CSIRO Land and Water . Van der Werf GR Morton DC DeFries RS Olivier JGJ Kasibhatla PS Jackson RB Collatz GJ Randerson JT . 2009 . CO2 emissions from forest loss . Nature Geosciences 2 : 737 738 . Walker B Salt D . 2006 . Resilience Thinking: Sustaining Ecosystems and People in a Changing World . Island Press . Watrud LS Lee EH Fairbrother A Burdick C Reichman Jr Bollman M Storm M King G Van de Water PK . 2004 . Evidence for landscape-level, pollen-mediated gene flow from genetically modified creeping bentgrass with CP4 EPSPS as a marker . Proceedings of the National Academy of Sciences 101 : 14533 14538 . Wear DN Greis JG . 2002 . Southern forest resource assessment: Summary of findings . Journal of Forestry 7 : 6 14 . Wilkinson MJ Monson RK Trahan N Lee S Brown E Jackson RB Polley HW Fay PA Fall R . 2009 . Leaf isoprene emission rate as a function of atmospheric CO2 concentration . Global Change Biology 15 : 1189 1200 . Williams CG . 2008 . Aerobiology of Pinus taeda pollen clouds . 38 : 2177 2188 . Williams CG SL Oren R Katul GG . 2006 . Modeling seed dispersal distances: implications for transgenic Pinus taeda . 
Ecological Applications 16 : 117 124 . Williams MC Wardle GM . 2005 . The invasion of two native Eucalypt forests by Pinus radiata in the Blue Mountains, New South Wales, Australia . Biological Conservation 125 : 55 64 . Zapiola ML Campbell CK Butler MD Mallory-Smith CA . 2008 . Escape and establishment of transgenic glyphosate-resistant creeping bentgrass Agrostis stolonifera in Oregon, USA: A 4-year study . Journal of Applied Ecology 45 : 486 494 . ## Author notes Robert B. Jackson (jackson@duke.edu) and Justin S. Baker are with the Department of Biology, Nicholas School of the Environment and Center on Global Change, at Duke University in Durham, North Carolina.
2017-02-24 00:29:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3661794066429138, "perplexity": 9009.525141337874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00041-ip-10-171-10-108.ec2.internal.warc.gz"}
https://puzzling.stackexchange.com/questions/89765/transferring-9-pegs-on-a-9x9-grid/89798
# Transferring 9 pegs on a 9x9 grid You are given a 9x9 grid with a set of 9 pegs (red circles) arranged in a 3x3 pattern in the corner, as shown below: A peg can jump over another adjacent peg in any direction (horizontal, vertical or diagonal as shown in blue), provided that the destination cell is empty. A move consists of taking one peg and making one or more consecutive jumps, as shown below: Can you transfer all the 9 pegs to the opposite corner of the grid, arranged in the same 3x3 pattern? Bonus question: what is the smallest number of moves you can do it in? Good luck! • @Bass that's a good point. I don't think it would be possible to show optimality without a computer. However, I was hoping that people can still do this by hand and get sub-optimal answers. Would that still be ok for a puzzle? Perhaps I need to reword the question somehow? – Dmitry Kamenetsky Oct 3 '19 at 5:59 • Ok I've modified the problem. The primary objective is to complete the puzzle in any number of moves. The bonus question asks for the minimal number of moves. – Dmitry Kamenetsky Oct 3 '19 at 6:15 • Can't I just diagonally shift all pegs in 9×6= 54 moves. – Rishi Oct 3 '19 at 6:48 • @Rishi sorry I don't understand your solution. They need to jump, not shift. – Dmitry Kamenetsky Oct 3 '19 at 7:29 • @Rishi Pegs must always jump over other pegs. – Jaap Scherphuis Oct 3 '19 at 8:16 I was having a slow work day, so I fired up Blender and made this: In 13 hops, the block of 9 pegs can be moved two places down and to the right. By repeating the process two more times, the pegs can be moved to the bottom right corner. • ((((worship)))) – Conifers Oct 3 '19 at 16:13 • That is so beautiful! – Dmitry Kamenetsky Oct 3 '19 at 22:01 • If consecutive moves by the same piece count as a single move, you could probably optimize this some. (Looks great though.) – Darrel Hoffman Oct 3 '19 at 23:22 • Upvoted for producing a short film. ;) – Wyck Oct 4 '19 at 2:53 • Brilliant answer! Would you like a bounty award as a gift (after you earn the checkmark, I presume)? :) – Mr Pie Oct 5 '19 at 0:09 It's possible. Assume the pegs are in the upper left corner of a slightly enlarged chess board, which has indices $$1 - 9$$ and A - I. Now make the moves b8-d6, c7-e5, d6-f4, e5-g3, f4-h2, g3-i1 b9-d7, c9-c7, c8-e6, c7-e7, d7-f5, e7-e5, e6-g4, e5-c5, f5-h3, g5-g3, g4-i2, g3-i3 a7-c7, a9-a7, a8-c6, c7-c5, a7-c7, b7-d5, c5-e5, c7-c5, c6-e4, e5-e3, c5-e5, d5-f3, e3-g3, e5-e3, e4-g2, g3-g1, e3-g3, f3-h1 EDIT If I'm counting right, the wonderfully animated solution of @squeamish ossifrage has $$12 \times 3 = 36$$ moves, which is the same number of moves as my solution above. Inspired by the animated solution, I found that I can move the $$9$$ peg block two places down and to the right with just $$9$$ moves: b8-d6, c7-e5, b9-d7, c9-c7-e7-c5, a7-c7-e7, a9-a7-c7, a8-c6-e6, c8-c6, b7-d5 This reduces the total number of moves to $$9 \times 3 = 27$$ moves. I don't know if this is the minimum, but it's a start. EDIT 2 Made a computer program to look for a solution with fewer moves. It managed to improve my previous solution so it now only takes $$23$$ moves Here they are: a8-c6, b9-d7, b7-d5, c7-e5, a9-c5, c9-e7, a7-c7, b8-f4, c8-g4, c6-e6, e5-e3, c5-e5, d5-h3, g4-i2, e5-i1, e7-i3, c7-g3, d7-f5, e6-g2, g3-g1, e3-g3, f5-h1, f4-h2 I had to make some assumptions to get a solution within a reasonable time, so I'm not absolutely sure this is the minimum. I'd love to see the minimum if this is not it! • 27 is very good. 
It is not the minimum, however it is a great start. Hopefully others can extend your solution. – Dmitry Kamenetsky Oct 5 '19 at 13:01 If we allow moving a peg into a neighboring cell (without jumping) then the optimal solution requires 16 moves The solution is It is a symmetrical solution so only the first 8 moves are shown. The remaining moves are simply a repeat of the first 8 moves in reverse order (palindromic). This elegant solution was found by H.Ajisawa and T.Maruyama, and it was proven to be optimal by George I. Bell in 2009: https://arxiv.org/pdf/0803.1245.pdf (page 13). NOTE that this solution doesn't answer the original puzzle as we have allowed a move into the neighboring cell in step 3. The optimal solution to the original puzzle remains an open question, but we know it cannot be done in less than 16 moves. • The solution above doesn't follow the rules you set out. In the third frame, a peg makes a move which is not a jump. In your rule description you said "A move consists of taking one peg and making one or more consecutive jumps, where you defined a jump in the previous sentence. You reinforced this in a comment "They need to jump, not shift". – Jens Nov 5 '19 at 0:14 • Oh you are right! I didn't even notice this. Hmmm. At this point it would be unfair to modify the rules of the original puzzle, instead I will modify my answer and leave it unaccepted. – Dmitry Kamenetsky Nov 5 '19 at 0:17 • Now I wonder if we can somehow avoid the third ("illegal") move and replace it with alllowed moves such that we can still solve the original puzzle efficiently? – Dmitry Kamenetsky Nov 5 '19 at 0:24
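For anyone who wants to experiment with move counts by computer, a small move generator is straightforward to write. The following is a minimal, illustrative Python sketch and not the program mentioned in the answers above; the coordinate scheme, helper names, and the choice of demo peg are my own assumptions.

from itertools import product

SIZE = 9
DIRS = [(dr, dc) for dr, dc in product((-1, 0, 1), repeat=2) if (dr, dc) != (0, 0)]

def single_jumps(pegs, peg):
    # Yield landing squares reachable from `peg` by one jump over an adjacent peg.
    r, c = peg
    for dr, dc in DIRS:
        over = (r + dr, c + dc)          # square being jumped over
        land = (r + 2 * dr, c + 2 * dc)  # landing square
        if (over in pegs and land not in pegs
                and 0 <= land[0] < SIZE and 0 <= land[1] < SIZE):
            yield land

def moves_from(pegs, peg):
    # Yield every board state reachable by moving `peg` through one or more chained jumps.
    seen = set()
    stack = [(peg, pegs)]
    while stack:
        pos, state = stack.pop()
        for land in single_jumps(state, pos):
            new_state = (state - {pos}) | {land}
            if (land, new_state) not in seen:
                seen.add((land, new_state))
                yield new_state
                stack.append((land, new_state))

start = frozenset(product(range(3), range(3)))    # 3x3 block of pegs in one corner
print(sum(1 for _ in moves_from(start, (1, 1))))  # board states reachable by moving that one peg

Feeding these single-peg moves into a breadth-first or iterative-deepening search over whole board states is the natural way to attack the bonus question by computer, which is presumably how the searches mentioned above were carried out.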
2020-10-22 21:18:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5380697846412659, "perplexity": 1609.2798623959272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00051.warc.gz"}
https://socratic.org/algebra/properties-of-real-numbers/addition-of-rational-numbers
## Key Questions • To write fractions with a common denominator, you will most likely need to scale some numbers up! I will explain how. Let's try it with the fractions $\frac{2}{3}$ and $\frac{3}{12}$ 12 is larger than 3, so we will have to multiply the 3 by some number to equal 12. (We are really finding the Least Common Multiple of the two denominators!) To do this, you have to multiply the 3 by 4, because 3x4=12. But now the numerator doesn't match the denominator. When you scale the denominator up, you have to scale the numerator up too! So the 2 must be multiplied by 4 also. Now you have the following: $\frac{8}{12}$ and $\frac{3}{12}$ These fractions now have common denominators! Now they're all set for adding or subtracting fractions. Try another: $\frac{2}{6}$ and $\frac{3}{5}$: The least common multiple of 6 and 5 is 30. (the product of the denominators) Transform each fraction by multiplying by "1": $\frac{2}{6} \cdot \frac{5}{5}$ = $\frac{10}{30}$ and $\frac{3}{5} \cdot \frac{6}{6}$ = $\frac{18}{30}$ One last problem: $\frac{4}{9}$ and $\frac{7}{6}$ What is the least common multiple of 9 and 6? Could you use 54? Absolutely, but it is not the LEAST number that you could use. How about 18? YES! $\frac{4}{9} \cdot \frac{2}{2}$ = $\frac{8}{18}$ and $\frac{7}{6} \cdot \frac{3}{3}$ = $\frac{21}{18}$ Ready to go... Hope this helped! Explanation down below... #### Explanation: A mixed fraction is a fraction written with a whole and a fraction. Eg $2$$\frac{1}{2}$ An improper fraction is a fraction with a numerator that is larger than the denominator. Eg $\frac{22}{5}$ To write a mixed fraction into an improper one you have to take your whole number next to your fraction, multiply that number by your denominator and then take your original numerator and add it on to your answer. Finally put the number over the denominator. Example: $2$$\frac{1}{2}$ $2$x$2 = 4$ $4 + 1 = 5$ $\frac{5}{2}$ • I assume you know that if you multiply both numerator and denominator of a fraction by a same number, you get an equivalent fraction. Thus, for example, if you start from 2/3 and multiply both numerator and denominator by 3, you get 6/9, which is indeed equivalent to 2/3. Now, if you want to add two fraction, you first of all transform both of them as just shown, obtaining two equivalent fractions with the same denominator. At this point, you have a sum of two fraction of the form $\setminus \frac{a}{b} + \setminus \frac{c}{b}$, which is easily $\setminus \frac{a + c}{b}$. To do so, you look for the least common multiple of the two denominator. Let's say that we have to calculate $\setminus \frac{3}{5} + \setminus \frac{5}{8}$. The least common multiple of 5 and 8 is 40, so we have to transform $\setminus \frac{3}{5}$ into $\setminus \frac{24}{40}$ (multiplying numerator and denominator by 8), and then we transform $\setminus \frac{5}{8}$ into $\setminus \frac{25}{40}$ (multiplying numerator and denominator by 5). These are equivalent fraction, so we can be sure that $\setminus \frac{3}{5} + \setminus \frac{5}{8}$ equals $\setminus \frac{24}{40} + \setminus \frac{25}{40}$. The advantage is, of course, that the second one is much easier to compute, since one immediately gets that $\setminus \frac{24}{40} + \setminus \frac{25}{40} = \setminus \frac{49}{40}$ If something isn't clear, don't hesitate to ask:)
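If you like checking this kind of bookkeeping with a computer, the steps above translate directly into a few lines of code. This is only an illustrative Python sketch; the function names are made up for the example.

from math import gcd

def lcm(a, b):
    # Least common multiple of the two denominators.
    return a * b // gcd(a, b)

def add_fractions(n1, d1, n2, d2):
    # Scale both fractions to the least common denominator, then add the numerators.
    common = lcm(d1, d2)
    total = n1 * (common // d1) + n2 * (common // d2)
    return total, common              # left unreduced, matching the worked examples

def mixed_to_improper(whole, num, den):
    # e.g. 2 1/2 -> (2*2 + 1)/2 = 5/2
    return whole * den + num, den

print(add_fractions(2, 6, 3, 5))      # (28, 30): 10/30 + 18/30
print(add_fractions(4, 9, 7, 6))      # (29, 18): 8/18 + 21/18
print(mixed_to_improper(2, 1, 2))     # (5, 2)

The printed pairs match the worked examples above: 2/6 + 3/5 gives 28/30, 4/9 + 7/6 gives 29/18, and the mixed number 2 1/2 becomes 5/2.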
2019-02-18 22:15:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 35, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9253851771354675, "perplexity": 200.92675223031205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247488490.40/warc/CC-MAIN-20190218220415-20190219002415-00272.warc.gz"}
https://tex.stackexchange.com/questions/309552/having-trouble-compiling-cv-template
Having trouble compiling CV template I am using texmaker on ubuntu, and I was trying to create a cv. I downloaded this template from sharelatex. I tried compiling with, LaTeX, XeLaTex, and PDFLaTeX and all of them had the same error "Undefined control sequence." I linked the source below, is it a question of the compiler being used or is it something in the latex code? These are the errors that keep being repeated. ! Undefined control sequence. \cftdotfill #1->\def \@tempa {#1}\def \@tempb {\cftnodots }\ifx \@tempa \@te... l.69 ... MSc, PhD, or something else}{2009 - 2013} The control sequence at the end of the top line of your error message was never \def'ed. If you have misspelled it (e.g., \hobx'), type I' and the correct spelling (e.g., I\hbox'). Otherwise just continue, and I'll forget about whatever was undefined. ! Undefined control sequence. \cftdotfill #1->\def \@tempa {#1}\def \@tempb {\cftnodots }\ifx \@tempa \@te... l.69 ... MSc, PhD, or something else}{2009 - 2013} The control sequence at the end of the top line of your error message was never \def'ed. If you have misspelled it (e.g., \hobx'), type I' and the correct spelling (e.g., I\hbox'). Otherwise just continue, and I'll forget about whatever was undefined. ! Missing control sequence inserted. <inserted text> \inaccessible l.69 ... MSc, PhD, or something else}{2009 - 2013} Please don't say \def cs{...}', say \def\cs{...}'. I've inserted an inaccessible control sequence so that your definition will be completed without mixing me up too badly. You can recover graciously from this error, if you're careful; see exercise 27.2 in The TeXbook. ! Missing control sequence inserted. <inserted text> \inaccessible l.69 ... MSc, PhD, or something else}{2009 - 2013} Please don't say \def cs{...}', say \def\cs{...}'. I've inserted an inaccessible control sequence so that your definition will be completed without mixing me up too badly. You can recover graciously from this error, if you're careful; see exercise 27.2 in The TeXbook. ! Missing control sequence inserted. <inserted text> \inaccessible l.69 ... MSc, PhD, or something else}{2009 - 2013} https://www.sharelatex.com/templates/cv-or-resume/clean-cv • please always post the fill error message form the log in a {} code section. the full error message would say which command was undefined – David Carlisle May 14 '16 at 8:26 • @DavidCarlisle Added the error messages. – ultrainstinct May 14 '16 at 8:33 • Change line 55 to look like \begin{tabular*}{6.5in}{l@{\extracolsep{\fill}}r}. – Johannes_B May 14 '16 at 8:43 • Word of advice, that template is bad. I wouldn't trust it. – Johannes_B May 14 '16 at 8:44 • @Johannes_B it might be bad, but after looking at a bunch of CV templates. This one looked the best to me – ultrainstinct May 14 '16 at 8:45 You have a fragile command in an array preamble, the simplest way to fix that is to add \usepackage{array} compare \documentclass{article} \usepackage{tocloft} \begin{document} \begin{tabular}{l@{\cftdotfill{\cftdotsep}\extracolsep{\fill}}r} w \end{tabular} \end{document} and \documentclass{article} \usepackage{tocloft} \usepackage{array} \begin{document} \begin{tabular}{l@{\cftdotfill{\cftdotsep}\extracolsep{\fill}}r} w \end{tabular} \end{document}
2019-02-18 00:19:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6786157488822937, "perplexity": 10780.779150852135}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247483873.51/warc/CC-MAIN-20190217233327-20190218015327-00232.warc.gz"}
https://studysoup.com/tsg/1134431/chemistry-the-central-science-14-edition-chapter-23-problem-23-38
# Write names for the following coordination compounds (Problem 23.38)

## Solution for problem 23.38 Chapter 23

Chemistry: The Central Science | 14th Edition (ISBN: 9780134414232)

Problem 23.38 Write names for the following coordination compounds: (a) $$\left[\mathrm{Cd}(\mathrm{en}) \mathrm{Cl}_{2}\right]$$ (b) $$\mathrm{K}_{4}\left[\mathrm{Mn}(\mathrm{CN})_{6}\right]$$ (c) $$\left[\mathrm{Cr}\left(\mathrm{NH}_{3}\right)_{5}\left(\mathrm{CO}_{3}\right)\right] \mathrm{Cl}$$ (d) $$\left[\operatorname{Ir}\left(\mathrm{NH}_{3}\right)_{4}\left(\mathrm{H}_{2} \mathrm{O}\right)_{2}\right]\left(\mathrm{NO}_{3}\right)_{3}$$

Text Transcription: [Cd(en)Cl2] K4[Mn(CN)6] [Cr(NH3)5(CO3)]Cl [Ir(NH3)4(H2O)2](NO3)3

Step-by-Step Solution: Step 1 of 5) The permanganate ion strongly absorbs visible light, with a maximum absorption at 565 nm. Because violet is complementary to yellow, this strong absorption in the yellow portion of the visible spectrum is responsible for the violet color of salts and solutions of the ion. What is happening during this absorption of light? The MnO4- ion is a complex of Mn(VII). Because Mn(VII) has an [Ar]3d0 electron configuration, the absorption cannot be due to a d-d transition because there are no d electrons to excite! That does not mean, however, that the d orbitals are not involved in the transition. The excitation in the MnO4- ion is due to a charge-transfer transition, in which an electron on one oxygen ligand is excited into a vacant d orbital on the Mn ion (Figure 23.36). In essence, an electron is transferred from a ligand to the metal, so this transition is called a ligand-to-metal charge-transfer (LMCT) transition. An LMCT transition is also responsible for the color of the CrO4 2- ion, which contains the Cr(VI) ion with an [Ar]3d0 electron configuration.
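As a worked illustration of the charge bookkeeping behind names of this kind (this example is mine and is not part of the textbook solution shown above): in K4[Mn(CN)6], the four K+ counterions contribute a total charge of +4, so the complex anion must be [Mn(CN)6]4-. With six CN- ligands, the charge balance x + 6(-1) = -4 gives x = +2 for manganese, so the anion is hexacyanomanganate(II) and the compound is named potassium hexacyanomanganate(II).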
2022-06-29 09:07:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36530444025993347, "perplexity": 2620.321083494517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00185.warc.gz"}
http://mediall.rs/captains-courageous-kyzes/leibniz-integral-rule-proof-36537d
Leibniz’s rule allows us to take the time derivative of an integral over a domain that is itself changing in time. Suppose that $f(\vec{x},t)$ is the volumetric concentration of some unspecified property we will call “stuff”; for a concrete example, imagine that the “stuff” is air, and $f$ is then the mass of air molecules per unit volume, i.e., the density. In general, we might write such an integral as $\int_V f(\vec{x},t)\,dV$, and note that this integral depends only on time. That quantity can change in time in two ways. First, the integrand itself can change. If this were the only source of change, we could write
$$\frac{d}{dt}\int_V f(\vec{x},t)\,dV=\int_V\frac{\partial f}{\partial t}\,dV.$$
Second, the domain can move or deform. Now consider a closed surface that can change arbitrarily in time (not a material volume, in general). Quantifying this second contribution requires a bit more thought. In order to illustrate why this is true, think about an inflating sphere: as air is pumped into the balloon, the volume and the radius increase. The expansion velocity, $\vec{u}_A\cdot\hat{n}$, is the component of the boundary velocity $\vec{u}_A$ that is perpendicular to the boundary and directed outward. The amount of “stuff” contained in the small volume swept out by the boundary in a time $dt$ is $f\,dV$, or $f\,\vec{u}_A\cdot\hat{n}\,dt\,dA$. If we now integrate this quantity over the whole surface, we get the amount of “stuff” engulfed (or ejected, if $\vec{u}_A\cdot\hat{n}<0$) in time $dt$: $\int_A f\,\vec{u}_A\cdot\hat{n}\,dt\,dA$. The second and third terms on the right-hand side of the resulting formula are the contributions due to the motion of the boundaries.

The most important case for fluid mechanics is that in which $A(t)$ is a material surface $A_m(t)$, always composed of the same fluid particles, and $V = V_m(t)$ is therefore a material volume (or fluid parcel). In this case $\vec{u}_A$ is just $\vec{u}(\vec{x},t)$, the velocity of the motion, and the time derivative is $D/Dt$:
$$\frac{D}{Dt}\int_{V_m(t)} f(\vec{x},t)\,dV=\int_{V_m(t)}\frac{\partial f}{\partial t}\,dV+\int_{A_m(t)} f\,\vec{u}\cdot\hat{n}\,dA.$$
The time derivative is written $D/Dt$ because it is evaluated in a reference frame following the motion; we choose this symbol to remind ourselves that the derivative is measured by an observer moving with the flow. Notice that, in the final term, the integrand is the dot product of the vector $f\vec{u}$ and the outward unit normal $\hat{n}$. This derivation does not consider the possibility of the surface deforming as it moves.

Suppose instead that $f$ is a function of only one spatial coordinate and time: $f = f(x,t)$. The integral is then an ordinary integral from, say, $x = a$ to $x = b$, but the boundaries $a$ and $b$ can vary in time. This is the version of Leibniz’ rule commonly found in calculus textbooks: the Leibniz integral rule gives a formula for differentiation of a definite integral whose limits are functions of the differential variable,
$$\frac{\partial}{\partial z}\int_{a(z)}^{b(z)} f(x,z)\,dx=\int_{a(z)}^{b(z)}\frac{\partial f}{\partial z}\,dx+f(b(z),z)\,\frac{\partial b}{\partial z}-f(a(z),z)\,\frac{\partial a}{\partial z}.$$
It is sometimes known as differentiation under the integral sign. The method of differentiation under the integral sign, due to Leibniz in 1697 [4], concerns integrals depending on a parameter, such as $\int_0^1 x^2 e^{tx}\,dx$; here $t$ is the extra parameter. Integration under the integral sign is the use of the related identity
$$\int_a^b dx\int_{\alpha_0}^{\alpha} f(x,\alpha)\,d\alpha=\int_{\alpha_0}^{\alpha} d\alpha\int_a^b f(x,\alpha)\,dx \qquad (1)$$
to compute an integral. For example, consider
$$\int_0^1 x^{\alpha}\,dx=\frac{1}{\alpha+1} \qquad (2)$$
for $\alpha>-1$. Multiplying by $d\alpha$ and integrating between $a$ and $b$ gives $\int_a^b d\alpha\int_0^1 x^{\alpha}\,dx = \ldots$
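Carrying the example through (the computation below is the standard completion of (2), spelled out here for concreteness): interchanging the order of integration as in (1),
$$\int_a^b\frac{d\alpha}{\alpha+1}=\int_a^b d\alpha\int_0^1 x^{\alpha}\,dx=\int_0^1 dx\int_a^b x^{\alpha}\,d\alpha=\int_0^1\frac{x^b-x^a}{\ln x}\,dx,$$
so that
$$\int_0^1\frac{x^b-x^a}{\ln x}\,dx=\ln\frac{b+1}{a+1}.$$
Similarly, differentiating (2) under the integral sign with respect to the parameter $\alpha$ gives
$$\int_0^1 x^{\alpha}\ln x\,dx=\frac{d}{d\alpha}\,\frac{1}{\alpha+1}=-\frac{1}{(\alpha+1)^2}.$$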
The vector case (KC Border, Differentiating an Integral: Leibniz’ Rule). The following is a reasonably useful condition for differentiating a Riemann integral. Consider $f:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$, where a typical element of $\mathbb{R}^n\times\mathbb{R}^m$ is denoted $(x,z)$ with $x\in\mathbb{R}^n$ and $z\in\mathbb{R}^m$. For the continuity of the parametric integral $g(x)=\int_\Omega f(x,\omega)\,d\mu(\omega)$, let $x_n\to x$. Since $f$ is continuous in $x$, $f(x_n,\omega)\to f(x,\omega)$ for each $\omega$. Eventually $x_n$ belongs to $U_x$, so for large enough $n$, $f(x_n,\omega)\leqslant h_x(\omega)$. Then by the Dominated Convergence Theorem, $g(x_n)=\int_\Omega f(x_n,\omega)\,d\mu(\omega)\to\int_\Omega f(x,\omega)\,d\mu(\omega)=g(x)$. This is the measure-theoretic version, which is more general than the usual version stated in calculus books. The rule can also be used to evaluate certain unusual definite integrals: given a simpler integral, a more complicated integral is evaluated through differentiation. Such an example is seen in 2nd-year university mathematics. A generalized improper integral definition for infinite limits (Michael A. Blischke) extends the range of valid integrals to include integrals which were previously considered to not be integrable: for a function $f(x)$, the integral with respect to a termination function $z_1(x)$ gives the same value as the integral with respect to a combined termination function having $z_1(x)$ as one of its components, with an arbitrary termination function $z_2(x)$ as its other component.

Anyone familiar with calculus will be acquainted with the “Leibniz law”, i.e., the product rule of differential calculus. In calculus, the general Leibniz rule, named after Gottfried Wilhelm Leibniz, generalizes the product rule (which is also known as Leibniz’s rule). If $r = 2$, the generalized Leibniz rule reduces to the plain Leibniz rule; this is the starting point for the induction, and to complete the induction one assumes that the generalized Leibniz rule holds for a certain value of $r$ and shows that it then holds for $r + 1$. For two functions, the derivative of $n$th order of the product can be expressed with the help of a formula: suppose that the functions $u$ and $v$ have derivatives of $\left(n+1\right)$th order; then
$$(uv)^{(n)}=\sum_{k=0}^{n}\binom{n}{k}\,u^{(n-k)}\,v^{(k)},$$
which is called the Leibniz formula and can be proved by induction. The proof of the Leibniz theorem on successive derivatives of a product of two functions is on the lines of the proof of the binomial theorem for positive integral index, using the principle of mathematical induction and Pascal’s identity for the combination symbols in the inductive step, just as in the case of the binomial theorem.

The development of mathematics over the course of the last four millennia shows a steady though sometimes slow advance, with one mathematician’s ideas greatly stimulating those of another. The fundamental theorem of calculus is a theorem that links the concept of differentiating a function with the concept of integrating a function; before this theorem, an integral was viewed simply as the area under the curve, and the functions that could have a given function as a derivative are known as antiderivatives (or primitives) of that function. Newton actually developed the concept of calculus during the middle of the 1660s: in 1664, ’65, ’66, he asserts that he invented the basic ideas of calculus, and in fact, in 1669, he wrote a paper on it but wouldn’t publish it. In 1671, he wrote another paper on calculus and didn’t publish it; another in 1676 and didn’t publish it. Gottfried Wilhelm Leibniz (1646-1716) was a German philosopher and mathematician who invented calculus independently of Isaac Newton; the notation “$d/dx$” that we use today comes from Leibniz’s version, and it is said that Leibniz would often spend days just trying to find the right notation for a concept. In the first publication of his integral calculus (Leibniz 1686), Leibniz gave an analytic derivation of Barrow’s geometrical proof. In an attempt to find the special case for a quadrant of a circle with a radius equal to one, Leibniz applied the rule of tangents to yield $x = z^2/(1+z^2)$; he then used the binomial expansion, integrated and evaluated each term separately, added the triangular area left unaccounted for, and the result was a value of $\pi/4$ (cf. Zachary Brumbaugh, The Integration Theory of Gottfried Wilhelm Leibniz, History of Mathematics, Rutgers, Spring 2000). This hard-won result became almost a triviality with the discovery of the fundamental theorem of calculus a few decades later.
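As a quick numerical sanity check of differentiation under the integral sign on the parametric integral $\int_0^1 x^2 e^{tx}\,dx$ mentioned above (a minimal, self-contained sketch; the `simpson` helper, the quadrature resolution and the finite-difference step are ad hoc choices made only for this illustration):

```python
# Compare d/dt of F(t) = ∫_0^1 x^2 e^(t x) dx, computed by a central finite
# difference, with ∫_0^1 x^3 e^(t x) dx, the value predicted by differentiating
# under the integral sign (the limits are fixed, so no boundary terms appear).
from math import exp

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def F(t):
    return simpson(lambda x: x**2 * exp(t * x), 0.0, 1.0)

def dF_leibniz(t):
    return simpson(lambda x: x**3 * exp(t * x), 0.0, 1.0)

t0, h = 0.7, 1e-5
dF_numeric = (F(t0 + h) - F(t0 - h)) / (2 * h)   # central finite difference
print(dF_numeric, dF_leibniz(t0))                # the two values agree to several decimal places
```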
2021-04-10 14:33:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437722563743591, "perplexity": 738.7586433565773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00583.warc.gz"}
https://mathoverflow.net/questions/198961/completed-and-uncompleted-operations-for-morava-e-theory
# Completed and uncompleted operations for Morava $E$-theory

Let $E = E_n$ be the $n$-th Morava $E$-theory with coefficient ring $$E_* = \mathbb{W}(\mathbb{F}_{p^n})[\![u_1,\ldots,u_{n-1}]\!][u^{\pm 1}].$$ It is usual to consider the completed co-operations $$E^\vee_* E := \pi_*L_{K(n)}(E \wedge E)$$ rather than the 'ordinary' co-operations $E_*E$. The completed co-operations are shown by Hovey (although this was a folklore result) to be isomorphic to $\operatorname{Hom}^c(\mathbb{G}_n,E_*)$, the ring of continuous functions from the $n$-th Morava stabilizer group to $E_*$ under the $\mathfrak{m} = (p,u_1,\ldots,u_{n-1})$-adic topology.

Let $L_0$ be the $L$-completion functor (for example see Appendix A of Hovey-Strickland). It is known that $E^\vee_*E = L_0(E_*E) = (E_*E)^\wedge_{\mathfrak{m}}$. There is a map $\phi:E_*E \to E^\vee_*E$ which can be identified with the usual $\mathfrak{m}$-adic completion map.

Is the map $\phi:E_*E \to E^\vee_* E$ injective?

Standard commutative algebra tells us that the kernel of this map is isomorphic to $\bigcap_{n=0}^\infty \mathfrak{m}^n E_*E$, although it seems difficult to approach this problem in this way.

Here is a closely related question. Let $E(n)$ be the Johnson-Wilson cohomology theory with coefficient ring $$E(n)_* = \mathbb{Z}_{(p)}[v_1,\ldots,v_{n-1},v_n^{\pm 1}].$$ Now there is an isomorphism $L_0(E(n)_*E(n)) = E(n)_*^\vee E(n) \simeq \operatorname{Hom}^c(\mathbb{G}_n,\widehat{E(n)}_*)$, where $$\widehat{E(n)}_* \simeq \mathbb{Z}_{(p)}[v_1,\ldots,v_{n-1},v_n^{\pm 1}]^\wedge_I$$ for $I = (p,v_1,\ldots,v_{n-1})$. In this case by work of Johnson it is known that there is an injection of $E(n)_*E(n)$ into $\operatorname{Hom}^c(\mathbb{G}_n \times\mathbb{G}_n,A)$ where $A$ is the ring of integers in an unramified degree $n$ extension of the $p$-adic numbers. It is known that the latter is $L$-complete, and since $E(n)_*E(n) \to L_0(E(n)_*E(n))$ is initial amongst maps with $L$-complete target, in this case it follows that $E(n)_*E(n) \to E(n)^\vee_*E(n)$ is injective.

For Morava $E$-theory itself, however, the answer turns out to be negative: the completion map $\phi$ is not injective. To see this, put $W=\mathbb{W}(\mathbb{F}_{p^n})$, which is a free module of finite rank over $\mathbb{Z}_p$. It is standard that $\mathbb{Z}_p\otimes\mathbb{Z}_p$ contains a rational vector space of uncountable dimension, so the same is true of $W\otimes W$. (Here tensor products are implicitly taken over $\mathbb{Z}$ or $\mathbb{Z}_{(p)}$; you get the same answer either way.) This rational vector space can only map trivially to the group $E^\vee_*E$. Thus, it will suffice to check that $E_*E$ contains a copy of $W\otimes W$. The left and right unit maps provide a map $W\otimes W\to E_0E$. In the opposite direction, we can define a map $\psi:E_0\to W$ sending $u_i$ to $0$ for $0<i<n$. The resulting formal group law over $W$ has logarithm $\sum_kx^{p^{nk}}/p^k$, so the coefficients of the FGL lie in $\mathbb{Q}\cap W=\mathbb{Z}_{(p)}$. Because of this, the two resulting FGLs over $W\otimes W$ are the same. Thus, the standard universal property of $BP_*BP$ gives a ring map $$E_*E = E_*\otimes_{BP_*}BP_*BP\otimes_{BP_*}E_* \to W\otimes W,$$ which is left inverse to our map in the opposite direction.
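Schematically, writing $V$ for the rational vector space of uncountable dimension inside $W\otimes W$, the argument chains together as
$$V\;\subseteq\;W\otimes W\;\hookrightarrow\;E_0E\;\xrightarrow{\ \phi\ }\;E^\vee_0E,$$
with $V$ mapping to zero in the $L$-complete module $E^\vee_0E$; hence $\ker\phi$ contains a copy of $V$ and, in particular, $\phi$ is not injective.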
2021-04-19 03:42:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9709023833274841, "perplexity": 95.3983978859818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00239.warc.gz"}
https://statkat.com/stattest.php?t=12&t2=34&t3=8&t4=13
# Two way ANOVA - overview

This page offers structured overviews of one or more selected methods. Add additional methods for comparisons by clicking on the dropdown button in the right-hand column. To practice with a specific method, click the button at the bottom row of the table.

Methods compared: Two way ANOVA | Sign test | Two sample $z$ test | Regression (OLS)

Independent/grouping variable(s)
• Two way ANOVA: Two categorical, the first with $I$ independent groups and the second with $J$ independent groups ($I \geqslant 2$, $J \geqslant 2$)
• Sign test: 2 paired groups
• Two sample $z$ test: One categorical with 2 independent groups
• Regression (OLS): One or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables

Dependent variable
• Two way ANOVA: One quantitative of interval or ratio level
• Sign test: One of ordinal level
• Two sample $z$ test: One quantitative of interval or ratio level
• Regression (OLS): One quantitative of interval or ratio level

Null hypothesis
• Two way ANOVA: ANOVA $F$ tests:
  • H0 for main and interaction effects together (model): no main effects and interaction effect
  • H0 for independent variable A: no main effect for A
  • H0 for independent variable B: no main effect for B
  • H0 for the interaction term: no interaction effect between A and B
  Like in one way ANOVA, we can also perform $t$ tests for specific contrasts and multiple comparisons. This is more advanced stuff.
• Sign test:
  • H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
  If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H0: the population median of the difference scores is equal to zero
  A difference score is the difference between the first score of a pair and the second score of a pair.
• Two sample $z$ test: H0: $\mu_1 = \mu_2$
  Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
• Regression (OLS):
  $F$ test for the complete regression model:
  • H0: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$ or equivalently
  • H0: the variance explained by all the independent variables together (the complete model) is 0 in the population, i.e. $\rho^2 = 0$
  $t$ test for individual regression coefficient $\beta_k$:
  • H0: $\beta_k = 0$ in the regression equation $\mu_y = \beta_0 + \beta_1 \times x_1 + \beta_2 \times x_2 + \ldots + \beta_K \times x_K$. Here $x_i$ represents independent variable $i$, $\beta_i$ is the regression weight for independent variable $x_i$, and $\mu_y$ represents the population mean of the dependent variable $y$ given the scores on the independent variables.
Alternative hypothesis
• Two way ANOVA: ANOVA $F$ tests:
  • H1 for main and interaction effects together (model): there is a main effect for A, and/or for B, and/or an interaction effect
  • H1 for independent variable A: there is a main effect for A
  • H1 for independent variable B: there is a main effect for B
  • H1 for the interaction term: there is an interaction effect between A and B
• Sign test:
  • H1 two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
  • H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
  • H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
  If the dependent variable is measured on a continuous scale, this can also be formulated as:
  • H1 two sided: the population median of the difference scores is different from zero
  • H1 right sided: the population median of the difference scores is larger than zero
  • H1 left sided: the population median of the difference scores is smaller than zero
• Two sample $z$ test: H1 two sided: $\mu_1 \neq \mu_2$; H1 right sided: $\mu_1 > \mu_2$; H1 left sided: $\mu_1 < \mu_2$
• Regression (OLS):
  $F$ test for the complete regression model:
  • H1: not all population regression coefficients are 0 or equivalently
  • H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$
  $t$ test for individual regression coefficient $\beta_k$:
  • H1 two sided: $\beta_k \neq 0$
  • H1 right sided: $\beta_k > 0$
  • H1 left sided: $\beta_k < 0$

Assumptions
• Two way ANOVA:
  • Within each of the $I \times J$ populations, the scores on the dependent variable are normally distributed
  • The standard deviation of the scores on the dependent variable is the same in each of the $I \times J$ populations
  • For each of the $I \times J$ groups, the sample is an independent and simple random sample from the population defined by that group. That is, within and between groups, observations are independent of one another
  • Equal sample sizes for each group make the interpretation of the ANOVA output easier (unequal sample sizes result in overlap in the sum of squares; this is advanced stuff)
• Sign test:
  • Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
• Two sample $z$ test:
  • Within each population, the scores on the dependent variable are normally distributed
  • Population standard deviations $\sigma_1$ and $\sigma_2$ are known
  • Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
• Regression (OLS):
  • In the population, the residuals are normally distributed at each combination of values of the independent variables
  • In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
  • In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear.
    If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
  • The residuals are independent of one another
  • Variables are measured without error
  Also pay attention to:
  • Multicollinearity
  • Outliers

Test statistic
• Two way ANOVA:
  For main and interaction effects together (model):
  • $F = \dfrac{\mbox{mean square model}}{\mbox{mean square error}}$
  For independent variable A:
  • $F = \dfrac{\mbox{mean square A}}{\mbox{mean square error}}$
  For independent variable B:
  • $F = \dfrac{\mbox{mean square B}}{\mbox{mean square error}}$
  For the interaction term:
  • $F = \dfrac{\mbox{mean square interaction}}{\mbox{mean square error}}$
  Note: mean square error is also known as mean square residual or mean square within.
• Sign test: $W =$ number of difference scores that is larger than 0
• Two sample $z$ test: $z = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}}$
  Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $\sigma^2_1$ is the population variance in population 1, $\sigma^2_2$ is the population variance in population 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
  The denominator $\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}$ is the standard deviation of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $z$ value indicates how many of these standard deviations $\bar{y}_1 - \bar{y}_2$ is removed from 0.
  Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
• Regression (OLS):
  $F$ test for the complete regression model:
  • \begin{aligned}[t] F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\ &= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square model}}{\mbox{mean square error}} \end{aligned}
    where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables.
  $t$ test for individual $\beta_k$:
  • $t = \dfrac{b_k}{SE_{b_k}}$
  • If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$ with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ is more complicated.
  Note 1: mean square model is also known as mean square regression, and mean square error is also known as mean square residual.
  Note 2: if there is only one independent variable in the model ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1.$

Pooled standard deviation / standard deviation of the residuals
• Two way ANOVA (pooled standard deviation):
  \begin{aligned} s_p &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - (I \times J)}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}
• Sign test: n.a.
• Two sample $z$ test: n.a.
• Regression (OLS) (sample standard deviation of the residuals $s$):
  \begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}

Sampling distribution of the test statistic if H0 were true
• Two way ANOVA (sampling distribution of $F$):
  For main and interaction effects together (model):
  • $F$ distribution with $(I - 1) + (J - 1) + (I - 1) \times (J - 1)$ (df model, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
  For independent variable A:
  • $F$ distribution with $I - 1$ (df A, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
  For independent variable B:
  • $F$ distribution with $J - 1$ (df B, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
  For the interaction term:
  • $F$ distribution with $(I - 1) \times (J - 1)$ (df interaction, numerator) and $N - (I \times J)$ (df error, denominator) degrees of freedom
  Here $N$ is the total sample size.
• Sign test (sampling distribution of $W$): The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$. If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $nP = n \times 0.5$ and standard deviation $\sqrt{nP(1-P)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.
• Two sample $z$ test (sampling distribution of $z$): Standard normal distribution
• Regression (OLS) (sampling distribution of $F$ and of $t$):
  Sampling distribution of $F$:
  • $F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
  Sampling distribution of $t$:
  • $t$ distribution with $N - K - 1$ (df error) degrees of freedom

Significant?
• Two way ANOVA:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
• Sign test:
  If $n$ is small, the table for the binomial distribution should be used:
  Two sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
  Right sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
  Left sided:
  • Check if $W$ observed in sample is in the rejection region or
  • Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\alpha$
  If $n$ is large, the table for standard normal probabilities can be used:
  Two sided: Right sided: Left sided:
• Two sample $z$ test:
  Two sided: Right sided: Left sided:
• Regression (OLS):
  $F$ test:
  • Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
  • Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$
  $t$ Test two sided: $t$ Test right sided: $t$ Test left sided:

$C\%$ confidence interval
• Two way ANOVA: n.a.
• Sign test: n.a.
• Two sample $z$ test ($C\%$ confidence interval for $\mu_1 - \mu_2$):
  $(\bar{y}_1 - \bar{y}_2) \pm z^* \times \sqrt{\dfrac{\sigma^2_1}{n_1} + \dfrac{\sigma^2_2}{n_2}}$
  where the critical value $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^* = 1.96$ for a 95% confidence interval). The confidence interval for $\mu_1 - \mu_2$ can also be used as significance test.
• Regression (OLS) ($C\%$ confidence interval for $\beta_k$ and for $\mu_y$, $C\%$ prediction interval for $y_{new}$):
  Confidence interval for $\beta_k$:
  • $b_k \pm t^* \times SE_{b_k}$
  • If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
  Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
  • $\hat{y} \pm t^* \times SE_{\hat{y}}$
  • If only one independent variable: $SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
  Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
  • $\hat{y} \pm t^* \times SE_{y_{new}}$
  • If only one independent variable: $SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
  In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20).

Effect size
• Two way ANOVA:
  • Proportion variance explained $R^2$: Proportion variance of the dependent variable $y$ explained by the independent variables and the interaction effect together:
    $$\begin{align} R^2 &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}} \end{align}$$
    $R^2$ is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
  • Proportion variance explained $\eta^2$: Proportion variance of the dependent variable $y$ explained by an independent variable or interaction effect:
    $$\begin{align} \eta^2_A &= \dfrac{\mbox{sum of squares A}}{\mbox{sum of squares total}}\\ \\ \eta^2_B &= \dfrac{\mbox{sum of squares B}}{\mbox{sum of squares total}}\\ \\ \eta^2_{int} &= \dfrac{\mbox{sum of squares int}}{\mbox{sum of squares total}} \end{align}$$
    $\eta^2$ is the proportion variance explained in the sample.
    It is a positively biased estimate of the proportion variance explained in the population.
  • Proportion variance explained $\omega^2$: Corrects for the positive bias in $\eta^2$ and is equal to:
    $$\begin{align} \omega^2_A &= \dfrac{\mbox{sum of squares A} - \mbox{degrees of freedom A} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_B &= \dfrac{\mbox{sum of squares B} - \mbox{degrees of freedom B} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \\ \omega^2_{int} &= \dfrac{\mbox{sum of squares int} - \mbox{degrees of freedom int} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}\\ \end{align}$$
    $\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$. Only for balanced designs (equal sample sizes).
  • Proportion variance explained $\eta^2_{partial}$:
    $$\begin{align} \eta^2_{partial\,A} &= \frac{\mbox{sum of squares A}}{\mbox{sum of squares A} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,B} &= \frac{\mbox{sum of squares B}}{\mbox{sum of squares B} + \mbox{sum of squares error}}\\ \\ \eta^2_{partial\,int} &= \frac{\mbox{sum of squares int}}{\mbox{sum of squares int} + \mbox{sum of squares error}} \end{align}$$
• Sign test: n.a.
• Two sample $z$ test: n.a.
• Regression (OLS):
  Complete model:
  • Proportion variance explained $R^2$: Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
    $$\begin{align} R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\ &= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\ &= r(y, \hat{y})^2 \end{align}$$
    $R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.
  • Wherry's $R^2$ / shrunken $R^2$: Corrects for the positive bias in $R^2$ and is equal to $$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$ $R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$.
  • Stein's $R^2$: Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to $$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
  Per independent variable:
  • Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
  • Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
  • Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$

Visual representation
• Two way ANOVA: n.a.
• Sign test: n.a.
• Two sample $z$ test: [figure on the original page]
• Regression (OLS): Regression equations with: [figure on the original page]

ANOVA table
• Two way ANOVA: [table shown as a figure on the original page]
• Sign test: n.a.
• Two sample $z$ test: n.a.
• Regression (OLS): [table shown as a figure on the original page]

Equivalent to
• Two way ANOVA: OLS regression with two categorical independent variables and the interaction term, transformed into $(I - 1)$ + $(J - 1)$ + $(I - 1) \times (J - 1)$ code variables.
• Sign test: Two sided sign test is equivalent to …
• Two sample $z$ test: n.a.
• Regression (OLS): n.a.

Example context
• Two way ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class? And is the average mental health score different between men and women? And is there an interaction effect between economic class and gender?
• Sign test: Do people tend to score higher on mental health after a mindfulness course?
• Two sample $z$ test: Is the average mental health score different between men and women? Assume that in the population, the standard deviation of the mental health scores is $\sigma_1 = 2$ amongst men and $\sigma_2 = 2.5$ amongst women.
• Regression (OLS): Can mental health be predicted from physical health, economic class, and gender?

SPSS
• Two way ANOVA: Analyze > General Linear Model > Univariate...
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factor(s)
• Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
  • Put the two paired variables in the boxes below Variable 1 and Variable 2
  • Under Test Type, select the Sign test
• Two sample $z$ test: n.a.
• Regression (OLS): Analyze > Regression > Linear...
  • Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)

Jamovi
• Two way ANOVA: ANOVA > ANOVA
  • Put your dependent (quantitative) variable in the box below Dependent Variable and your two independent (grouping) variables in the box below Fixed Factors
• Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman
  • Put the two paired variables in the box below Measures
• Two sample $z$ test: n.a.
• Regression (OLS): Regression > Linear Regression
  • Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
  • If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
  • Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'

Practice questions
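To make the simplest of the formulas above concrete, here is a minimal sketch in Python (standard library only); the function names and data values are made up for the example and only serve to illustrate the two sample $z$ test with known $\sigma_1$, $\sigma_2$ and the exact sign test:

```python
# Two sample z test (population SDs known) and sign test for paired scores,
# following the formulas given in the overview above.
from math import sqrt, erf, comb

def z_test_two_sample(y1, y2, sigma1, sigma2):
    """z = (mean(y1) - mean(y2)) / sqrt(sigma1^2/n1 + sigma2^2/n2)."""
    n1, n2 = len(y1), len(y2)
    m1, m2 = sum(y1) / n1, sum(y2) / n2
    se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)         # SD of the sampling distribution of m1 - m2
    z = (m1 - m2) / se
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))       # standard normal CDF
    p_two_sided = 2 * (1 - phi(abs(z)))
    ci95 = (m1 - m2 - 1.96 * se, m1 - m2 + 1.96 * se)  # z* = 1.96 for a 95% CI
    return z, p_two_sided, ci95

def sign_test(first, second):
    """W = number of pairs where the first score exceeds the second; exact Binomial(n, 0.5) p value."""
    diffs = [a - b for a, b in zip(first, second) if a != b]   # ties are dropped
    n, w = len(diffs), sum(d > 0 for d in diffs)
    # two sided exact p value: double the smaller tail of Binomial(n, 0.5), capped at 1
    tail = sum(comb(n, k) for k in range(0, min(w, n - w) + 1)) * 0.5**n
    return w, min(1.0, 2 * tail)

# hypothetical mental health scores for men and women, with sigma_1 = 2 and sigma_2 = 2.5
men   = [16, 18, 15, 17, 19, 14, 16, 18]
women = [17, 19, 18, 20, 16, 18, 19, 21]
print(z_test_two_sample(men, women, sigma1=2.0, sigma2=2.5))

# hypothetical paired scores before and after a mindfulness course
before = [12, 14, 11, 15, 13, 12, 16, 14]
after  = [13, 15, 13, 15, 14, 11, 17, 16]
print(sign_test(after, before))   # W counts pairs where "after" exceeds "before"
```

The regression quantities defined above ($F$, $t$, $R^2$, Wherry's $R^2$, and the confidence and prediction intervals) can be computed in the same spirit from the fitted values $\hat{y}_j$ once the regression coefficients have been estimated.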
2022-08-13 03:23:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9904561042785645, "perplexity": 2103.5372341687817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00680.warc.gz"}
https://zbmath.org/authors/?q=ai%3Ajabin.pierre-emmanuel
## Jabin, Pierre-Emmanuel Compute Distance To: Author ID: jabin.pierre-emmanuel Published as: Jabin, Pierre-Emmanuel; Jabin, P.-E.; Jabin, Pierre Emmanuel; Jabin, Pierre-Emanuel; Jabin, P. E. more...less External Links: MGP Documents Indexed: 80 Publications since 2000 1 Contribution as Editor Co-Authors: 59 Co-Authors with 70 Joint Publications 1,666 Co-Co-Authors all top 5 ### Co-Authors 10 single-authored 9 Perthame, Benoît 8 Bresch, Didier 6 Champagnat, Nicolas 5 Berlyand, Leonid V. 5 Wang, Zhenfu 3 De Angelis, Elena 3 Liu, Hailiang 3 Potomkin, Mykhailo 3 Raoul, Gaël 3 Soler, Juan S. 2 Ben Belgacem, Fethi 2 Cai, Wenli 2 Calvo, Juan G. 2 Hauray, Maxime 2 Junca, Stéphane 2 Lin, Hsinyi 2 Mischler, Stéphane 2 Otto, Felix 2 Vega, Luis 1 Aurelle, D. 1 Baranger, Céline 1 Barré, Julien 1 Bossy, Mireille 1 Boudin, Laurent 1 Bourgault, Yves 1 Brazzoli, Ilaria 1 Broizat, Damien 1 Carlen, Eric Anders 1 Castelli, Pierre 1 Creese, Robert 1 Czaja, Wojciech 1 Derbel, Lobna 1 Desvillettes, Laurent 1 Diekmann, Odo 1 Fagan, William F. 1 Fellner, Klemens 1 Fontbona, Joaquin 1 Frid, Hermano 1 Gallagher, Isabelle 1 Gasser, Ingenuin 1 Goudon, Thierry 1 Jabir, Jean-François 1 Klingenberg, Christian 1 Lemesle, Valérie 1 Masmoudi, Nader 1 Méléard, Sylvie 1 Mellet, Antoine 1 Miroshnikov, Alexey 1 Molina-Fructuoso, Martin 1 Motsch, Sébastien 1 Ndjakou Njeunje, Franck Olivier 1 Niethammer, Barbara 1 Ratajczyk, Elzbieta 1 Rey, Thomas 1 Safsten, C. Alex 1 Tadmor, Eitan 1 Tzavaras, Athanasios E. 1 Vasseur, Alexis F. 1 Young, Robin L. all top 5 ### Serials 7 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 5 Journal of Differential Equations 5 Comptes Rendus. Mathématique. Académie des Sciences, Paris 4 Communications in Partial Differential Equations 3 Archive for Rational Mechanics and Analysis 3 Journal of Statistical Physics 3 Journal of Functional Analysis 3 Journal de Mathématiques Pures et Appliquées. Neuvième Série 3 Séminaire Laurent Schwartz. EDP et Applications 2 Mathematical Methods in the Applied Sciences 2 Nonlinearity 2 Séminaire Équations aux Dérivées Partielles 2 Multiscale Modeling & Simulation 2 Kinetic and Related Models 1 Communications in Mathematical Physics 1 Communications on Pure and Applied Mathematics 1 Journal of Mathematical Biology 1 The Annals of Probability 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Indiana University Mathematics Journal 1 Inventiones Mathematicae 1 Quarterly of Applied Mathematics 1 Theoretical Population Biology 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 SIAM Journal on Applied Mathematics 1 SIAM Journal on Mathematical Analysis 1 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 1 Comptes Rendus de l’Académie des Sciences. Série I. Mathématique 1 Annals of Mathematics. Second Series 1 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 1 Communications in Mathematical Sciences 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 1 Journal of Hyperbolic Differential Equations 1 Oberwolfach Reports 1 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 1 Networks and Heterogeneous Media 1 Rivista di Matematica della Università di Parma. 
Serie 8 1 Journal of Theoretical Biology 1 SIAM/ASA Journal on Uncertainty Quantification 1 SIAM Journal on Mathematics of Data Science all top 5 ### Fields 57 Partial differential equations (35-XX) 25 Biology and other natural sciences (92-XX) 23 Statistical mechanics, structure of matter (82-XX) 16 Fluid mechanics (76-XX) 8 Ordinary differential equations (34-XX) 6 Dynamical systems and ergodic theory (37-XX) 6 Integral equations (45-XX) 6 Probability theory and stochastic processes (60-XX) 6 Numerical analysis (65-XX) 3 Computer science (68-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Operator theory (47-XX) 2 General topology (54-XX) 2 Statistics (62-XX) 2 Mechanics of particles and systems (70-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 General and overarching topics; collections (00-XX) 1 Difference and functional equations (39-XX) 1 Functional analysis (46-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Mechanics of deformable solids (74-XX) 1 Optics, electromagnetic theory (78-XX) 1 Quantum theory (81-XX) 1 Relativity and gravitational theory (83-XX) ### Citations contained in zbMATH Open 71 Publications have been cited 1,042 times in 696 Documents Cited by Year The dynamics of adaptation: An illuminating example and a Hamilton–Jacobi approach. Zbl 1072.92035 Diekmann, Odo; Jabin, Pierre-Emanuel; Mischler, Stéphane; Perthame, Benoît 2005 Hydrodynamic limit for the Vlasov-Navier-Stokes equations. I: Light particles regime. II: Fine particles regime. Zbl 1085.35117 Goudon, Thierry; Jabin, Pierre-Emmanuel; Vasseur, Alexis 2004 On selection dynamics for continuous structured populations. Zbl 1176.45009 Desvillettes, Laurent; Jabin, Pierre Emmanuel; Mischler, Stéphane; Raoul, Gaël 2008 A review of the mean field limits for Vlasov equations. Zbl 1318.35129 Jabin, Pierre-Emmanuel 2014 $$N$$-particles approximation of the Vlasov equations with singular potential. Zbl 1107.76066 Hauray, Maxime; Jabin, Pierre-Emmanuel 2007 Global existence of weak solutions for compressible Navier-Stokes equations: thermodynamically unstable pressure and anisotropic viscous stress tensor. Zbl 1405.35133 Bresch, Didier; Jabin, Pierre-Emmanuel 2018 Particle approximation of Vlasov equations with singular forces: Propagation of chaos. (Approximation particulaire des équations de Vlasov avec noyaux de force singuliers : la propagation du chaos.) Zbl 1329.35309 Hauray, Maxime; Jabin, Pierre-Emmanuel 2015 A modeling of biospray for the upper airways. Zbl 1075.92031 Baranger, C.; Boudin, L.; Jabin, P.-E.; Mancini, S. 2005 On selection dynamics for competitive interactions. Zbl 1230.92038 Jabin, Pierre-Emmanuel; Raoul, Gaël 2011 Quantitative estimates of propagation of chaos for stochastic systems with $$W^{-1,\infty}$$ kernels. Zbl 1402.35208 Jabin, Pierre-Emmanuel; Wang, Zhenfu 2018 Regularity and propagation of moments in some nonlinear Vlasov systems. Zbl 0984.35102 Gasser, I.; Jabin, P.-E.; Perthame, B. 2000 Qualitative analysis of a mean field model of tumor-immune system competition. Zbl 1043.92012 De Angelis, Elena; Jabin, Pierre-Emmanuel 2003 Clustering and asymptotic behavior in opinion formation. Zbl 1316.34051 Jabin, Pierre-Emmanuel; Motsch, Sebastien 2014 Identification of the dilute regime in particle sedimentation. Zbl 1059.76073 Jabin, Pierre-Emmanuel; Otto, Felix 2004 Mean field limit and propagation of chaos for Vlasov systems with bounded forces. 
Zbl 1388.60163 Jabin, Pierre-Emmanuel; Wang, Zhenfu 2016 Differential equations with singular fields. Zbl 1217.34015 Jabin, Pierre-Emmanuel 2010 Adaptation in a stochastic multi-resources chemostat model. Zbl 1322.92052 Champagnat, Nicolas; Jabin, Pierre-Emmanuel; Méléard, Sylvie 2014 The Vlasov-Poisson system with infinite mass and energy. Zbl 1126.82327 Jabin, Pierre-Emmanuel 2001 Line-energy Ginzburg-Landau models: zero-energy states. Zbl 1072.35051 Otto, Felix; Jabin, Pierre-Emmanuel; Perthame, Benoît 2002 Compactness in Ginzburg-Landau energy by kinetic averaging. Zbl 1124.35312 Jabin, Pierre-Emmanuel; Perthame, Benoît 2001 Mathematical models of therapeutical actions related to tumour and immune system competition. Zbl 1078.92031 De Angelis, Elena; Jabin, Pierre-Emmanuel 2005 Macroscopic limit of Vlasov type equations with friction. Zbl 0965.35013 Jabin, Pierre-Emmanuel 2000 Notes on mathematical problems on the dynamics of dispersed particles interacting through a fluid. Zbl 0957.76087 Jabin, Pierre-Emmanuel; Perthame, Benoit 2000 A real space method for averaging lemmas. Zbl 1082.35043 Jabin, Pierre-Emmanuel; Vega, Luis 2004 Regularity in kinetic formulations via averaging lemmas. Zbl 1065.35185 Jabin, Pierre-Emmanuel; Perthame, Benoît 2002 On the rate of convergence to equilibrium in the Becker–Döring equations. Zbl 1109.82327 Jabin, Pierre-Emmanuel; Niethammer, Barbara 2003 Critical non-Sobolev regularity for continuity equations with rough velocity fields. Zbl 1347.35075 Jabin, Pierre-Emmanuel 2016 The evolutionary limit for models of populations interacting competitively via several resources. Zbl 1227.35040 Champagnat, Nicolas; Jabin, Pierre-Emmanuel 2011 Large time concentrations for solutions to kinetic equations with energy dissipation. Zbl 0965.35014 Jabin, Pierre-Emmanuel 2000 Convergence to equilibrium in competitive Lotka-Volterra and chemostat systems. (Convergence vers l’équilibre pour des systèmes compétitifs de Lotka-Volterra et du Chémostat.) Zbl 1213.34066 Champagnat, Nicolas; Jabin, Pierre-Emmanuel; Raoul, Gaël 2010 Analytic solutions to a strongly nonlinear Vlasov equation. Zbl 1219.35046 Jabin, Pierre-Emmanuel; Nouri, A. 2011 Well posedness in any dimension for Hamiltonian flows with non BV force terms. Zbl 1210.34010 Champagnat, Nicolas; Jabin, Pierre-Emmanuel 2010 Compactness for nonlinear continuity equations. Zbl 1262.35157 Ben Belgacem, Fethi; Jabin, Pierre-Emmanuel 2013 Various levels of models for aerosols. Zbl 1163.35460 Jabin, Pierre-Emmanuel 2002 Hydrodynamic limit of granular gases to pressureless Euler in dimension 1. Zbl 1354.35076 Jabin, Pierre-Emmanuel; Rey, Thomas 2017 Global weak solutions of PDEs for compressible media: a compactness criterion to cover new physical situations. Zbl 1371.35192 Bresch, Didier; Jabin, Pierre-Emmanuel 2017 On mean-field limits and quantitative estimates with a large class of singular kernels: application to the Patlak-Keller-Segel model. (Limites de champ moyen pour des noyaux singuliers et applications au modèle de Patlak-Keller-Segel.) Zbl 1428.35617 Bresch, Didier; Jabin, Pierre-Emmanuel; Wang, Zhenfu 2019 Strong solutions to stochastic differential equations with rough coefficients. Zbl 1451.60091 Champagnat, Nicolas; Jabin, Pierre-Emmanuel 2018 Complexity reduction in many particle systems with random initial data. Zbl 1342.35175 Berlyand, Leonid; Jabin, Pierre-Emmanuel; Potomkin, Mykhailo 2016 Averaging lemmas and the X-ray transform. 
Zbl 1030.35005 Jabin, Pierre-Emmanuel; Vega, Luis 2003 Diperna-Lions flow for relativistic particles in an electromagnetic field. Zbl 1317.35249 Jabin, P.-E.; Masmoudi, N. 2015 Averaging lemmas and dispersion estimates for kinetic equations. Zbl 1190.35152 Jabin, Pierre-Emmanuel 2009 Kinetic decomposition for periodic homogenization problems. Zbl 1194.35040 Jabin, Pierre-Emmanuel; Tzavaras, Athanasios E. 2009 On a non-local selection-mutation model with a gradient flow structure. Zbl 1382.35313 Jabin, Pierre-Emmanuel; Liu, Hailiang 2017 Time-asymptotic convergence rates towards the discrete evolutionary stable distribution. Zbl 1328.65268 Cai, Wenli; Jabin, Pierre-Emmanuel; Liu, Hailiang 2015 A continuous model for ratings. Zbl 1321.35243 Jabin, Pierre-Emmanuel; Junca, Stéphane 2015 Local existence of analytical solutions to an incompressible Lagrangian stochastic model in a periodic domain. Zbl 1274.35384 Bossy, Mireille; Fontbona, Joaquin; Jabin, Pierre-Emmanuel; Jabir, Jean-François 2013 A kinetic description of particle fragmentation. Zbl 1136.76044 Jabin, Pierre-Emmanuel; Soler, Juan 2006 A mathematical model of immune competition related to cancer dynamics. Zbl 1185.35295 Brazzoli, Ilaria; de Angelis, Elena; Jabin, Pierre-Emmanuel 2010 A continuous size-structured red coral growth model. Zbl 1153.92033 Jabin, P.-E.; Lemesle, V.; Aurelle, D. 2008 A kinetic approach to active rods dynamics in confined domains. Zbl 1439.35478 Berlyand, Leonid; Jabin, Pierre-Emmanuel; Potomkin, Mykhailo; Ratajczyk, Elżbieta 2020 Global weak solutions to the relativistic BGK equation. Zbl 1437.35659 Calvo, Juan; Jabin, Pierre-Emmanuel; Soler, Juan 2020 The set of concentration for some hyperbolic models of chemotaxis. Zbl 1118.35001 Derbel, Lobna; Jabin, Pierre Emmanuel 2007 A coupled Boltzmann and Navier-Stokes fragmentation model induced by a fluid-particle-spring interaction. Zbl 1209.82033 Jabin, Pierre-Emmanuel; Soler, Juan 2010 Global stability of steady solutions for a model in virus dynamics. Zbl 1065.92013 Frid, Hermano; Jabin, Pierre-Emmanuel; Perthame, Benoît 2003 Quantitative regularity estimates for compressible transport equations. Zbl 1418.35289 Bresch, Didier; Jabin, Pierre-Emmanuel 2018 Some regularizing methods for transport equations and the regularity of solutions to scalar conservation laws. Zbl 1211.35187 Jabin, Pierre-Emmanuel 2010 Well posedness in any dimension for Hamiltonian flows with non BV force terms. Zbl 1329.34018 Champagnat, Nicolas; Jabin, Pierre-Emmanuel 2010 Convergence rate for the method of moments with linear closure relations. Zbl 1332.35037 Bourgault, Yves; Broizat, Damien; Jabin, Pierre-Emmanuel 2015 Cellulose biodegradation models; an example of cooperative interactions in structured populations. Zbl 1382.92185 Jabin, Pierre-Emmanuel; Miroshnikov, Alexey; Young, Robin 2017 Time-asymptotic convergence rates towards discrete steady states of a nonlocal selection-mutation model. Zbl 1427.35292 Cai, Wenli; Jabin, Pierre-Emmanuel; Liu, Hailiang 2019 Fractional spaces and conservation laws. Zbl 1409.35137 Castelli, Pierre; Jabin, Pierre-Emmanuel; Junca, Stéphane 2018 Existence to solutions of a kinetic aerosol model. Zbl 1083.35057 Jabin, Pierre-Emmanuel; Klingenberg, Christian 2005 Compactness in Ginzburg-Landau energy by kinetic averaging. (Compacité par lemmes de moyenne cinétiques pour des énergies de Ginzburg-Landau.) Zbl 0965.35159 Jabin, Pierre-Emmanuel; Perthame, Benoît 2000 Large time asymptotics for a modified coagulation model. 
Zbl 1225.82040 Calvo, J.; Jabin, P.-E. 2011 On the convergence of formally diverging neural net-based classifiers. (Convergence de classifieurs par réseaux de neurones formellement divergents.) Zbl 1390.68516 Berlyand, Leonid; Jabin, Pierre-Emmanuel 2018 Continuum approximations to systems of correlated interacting particles. Zbl 1448.82029 Berlyand, Leonid; Creese, Robert; Jabin, Pierre-Emmanuel; Potomkin, Mykhailo 2019 Convergence of numerical approximations to non-linear continuity equations with rough force fields. Zbl 1437.35475 Ben Belgacem, F.; Jabin, P.-E. 2019 Small populations corrections for selection-mutation models. Zbl 1270.35047 Jabin, Pierre-Emmanuel 2012 Free transport limit for $$N$$-particles dynamics with singular and short range potential. Zbl 1144.82041 Barré, J.; Jabin, P. E. 2008 Memory-driven movement model for periodic migrations. Zbl 1457.92199 Lin, Hsin-Yi; Fagan, William F.; Jabin, Pierre-Emmanuel 2021
2000 Macroscopic limit of Vlasov type equations with friction. Zbl 0965.35013 Jabin, Pierre-Emmanuel 2000 Notes on mathematical problems on the dynamics of dispersed particles interacting through a fluid. Zbl 0957.76087 Jabin, Pierre-Emmanuel; Perthame, Benoit 2000 Large time concentrations for solutions to kinetic equations with energy dissipation. Zbl 0965.35014 Jabin, Pierre-Emmanuel 2000 Compactness in Ginzburg-Landau energy by kinetic averaging. (Compacité par lemmes de moyenne cinétiques pour des énergies de Ginzburg-Landau.) Zbl 0965.35159 Jabin, Pierre-Emmanuel; Perthame, Benoît 2000 all top 5 ### Cited by 824 Authors 40 Jabin, Pierre-Emmanuel 30 Perthame, Benoît 21 Choi, Young-Pil 18 Carrillo de la Plata, José Antonio 15 Mirrahimi, Sepideh 12 Bellomo, Nicola 12 Delitala, Marcello Edoardo 12 Zhang, Xianwen 11 Bresch, Didier 11 Crippa, Gianluca 10 Han-Kwan, Daniel 10 Lorenzi, Tommaso 9 Chen, Zili 9 Ha, Seung-Yeal 8 Champagnat, Nicolas 8 Desvillettes, Laurent 8 Golse, François 8 Goudon, Thierry 8 Ignat, Radu 8 Méléard, Sylvie 7 Cañizo, José Alfredo 7 De Lellis, Camillo 7 Hauray, Maxime 6 Höfer, Richard M. 6 Kang, Moon-Jin 6 Kim, Jeongho 6 Lods, Bertrand 6 Marchioro, Carlo 6 Paul, Thierry 6 Raoul, Gaël 6 Salem, Samir 6 Serfaty, Sylvia 6 Smadi, Charline 6 Soler, Juan S. 6 Souganidis, Panagiotis E. 6 Vauchelet, Nicolas 6 Yu, Cheng 6 Zu, Jian 5 Bellouquid, Abdelghani 5 Berlyand, Leonid V. 5 Bruè, Elia 5 Burger, Martin 5 Caprino, Silvia 5 Cavallaro, Guido 5 Huang, Hui 5 Jung, Jinwook 5 Lam, King-Yeung 5 Lorz, Alexander 5 Moussa, Ayman 5 Mucha, Piotr Bogusław 5 Otto, Felix 5 Rivière, Tristan 5 Tao, Youshan 5 Vasseur, Alexis F. 5 Wang, Zhenfu 5 Zhang, Xiongtao 4 Bardos, Claude Williams 4 Bouchut, François 4 Bouin, Emeric 4 Calvez, Vincent 4 De Angelis, Elena 4 Degond, Pierre 4 Fetecau, Razvan C. 4 Flandoli, Franco 4 Fritsch, Coralie 4 Guo, Qian 4 Herty, Michael Matthias 4 Hillairet, Matthieu 4 Huang, Bingkang 4 Hynd, Ryan 4 Iacobelli, Mikaela 4 Jiang, Peng 4 Jin, Shi 4 Liu, Hailiang 4 Marconi, Elio 4 Miot, Evelyne 4 Nečasová, Šárka 4 Novotný, Antonín 4 Pareschi, Lorenzo 4 Pickl, Peter 4 Pokorný, Milan 4 Potomkin, Mykhailo 4 Pouchol, Camille 4 Spirito, Stefano 4 Wang, Dehua 3 Ackleh, Azmy S. 3 Alfaro, Matthieu 3 Arsénio, Diogo 3 Berthelin, Florent 3 Bianca, Carlo 3 Billiard, Sylvain 3 Boudin, Laurent 3 Cai, Wenli 3 Campillo, Fabien 3 Carbonaro, Bruno 3 Chen, Yang 3 Coville, Jerome 3 Craig, Katy 3 Cuadrado, Sílvia 3 Di Francesco, Marco ...and 724 more Authors all top 5 ### Cited in 142 Serials 47 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 45 Journal of Differential Equations 36 Archive for Rational Mechanics and Analysis 27 SIAM Journal on Mathematical Analysis 26 Mathematical and Computer Modelling 24 Journal of Statistical Physics 23 Journal de Mathématiques Pures et Appliquées. Neuvième Série 23 Kinetic and Related Models 20 Journal of Mathematical Biology 16 Communications in Mathematical Physics 14 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 14 Communications in Partial Differential Equations 13 Calculus of Variations and Partial Differential Equations 12 Journal of Mathematical Physics 12 Nonlinear Analysis. Real World Applications 12 Comptes Rendus. Mathématique. Académie des Sciences, Paris 9 Journal of Nonlinear Science 9 Discrete and Continuous Dynamical Systems. Series B 9 Séminaire Laurent Schwartz. 
EDP et Applications 8 Journal of Mathematical Analysis and Applications 8 Discrete and Continuous Dynamical Systems 8 Journal of Mathematical Fluid Mechanics 8 Journal of Theoretical Biology 7 Journal of Computational Physics 7 Nonlinearity 7 Journal of Functional Analysis 7 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 7 Acta Applicandae Mathematicae 6 Communications on Pure and Applied Mathematics 6 Mathematical Methods in the Applied Sciences 6 The Annals of Probability 6 The Annals of Applied Probability 6 Journal of Hyperbolic Differential Equations 6 Annals of PDE 5 Bulletin of Mathematical Biology 5 SIAM Journal on Applied Mathematics 5 Stochastic Processes and their Applications 5 Electronic Journal of Probability 5 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 5 Networks and Heterogeneous Media 5 Analysis & PDE 4 ZAMP. Zeitschrift für angewandte Mathematik und Physik 4 Proceedings of the American Mathematical Society 4 Physica D 4 Applied Mathematics Letters 4 Journal of the European Mathematical Society (JEMS) 4 Communications on Pure and Applied Analysis 4 SIAM Journal on Applied Dynamical Systems 3 Mathematical Biosciences 3 Theoretical Population Biology 3 Applications of Mathematics 3 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 2 Computers & Mathematics with Applications 2 Transport Theory and Statistical Physics 2 Journal of Computational and Applied Mathematics 2 Quarterly of Applied Mathematics 2 SIAM Journal on Control and Optimization 2 Revista Matemática Iberoamericana 2 Asymptotic Analysis 2 European Journal of Applied Mathematics 2 Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Serie IX. Rendiconti Lincei. Matematica e Applicazioni 2 SIAM Journal on Scientific Computing 2 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 2 Annales Henri Poincaré 2 Journal of Evolution Equations 2 Foundations of Computational Mathematics 2 Stochastics and Dynamics 2 Mathematical Biosciences and Engineering 2 Oberwolfach Reports 2 Mathematical Modelling of Natural Phenomena 2 International Journal of Biomathematics 2 Communications in Applied and Industrial Mathematics 2 Stochastic and Partial Differential Equations. Analysis and Computations 2 SIAM/ASA Journal on Uncertainty Quantification 2 Journal de l’École Polytechnique – Mathématiques 2 Minimax Theory and its Applications 2 SMAI Journal of Computational Mathematics 1 International Journal of Modern Physics B 1 Advances in Applied Probability 1 Applicable Analysis 1 Computers and Fluids 1 Inverse Problems 1 Journal of Fluid Mechanics 1 Physica A 1 Acta Mathematica 1 Annali di Matematica Pura ed Applicata. 
Serie Quarta 1 Applied Mathematics and Computation 1 Applied Mathematics and Optimization 1 Automatica 1 Duke Mathematical Journal 1 Inventiones Mathematicae 1 Journal of Applied Probability 1 Journal of Multivariate Analysis 1 Journal of Optimization Theory and Applications 1 Mathematische Annalen 1 Mathematics and Computers in Simulation 1 Mathematische Zeitschrift 1 Monatshefte für Mathematik 1 Numerische Mathematik 1 SIAM Journal on Numerical Analysis ...and 42 more Serials all top 5 ### Cited in 38 Fields 484 Partial differential equations (35-XX) 214 Biology and other natural sciences (92-XX) 189 Statistical mechanics, structure of matter (82-XX) 177 Fluid mechanics (76-XX) 100 Probability theory and stochastic processes (60-XX) 55 Numerical analysis (65-XX) 50 Dynamical systems and ergodic theory (37-XX) 48 Ordinary differential equations (34-XX) 42 Integral equations (45-XX) 35 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 34 Calculus of variations and optimal control; optimization (49-XX) 20 Mechanics of particles and systems (70-XX) 18 Systems theory; control (93-XX) 17 Operator theory (47-XX) 17 Quantum theory (81-XX) 10 Harmonic analysis on Euclidean spaces (42-XX) 10 Mechanics of deformable solids (74-XX) 10 Optics, electromagnetic theory (78-XX) 9 Functional analysis (46-XX) 9 Operations research, mathematical programming (90-XX) 5 Real functions (26-XX) 5 Measure and integration (28-XX) 4 Computer science (68-XX) 4 Relativity and gravitational theory (83-XX) 4 Astronomy and astrophysics (85-XX) 3 Combinatorics (05-XX) 3 Approximations and expansions (41-XX) 3 General topology (54-XX) 3 Statistics (62-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Information and communication theory, circuits (94-XX) 2 General and overarching topics; collections (00-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Geophysics (86-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Functions of a complex variable (30-XX) 1 Potential theory (31-XX) 1 Difference and functional equations (39-XX)
http://math.stackexchange.com/questions/140123/how-do-different-definitions-of-degree-coincide
# How do different definitions of “degree” coincide? I've recently read about a number of different notions of "degree." Reading over Javier Álvarez' excellent answer for the thousandth time finally prompted me to ask this question: How exactly do the following three notions of "degree" coincide? (1) Algebraic Topology. Let $f\colon X \to Y$ be a continuous map between compact connected oriented $n$-manifolds. Wikipedia tells me that $H_n(X) \cong H_n(Y) \cong \mathbb{Z}$, and that a choice of orientations for $X$ and $Y$ amount to choices of generators $[X], [Y]$ for $H_n(X), H_n(Y)$, respectively. We then define $\deg f$ via $$f_*([X]) = (\deg f)[Y].$$ (2) Differential Topology. Let $f\colon X \to Y$ be a smooth map between oriented $n$-manifolds, where $X$ is compact and $Y$ is connected. Let $y \in Y$ be a regular value of $f$ (which exists by Sard's Theorem), let $D_xf\colon T_xX \to T_yY$ denote the derivative (a.k.a. pushforward), and define $$(\deg f)_y = \sum_{x \in f^{-1}(y)}\text{sgn}(\det D_xf).$$ It can be shown that $(\deg f)_y$ is independent of the choice of $y \in Y$, so we can talk meaningfully about a single quantity $\deg f = (\deg f)_y$. (3) Riemann Surfaces. Let $f\colon X \to Y$ be a holomorphic map between compact connected Riemann surfaces. For $x \in X$, we let $\text{mult}_x(f)$ denote the multiplicity of $f$ at $x \in X$. For $y \in Y$, we define $$(\deg f)_y = \sum_{x \in f^{-1}(y)} \text{mult}_x(f).$$ As in (2), it can be shown that $(\deg f)_y$ is independent of the choice of $y \in Y$. (Does this generalize to arbitrary complex manifolds?) Thoughts: As was mentioned in my topology class last semester (and also on Wikipedia), there is this concept of "local homology" which lets us compute (1) as a sum of "local degrees." I imagine that in the case of (2), each of these local degrees is, in fact, equal to $\text{sgn}(\det D_xf)$ because $f$ is a local diffeomorphism at each regular point $x$. I also imagine that in the case of (3), each of these local degrees is, in fact, equal to $\text{mult}_x(f)$ because the degree of $\mathbb{S}^n \to \mathbb{S}^n$, $z \mapsto z^k$ is $k$. (Does this also mean that $f$ is not regular at any point where $\text{mult}(f) \geq 2$? This would make sense, but what is the proof?) This all seems correct in my head, but I would really like more details if possible. - That different versions of the same concept coincide is usually the stuff of major theorems. See for instance Atiyah–Singer index theorem. – lhf May 2 '12 at 22:22 I think comparing this to the ASIT is a little over the top. (1) and (2) are related by the de Rham theorem (cf. Bott & Tu, as always), and (3) is really (2) in disguise once you acknowledge that you're using the "local model" theorem for holomorphisms. The Weierstrauss preparation theorem is the higher-dimensional analog; I haven't thought through it, but I'd imagine a similar argument applies. – Aaron Mazel-Gee May 4 '12 at 0:20 What about, for example, local and global degrees of a vector field? – Neal May 12 '12 at 6:46 @Neal: What is your question exactly? For any vector bundle on a smooth manifold (not just the tangent bundle), the algebraic sum of the local degrees of a transverse section is an invariant, namely the "global degree". If you think through it, all we're actually doing is taking the intersection number of the manifold with itself inside of the total space! Moreover, this recovers Euler characteristic of the manifold when we specialize to the tangent bundle. 
If you like this sort of thing, I highly recommend Thurston's "Geometry and Topology" (or something like that). – Aaron Mazel-Gee May 14 '12 at 17:34 I'm just throwing out a couple more definitions of "degree" for the author to consider. Thanks for the recommendation! – Neal May 15 '12 at 1:50 First things first, thank you very much for your appraisal of my other answer on consequences of degree concepts, mostly for differentiable manifolds. I have in fact expanded it posting another answer listing the applications of degree theory in complex algebraic geometry using the definitions explained below. Indeed I have also spent a great deal of time trying to understand all connections among "degrees" as much as possible, so let me try to complete a little bit your list and my digression with the, in my view, most important, geometric and unifying notion of degree: that coming from complex algebraic geometry. (4) ALGEBRAIC GEOMETRY OF COMPLEX PROJECTIVE VARIETIES, that is to say, complex submanifolds $X$ of, or embeddings into, the complex projective space $\mathbb{CP}^n$. Any non-singular projective $n$-variety is isomorphic, in the algebraic category, to a subvariety of $\mathbb{CP}^{2n+1}$ [Shafarevich vol. I, 5.4 Th.9]. By Chow's theorem [Griffiths-Harris, p.167][Mumford Cor.4.6] any complex manifold seen as $k$-submanifold of $\mathbb{CP}^n$ is algebraic, i.e. any real $2k$-submanifold of $\mathbb{RP}^{2n}$ which admits a complex structure is actually given as the zero locus of a system of homogeneous polynomials (all varieties coming from manifolds are smooth, but there are singular varieties which are not manifolds though!). In particular any real closed orientable surface admits a complex structure and so is a complex projective algebraic curve, thus including case (3) of compact Riemann surfaces [Miranda, Th.IV.1.9]; higher dimensional manifolds may not always admit complex structures, a necessary and sufficient condition is the Newlander-Nirenberg theorem [Kobayashi-Nomizu vol. II, Appx.8][Voisin vol. I, sec.2.2.3]: vanishing of the Nijenhuis tensor for an almost-complex structure. By Lefschetz's principle this is essentially enough for dealing with the general case of abstract varieties (integral separated schemes of finite type over an algebraically closed field $k$) embedded as subschemes of $\mathbb{P}^n_k$. All the following definitions of degree are proved to be equivalent to each other so one can pick any of them as starting definition and get the rest as interesting theorems. (We always talk about nonsingular irreducible complex projective curves, surfaces, hypersurfaces, varieties... etc., and thus compact Riemann surfaces, except explicit mention of the contrary). I shall only explain in detail the original classical geometric notions of degree and mention the rest. • A. If $X$ is a hypersurface of $\mathbb{CP}^n$, i.e. $\dim X=n-1$, by [Hartshorne, Exercise I.2.8] it is given by the zero locus, $X=Z(f)$, of an irreducible homogeneous polynomial $f\in S_d$ of algebraic degree $d$, i.e. any monomial summand $a_{k_0\dots k_n}x_0^{k_0}\cdots x_n^{k_n}$ in $f$ has degree $d=\sum_i k_i$, where all such monomials generate the abelian group $S_d$, all of which make the ring of polynomials a graded ring $\mathbb{C}[x_0,\dots,x_n]=\bigoplus_{d=0}^\infty S_d$. So any such $f$ has a canonical associated (algebraic) degree, so Degree of a hypersurface as the algebraic degree of its defining homogeneous polynomial: $$\deg X_{n-1}=\deg Z(f):=\deg(f)=d,\;\;\; f\in S_d$$ • B. 
If $k:=\dim X_k< n-1$, let $L_r\cong\mathbb{CP}^r$ be a generic (i.e. in general position) linear variety (linear projective vector subspace) of $\mathbb{CP}^n$ of dimension $r\leq n-k-1$. The projecting cone, $C(X_k, L_r)$, of $X_k$ from "vertex" $L_r$ is defined to be the joint locus of the subspaces $L_{r+1}$ that join the given $L_r$ with each point of $X_k$ (this generalizes the intuitive "cone" of lines obtained by projecting from a point). By [Beltrametti et al., sec.3.4.5] the projecting cone is also a, possibly reducible, algebraic variety of dimension $r+k+1$ (which justifies the upper bound of r at the beginning). For a generic $L_{n-k-2}$ the projecting cone of $X_k$ is thus a, possibly reducible, hypersurface as in A. above (if it is irreducible it is the case A. if it is reducible then its defining zero locus polynomial is reducible but has nevertheless well defined degree). Call Degree of a subvariety as the degree of the generic projecting cones which are hypersurfaces: $$\deg X_k:=\max\limits_{L\in\mathbb{Gr}(n-k-2,\mathbb{CP}^n)}\{\deg C(X_k, L)\},$$ where $C(X_k, L_{n-k-2})=Z(g)\,\vert\, g\in S_p$. where $L$ is an element of the Grasssmannian of the required dimension. The degree of a variety $X_r\subset\mathbb{CP}^n$ is thus defined to be the degree of the generic hypersurface-projecting-cone; this is proved to be well-defined as this max deg is constant for a dense Zariski-open subset of the Grassmannian, cf. [Harris, Exercise 18.2]. For example if $L_0$ is a generic point in $\mathbb{CP}^3$ and $X_1$ a spatial algebraic curve, for each point of $X_1$ there is only one line joining it with $L_0$. Moving along all such points of the curve we obtain a cone swept by the joining lines with the fixed $L_0$, cone which is an algebraic surface, thus the zero locus of an homogeneous polynomial in projective space. So we are calling the degree of the spatial curve the degree of its projecting cone generic surface polynomial. Note that a curve in projective space is generically given by the intersection of two surfaces of possibly different degrees, $X_1=Z(h_1,h_2)\subset\mathbb{CP}^3$, so it has no canonical unique polynomial degree as is the case for plane curves. It is also important to remember that any nonsingular algebraic curve (thus Riemann surface) is isomorphic to a smooth spatial curve in $\mathbb{CP}^3$ [Hartshorne, Cor.IV.3.6][Shafarevich vol. I, sec.5.4 Cor.2] and birational to a plane curve with at most node singularities [Hartshorne, Cor.IV.3.11]. Note also that the first definition A. above is necessary, since the projecting cone of a hypersurface cannot be defined due to the constraint $r\leq n-k-1$. So what we have done is defining, for any lower dimensional variety, associated hypersurfaces which have generically well-defined polynomial degree. • C. The "vertex" $L_r$ of a generic projecting cone $C(X_k, L_r)$ of a variety $X_k$ is given by $n-r$ linearly independent linear equations: $L_r=Z(h_1,\dots,h_{n-r})$ where $h_i$ are linear forms which define hyperplanes $H_i=Z(h_i)\cong\mathbb{CP}^{n-1}$ within $\mathbb{CP}^n$, so that $L_r=\bigcap_{i=1}^{n-r} H_i$. Projecting cones take their name from the fact that they define a generalized projection of a variety to a linear subspace (e.g. projecting from a point a spatial curve into a plane): the projection [Shafarevich vol. 
I, sec.4.4 Ex.1], with center or vertex $L_r$, is the rational map $\pi_{L_r}(x):=[h_1(x):\dots :h_{n-r}(x)]$ which is a regular morphism on the Zariski-open set $\mathbb{CP}^n\setminus L_r$. Therefore its restriction to any variety disjoint from the vertex, $\pi_{L_r}\vert_{X_k}:X_k\rightarrow\mathbb{CP}^{n-r-1}$ is a regular map of it to a projective subspace. Take any linear variety disjoint from $L_r$ as representative, i.e. $\mathbb{CP}^{n-r-1}\cong L'_{n-r-1}\subset\mathbb{CP}^n$ such that $L_r\cap L'_{n-r-1}=\varnothing$, which is always possible by [Beltrametti et al., Th.3.3.8] (since $\dim L_r\cap L'_{n-r-1}\geq r+(n-r-1)-n=-1$ so they do not intersect necessarily). Now for every point $x\in\mathbb{CP}^n\setminus L_r$, in particular $X_k$, there is a unique $L''_{r+1}$ passing through the vertex $L_r$ and $x$ by elementary dimension counting. The locus of all these generators $L''_{r+1}$ is just the generic projecting cone $C(X_k,L_r)$ for generic center $L_r$!. Each generator intersects $L'_{n-r-1}$ in a unique point (solution of a system of $n-(r+1)+n-(n-r-1)=n$ linear equations) which corresponds to $\pi_{L_r}(x)$ through the isomorphism with $\mathbb{CP}^{n-r-1}$. Therefore, given a generic linear subspace $L_r$ we can regularly (rationally if $L_r\cap X_k\neq\varnothing$) project any variety $X_k\subseteq\mathbb{CP}^n$ to a lower dimensional generic linear subspace $L'_{n-r-1}\cong\mathbb{CP}^{n-r-1}$ by intersecting the projecting cone with it, $C(X_k,L_r)\cap L'_{n-r-1}$, and calling $\pi_{L_r}(X_k)\subset \mathbb{CP}^{n-r-1}$ the projection of $X_k$ from $L_r$ to $L'_{n-r-1}$. The case $r=0$ is the classical projection from a point into a hyperplane (like our spatial curve projected to a plane curve). Therefore for any $X_k$, projecting from a generic center $L_{n-k-2}$, we obtain a, possibly reducible, variety $\bar{X}_k$ in $\mathbb{CP}^{k+1}$ as projection; since this comes from the intersection of the hypersurface $C(X_k,L_{n-k-2})$ with a linear variety $L'_{k+1}$, by the projective dimension theorem [Hartshorne, Th.I.7.2] every of its irreducible components has dimension $\geq (n-1)+(k+1)-n=k$. In fact, if $\dim X_k\geq 2$ by repeated application of Bertini's theorem [Hartshorne, Th.II.8.18], any such intersection is generically not only smooth but connected and thus irreducible, thus any generic such projection is a hypersurface $\bar{X}_k\subset\mathbb{CP}^{k+1}$ (generically reducible for $X_1$ a curve) and so it has a defining zero locus irreducible homogeneous polynomial $q\in\mathbb{C}[x_0,...,x_{k+1}]$ with well-defined algebraic degree (if $X_1$ is a curve then each of its irreducible components after intersecting will be points solution of a reducible polynomial). This is equivalent to B. since the intersection of a generic projecting cone hypersurface with a generic linear variety has the same polynomial degree as the cone (solve as many variables as possible from the linear system defining the linear variety and substitute in the homogeneous polynomial of the cone; each of its equal-degree monomials produce new monomials in less variables but of the same degree as the original, so one gets a new homogeneous polynomial in less variables, i.e. a hypersurface in a lower-dimensional projective space). This shows a geometric construction for the theorem of the birational equivalence of any projective algebraic set of dimension $k$ with a hypersurface in $\mathbb{CP}^{k+1}$, cf. [Beltrametti et al., sec.2.6.11] and [Hartshorne, Prop.I.4.9]. 
Degree of a $k$-subvariety of $\mathbb{CP}^{n}$ as the polynomial degree of the, possibly reducible, hypersurface obtained by generically projecting to $\mathbb{CP}^{k+1}$, i.e. intersecting the projecting cone with a suitable generic linear subspace: $$\deg X_k:=\deg \pi_{L}(X_k)=\deg C(X_k,L_{n-k-2})\cap L'_{k+1},$$ for generic $L\in\mathbb{Gr}(n-k-2,\mathbb{CP}^n)$ and $L'\in\mathbb{Gr}(n-k-2,\mathbb{CP}^n)$. • D. Now take the projected variety hypersurface $\pi_{L_{n-k-2}}(X_k)= \bar{X}_k \subsetneq \mathbb{CP}^{k+1}$, possibly reducible, and project it again with vertex a generic point $\bar{L}_0\in\mathbb{CP}^{k+1}$ disjoint from $\bar{X}_k$, onto a generic hyperplane $\bar{L}_k\in\mathbb{Gr}(n-1,\mathbb{CP}^{k+1})$. It is a standard exercise to prove that any projection from vertex $L_r$ can be decomposed into a sequence of projections from $r+1$ points $L_{0(0)}\dots L_{0(r)}$ spanning $L_r$, so everything done in C. above can be interpreted as projecting our $k$-variety down from successive $n-k-1$ points to a projective $(k+1)$-space where it becomes a hypersurface, so that one can talk about a generic degree. Therefore, now we are just stopping our chain of projections when we get the surjection $\pi_{L_{n-k-1}}|_{X_k}:X_k\twoheadrightarrow\mathbb{CP}^k$ which comes from projecting from generic center $L_{n-k-1}$ onto generic linear variety of the same dimension $L'_k$ (each projection from a point reduces by 1 the dimension of the projective space into which we are projecting, so we need $n-k$ independent generic points). By construction the projection onto $L'_k\cong\mathbb{CP}^k$ is generically a finite map, since each projection of a hypersurface from a generic point is a line which intersects it in a finite number of points (by the projective dimension theorem), the fiber of the projected point, and this is an equivalent condition for finiteness of a morphism for projective varieties [Harris, Lemma 14.8]. Now, we defined above the degree of $X_k$ to be the degree of its hypersurface projected model into $\mathbb{CP}^{k+1}$, so another projection from a generic point $\bar{L}_{0(n-k)}\in\mathbb{CP}^{k+1}$ onto generic $\mathbb{CP}^k\cong\bar{L}_k \subset \mathbb{CP}^{k+1}$ comes from a line joining the point with each point of $\bar{X}_k$; as the vertex is generic, this line intersects $\bar{X}_k$ in a finite number of points which is no other than the degree of its zero locus defining homogeneous polynomial $\bar{X}_k=Z(q)$! (parametrize the straight line by $[x_0(t):...:x_{k+1}(t)]$ so that the intersection points are the finite number of roots of $g(t)=0$, which are $\deg(g)$ in number by the fundamental theorem of algebra). It is not hard to convince oneself that the generic projection $\pi_{L_{n-k-1}}(X_k)$ has the same number of points in its generic fiber as that last component projection which brings it down to $\mathbb{CP}^k$, since up to $\mathbb{CP}^{k+1}$ the hyperplanes are higher dimensional than $X_k$. (It is surjective because a line and a hypersurface always intersect in projective space). The number of points in a general fiber is called the degree of the map. It is the same Brower-Kronecker degree of a continuous mapping in Differential Topology, but in the complex case it coincides with the number of pre-images, for complex structure fixes orientation and regular maps=holomorphic maps preserve it because any complex linear transformation (e.g. the Jacobians of the map) are never negative, cf. [Dubrovin et al., Th.13.4.2]. 
Thus: Degree of a variety $X_k\subset\mathbb{CP}^n$ as the number of pre-images of a generic fiber (i.e. degree of a regular or rational map) of the generic finite surjective projection map $\pi_{\Lambda}:X_k\twoheadrightarrow\mathbb{CP}^k$: $$\deg X_k:=\deg\pi_{\Lambda}=\#\,\pi_{\Lambda}^{-1}(x),$$ for generic $x\in\mathbb{CP}^k,\; \Lambda\in\mathbb{Gr}(n-k-1,\mathbb{CP}^n).$ • E. Following [Beltrametti et al., Prop.3.4.8] let us go back to B. or C. above, our projection of $X_k$ into a hypersurface of $\mathbb{CP}^{k+1}$ via generic center $L_{n-k-2}\subset\mathbb{CP}^{n}$. Take a generic line $l_1\subset\mathbb{CP}^{n}$ not contained in $L_{n-k-2}$, so that the linear space $\operatorname{Join}(L_{n-k-2}, l_1)=\langle L_{n-k-2}, l_1 \rangle$ is a generic $L'_{n-k}$ because this is just a projecting cone, thus having dimension $(n-k-2)+(1)+1$ (cf. beginning of B. above). It is clear that any such generic linear $(n-k)$-space can be obtained in this way by generically decomposing it into a line and a linear $n-k-2$-subspace contained in it. Now, the intersection of a $k$-variety with a generic hyperplane has irreducible components of dimension $k-1$ [Shafarevich vol. I, sec.6.2], thus $X_k\cap L'_{n-k}$ consists generically of a finite number of points. This number of points is constant in a dense Zariski-open subset of the Grassmannian $\mathbb{Gr}(n-k,\mathbb{CP}^{n})$, cf. [Harris, Ex.18.2]. This can be readily proved by noticing that it is the number of points in the generic fiber of $\pi_{\Lambda}$ with center a generic hyperplane $\Lambda\subset L'_{n-k}$ seen in D. above. To see this, note that our generic $L'_{n-k}$ can be thought as a projecting cone with center $\mathbb{CP}^{n-k-1}\cong\Lambda\subset L'_{n-k}$, and the fiber of $\pi_{\Lambda}$, which is finite by D., is by construction the intersection of the projecting cone with the variety. Therefore $\#\, (X_k\cap L'_{n-k})=\deg \pi_{\Lambda}$, showing equivalence with all the previous notions. It is worth mentioning that many (most) classical treatments define degree as this finite number of points of a generic intersection with a linear space of dimension the codimension of the variety. An independent proof is [Mumford, Th.5.1] where it is shown that generic linear $(n-k)$-subspaces meeting transversaly our variety $X_k$, do so in a common number of points: the degree. (This relates to definitions in differential topology of intersections of submanifolds meeting properly, i.e. $T_pX_k\cap T_pL'_{n-k}={0}$ and $T_pX_k\oplus T_pL'_{n-k}=T_p\mathbb{CP}^n$). Degree of a subvariety as the number of points of intersection (generically transversal) with a generic codimensional linear variety (i.e., $\dim L=n-\dim X$): $$\deg X_k := \# (X_k\cap L_{n-k})= \# (X_k\cap C(X_k, \Lambda_{n-k-1})),$$ for generic $L\in\mathbb{Gr}(n-k,\mathbb{CP}^n)$, $\Lambda\in\mathbb{Gr}(n-k-1,\mathbb{CP}^n)$. • F. The projection map of def. D. is a dominant rational map and as such defines a pullback inclusion $\pi_{\Lambda}^\ast:K(\mathbb{CP}^k)\hookrightarrow K(X_k)$ by $f\mapsto f\circ\pi_{\Lambda}$ for any rational (meromorphic) function $f\in \mathbb{C}(x_0,…,x_k)$, which is a homomorphism of $\mathbb{C}$-algebras. In fact by [Hartshorne, Th.I.4.4] this establishes a contravariant equivalence of categories between the category of complex projective varieties and dominant rational maps and the category of finitely generated field extensions of $\mathbb{C}$, thus birational equivalent varieties have isomorphic function fields. 
Now by [Harris, Prop.7.16], the transcendence degree of the finite field extension is the number of points in the generic fiber of D., showing equivalence to all previous definitions. Degree as the transcendence degree of the finite field extension of the function field of projective space with respect to the function field of the variety, generically projected to it. $$\deg X_k:=[K(\mathbb{CP}^k):K(X_k)],$$ for generic $\pi_{\Lambda}^\ast :K(\mathbb{CP}^k) \hookrightarrow K(X_k),\; \Lambda\in\mathbb{Gr}(n-k-1,\mathbb{CP}^n).$ • G. Degree as $\dim X!$ times the leading coefficient of the Hilbert polynomial of the variety: $$P_{X}(t)=\frac{\deg X}{\dim X!}t^{\dim X}+\dots,$$ where $P_{X}(t):=\dim_{\mathbb{C}}(\mathbb{C}[X]\cap S_t),\, t\gg 0.$ is the dimension of the $t$-graded homogeneous part of the coordinate ring of the variety, which is proved to be a polynomial in $t$ for large grading [Shafarevich vol. II, sec.4.2]. • H. Degree as coefficient in homology $H_{2k}(\mathbb{CP}^n,\mathbb{Z})$ $$[X_k]=(\deg X_k)\cdot [L_k],$$ or integration coefficient in de Rham cohomology $H^{2k}_{dR}(\mathbb{CP}^n,\mathbb{C})$ $$\langle [X_k],\omega\rangle=\deg X_k\cdot\langle[L_k],\omega \rangle\Leftrightarrow \int_{X_k}\omega=\deg X_k\cdot\int_{L_k}\omega.$$ • I. Degree as coefficient in the linear equivalence class of the generic projection to a divisor in $\mathbb{CP}^{\dim X+1}$, given by the possibly reducible hypersurface $\pi_{L_{n-k-2}}(X_k)=:\bar{X}_k$: $$[\bar{X}_k]\sim^{lin.}(\deg X_k)\cdot [L_k],$$ where linear equivalence is considering every divisor mod a rational divisor, i.e. $[D]\in \operatorname{Cl}\,(\mathbb{CP}^{k+1}) :=\operatorname{Div}(\mathbb{CP}^{k+1})/\sim^{lin.}$ with $D_1\sim^{lin.}D_2 :\Leftrightarrow \exists f\in\mathbb{C}(X_k)$ such that $D_1-D_2=\operatorname{div}(f)$ and Div is the free abelian group generated by the hypersurfaces. Equivalently, degree as the coefficient in the rational equivalence class of the Chow group of order $\dim X$, i.e. $[X_k]\in A_k(\mathbb{CP}^n)$: $$[X_k]\sim^{rat.}(\deg X_k)\cdot [L_k],$$ • J. Degree as intersection number of self-intersecting $\dim X$ times the hyperplane twisting sheaf $\mathcal{O}_{\mathbb{P}^n_k}(1)\vert_{X}$, where the intersection of invertible sheaves (line bundles) is defined by$$(\mathcal{L_1}\cdot ... \cdot\mathcal{L_m})_{X}:=\chi_{X}-\sum_i\chi_{X}(\mathcal{L}_i^{-1})+\sum_{i<j}\chi_{X}(\mathcal{L}_i^{-1}\otimes\mathcal{L}_j^{-1})-\cdots+(-1)^n\chi_{X}(\mathcal{L}_1^{-1}\otimes\cdots\otimes\mathcal{L}_m^{-1})$$ for $m\geq k$ line bundles on $X_k$, and $\chi_X$ the Euler characteristics of the bundle. That is to say: $$\deg X:= (\mathcal{O}_{\mathbb{P}^n_k}(1)\vert_{X})^{\dim X}.$$ - I am not able to provide a long detailed answer for it as I do not have time, but there is a long detailed discussion on the degree of a map, etc in Modern Geometry, Methods and Applications Vol II if you favor geometric type of reasoning. There is no need for using Atiyah-Singer index theorem for this. -
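As a concrete sanity check of how definitions (2) and (3) fit together (my own illustration, not part of the answers above): for a polynomial map such as $p(z)=z^3-z$ viewed as a map of the Riemann sphere to itself, the preimages of a regular value can be counted numerically, and since holomorphic maps preserve orientation, every summand $\text{sgn}(\det D_xp)$ equals $+1$. A minimal Python sketch, with the test value $w$ chosen arbitrarily:

```python
import numpy as np

p = np.poly1d([1, 0, -1, 0])          # p(z) = z^3 - z
dp = p.deriv()                        # p'(z) = 3z^2 - 1

w = 0.7 + 0.3j                        # a generic (regular) value, chosen arbitrarily
preimages = np.roots([1, 0, -1, -w])  # solve z^3 - z - w = 0

# Definition (3): count preimages with multiplicity -- here three simple roots.
print(len(preimages))                 # 3

# Definition (2): viewing p as a smooth map of R^2, det D_z p = |p'(z)|^2 > 0 at
# each preimage, so every sign is +1 and the signed count is again 3.
print(sum(np.sign(abs(dp(z)) ** 2) for z in preimages))   # 3.0
```

Both counts agree with the algebraic degree of $p$, matching the remark quoted above that a holomorphic map is never orientation-reversing at a regular point.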
https://chemistry.stackexchange.com/questions/32645/why-amino-acids-zwitterion-become-either-negative-or-positive-at-low-and-high
# Why do amino acids (zwitterions) become either negative or positive at low and high pH?

Amino acids are zwitterions. At neutral pH, an amino acid's amino group has a positive charge and its carboxyl group has a negative charge. They cancel each other's charge thanks to the hydrogen ion that has moved between the two groups. But I am not able to understand why amino acids have a positive charge at low pH and a negative charge at high pH. Given that amino acids are zwitterions, how can both terminals have the same charge (either positive or negative) at low and high pH? Can someone explain this from a general chemistry and pH perspective (acid/base)?

PS: first posted on the Biology site, but the community recommended it be moved here.

Firstly, a note about terminology. The word "terminus" is reserved for the N- or C-termini of a polypeptide chain. For a free amino acid, you should refer to the carboxyl and amino groups as the $\alpha$-$\ce{COOH}$ and $\alpha$-$\ce{NH2}$ groups respectively.

Anyway, the -$\ce{COOH}$ group is acidic; above a certain pH, typically around 2, it can be deprotonated to form a negatively charged -$\ce{COO-}$ group. On the other hand, the -$\ce{NH2}$ group is basic; below a certain pH, typically around 9, it can be protonated to form a positively charged -$\ce{NH3+}$ group. I made a short table describing the protonation states of the two groups.

$$\begin{array}{c|c|c|c} \textbf{pH} & \textbf{Carboxyl group exists as} & \textbf{Amino group exists as} & \textbf{Net charge} \\ \hline <2 & \ce{-COOH}\text{ (neutral)} & \ce{-NH3+}\text{ (positively charged)} & +1 \\ 2\text{–}9 & \ce{-COO^-}\text{ (negatively charged)} & \ce{-NH3+}\text{ (positively charged)} & 0 \\ >9 & \ce{-COO^-}\text{ (negatively charged)} & \ce{-NH2}\text{ (neutral)} & -1 \\ \end{array}$$

For more information you can read Wikipedia or any biochemistry textbook - the first few chapters will often be devoted to discussing chemistry like this.

• To be frank, when it comes to academic things (science), Wikipedia hasn't been an easy-to-digest feed. I found the video that I commented below. Also, thanks to your explanation, I checked Morrison & Boyd, plus a lecture from MIT OpenCourseWare. – bonCodigo Jun 13 '15 at 23:55

• @bonCodigo - RE: Wikipedia - You're right. When Wikipedia first started out the articles were written at a fairly low level. Then the articles got rewritten by experts so that the articles were absolutely technically correct. The result many times is a lot of jargon and complicated formulas which are difficult for a beginning student to understand. – MaxW Sep 18 '16 at 18:41

By decreasing the pH you are actually increasing the concentration of hydrogen ions present. By the simplistic definition, $pH = -\log\ce{[H+]}$. Thus at a low pH you have lots of $\ce{[H+]}$ present. This can protonate your amino acid, resulting in a positive charge overall. The inverse is true for high pH (low concentration of $\ce{[H+]}$).

• My question: how can both terminals get protonated to result in one charge? Can you explain the chemical process of how the NH2 and COOH terminals get protonated and deprotonated? – bonCodigo Jun 9 '15 at 14:17

• Both terminals will not be protonated (under reasonable conditions). To start off with you have a single positive charge and a single negative charge. This is your zwitterion with no net charge. If you have lots of H+ you will protonate, removing a minus. So overall you have a single positive charge.
– Christopher Jun 9 '15 at 14:32

• At physiological pH (7.4) the amino group (which has a pKa of around 9) is protonated. – orthocresol Jun 9 '15 at 14:34

• I watched this video and things became much clearer. Isn't pH 7.4 considered to be in the neutral range? – bonCodigo Jun 10 '15 at 14:17

Look for the $pK_a/pK_b$ values for each amino acid. When the pH of the solution is lower than both $pK_a$s, the amino acid will be in its protonated, non-zwitterionic form. When the pH is greater than both $pK_a$s, the amino acid will be in its deprotonated, anionic form. When the pH is between the two $pK_a$s (around neutral pH), the amino acid will be in its zwitterionic form.

• Welcome to chemistry.SE! If you had any questions about the policies of our community, you can visit the help center or take a tour of the website. – M.A.R. Jun 9 '15 at 16:09
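To make the pH dependence quantitative, the fraction of each group that is protonated at a given pH follows from the Henderson–Hasselbalch relation. The Python sketch below is my own addition; the pKa values are the usual textbook numbers for glycine and are assumed here purely for illustration:

```python
# Approximate pKa values for glycine, assumed here for illustration
pKa_carboxyl = 2.34   # alpha-COOH <-> alpha-COO- + H+
pKa_amino    = 9.60   # alpha-NH3+ <-> alpha-NH2  + H+

def net_charge(pH):
    """Average net charge of an amino acid with no ionizable side chain."""
    frac_COO_minus = 1 / (1 + 10 ** (pKa_carboxyl - pH))  # fraction of -COO- (charge -1)
    frac_NH3_plus  = 1 / (1 + 10 ** (pH - pKa_amino))     # fraction of -NH3+ (charge +1)
    return frac_NH3_plus - frac_COO_minus

for pH in (1.0, 6.0, 12.0):
    print(pH, round(net_charge(pH), 2))   # roughly +0.96, 0.00, -1.00
```

The output reproduces the table above: essentially +1 well below the carboxyl pKa, about 0 between the two pKa values (the zwitterion), and -1 well above the amino pKa.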
https://mathzsolution.com/why-is-the-volume-of-a-cone-one-third-of-the-volume-of-a-cylinder/
# Why is the volume of a cone one third of the volume of a cylinder?

The volume of a cone with height $$h$$ and radius $$r$$ is $$\frac{1}{3} \pi r^2 h$$, which is exactly one third the volume of the smallest cylinder that it fits inside. This can be proved easily by considering a cone as a solid of revolution, but I would like to know if it can be proved, or at least visually demonstrated, without using calculus.

A visual demonstration for the case of a pyramid with a square base. As Grigory states, Cavalieri's principle can be used to get the formula for the volume of a cone. We just need the base of the square pyramid to have side length $$r\sqrt\pi$$. Such a pyramid has volume $$\frac13 \cdot h \cdot \pi \cdot r^2.$$

Then the area of the base is clearly the same. The cross-sectional area at distance a from the peak is a simple matter of similar triangles: the radius of the cone's cross section will be $$a/h \times r$$, and the side length of the square pyramid's cross section will be $$\frac ah \cdot r\sqrt\pi.$$ Once again, we see that the areas must be equal. So by Cavalieri's principle, the cone and square pyramid must have the same volume: $$\frac13\cdot h \cdot \pi \cdot r^2$$
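This is not a calculus-free proof, but as a quick numerical check of the equal-cross-sections claim, one can slice both solids into thin layers and compare the stacked volumes. A short Python sketch (my own addition; the radius and height are arbitrary):

```python
import math

def sliced_volume(cross_section_area, h, n=100_000):
    """Stack n thin horizontal slices; cross_section_area(a) is the area at depth a below the apex."""
    da = h / n
    return sum(cross_section_area((i + 0.5) * da) for i in range(n)) * da

r, h = 2.0, 5.0

# Cone: the cross section at depth a is a disc of radius (a/h) * r.
cone = sliced_volume(lambda a: math.pi * ((a / h) * r) ** 2, h)

# Square pyramid with base side r * sqrt(pi): the cross section is a square.
pyramid = sliced_volume(lambda a: ((a / h) * r * math.sqrt(math.pi)) ** 2, h)

print(cone, pyramid, math.pi * r ** 2 * h / 3)   # all three agree to several digits
```

Matching cross sections at every height is exactly the hypothesis of Cavalieri's principle, so the cone inherits the pyramid's volume of one third of the base area times the height.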
https://forsaljningavaktierqcmw.web.app/24411/73751.html
# Exact values of sine, cosine and tangent for the special angles 0°, 30°, 45°, 60° and 90°

Since $45^\circ$ is a standard angle, you use the known values of sine and cosine for this angle. The special angles 0°, 30°, 45°, 60° and 90° are the ones whose sine, cosine, tangent, cotangent, secant and cosecant can be written exactly using nothing more than square roots.

A convenient pattern for the sines of the special angles:

Sin 0° = √0/√4 = 0
Sin 30° = √1/√4 = 1/2
Sin 45° = √2/√4 = 1/√2
Sin 60° = √3/√4 = √3/2
Sin 90° = √4/√4 = 1

So the exact value of sin 45° in fraction form is 1/√2 (equivalently √2/2). It is an irrational number, equal to 0.7071067812… in decimal form, and is usually taken as approximately 0.7071. Likewise, the value of cos 30° is exactly √3/2, an irrational number equal to 0.8660254037… and taken as approximately 0.866.

The remaining ratios follow from tan = sin/cos, sec = 1/cos, cot = cos/sin and cosec = 1/sin. For example, tan 45° = sin 45°/cos 45° = 1 and tan 60° = sin 60°/cos 60° = √3, while tan 90° = sin 90°/cos 90° = 1/0, which is NOT DEFINED.

Angles outside the first quadrant are reduced to these reference angles. For instance, since 315° is between 270° and 360°, sin 315° is evaluated in the fourth quadrant; the terminal side is 45° away from the positive x-axis, so sin 315° = −sin 45° = −√2/2. Not all angles have exact expressions involving nothing more than square roots, however.
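A quick way to confirm the √n/2 pattern above is to compare it against floating-point sine values; the snippet below is only a numerical check added for illustration, not a derivation:

```python
import math

# sin(0), sin(30), sin(45), sin(60), sin(90) should equal sqrt(n)/2 for n = 0..4
for n, deg in enumerate([0, 30, 45, 60, 90]):
    exact = math.sqrt(n) / 2
    print(deg, exact, math.isclose(math.sin(math.radians(deg)), exact))
```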
https://www.aenglish.co.uk/morphology/the-ta-form-of-ru-and-u-verbs/
# The past verb form
So far, with the -te, -nai and plain forms, we have learned that verbs come in two main inflection types: -ru verbs, where the verb stem remains unchanged when different endings are attached, and -u verbs, where the last letter of the verb stem can change depending on the ending that is attached. Now let's look at the -ta form, which is very simple as it uses the same stem as the -te form.
-ru verbs: Here's a list of some common -ru verbs.
-u verbs: Here is a table of the different ways to make -ta forms for -u verbs.
Irregular verbs
Conclusion: If you've mastered the -te form, the -ta form is very easy, as it follows exactly the same pattern.
2022-05-22 18:02:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438103199005127, "perplexity": 3072.0446754240756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00422.warc.gz"}
http://codeforces.com/topic/66644/en3
Does IOI rank correlate with CF rating? Revision en3, by I_Love_Tina, 2019-03-27 14:04:58 Background I have recently stumbled across a paper which analyzed the correlation between the IMO medal and future success in the mathematical field. So, inspired by annual predictions of IOI rankings based on CF ratings, I don't know what to do with my time decided to take a more "serious" approach. Method I collected the data from the last $7$ editions of IOI in order to create the graph of $f(x)$ is the maximal rating of the participant with rank $x$. For the last $6$ editions, I relied almost solely on the IOI statistics website. The $2012$ edition, however, is missing a lot of information and I collected the CF handles either from snarknews or by finding the handle on search engines. I put the participants in the same order they were arranged in the scoreboard, so if there are multiple people on the same position, they would be assigned consecutive ranks. In case there was an unofficial participant, I assigned the mean of the ranks between which this participant was situated. If there were participants with no competitions, I assigned the rating $1500$. This should not have big influence on the graph because there were very few such participants. Things to consider: • A lot of participants from past editions don't participate in CF round anymore • The rating system has undergone inflation. • The rating system fluctuates too much. • The style/difficulty/originality of CF problems highly depends on the round and unlike the AtCoder case this creates disproportionality of rating distribution. So do the div 2/div 3/educational rounds. • Not everybody is active in CF rounds. IOI 2012 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 25 & 0.96 & 2497 & 2573.5 \\ \hline \text{Silver} & 52 & 0.38 & 2220 & 2247.5 \\ \hline \text{Bronze} & 77 & 0.27 & 2137 & 2100.5 \\ \hline \text{No Medal} & 155 & 0.31 & 1802 & 1839.5 \\ \hline \text{Total} & 310 & 0.47 & 2135 & 2124 \\ \hline \end{array}$ IOI 2013 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 25 & 0.96 & 2499 & 2528.7 \\ \hline \text{Silver} & 50 & 0.50 & 2321 & 2322.8 \\ \hline \text{Bronze} & 74 & 0.38 & 2145 & 2127.9 \\ \hline \text{No Medal} & 149 & 0.40 & 1826 & 1850.9 \\ \hline \text{Total} & 298 & 0.60 & 2137 & 2130 \\ \hline \end{array}$ IOI 2014 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 25 & 0.875 & 2499 & 2538.8 \\ \hline \text{Silver} & 50 & 0.54 & 2270 & 2333.3 \\ \hline \text{Bronze} & 74 & 0.29 & 2127 & 2119.2 \\ \hline \text{No Medal} & 149 & 0.39 & 1919 & 1920.5 \\ \hline \text{Total} & 298 & 0.58 & 2162 & 2170.4 \\ \hline \end{array}$ IOI 2015 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 29 & 1.0 & 2596 & 2714.7 \\ \hline \text{Silver} & 53 & 0.53 & 2364 & 2404.2 \\ \hline \text{Bronze} & 79 & 0.34 & 2187 & 2208.7 \\ \hline \text{No Medal} & 160 & 0.53 & 1794 & 1817.7 \\ \hline \text{Total} & 321 & 0.66 & 2133 & 2162.2 \\ \hline \end{array}$ IOI 2016 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 26 & 1.0 & 2579 & 2643.9 \\ \hline \text{Silver} & 51 & 0.66 & 2362 & 2353.6 \\ \hline \text{Bronze} & 77 & 0.40 & 2089 & 2104.6 \\ \hline \text{No Medal} & 154 & 0.51 & 1779 & 1825.4 \\ \hline \text{Total} & 308 & 0.71 & 2103 
& 2124.6 \\ \hline \end{array}$ IOI 2017 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 26 & 0.84 & 2609 & 2548.2 \\ \hline \text{Silver} & 50 & 0.57 & 2256 & 2293.4 \\ \hline \text{Bronze} & 76 & 0.42 & 2133 & 2129.4 \\ \hline \text{No Medal} & 152 & 0.60 & 1899 & 1889.8 \\ \hline \text{Total} & 304 & 0.73 & 2089 & 2104.1 \\ \hline \end{array}$ IOI 2018 $\begin{array}{c|lcr} & \# & \text{Fraction of CF users} & \text{Median} & \text{Mean} \\ \hline \text{Gold} & 29 & 0.96 & 2459 & 2482.3 \\ \hline \text{Silver} & 55 & 0.57 & 2223 & 2220 \\ \hline \text{Bronze} & 83 & 0.35 & 2164 & 2132 \\ \hline \text{No Medal} & 168 & 0.49 & 1771 & 1782.1 \\ \hline \text{Total} & 335 & 0.65 & 2089 & 2063.1 \\ \hline \end{array}$ Conclusion So, unsurprisingly, on average you can see that the function is decreasing by rank, however it fluctuates too much between close ranks. Therefore, on an individual level, you can't predict someone's future rating based on IOI ranking. #### History Revisions Rev. Lang. By When Δ Comment en3 I_Love_Tina 2019-03-27 14:04:58 901 (published) en2 I_Love_Tina 2019-03-27 02:11:49 4462 Tiny change: 'ground**\nI have r' -> 'ground**\n\nI have r' en1 I_Love_Tina 2019-03-26 19:46:41 306 Initial revision (saved to drafts)
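As a rough illustration of the aggregation described in the Method section (this is not the author's actual script; the row layout and the 1500 default for missing accounts follow the description above, everything else is assumed), the per-medal statistics could be computed like this:

```python
from statistics import mean, median

# rows of (medal, max CF rating or None) -- illustrative data, not the real IOI results
rows = [("Gold", 2650), ("Gold", 2500), ("Silver", 2300), ("Silver", None),
        ("Bronze", 2100), ("No Medal", None), ("No Medal", 1800)]

groups = {}
for medal, rating in rows:
    groups.setdefault(medal, []).append(rating)

for medal, ratings in groups.items():
    # Participants without a Codeforces account are assigned rating 1500, as in the post.
    filled = [r if r is not None else 1500 for r in ratings]
    frac_cf = sum(r is not None for r in ratings) / len(ratings)
    print(medal, len(ratings), round(frac_cf, 2), median(filled), round(mean(filled), 1))
```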
2021-12-06 12:56:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47419989109039307, "perplexity": 3791.3564085778166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363292.82/warc/CC-MAIN-20211206103243-20211206133243-00251.warc.gz"}
https://bioriental.net/supplementary-materialsfig-s1-zero-time-points-for-each-cell-populace-obtained/
# Supplementary MaterialsFig S1. zero time points for each cell populace obtained Supplementary MaterialsFig S1. zero time points for each cell populace obtained from bulk RNA-seq time-course experiments, and indicated as log(fold-change). Related to Number 2B-D. NIHMS1004634-supplement-Table_S1.xlsx (4.3M) GUID:?60482DB4-D318-4220-863A-BA6473CB1ABF Summary Long-term hematopoietic stem cells (LT-HSCs) maintain hematopoietic output throughout an animals lifespan. However, with age the balance is definitely disrupted and LT-HSCs produce a myeloid-biased output, resulting in poor immune replies to infectious Duloxetine problem and the advancement of myeloid leukemias. Right here, we present that youthful and aged LT-HSCs respond in a different way to inflammatory stress, such that aged LT-HSCs produce a cell-intrinsic, myeloid-biased manifestation system. Using single-cell RNA-seq, we determine a myeloid-biased subset Duloxetine within the LT-HSC populace (mLT-HSCs) that is common among aged LT-HSCs. We determine CD61 like a marker of mLT-HSCs, and show that CD61-high LT-HSCs are distinctively primed to respond to acute inflammatory concern. We predict several transcription factors to regulate mLT-HSCs gene system, and display that and play an important part in age-related inflammatory myeloid bias. We have therefore recognized and isolated a LT-HSC subset that regulates myeloid versus lymphoid balance under inflammatory challenge and with age. (Baldridge et al., 2010), M-CSF (Mossadegh-Keller et al., 2013), and the gram-negative bacterial component lipopolysaccharide (LPS) (Nagai et al., 2006). In response to acute LPS exposure, LT-HSCs increase proliferation, mobilize to the peripheral bloodstream (King and Goodell, 2011), and initiate emergency myelopoiesis to increase the systems output of innate immune cells (Haas et al., 2015). This improved output may also be mediated by hematopoietic progenitors, such as multipotent progenitors (MPPs) (Pietras et al., 2015; Young et al., 2016), in part due to direct secretion of cytokines that travel myeloid differentiation (Zhao et al., 2014). Several hypotheses have been proposed to explain the age related changes in LT-HSC function (Kovtonyuk et al., 2016). First, cell-intrinsic changes within each aged LT-HSC might make it inherently myeloid-biased (Grover et al., 2016; Rossi et Duloxetine al., 2005). Second, the LT-HSC populace may be made up of subsets of myeloid- and lymphoid-biased cells, the structure of which adjustments with age group in a way that myeloid-biased LT-HSCs are more frequent inside the aged LT-HSC people (Dykstra et al., 2007; Graf and Gekas, 2013; Yamamoto et al., 2013). The real character of the age-related adjustments might actually end up being a mix of both these hypotheses, in a way that with age group there’s a developing subset of even more intrinsically myeloid-biased LT-HSCs. The transcriptional and useful condition of LT-HSCs in continuous condition and in response to inflammatory mediators can help reveal these questions, but continues to be poorly understood currently. Several epigenomic and transcriptomic adjustments have been noticed during mass and single-cell appearance analysis of youthful and aged LT-HSCs Rabbit Polyclonal to Tau (phospho-Thr534/217) (Cabezas-Wallscheid et al., 2014; Grover et al., 2016; Kowalczyk et al., 2015; Sanjuan-Pla et al., 2013; Sunlight et al., 2014; Yu et al., 2016). 
Nevertheless, it is unclear whether and how these changes translate into altered LT-HSC function, as observed with age-related myeloid bias (Dykstra et al., 2011; Gekas and Graf, 2013; Yamamoto et al., 2018). In particular, a previous study using single-cell RNA-seq (scRNA-seq) of steady-state, resting LT-HSCs (Kowalczyk et al., 2015) did not identify a subpopulation structure. An understanding of how inflammatory mediators influence the LT-HSC response, and how this response changes with age, may therefore help elucidate the mechanism underlying age-related myeloid bias. This may additionally offer insight into age-related pathologies, such as inappropriate immune responses to vaccines or infectious challenge, and the development of myeloid leukemia. In this work, we investigate.
2021-08-02 16:42:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336890935897827, "perplexity": 10687.764229073955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154321.31/warc/CC-MAIN-20210802141221-20210802171221-00170.warc.gz"}
https://codegolf.stackexchange.com/posts/108923/timeline
# Timeline for Enthusiastically Russianify a String ### Current License: CC BY-SA 3.0 4 events when toggle format what by license comment May 5 '17 at 12:01 comment Also, the current state of your code doesn't work, (#(...) "codegolf" 125) must add 125 percent of the length of "codegolf" instead of 125 times the length of "codegolf". So, the fixed program would be: #(str %(apply str(repeat(*(count %)%2 1/100)\)))), which is 49 bytes. May 5 '17 at 11:58 comment Welcome to the world of Lisp! :P In Clojure, you can use the condensed form of anonymous functions #(...), and you can get rid of the print (since function returns should be acceptable). You can change reduce to apply for the str function, and you can change ")" to \), which does the same thing. So, the final code should be: #(str %(apply str(repeat(*(count %)%2)\))))). Feb 2 '17 at 3:49 history edited
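From the comments above, the challenge appears to take a string and a percentage and append that percentage of the string's length in ")" characters. A rough, non-golfed Python rendering of that behaviour (the function name and the flooring of fractional counts are assumptions, not part of the challenge text) might look like:

```python
def russianify(s: str, percent: int) -> str:
    # Append ')' characters amounting to `percent` percent of the string's length.
    # Whether fractional counts are floored or rounded is an assumption here.
    return s + ")" * int(len(s) * percent / 100)

print(russianify("codegolf", 125))  # "codegolf" followed by 10 closing parentheses
```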
2022-01-21 06:05:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3836194574832916, "perplexity": 3596.322350298726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00371.warc.gz"}
https://documen.tv/question/you-are-trying-to-overhear-a-juicy-conversation-but-from-your-distance-of-20-0-m-it-sounds-like-16976841-90/
## You are trying to overhear a juicy conversation, but from your distance of 20.0 m, it sounds like only an average whisper of 20.0 dB. So you
Question: You are trying to overhear a juicy conversation, but from your distance of 20.0 m, it sounds like only an average whisper of 20.0 dB. So you decide to move closer to give the conversation a sound level of 70.0 dB instead. How close should you come?
Answer:
Given: distance = 20.0 m, average whisper = 20.0 dB, desired sound level = 70.0 dB.
We know that the reference (threshold) intensity is $$I_{o}=10^{-12}\ W/m^2$$
First we calculate the sound intensity at the distance of 20 m, using the formula for sound level $$dB=10\log\left(\dfrac{I_{a}}{I_{o}}\right)$$
Putting the values into the formula: $$20=10\log\left(\dfrac{I_{a}}{10^{-12}}\right)$$ $$10^{2}=\dfrac{I_{a}}{10^{-12}}$$ $$I_{a}=10^{-10}\ W/m^2$$
If the conversation is instead to have a sound level of 70.0 dB, we need the corresponding intensity. Using the same formula, $$dB=10\log\left(\dfrac{I_{b}}{I_{o}}\right)$$
Putting the values into the formula: $$70=10\log\left(\dfrac{I_{b}}{10^{-12}}\right)$$ $$10^{7}=\dfrac{I_{b}}{10^{-12}}$$ $$I_{b}=10^{-5}\ W/m^2$$
We know that intensity is inversely proportional to the square of the distance, so $$\dfrac{I_{a}}{I_{b}}=\dfrac{R_{b}^2}{R_{a}^2}$$
Putting the values into the formula: $$\dfrac{10^{-10}}{10^{-5}}=\dfrac{R_{b}^2}{20^2}$$ $$R_{b}^2=20^2\times\dfrac{10^{-10}}{10^{-5}}$$ $$R_{b}=\sqrt{20^2\times\dfrac{10^{-10}}{10^{-5}}}$$ $$R_{b}=0.063\ m$$
Hence, the distance from the conversation should be 0.063 m.
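As a quick sanity check (not part of the original answer), the same numbers fall out of a few lines of Python:

```python
import math

I0 = 1e-12                     # reference intensity, W/m^2
Ia = I0 * 10 ** (20.0 / 10)    # 20 dB -> 1e-10 W/m^2
Ib = I0 * 10 ** (70.0 / 10)    # 70 dB -> 1e-5  W/m^2
Ra = 20.0                      # original distance, m

# Intensity falls off as 1/R^2, so Ia/Ib = Rb^2/Ra^2.
Rb = math.sqrt(Ra ** 2 * Ia / Ib)
print(Ia, Ib, round(Rb, 3))    # ≈ 1e-10, 1e-05, 0.063 m
```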
2022-12-07 00:48:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6204554438591003, "perplexity": 2691.6407201109873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00607.warc.gz"}
https://planetmath.org/HypergeometricFunction
# hypergeometric function
Let $(a,b,c)$ be a triple of complex numbers with $c$ not belonging to the set of negative integers. For a complex number $w$ and a non-negative integer $n$, use the Pochhammer symbol $(w)_{n}$ to denote the expression
$(w)_{n}=w(w+1)\dots(w+n-1).$
The Gauss hypergeometric function, ${}_{2}F_{1}$, is then defined by the following power series expansion:
${}_{2}F_{1}(a,b;\,c\,;z)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}\,n!}z^{n}.$
Title: hypergeometric function. Type: Definition. MSC classification: 33C05. Synonym: Gauss hypergeometric function. Related: TableOfMittagLefflerPartialFractionExpansions.
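As a quick illustration (not part of the PlanetMath entry), the partial sums of this series can be computed directly; the example below uses the standard identity ${}_{2}F_{1}(1,1;2;z)=-\ln(1-z)/z$ for $|z|<1$ as a check, and for a further cross-check one could compare against scipy.special.hyp2f1.

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    """Partial sum of the Gauss series: sum_n (a)_n (b)_n / ((c)_n n!) z^n."""
    total, term = 0.0, 1.0          # the n = 0 term is 1
    for n in range(terms):
        total += term
        # Build the (n+1)-th term from the n-th via the Pochhammer recurrences.
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

z = 0.5
print(hyp2f1_series(1, 1, 2, z), -math.log(1 - z) / z)  # both ≈ 1.386294
```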
2020-03-31 02:56:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 8, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9725798964500427, "perplexity": 701.6791749778808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370499280.44/warc/CC-MAIN-20200331003537-20200331033537-00550.warc.gz"}
https://www.lesswrong.com/posts/TYAA4iNCFaDvPD6gB/a-rational-altruist-punch-in-the-stomach
# 12 Personal Blog Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it. So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is "moral" to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much. So why do many people seem to care about policy that effects far future folk?   I suspect our paternalistic itch pushes us to control the future, rather than to enrich it.  We care that the future celebrates our foresight, not that they are happy. In the comments  some people gave counterarguments. For those in a rush, the best ones are Toby Ord's. But I didn't bite any of the counterarguments to the extent that it would be necessary to counter the 10^100. I have some trouble conceiving of what would beat a consistent argument a googol fold. Things that changed my behavior significantly over the last few years have not been many, but I think I'm facing one of them. Understanding biological immortality was one, it meant 150 000 non-deaths per day. Understanding the posthuman potential was another. Then came the 10^52 potential lives lost in case of X-risk, or if you are conservative and think only biological stuff can have moral lives on it, 10^31. You can argue about which movie you'll watch, which teacher would be best to have, who should you marry. But (if consequentialist) you can't argue your way out of 10^31 or 10^52. You won't find a counteracting force that exactly matches, or really reduces the value of future stuff by 3 000 000 634 803 867 000 000 000 000 000 000 777 000 000 000 999  fold Which is way less than 10^52 You may find a fundamental and qualitative counterargument "actually I'd rather future people didn't exist", but you won't find a quantitative one. Thus I spend a lot of time on X-risk related things. Back to Robin's argument: so unless someone gives me a good argument against investing some money in the far future (and discovering some vague techniques of how to do it that will make it at least one in a millionth possibility) I'll set aside a block of money X, a block of time Y, and will invest in future people 12 thousand years from now. If you don't think you can beat 10^100, join me. And if you are not in a rush, read this also, for a bright reflection on similar issues. # 12 Mentioned in New Comment But I didn't bite any of the counterarguments to the extent that it would be necessary to counter the 10^100. I don't think this is very hard if you actually look at examples of long-term investment. Background: http://www.gwern.net/The%20Narrowing%20Circle#ancestors and especially http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs First things: Businesses and organizations suffer extremely high mortality rates; one estimate puts it at 99% chance of mortality per century. (This ignores existential risks and lucky aversions like nuclear warfare, and so is an underestimate of the true risks.) So to survive, any perpetuity has a risk of 0.01^120 = 1.000000000000001e-240. That's a good chunk of the reason to not bother with long-term trusts right there! 
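The headline numbers in the post and in the comment above are easy to verify (this check is not part of the original thread):

```python
import math

# 2% per year for 12,000 years, in orders of magnitude:
print(12000 * math.log10(1.02))   # ≈ 103.2, i.e. roughly a googol-fold return
# ...and discounted by the post's 1/1000 chance the recipients receive it, still ≈ 10^100.

# The comment's survival estimate: a 99% chance of organizational death per century,
# compounded over 120 centuries:
print(120 * math.log10(0.01))     # -240, i.e. 0.01**120 = 1e-240
```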
We can confirm this empirically by observing that there were what must have been many scores of thousands of waqfs in the Islamic world - perpetual charities - and very few survive or saw their endowments grow. (I have pointed Hanson at waqfs repeatedly, but he has yet to blog on that topic.) Similarly, we can observe that despite the countless temples, hospitals, homes, and institutions with endowments in the Greco-Roman world just 1900 years ago or so - less than a sixth of the time period in question - we know of zero surviving institutions, all of them having fallen into decay/disuse/Christian-Muslim expropriation/vicissitudes of time. The many Buddhist institutions of India suffered a similar fate, between a resurgent Hinduism and Muslim encroachment. We can also point out that many estimates ignore a meaningful failure mode: endowments or nonprofits going off-course and doing things the founder did not mean them to do - the American university case comes to mind, as does the British university case I cite in my essay, and there is a long vein (some of it summarized in Cowen's Good and Plenty) of conservative criticism of American nonprofits like the Ford Foundation pointing out the 'liberal capture' of originally conservative institutions, which obviously defeats the original point. (BTW, if you read the waqf link you'd see that excessive iron-clad rigidity in an organization's goal can be almost as bad, as the goals become outdated or irrelevant or harmful. So if the charter is loose, the organization is easily and quickly hijacked by changing ideologies or principal-agent problems like the iron law of oligarchy; but if the charter is rigid, the organization may remain on-target while becoming useless. It's hard to design a utility function for a potentially powerful optimization process. Hm.... why does that sentence sound so familiar... It's almost as if we needed a theory of Friendly Artificial General Organizations...) Survivorship bias as a major factor in overestimating risk-free return overtime is well-known, and a new result came out recently, actually. We can observe many reasons for survivorship bias in estimates of nonprofit and corporate survival in the 20th century (see previously) and also in financial returns: Czarist Russia, the Weimar and Nazi Germanies, Imperial Japan, all countries in the Warsaw Pact or otherwise communist such as Cuba/North Korea/Vietnam, Zimbabwe... While I have seen very few invocations recently of the old chestnut that 'stock markets deliver 7% return on a long-term basis' (perhaps that conventional wisdom has been killed), the survivorship work suggests that for just the 20th century we might expect more like 2%. The risk per year is related to the size of the endowment/investment; as has already been point out, there is fierce legal opposition to any sort of perpetuity, and at least two cases of perpetuities being wasted or stolen legally. Historically, fortunes which grow too big attract predators, become institutionally dysfunctional and corrupt, and fall prey to rare risks. Example: the non-profit known as the Catholic Church owned something like a quarter of all of England before it was expropriated precisely because it had so effectively gained wealth and invested it (property rights in England otherwise having been remarkably secure over the past millennium). Not to mention the Vatican States or its holdings elsewhere. 
The Buddhist monasteries in China and Japan had issues with growing so large and powerful that they became major political and military players, leading to war and extirpation by other actors such as Oda Nobunaga. Any perpetuity which becomes equivalent to a large or small country will suffer the same mortality rates. And then there's opportunity cost. We have good reason to expect the upcoming centuries to be unusually risky compared to the past: even if you completely ignore new technological issues like nanotech or AI or global warming or biowarfare, we still suffer under a novel existential threat of thermonuclear warfare. This threat did not exist at any point before 1945, and systematically makes the future riskier than the past. Investing in a perpetuity, itself investing in ordinary commercial transactions, does little to help except possibly some generic economic externalities of increased growth (and no doubt there are economists who, pointing to current ultra-low interest rates and sluggish growth and 'too much cash chasing safe investments', would deprecate even this). Compounding-wise, there are other forms of investment: investment into scientific knowledge, into more effective charity (surely saving peoples' lives can have compounding effects into the distant future?), and so on. So to recap: 1. organizational mortality is extremely high 2. financial mortality is likewise extremely high; and both organizational & financial mortality are relevant 3. all estimates of risk are systematically biased downwards, estimates indicating that one of these biases is very large 4. risks for organizations or finances increases with size 5. opportunity cost is completely ignored Any of these except perhaps #3 could be sufficient to defeat perpetuities, and I think that combined, the case for perpetuities is completely non-existent. Philip Trammel has criticized my comment here: https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf#page=33 He makes 3 points: 1. Perhaps the many failed philanthropies were not meant to be permanent? First, they almost certainly were. Most philanthropies were clan or religious-based. Things like temples and monasteries are meant to be eternal as possible. What Buddhist monastery or Catholic cathedral was ever set up with the idea that it'd wind up everything in a century or two? What dedication of a golden tripod to the Oracle at Delphi was done with the idea that they'd be done with the whole silly paganism thing in half a millennium? What clan compound was created by a patriarch not hoping to be commemorated and his grave honored for generations without end? Donations were inalienable, and often made with stipulations like a mass being said for the donators' soul once a year forever or the Second Coming, whichever happened first. How many funny traditions or legal requirements at Oxford or Cambridge, which survive due to a very unusual degree of institutional & property right continuity in England, came with expiration dates or entailments which expired? (None come to mind.) The Islamic world went so far as to legally remove any option of being temporary! To the extent that philanthropies are not encumbered today, it's not for any lack of desire by philanthropists (as charities constantly complain & dream of 'unrestricted' funds), but legal systems refusing to enforce them via the dead hand doctrine, disruption of property rights, and creative destruction. 
My https://www.gwern.net/The-Narrowing-Circle is relevant, as is Fukuyama's The Origins of Political Order, which makes clear what a completely absurd thing that is to suggest of places like Rome or China. Second, even if they were not, most of them do not expire due to reaching scheduled expiration dates, showing that existing structures are inadequate even to the task of lasting just a little while. Trammel seems to believe there is some sort of silver bullet institutional structure that might allow a charity to accumulate wealth for centuries or millennia if only the founders purchased the 1000-year charity plan instead of cheaping out by buying the limited-warranty 100-year charity plan. But there isn't. 2. His second point is, I'm not sure how to summarize it: Second, it is misleading to cite the large numbers of failed philanthropic institutions (such as Islamic waqfs) which were intended to be permanent, since their closures were not independent. For illustration, if a wave of expropriation (say, through a regional conquest) is a Poisson process withλ= 0.005, then the probability of a thousand-year waqf is 0.7%. Splitting a billion-dollar waqf into a billion one-dollar waqfs, and observing that none survive the millennium, will give the impression that “the long-term waqf survival rate is less than one in one billion”. I can't see how this point is relevant. Aside from his hypothetical not being the case (the organizational death statistics are certainly not based on any kind of fission like that), if a billion waqfs all manage to fail, that is a valid observation about the durability of waqfs. If they were split apart, then they all had separate managers/staff, separate tasks, separate endowments etc. There will be some correlation, and this will affect, say, confidence intervals - but the percentage is what it is. 3. His third point argues that the risk needs to grow with size for perpetuities to be bad ideas. This doesn't seem right either. I gave many reasons quite aside from that against perpetuities, and his arguments against the very plausible increasing of risk aren't great either (pogroms vs the expropriation of the Church? but how can that be comparable when by definition the net worth of the poor is near-zero?). A handful of relatively recent attempts explicitly to found long-term trusts have met with with partial success (Benjamin Franklin) or comical failure (James Holdeen). Unfortunately, there have not been enough of these cases to draw any compelling conclusions. I'd say there's more than enough when you don't handwave away millennia of examples. Incidentally, I ran into another failure of long-term trusts recently: Wellington R. Burt's estate trustees managed to, over almost a century of investment in the USA during possibly the greatest sustained total economic growth in all of human history, with only minor disbursements and some minor legal defeats, no scandals or expropriation or anything, nevertheless realize a real total return of around 75% (turning the then-inflation-adjusted equivalent of ~\$400m into ~\$100m). Hi gwern, thanks for the reply. I think you might be misunderstanding my points here. In particular, regarding point 2, I'm not suggesting that the waqfs split, or that anything at all like that might have happened. The “split waqfs” point is just meant to illustrate the fact that, when waqf failures are correlated for whatever reason, arbitrarily many closures with zero long-term survivors can be compatible with a relatively low annual hazard rate. 
The failure of a billion waqfs would be a valid observation, but it would be an observation compatible with the belief that the probability that a new waqf survives a millennium is non-negligible. In any event--I should probably have reached out to you sooner, sorry about that! Now unfortunately I'll be too busy to discuss this more until June, but let me know if you're interested in going over all three points (and anything else regarding the value of long-term philanthropic investment) once summer comes. I would sincerely like to understand the source of our disagreements on this. In the meantime, thanks for the Wellington R. Burt example, I'll check it out! So to survive, any perpetuity has a risk of 0.01^120 = 1.000000000000001e-240. The premises in this argument aren't strong enough to support conclusions like that. Expropriation risks have declined strikingly, particularly in advanced societies, and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels, e.g. a stable world government run by patient immortals, or with an automated legal system designed for ultra-stability. ETA: Weitzman on uncertainty about discount/expropriation rates. The premises in this argument aren't strong enough to support conclusions like that. Sure. But the support for other parts of the perpetuity argument like long-term real returns aren't strong either. And a better model would take into account diseconomies of scale. Improbability needs to work both ways, or else you're just setting up Pascalian wagers... Expropriation risks have declined strikingly, particularly in advanced societies, They have? and it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels Even easier to describe scenarios in which the risk spikes. How's the Middle East doing lately? Are the various nuclear powers like Russia and North Korea still on friendly terms with everyone, and nuclear war utterly unthinkable? ETA: Weitzman on uncertainty about discount/expropriation rates. This seems to be purely theoretical modeling which does not address my many disjunctive & empirical arguments above against the perpetuity strategy. Expropriation risks have declined strikingly, particularly in advanced societies I see no reasons to conclude that. Au contraire, I see expropriation risks rising as the government power grows and the political need to keep the feeding trough full becomes difficult to satisfy. it's easy enough to describe scenarios in which the annual risk of expropriation falls to extremely low levels "Easy to describe" is not at all the same thing as "Are likely". Both utopias and dystopias are easy to describe. I have some trouble conceiving of what would beat a consistent argument a googol fold. Now I don't anymore. I stand corrected. Thank you Gwern. Robin used a Dirty Math Trick that works on us because we're not used to dealing with large numbers. He used a large time scale of 12000 years, and assumed exponential growth in wealth at a reasonable rate over that time period. But then for depreciating the value of the wealth due to the fact that the intended recipients might not actually receive it, he used a relatively small linear factor of 1/1000 which seems like it was pulled out of a hat. It would make more sense to assume that there is some probability every year that the accumulated wealth will be wiped out by civil war, communist takeover, nuclear holocaust, etc etc. 
Even if this yearly probability were small, applied over a long period of time, it would still counteract the exponential blowup in the value of the wealth. The resulting conclusion would be totally dependent on the probability of calamity: if you use a 0.01% chance of total loss, then you have about a 30% chance of coming out with the big sum mentioned in the article. But if you use a 1% chance, then your likelihood of making it to 12000 years with the money intact is 4e-53. Do you think the risk per year of losing the accumulated wealth is higher in the far future than in the near future? If the risk is not higher, doesn't your objection generalize to ordinary (near-future) investments? [-][anonymous]9y 8 Yes. If you're not around to manage the money, it's far more likely to be embezzled or end up used on something no longer useful. Also, many possible risks you can see coming before they actually happen. The Brazilian Empire isn't going to invade and pillage the USA in the next 10 years, but can you be so sure that it won't happen in the 3240s? Oh you know nothing about the Brazilian Empire... We look tame on the outside... but it's the atom's inside that counts... As I said in response to Gwern's comment, there is uncertainty over rates of expropriation/loss, and the expected value disproportionately comes from the possibility of low loss rates. That is why Robin talks about 1/1000, he's raising the possibility that the legal order will be such as to sustain great growth, and the laws of physics will allow unreasonably large populations or wealth. Now, it is still a pretty questionable comparison, because there are plenty of other possibilities for mega-influence, like changing the probability that such compounding can take place (and isn't pre-empted by expropriation, nuclear war, etc). Nice catch! I'm not sure what an investment in a particular far-future time would look like. Money does not, in fact, breed and multiply when left in a vault for long enough. It increases by being invested in things that give payoffs or otherwise rise in value. Even if you have a giant stockpile of cash and put it in a bank savings account, the bank will then take it and lend it out to people who will make use of it for whatever projects they're up to. If you do that, all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first while, and then when you eventually take it out you take the choice back and make it yourself. The one way I can think of to actually invest in the distant future is to find or create some project that will have a massive payoff in the distant future but low payoffs before that, and I don't think anyone knows of a project that pays off further than 100 years in the future. Maybe you could try to create a fund that explicitly looks for far-future payoff opportunities and invests in them, but I don't think one exists right now, and the idea is non-trivial. I dunno, maybe there's something else I'm missing, though. Likewise, if one actually expects to collect a googol dollars from investment, then either (a) galactic economies would need to be servicing the interest payments or (b) inflation has rendered dollars nearly valueless. I'm not sure what an investment in a particular far-future time would look like. Maybe like this: Franklin [left] £1000 each to Philadelphia and Boston in his will to be invested for 200 years. 
He died in 1790, and by 1990 the funds had grown to 2.3, 5M\$, giving factors of 35, 76 inflation-adjusted gains, for annual returns of 1.8, 2.2%. This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of 200 years. It probably didn't do particularly more for the people 200 years from the time than it did for people in the interim. Also, the most recent comment by cournot is interesting on the topic: You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative wealth you get a different figure [and standard price adjustments aren't great for looking far back in the past]. I think a pound was about 5 dollars. So if we assume that 1000 pounds = 5000 nominal dollars and we use the Econ History's price deflators http://www.measuringworth.com/uscompare/ we find that this comes to over \$2M if we use the unskilled wage and about \$5M if we use nominal GDP. As a relative share of GDP, this figure would have been an enormous \$380M or so. The latter is not an irrelevant calculation. Given how wealthy someone had to be (relative to the poor in the 18th century) to fork over a thousand pounds in Franklin's time, he might have done more good with it then than you could do with 2 to 5 million bucks today. That is unreasonable because we have more access to means of helping the poor today. If you expect the trend to go on into the future, than 2 million tomorrow is always better than a thousand today, which approximates maximal 3 lives on AMF of SCI all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first while, You're letting the bank and borrowers choose uses which they expect to be worth more than the cost, under the knowledge that they may be bankrupted if they choose poorly and keep the surplus profits if they choose well. These constraints tend to lead to fewer consumable luxury purchases and more carefully selected productive investments, and having more of the latter increases the potential economic output of the future. There are many caveats to this, though. Does our potential economic output really have no upper bound within a hundred orders of magnitude of its present state? That seems unlikely, but if not then those exponential returns are just the bottom tails of S-curves. Is this economic system going to be protected from overwhelming corruption, violence, and theft for a future period longer than all prior human history? That would be historically unprecedented, but it only takes one disaster to wipe out a fortune. Like a lot of Robin's stuff, he makes the assumption that everyone already knows about some argument that's mostly original to him, and then proceeds to deduce why they're not acting on that (nonexistent) knowledge. Personally, after reading his argument, I did become marginally more interested in doing something like what he describes. However, I'm not convinced that it's obviously the most altruistic way to use resources. For one thing, it seems possible that making the world a better place now would actually yield greater returns in the long run. Let's say I donate to AMF, saving 10 African children from dying. Those children grow up and do useful work, shifting the world economy a bit forward on its exponential growth curve (say, from t=10 to t=10.01). 
12,000 years later, we're at t=12,010.01 instead of t=12.010--but since we're so far along the exponential growth curve at this point, that ends up making a much larger difference. The difference that your contribution makes will grow exponentially just like in Robin's plan: $e^\{t \+ 0\.01\} \- e^\{t\} = e^\{t\}\(e^\{0\.01\} \- 1\$) There are lots of people who are trying to maximize their individual return on investment (through buying stocks and stuff), so opportunities there are thoroughly picked over. By contrast, there are a relatively small number of dollars chasing Wikipedia-style projects that accelerate economic growth without delivering financial returns to their creators, so ROI should (theoretically) be much higher. One problem with this plan is that although you'll have a larger impact on growing the world economy, it's not clear to what degree a wealthier world economy contributes to altruistic ends. If everyone becomes extremely uncharitable between now and the very far future, then you'd be relatively better off following Robin's plan. Another problem: if we assume a constant background probability that your investment ends up being worthless (because humanity destroys itself, or because it gets stolen, or whatever), then this is exponential decay, which has the potential to cancel out exponential growth. Let's say your investment has a constant .99 probability of continuing to exist each year. In 100 years, the probability that your investment still exists is .36. After 12,000 years, the probability that your investment still exists is about .000000000000000000000000000000000000000000000000000004. Another problem with Robin's plan: although your investment grows exponentially, it stays constant as a factor of the world economy. Let's say global GDP is \$80 trillion and you put \$1000 in to Robin's idea (000000001.25% of global GDP). Your investment grows at the same rate as the global economy, so a billion years later, your investment is still amounts to about 000000001.25% of global GDP. If the number of people alive with problems to solve has stayed roughly constant in that period, then it's unlikely your investment will do much good, since everyone should be ridiculously wealthy anyway (unless there's extreme global inequality). So yeah, lots of potential problems with Robin's argument. Sorry if I'm repeating stuff that was already said in the comments here or there. unless there's extreme global inequality This is a bizarre caveat, given that we currently have extreme global inequality. Not to the point that you could cure world hunger by paying a few cents. More points, some in favor of Robin: The probability of losing your money each year is likely not independent. If you've managed to last the past 11,000 years, it's relatively more likely you'll last another 1,000. Also, if the expected gain year-on-year is positive (for example, if our assets grow by a factor of 1.02 every year, and there's a .99 chance that they continue to exist, that means their expected year-on-year growth is a factor of 1.0098), then the argument could still work. But you start getting in to Pascal's Mugging territory there. Another point is the implausibility of economic growth continuing for that long. Things don't grow exponentially in nature forever. Typically they reach some carrying capacity, run out of resources, etc. 
Overall I think Robin's idea has a low enough probability of working that it runs in to Pascal's Mugging territory, but it might be worth doing with a small fraction of available altruistic resources. The Southern red oak tree (Q. falcata) grows at a rate something like 1.25 feet every year, so in 12,000 years, I should have a tree over 12,000 feet tall, right? Making a 2 percent return on investment which you expect to pay off after 12 thousand years is like planting a tree and expecting it to grow to 12,000 feet, or starting with two rabbits and expecting them to cover the entire world in a dozen generations. The oldest bank in the world has existed a paltry few centuries Making a 2 percent return on investment which you expect to pay off after 12 thousand years is like planting a tree and expecting it to grow to 12,000 feet, I don't think this is a fair analogy. Robin explicitly assumed a 1/1000 chance of success. So he clearly didn't expect that his investment would pay off after 12k years. Is planting a thousand trees and expecting one of them to grow to 12,000 feet a better analogy? The point isn't that there is a small random percentage chance of failure/success. Your tree doesn't have 1/1000 chance of growing to 12,000 feet, there are structural problems with the way the world works that make it functionally impossible. There are trees near ten thousand years old, none of which have grown anywhere near that tall, because the number we assign to "growth rate" is actually one of the very very many variables that affect which trees can exist and how they do so. Using a self-multiplying "Growth-rate" number to try to figure out how your investment fund is going to do in 12k years is ignoring just as many variables. Do you expect your investment to grow as a fraction of total wealth (rather than just keeping pace with overall economic growth)? If yes: How high a proportion of total wealth do you expect it to become? Thought experiment: Suppose that a large fraction of the wealth on Earth was held by a "charitable trust" which was started 12,000 years ago, had spent the intervening time solely managing its wealth (not doing anything charitable), and now was seeking to use its resources for altruistic purposes (following the guidelines set forth by the person who gave it instructions 12,000 years ago). Would that be better than the status quo, or worse? By how much? Second thought experiment: Suppose that a large fraction of the wealth on Earth was held by a "charitable trust" which was started 11,000 years ago, had spent the intervening time solely managing its wealth (not doing anything charitable), and was under strict instructions to spend the next 1,000 years solely managing its wealth (not doing anything charitable) before it finally turned to altruistic purposes. Would that be better than the status quo, or worse? By how much? [-][anonymous]9y 15 Methuselah trusts are not entirely legal. Someone who tries to set one up today may be prevented from doing so. http://www.laphamsquarterly.org/essays/trust-issues.php?page=all You would need high status such as Benjamin Franklin to avoid having it robbed eventually anyway. And in order to have it last thousands of years you need your status to last thousands of years. The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown. 
So, an answer for the extreme rational altruist seems to lie in how to encode the values of their trust in something like a bitcoin blockchain, a peer to peer network that rewards participants in some manner, giving them the motive to keep the network alive. The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown. That seems highly unlikely, unless you actually meant something like 'the successors to the bitcoin blockchain'. We already know that quantum computing is going to lop off a large fraction of the security in the hashes used in Bitcoin, and no cryptographic hash has so far lasted even a century. I agree to a certain extent. I just pointed out one thing, probably the only thing, that is fairly immune from the law , is expected to last fairly long and rewards its participants. I did mention, something like a blockchain, a peer to peer network that rewards its participants. Contrarians and even reactionaries can use something like this to preserve and persist their values across time. Doesn't that assume that the bitcoin protocol isn't altered with updated security? It's hard to update the protocol when most of the network will ignore and shun any updated clients. Any update as invasive as changing the hash functions will probably require shifting to a new blockchain; hence: unless you actually meant something like 'the successors to the bitcoin blockchain' But the devs released updated clients all the time. There's a big difference between adding some bugfixes or nifty userland features to what is now merely one client among many - and making a fundamental backwards incompatible upgrade to the entire protocol which would affect every client, miner, and interfacing software with major security ramifications. Interoperability is fragile (witness the recent blockchain fork which led to lead dev Gavin paying out >\$70k in bitcoins to miners on the wrong side of the fork), and changing hash functions will break it. [-][anonymous]9y 1 If they need to change to a new hash function, there'll probably be plenty of warning, so a sensible rollout can be planned. If you need a new hash function, everyone's going to have to update anyway, and I think most people involved in bitcoin would prefer to keep the existing blockchain rather than start again from scratch. The recent fork was different, in that the problem wasn't detected until it happened (and the people running old versions are going to have to upgrade in any case). If you need a new hash function, everyone's going to have to update anyway, and I think most people involved in bitcoin would prefer to keep the existing blockchain rather than start again from scratch. But is this even possible? If the hashes are broken, depending on the attack any transaction on the 'old' blockchain may be a double-spend or theft, and so backwards compatibility just imports the new security problems. (Imagine there's a new attack which can double bitcoins at an old-chain address, but the new-chain with a hash forbidding it is backwards-compatible and accepts all old-address transmissions to new-addresses; then as soon as the attacks finally become practical, anyone can flood the new-chain with counterfeit coins.) Easiest to just make a clean break with an entirely new blockchain. People can sell out their old coins and buy in, or they can use a different scheme. 
(For example, Bitcoins can be verifiably destroyed, so the new blockchain's protocol might use that as a way to launder 1.0 into 2.0; at least as long as there is no largescale counterfeiting happening.) [-][anonymous]9y 1 I think it could be done, assuming there's enough time between the old hash looking vulnerable and it actually being broken: Release a new client version X which uses the new hash after some future block N. Once block N+1000 has been found , hash every block up to N using the new hash and bake a final result of that into client version X+1, such that it rejects all old-hash blocks that haven't been blessed by the new hash. Still, that is rather involved, and your destroy-to-convert scheme (which could be disabled once the old hash is looking too shaky) looks like it would work pretty well. I'm not sure how well selling old coins and buying in would work, though - someone's going to be left holding a large bag of worthless bitcoins at the end of that. I don't think the delineation between old and new will be quite so clear. Consider a new client that all the miners switch to, importing your wallet to this client causes a transaction to appear on the network transferring all your old coins to a new client, when confirmed all your old coins are now bitcoin2, which can't be sent to bitcoin1 wallets. ANy attempt to use your old bitcoin1 coins will show up as invalid. Immortal fanatics? Or fanatics very good at inspiring equal zeal in future generations? I guess after a point, the network takes care of itself, with self interest guiding the activities of participants. Of course, I could be wrong. Continued accumulation of risk of some kind of failure over the years is exponential just like the interest is, and can reach 10^-100 as easily. A space of correct arguments of N words, and space of invalid but superficially correct arguments of N words, differ in size by factor exponential in N, so improbability of correctness in general too can easily reach googol. That being said, there is genuinely a problem with description length priors and unbounded utilities. Given a valid theory of everything with length L , a fairly small increase in the length can yield a world with enormous invisible, undetectable consequences. E.g. an enormous (3^^^^3 , or BB(11) ) number of secondary worlds where the fundamental constants depend to the temperature of the coldest 1-gram piece of frozen hydrogen in our universe (something global which we may influence, killing all the aliens in those universes). The perversity is that we don't know those worlds exist, and we don't know they don't exist, and the theory where they do exist is not very much longer than simplest known ToE, and predicts exactly identical observations so it can never be disproved. I may not understand Robin's post. I think he said (paraphrased): "If you really cared about future bazillions of people, and if you are about to spend N dollars on X-risk reduction, then instead you should invest some of that so that some subset of future people - whoever would have preferred money/wealth to a reduced chance of extinction - can actually get the money; then everyone would be happier. We don't do that, which reveals that we care about appearing conscientious rather than helping future people." But this seems wrong. However high the dollar value of our investment at time T, it will only buy the inheritors some amount of wealth (computing power, intellectual content, safety, etc.). 
This amount is determined by how much wealth humanity has produced/has access to at time T. This wealth will be there anyway, and will benefit (some) humans with or without the investment. Then increasing the chances of this wealth being there at all - i.e. reducing X-risk - dominates our present-day calculation. A flaw or trick in Robin's argument is talking about 12000 years as if that is the future that people care about. People concerned about AI and global warming make it a point that the harm comes, or at least is well on its way, within 100 years. 2% over 100 years is a sort of crappy return, not game-changing in anybody's imagination. I wonder what the longest time is that has passed between an investment aimed at a return in the far future and an actual return similar to what was expected? The few things I can think of are cathedrals and universities. There are a few universities that are 1000 years old; do they qualify? I think not, for these reasons: 1) the universities were not founded with the intention of a payoff 500 or 1000 years later, they were founded with the intention of getting value from them soon after they were founded. 2) The universities that have lasted 1000 years have been re-invested in episodically and massively over the 1000 years, so the return now from the original investment is probably quite a small part of their total present value. And that's 1000 years; I am aware of NOTHING with the slightest hint of long-term payoff from 1000 years ago. Look forward to being shown I'm wrong in comments, though :) 12000 years? It's a joke. 11,900 years after an AI with some real capacity is developed? Supposing I cared deeply about 12000 years from now, what "signal" would I have telling me how to invest for that time period that wouldn't be totally swamped by the noise of uncertainty and the interference of numerous possibilities? This of course says nothing of an investment that pays a consistent or even an average of 2% over 12000 years. The fact that I can imagine such a thing does not suggest that its probability of existence is > 1.02^(-12000). Benjamin Franklin bequeathed a reasonable sum of money (IIRC, 1000 pounds sterling each to two cities) to be invested for two centuries. The fund is worth something like $5 million today. I don't recall the exact details, but it's a good past example of something like this. I suspect the impact of this fund has been pretty small compared to the other stuff that Franklin did. Franklin donated a small amount of money which grew to a very small fraction of the economy over a time period less than 1/60th of the proposal; he got incredibly lucky that he did it in America, and not in any of the other growing powers or economies of his day or later, such as Russia/France/Germany/Mexico/Argentina/Japan, which might have wiped out his legacy; and even Americans have found it difficult to replicate his feat. I plan on being alive in 12,000 years, please send bitcoins. Note that Robin's post was an argument that nobody cares much about the far future. The conclusion is kind of obvious - evolved organisms tend to care about things they can influence. How will your investment create the future wealth? If you intend to loan it out and create interest, how much harm will those loans do (through environmental degradation) over 12k years? If you instead invest that money right now in something that maximizes good, rather than investing it to maximize returns for 12k years first, would you expect higher total good? 
Why 12k years, and not 12k+5, or 50? It hardly seems like a likely place for a global or even local maximum. I can dispose of the 10^100 trivially: if inflation averages about 2% per year, your compound interest just keeps your value constant, providing no benefit over time. What values for inflation do you predict over the next 12,000 years, broken down into averages for each 200-year period? The first problem is that, given that average inflation rates have exceeded 2% per year, any investment at 2% annually is going to be a terrible investment in the future - you're actually LOSING money every year on average rather than gaining it, in terms of actual value. However, let us assume that for whatever reason you were actually getting 2% growth on top of inflation per year (an unlikely scenario for an unguided account, but bear with me a moment). The second problem is that the result is, obviously, irrational. There are not even a googol particles in the Universe; how could you have a googol dollars and have that be a reasonable result? There isn't anything to purchase with a googol dollars. Ergo, we must assume our assumption is flawed. The flaw in this assumption is, of course, the lack of understanding of exponential growth; all exponential growth is self-limiting in nature. In reality you run into real constraints on how much growth you can have, how much of the economy can be in your fund, etc. As someone else pointed out, you can't assume that you will have indefinite growth; it is confined by the size of the economy (which will never reach a googol dollars in current-day money), by the likelihood of you actually getting said returns (which is 0), by whether anyone would recognize a currency when too much of the world economy was bound up in it, etc. The truth is it is just a terrible argument to begin with. Anyone who promises you 2% growth for even a thousand years is a dirty liar. The rate seems reasonable but in actuality it is anything but. Do you think that the world economy is going to increase by 2% a year for the next 1000 years? I don't, at least not in real, inflation-adjusted dollars. The total amount of energy we can possibly use on the planet alone would constrain such economic growth. [-][anonymous]9y 0 I'm not sure what an investment in a particular far-future time would look like. Maybe like this: Franklin [left] £1000 each to Philadelphia and Boston in his will to be invested for 200 years. He died in 1790, and by 1990 the funds had grown to $2.3M and $5M respectively, giving factors of 35 and 76 in inflation-adjusted gains, for annual returns of 1.8% and 2.2%. [This comment is no longer endorsed by its author] Utilitarianism has these problems whether you look at thousands of years in the future or today. It's unlikely that any "utilitarian" reading this list today doesn't have resources they reserve for themselves or those close to them that wouldn't easily produce greater good for a greater number spent elsewhere.
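(Aside added for illustration, not part of the original thread: a quick Python check of the compound-growth arithmetic the commenters are arguing about. The 2%-per-year/12,000-year figure and Franklin's 200-year growth factors are taken from the comments above; everything else is plain arithmetic.)

import math

# 2% compound growth sustained for 12,000 years (the 'googol dollars' point)
factor = 1.02 ** 12000
print(f"1.02^12000 is about 10^{math.log10(factor):.0f}")   # roughly 10^103, i.e. more than a googol

# Franklin's bequest: growth factors of 35 and 76 over 200 years
for f in (35, 76):
    annual = f ** (1 / 200) - 1
    print(f"a factor of {f} over 200 years is {annual:.1%} per year")   # ~1.8% and ~2.2%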
2022-07-01 07:34:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35914885997772217, "perplexity": 1740.916670778892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00791.warc.gz"}
http://openstudy.com/updates/50ffb00be4b0426c63684d15
## LevB4Lev (4x+1)^2=20 solve each equation by the square root property one year ago
1. hartnn: did you take square root on both sides, first ? what you get ?
2. LevB4Lev: solve for X i mean
3. hartnn: yeah, take square root on both sides of $$(4x+1)^2=20$$ what do you get ?
4. LevB4Lev: not sure
5. LevB4Lev: 4x-1=$\sqrt{20}$ =
6. hartnn: that would be $$4x+1 =\pm \sqrt {20}$$ ok? now subtract 1 from both sides, what u get ?
7. LevB4Lev: 4x=$\pm \sqrt{20}$ - 1
8. hartnn: thats correct, now just divide both sides by 4 and you get 2 values of x.
9. LevB4Lev: i got x = PM square root 4
10. hartnn: ? how ? $$4x=\pm \sqrt{20}-1 \implies x = \dfrac{\pm \sqrt{20}-1}{4}$$
11. LevB4Lev: (drawing omitted)
12. LevB4Lev: so is it 19 divided by 4
13. hartnn: no, it'll only be $$x = \dfrac{\pm \sqrt{20}-1}{4}$$ or one simplification will be $$x = \dfrac{\pm 2\sqrt{5}-1}{4}$$
14. LevB4Lev: interesting
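(Aside added for illustration, not part of the original thread: the same roots can be checked mechanically with sympy, assuming it is installed.)

from sympy import Eq, solve, symbols

x = symbols('x')
# square root property: (4x + 1)^2 = 20  =>  4x + 1 = ±sqrt(20) = ±2*sqrt(5)
roots = solve(Eq((4*x + 1)**2, 20), x)
print(roots)   # the two roots x = (-1 ± 2*sqrt(5))/4, i.e. -1/4 ± sqrt(5)/2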
2014-09-02 09:22:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6774420142173767, "perplexity": 8056.1722526256235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909040357-00429-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.mathynomial.com/problem/2054
# Problem #2054 There are 5 yellow pegs, 4 red pegs, 3 green pegs, 2 blue pegs, and 1 orange peg to be placed on a triangular peg board. In how many ways can the pegs be placed so that no (horizontal) row or (vertical) column contains two pegs of the same color? $[asy] unitsize(20); dot((0,0)); dot((1,0)); dot((2,0)); dot((3,0)); dot((4,0)); dot((0,1)); dot((1,1)); dot((2,1)); dot((3,1)); dot((0,2)); dot((1,2)); dot((2,2)); dot((0,3)); dot((1,3)); dot((0,4)); [/asy]$ $\mathrm{(A)}\ 0 \qquad\mathrm{(B)}\ 1 \qquad\mathrm{(C)}\ 5!\cdot 4!\cdot 3!\cdot 2!\cdot 1! \qquad\mathrm{(D)}\ \frac{15!}{5!\cdot 4!\cdot 3!\cdot 2!\cdot 1!} \qquad\mathrm{(E)}\ 15!$ This problem is copyrighted by the American Mathematics Competitions.
2018-02-20 17:22:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4069281220436096, "perplexity": 676.6222426905822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813059.39/warc/CC-MAIN-20180220165417-20180220185417-00312.warc.gz"}
https://zbmath.org/?q=0866.22006
# zbMATH — the first resource for mathematics Rescaling of Markov shifts. (English) Zbl 0866.22006 For any compact metric space $$A$$ and any positive integer $$d$$ let $$A^{\mathbb Z^d}$$ denote as usual the space of sequences $$(a_{\mathbf n})_{\mathbf n\in \mathbb Z^d}$$ of elements from $$A$$ with indices in $$\mathbb Z^d$$ (endowed with the product topology). Let $$\sigma:\mathbb Z^d\times A^{\mathbb Z^d}\rightarrow A^{\mathbb Z^d}$$ denote the shift action of $$\mathbb Z^d$$ on $$A^{\mathbb Z^d}$$, that is, $$\sigma \left({\mathbf m}, (a_{\mathbf n})_{\mathbf n\in\mathbb Z^d}\right)=(a_{\mathbf n+\mathbf m})_{\mathbf n\in\mathbb Z^d}$$. A closed subset $$\Sigma_{(F,P)}\subset A^{\mathbb Z^d}$$ is called a Markov shift if there are a finite set $$F\subset\mathbb Z^d$$ and a set $$P\subset A^F$$ of finite sequences indexed by $$F$$ such that $$(a_{\mathbf n})_{\mathbf n\in\mathbb Z^d}\in \Sigma_{(F,P)}$$ if and only if $$(a_{\mathbf n+\mathbf m})_{\mathbf n\in F}\in P$$ for any $$\mathbf m\in\mathbb Z^d$$. The restriction of $$\sigma$$, $$\sigma^{(F,P)}:\mathbb Z^d\times\Sigma_{(F,P)} \rightarrow \Sigma_{(F,P)}$$, is then well defined and induces a discrete dynamical system $$(\Sigma_{(F,P)},\sigma^{(F,P)})$$ (which we simply denote again by $$\Sigma_{(F,P)}$$). The present paper deals with so-called rescalings of Markov shifts. Namely, let $$M$$ be a $$d\times d$$ integer matrix with $$\det M\neq 0$$. With the above notation, write $$M(F)=\{{\mathbf n}M: \mathbf n\in F\}$$ and define $$M(P)\subset A^{M(F)}$$ as follows: $$(a_{\mathbf m})_{\mathbf m\in M(F)}\in M(P)$$ if and only if $$(a_{{\mathbf m}M^{-1}})_{\mathbf m\in M(F)}\in P$$. Then $$\Sigma_{(M(F),M(P))}$$ is called the $$M$$-rescaling of the Markov shift $$\Sigma_{(F,P)}$$. While $$\Sigma_{(F,P)}$$ and $$\Sigma_{(M(F),M(P))}$$ need not be topologically conjugate, it is shown in the paper that they always have the same topological entropy. Some examples (particularly from the theory of group automorphisms) are brought into consideration. Markov shifts which are invariant under rescaling (that is, which are topologically conjugated to their $$M$$-rescaling for any matrix $$M$$) are also briefly studied. For example it is shown that if a Markov shift has $$s$$ fixed points and is invariant under rescaling then its entropy is greater than or equal to $$\log s$$. ##### MSC: 37B05 Dynamical systems involving transformations and group actions with special properties (minimality, distality, proximality, expansivity, etc.) 37B40 Topological entropy 22D40 Ergodic theory on groups 37C15 Topological and differentiable equivalence, conjugacy, moduli, classification of dynamical systems 37E99 Low-dimensional dynamical systems Full Text:
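(Illustration added by the editor, not part of the review, to make the rescaling construction concrete. Take $$d=1$$, $$A=\{0,1\}$$, $$F=\{0,1\}$$ and let $$P\subset A^F$$ contain every pair except $$11$$, so that $$\Sigma_{(F,P)}$$ is the golden-mean shift. For the $$1\times 1$$ matrix $$M=(2)$$ one gets $$M(F)=\{0,2\}$$, and $$M(P)$$ forbids the pattern $$a_0=a_2=1$$; the rescaled shift $$\Sigma_{(M(F),M(P))}$$ therefore consists of all sequences with no two $$1$$'s at distance two. Its even- and odd-indexed subsequences each satisfy an independent golden-mean constraint, so its topological entropy is again $$\log\left((1+\sqrt{5})/2\right)$$, in line with the entropy-preservation result of the paper.)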
2021-10-26 05:23:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.933117151260376, "perplexity": 188.12277358995274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00338.warc.gz"}
https://datascience.stackexchange.com/questions/101918/creating-quality-data-with-sklearn-datasets-make-classification
# Creating quality data with sklearn.datasets.make_classification I'm doing some experiments on some svm kernel methods. My methodology for comparing those is having some multi-class and binary classification problems, and also, in each group, having some examples of p > n, n > p and p == n. However, finding some examples (5 or so for each of those subgroups) is really hard, so I want to generate them with sklearn. Said so, I don't know how to do it in a consistent and realistic way. I can generate the datasets, but I don't know which parameters set to which values for my purpose. So basically my question is if there is a metodological way to perform this generation of datasets, and if so, which is. • This is not that clear to me whet you need, but If I'm not wrong you are looking for a way to generate reliable syntactic data. If so you can use pypi.org/project/sdgym and I know this is not Scikit-learn as your requirement but might be a good start Sep 13 at 22:58 • @JulioJesus Gonna check it, thanks. I need some way to generate synthetic data with some restriction about p and n, due to the fact that I don't have any datasets with those restrictions. I could just try to generate them with sklearn methods, but I don't think that is a "reliable" way for my benchmarking purposes. I want to know if there is a known method for this kind of problem. Sep 13 at 23:19 ## A comprehensive guide on generating artificial datasets for testing purposes!! 1.) Blobs Classification Problem The make_blobs() function can be used to generate blobs of points with a Gaussian distribution. You can control how many blobs to generate and the number of samples to generate, as well as a host of other properties. The problem is suitable for linear classification problems given the linearly separable nature of the blobs. The example below generates a 2D dataset of samples with three blobs as a multi-class classification prediction problem. Each observation has two inputs and 0, 1, or 2 class values. from sklearn.datasets import make_blobs from matplotlib import pyplot from pandas import DataFrame # generate 2d classification dataset # n_samples is the no of points, n_features is the no of features for each sample and centers is the number of centers to generate, or the fixed center locations X, y = make_blobs(n_samples=100, centers=3, n_features=2) # scatter plot, dots colored by class value df = DataFrame(dict(x=X[:,0], y=X[:,1], label=y)) colors = {0:'red', 1:'blue', 2:'green'} fig, ax = pyplot.subplots() grouped = df.groupby('label') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key]) pyplot.show() Running the example generates the inputs and outputs for the problem and then creates a handy 2D plot showing points for the different classes using different colors. 2.) Moons Classification Problem The make_moons() function is for binary classification and will generate a swirl pattern, or two moons.You can control how noisy the moon shapes are and the number of samples to generate. This test problem is suitable for algorithms that are capable of learning nonlinear class boundaries. The example below generates a moon dataset with moderate noise. 
from sklearn.datasets import make_moons from matplotlib import pyplot from pandas import DataFrame # generate 2d classification dataset X, y = make_moons(n_samples=100, noise=0.1) # scatter plot, dots colored by class value df = DataFrame(dict(x=X[:,0], y=X[:,1], label=y)) colors = {0:'red', 1:'blue'} fig, ax = pyplot.subplots() grouped = df.groupby('label') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key]) pyplot.show() 3.) Circles Classification Problem The make_circles() function generates a binary classification problem with datasets that fall into concentric circles. Again, as with the moons test problem, you can control the amount of noise in the shapes. This test problem is suitable for algorithms that can learn complex non-linear manifolds. The example below generates a circles dataset with some noise. from sklearn.datasets import make_circles from matplotlib import pyplot from pandas import DataFrame # generate 2d classification dataset X, y = make_circles(n_samples=100, noise=0.05) # scatter plot, dots colored by class value df = DataFrame(dict(x=X[:,0], y=X[:,1], label=y)) colors = {0:'red', 1:'blue'} fig, ax = pyplot.subplots() grouped = df.groupby('label') for key, group in grouped: group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key]) pyplot.show() 4.) Imbalanced datasets The make_classification function can be used to generate a random n-class classification problem. This initially creates clusters of points normally distributed (std=1) about vertices of an n_informative-dimensional hypercube with sides of length 2*class_sep and assigns an equal number of clusters to each class. It introduces interdependence between these features and adds various types of further noise to the data. The complete example of defining the dataset and performing random oversampling (just one of the many methods) to balance the class distribution is listed below. from collections import Counter from sklearn.datasets import make_classification from imblearn.over_sampling import RandomOverSampler # define dataset # here n_samples is the no of samples you want, weights is the magnitude of # imbalance you want in your data, n_classes is the no of output classes # you want and flip_y is the fraction of samples whose class is assigned # randomly. Larger values introduce noise in the labels and make the X, y = make_classification(n_samples=10000, weights=[0.99], n_classes = 2, flip_y=0) # summarize class distribution print(Counter(y)) # define oversampling strategy oversample = RandomOverSampler(sampling_strategy='minority') # fit and apply the transform X_over, y_over = oversample.fit_resample(X, y) # summarize class distribution print(Counter(y_over)) Running the example first creates the dataset, then summarizes the class distribution. We can see that there are nearly 10K examples in the majority class and 100 examples in the minority class. Then the random oversample transform is defined to balance the minority class, then fit and applied to the dataset. The class distribution for the transformed dataset is reported showing that now the minority class has the same number of examples as the majority class. Before oversampling Counter({0:9900, 1:100}) After oversampling Counter({0:9900, 1:9900}) • I'm afraid this does not answer my question, on how to set realistic and reliable parameters for experimental data. This only gives some examples that can be found in the docs. 
Sep 13 at 17:55 • @Norhther As I understand from the question you want to create binary and multiclass classification datasets with balanced and imbalanced classes right? You can do that using the make_classification function mentioned in point 4. Sep 14 at 5:40 • @Norhther you can generate imbalanced classes using the weights parameters and if you want multi class dataset, you can use n_classes parameter. Sep 14 at 5:41
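Not part of the original thread: a minimal sketch of how the p > n, n > p and p == n cases asked about in the question could be generated with make_classification; the particular sizes and parameter choices below are arbitrary illustrations, not recommendations.

from sklearn.datasets import make_classification

# (n_samples, n_features) pairs covering n > p, p == n and p > n
settings = {"n_gt_p": (500, 20), "p_eq_n": (100, 100), "p_gt_n": (50, 200)}

datasets = {}
for name, (n, p) in settings.items():
    X, y = make_classification(
        n_samples=n,
        n_features=p,
        n_informative=min(p, 10),   # must not exceed n_features
        n_redundant=0,
        n_classes=2,                # use >2 (with enough informative features) for multi-class
        random_state=0,
    )
    datasets[name] = (X, y)
    print(name, X.shape)            # (500, 20), (100, 100), (50, 200)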
2021-10-26 16:13:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37270116806030273, "perplexity": 2097.4650919792434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00256.warc.gz"}
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Sampling_theorem.html
# Nyquist–Shannon sampling theorem Example of magnitude of the Fourier transform of a bandlimited function In the field of digital signal processing, the sampling theorem is a fundamental bridge between continuous-time signals (often called "analog signals") and discrete-time signals (often called "digital signals"). It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth. Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are bandlimited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples. Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known. (See § Sampling of non-baseband signals below, and compressed sensing.) In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions. The fidelity of these reconstructions can be verified and quantified utilizing Bochner's theorem.[1] The name Nyquist–Shannon sampling theorem honors Harry Nyquist and Claude Shannon. The theorem was also discovered independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others. It is thus also known by the names Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov, Whittaker–Nyquist–Kotelnikov–Shannon, and cardinal theorem of interpolation. ## Introduction Sampling is a process of converting a signal (for example, a function of continuous time and/or space) into a numeric sequence (a function of discrete time and/or space). Shannon's version of the theorem states:[2] If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart. A sufficient sample-rate is therefore 2B samples/second, or anything larger. Equivalently, for a given sample rate fs, perfect reconstruction is guaranteed possible for a bandlimit B < fs/2. When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing. Modern statements of the theorem are sometimes careful to explicitly state that x(t) must contain no sinusoidal component at exactly frequency B, or that B must be strictly less than ½ the sample rate. The two thresholds, 2B and fs/2 are respectively called the Nyquist rate and Nyquist frequency. And respectively, they are attributes of x(t) and of the sampling equipment. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. 
The only change, in the case of other domains, is the units of measure applied to t, fs, and B.

Figure: The normalized sinc function: sin(πx) / (πx), showing the central peak at x = 0, and zero-crossings at the other integer values of x.

The symbol T = 1/fs is customarily used to represent the interval between samples and is called the sample period or sampling interval. And the samples of function x(t) are commonly denoted by x[n] = x(nT) (alternatively "xn" in older signal processing literature), for all integer values of n.

A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample, nT, with the amplitude of the sinc function scaled to the sample value, x[n]. Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method is to convolve one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by an "anti-imaging filter" to clean up spurious high-frequency content.

## Aliasing

Main article: Aliasing

Figure: The samples of two sine waves can be identical when at least one of them is at a frequency above half the sample rate.

When x(t) is a function with a Fourier transform, X(f), the Poisson summation formula indicates that the samples, x(nT), of x(t) are sufficient to create a periodic summation of X(f). The result is:

$X_s(f) \triangleq \sum_{k=-\infty}^{\infty} X\!\left(f - k f_s\right) = \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f}$     (Eq.1)

which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are T·x(nT). This function is also known as the discrete-time Fourier transform (DTFT) of the sequence T·x(nT), for integers n.

Figure: X(f) (top blue) and XA(f) (bottom blue) are continuous Fourier transforms of two different functions, x(t) and xA(t) (not shown). When the functions are sampled at rate fs, the images (green) are added to the original transforms (blue) when one examines the discrete-time Fourier transforms (DTFT) of the sequences. In this hypothetical example, the DTFTs are identical, which means the sampled sequences are identical, even though the original continuous pre-sampled functions are not. If these were audio signals, x(t) and xA(t) might not sound the same. But their samples (taken at rate fs) are identical and would lead to identical reproduced sounds; thus xA(t) is an alias of x(t) at this sample rate.

As depicted, copies of X(f) are shifted by multiples of fs and combined by addition. For a band-limited function (X(f) = 0 for all |f| ≥ B), and sufficiently large fs, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous X(f). Any frequency component above fs/2 is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component. 
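(Numerical aside added for illustration; it is not part of the original article. It checks the aliasing statement above: a tone above fs/2 yields exactly the same samples as its lower-frequency alias. The specific frequencies are arbitrary choices.)

import numpy as np

fs = 100.0                 # sample rate, Hz
f_high = 70.0              # above the Nyquist frequency fs/2 = 50 Hz
f_alias = fs - f_high      # the 30 Hz alias predicted by spectral folding

n = np.arange(32)
t = n / fs
x_high = np.cos(2 * np.pi * f_high * t)
x_alias = np.cos(2 * np.pi * f_alias * t)

print(np.allclose(x_high, x_alias))   # True: the two sampled sequences are identical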
When the sample-rate is pre-determined by other considerations (such as an industry standard), x(t) is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter.

Figure: Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and the adjacent DTFT images (green) that do not overlap. A brick-wall low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from its samples.

## Derivation as a special case of Poisson summation

When there is no overlap of the copies (aka "images") of X(f), the k = 0 term of Xs(f) can be recovered by the product:

$X(f) = H(f) \cdot X_s(f)$,   where:   $H(f) \triangleq \begin{cases} 1 & |f| < B \\ 0 & |f| > f_s - B \end{cases}$

At this point, the sampling theorem is proved, since X(f) uniquely determines x(t). All that remains is to derive the formula for reconstruction. H(f) need not be precisely defined in the region [B, fs − B] because Xs(f) is zero in that region. However, the worst case is when B = fs/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

$H(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) = \begin{cases} 1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2} \end{cases}$

where rect(•) is the rectangular function.  Therefore:

$X(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) \cdot X_s(f) = \mathrm{rect}(Tf) \cdot \sum_{n=-\infty}^{\infty} T\, x(nT)\, e^{-i 2\pi n T f}$     (from  Eq.1, above)

$= \sum_{n=-\infty}^{\infty} x(nT) \cdot T\, \mathrm{rect}(Tf)\, e^{-i 2\pi n T f}.$ [note 1]

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right)$

which shows how the samples, x(nT), can be combined to reconstruct x(t).

• Larger-than-necessary values of fs (smaller values of T), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which H(f) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.

• Theoretically, the interpolation formula can be implemented as a low pass filter, whose impulse response is sinc(t/T) and whose input is $x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \delta(t - nT)$, which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DAC) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.

## Shannon's original proof

Poisson shows that the Fourier series in Eq.1 produces the periodic summation of X(f), regardless of fs and B. Shannon, however, only derives the series coefficients for the case fs = 2B. Virtually quoting Shannon's original paper:

Let $F(\omega)$ be the spectrum of $f(t)$.  Then

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega = \frac{1}{2\pi} \int_{-2\pi B}^{2\pi B} F(\omega)\, e^{i\omega t}\, d\omega$

since $F(\omega)$ is assumed to be zero outside the band $|\omega| < 2\pi B$. If we let $t = \frac{n}{2B}$, where n is any positive or negative integer, we obtain

$f\!\left(\frac{n}{2B}\right) = \frac{1}{2\pi} \int_{-2\pi B}^{2\pi B} F(\omega)\, e^{i\omega \frac{n}{2B}}\, d\omega.$

On the left are values of $f(t)$ at the sampling points. The integral on the right will be recognized as essentially[n 1] the nth coefficient in a Fourier-series expansion of the function $F(\omega)$, taking the interval –B to B as a fundamental period. This means that the values of the samples $f\!\left(\frac{n}{2B}\right)$ determine the Fourier coefficients in the series expansion of $F(\omega)$. Thus they determine $F(\omega)$, since $F(\omega)$ is zero for frequencies greater than B, and for lower frequencies $F(\omega)$ is determined if its Fourier coefficients are determined. But $F(\omega)$ determines the original function $f(t)$ completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function completely.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. 
He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect (the rectangular function) and sinc was well known.

Let $x_n$ be the nth sample. Then the function $f(t)$ is represented by:

$f(t) = \sum_{n=-\infty}^{\infty} x_n\, \frac{\sin \pi(2Bt - n)}{\pi(2Bt - n)}.$

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

### Notes

1. The actual coefficient formula contains an additional factor of $\frac{1}{2B} = T$. So Shannon's coefficients are $\frac{1}{2B}\, f\!\left(\frac{n}{2B}\right) = T\, x(nT)$, which agrees with Eq.1.

## Application to multivariable signals and images

Figure: Subsampled image showing a Moiré pattern. Figure: Properly sampled image.

The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely—one for the row, and one for the column. Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors—red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high frequencies (in other words, the distance between the stripes is small) can cause aliasing of the shirt when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to higher sampling in the spatial domain for this case would be to move closer to the shirt, use a higher resolution sensor, or to optically blur the image before acquiring it with the sensor.

Another example is shown to the right in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.

The application of the sampling theorem to images should be made with care. For example, the sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from the ideal sampling which would measure the image intensity at a single point. Instead these devices have a relatively large sensor area at each sample point in order to obtain sufficient amount of light. In other words, any detector has a finite-width point spread function. 
The analog optical image intensity function which is sampled by the sensor device is not in general bandlimited, and the non-ideal sampling is itself a useful type of low-pass filter, though not always sufficient to remove enough high frequencies to sufficiently reduce aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient spatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) is typically included in a camera system to further blur the optical image. Despite images having these problems in relation to the sampling theorem, the theorem can be used to describe the basics of down and up sampling of images. ## Critical frequency To illustrate the necessity of fs > 2B, consider the family of sinusoids generated by different values of θ in this formula: A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and –1. That is, they all are aliases of each other, even though their frequency is not above half the sample rate. With fs = 2B or equivalently T = 1/(2B), the samples are given by: regardless of the value of θ. That sort of ambiguity is the reason for the strict inequality of the sampling theorem's condition. ## Sampling of non-baseband signals As discussed by Shannon:[2] A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-side-band modulation. That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See Sampling (signal processing) for more details and examples. For example, in order to sample the FM radio signals in the frequency range of 100-102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval). A bandpass condition is that X(f) = 0, for all nonnegative f outside the open band of frequencies: for some nonnegative integer N. This formulation includes the normal baseband condition as the case N=0. The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses: Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost. ## Nonuniform sampling The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. 
The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[3] Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, it is not a necessary condition for perfect reconstruction. The general theory for non-baseband and nonuniform samples was developed in 1967 by Landau.[4] He proved that the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum was occupied. In the late 1990s, this work was partially extended to cover signals of when the amount of occupied bandwidth was known, but the actual occupied portion of the spectrum was unknown.[5] In the 2000s, a complete theory was developed (see the section Beyond Nyquist below) using compressed sensing. In particular, the theory, using signal processing language, is described in this 2009 paper.[6] They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist criteria; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability. ## Sampling below the Nyquist rate under additional restrictions Main article: Undersampling The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition. A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low over-all bandwidth (say, the effective bandwidth EB), but the frequency locations are unknown, rather than all together in a single band, so that the passband technique doesn't apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than 2EB. The downside of this approach is that reconstruction is no longer given by a formula, but instead by the solution to a convex optimization program which requires well-studied but nonlinear methods. ## Historical background The sampling theorem was implied by the work of Harry Nyquist in 1928,[7]  in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. 
About the same time, Karl Küpfmüller showed a similar result,[8] and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step response Integralsinus; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English). The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon.[2]  V. A. Kotelnikov published similar results in 1933,[9]  as did the mathematician E. T. Whittaker in 1915,[10]  J. M. Whittaker in 1935,[11]  and Gabor in 1946 ("Theory of communication").  In 1999, the Eduard Rhein Foundation awarded Kotelnikov their Basic Research Award "for the first theoretically exact formulation of the sampling theorem." In 1948 and 1949, Claude E. Shannon published the two revolutionary papers in which he founded the information theory. [12][13][2]  In Shannon 1948 the sampling theorem is formulated as “Theorem 13”:  Let f(t) contain no frequencies over W.  Then where Xn = f(n/2W).  It was not until these papers were published that the theorem known as “Shannon’s sampling theorem” became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.[note 2] A few lines further on, however, he adds: ... "but in spite of its evident importance [it] seems not to have appeared explicitly in the literature of communication theory". ### Other discoverers Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri[14] and by Lüke.[15] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[16] mentions several other discoverers and names in a paragraph and pair of footnotes: As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.28 27 Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135]. 
28 As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as “the Whittaker–Kotel’nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136]. ### Why Nyquist? Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,[17] and appeared again in 1963,[18] and not capitalized in 1965.[19] It had been called the Shannon Sampling Theorem as early as 1954,[20] but also just the sampling theorem by several other books in the early 1950s. In 1958, Blackman and Tukey cited Nyquist's 1928 paper as a reference for the sampling theorem of information theory,[21] even though that paper does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries: Sampling theorem (of information theory) Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.) Cardinal theorem (of interpolation theory) A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the function Exactly what "Nyquist's result" they are referring to remains mysterious. When Shannon stated and proved the sampling theorem in his 1949 paper, according to Meijering[16] "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval corresponding to the band W, in recognition of Nyquist’s discovery of the fundamental importance of this interval in connection with telegraphy." This explains Nyquist's name on the critical interval, but not on the theorem. Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:[22] "If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval." (bold added for emphasis; italics as in the original) According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate. Wikimedia Commons has media related to Nyquist Shannon theorem. ## Notes 1. The sinc function follows from rows 202 and 102 of the transform tables 2. Shannon 1949, p 448. ## References 1. Nemirovsky, Jonathan; Shimron, Efrat (2015). "Utilizing Bochners Theorem for Constrained Evaluation of Missing Fourier Data". arXiv: [physics.med-ph]. 2. Shannon, Claude E. (January 1949). "Communication in the presence of noise". Proc. Institute of Radio Engineers. 37 (1): 10–21. Reprint as classic paper in: Proc. IEEE, Vol. 86, No. 2, (Feb 1998) 3. Marvasti (ed), F. (2000). Nonuniform Sampling, Theory and Practice. New York: Kluwer Academic/Plenum Publishers. 4. Landau, H. J. (1967). 
"Necessary density conditions for sampling and interpolation of certain entire functions". Acta Math. 117 (1): 37–52. doi:10.1007/BF02395039. 5. see, e.g., Feng, P. (1997). Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals. Ph.D. dissertation, University of Illinois at Urbana-Champaign. 6. Mishali, Moshe; Eldar, Yonina C. (March 2009). "Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals". IEEE Trans. Signal Processing. 57 (3). CiteSeerX . 7. Nyquist, Harry (April 1928). "Certain topics in telegraph transmission theory". Trans. AIEE. 47: 617–644. Reprint as classic paper in: Proc. IEEE, Vol. 90, No. 2, Feb 2002 8. Küpfmüller, Karl (1928). "Über die Dynamik der selbsttätigen Verstärkungsregler". Elektrische Nachrichtentechnik (in German). 5 (11): 459–467. (English translation 2005). 9. Kotelnikov, V.A. (1933). "On the carrying capacity of the ether and wire in telecommunications". Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA (in Russian). Moscow. (English translation, PDF) 10. Whittaker, E.T. (1915). "On the Functions Which are Represented by the Expansions of the Interpolation Theory". Proc. Royal Soc. Edinburgh. 35: 181–194. ("Theorie der Kardinalfunktionen"). 11. Whittaker, J.M. (1935). Interpolatory Function Theory. Cambridge, England: Cambridge Univ. Press.. 12. Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.. 13. Shannon, Claude E. (October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (4): 623–666. doi:10.1002/j.1538-7305.1948.tb00917.x. 14. Jerri, Abdul (November 1977). "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review". Proceedings of the IEEE. 65 (11): 1565–1596. See also Jerri, Abdul (April 1979). "Correction to "The Shannon sampling theorem—Its various extensions and applications: A tutorial review"". Proceedings of the IEEE. 67 (4): 695. 15. Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem". IEEE Communications Magazine (4): 106–108. doi:10.1109/35.755459. 16. Meijering, Erik (March 2002). "A Chronology of Interpolation From Ancient Astronomy to Modern Signal and Image Processing". Proc. IEEE. 90 (3): 319–322. doi:10.1109/5.993400. 17. Members of the Technical Staff of Bell Telephone Lababoratories (1959). Transmission Systems for Communications. AT&T. pp. 26–4 (Vol.2). 18. Guillemin, Ernst Adolph (1963). Theory of Linear Physical Systems. Wiley. 19. Roberts, Richard A.; Barton, Ben F. (1965). Theory of Signal Detectability: Composite Deferred Decision Theory. 20. Gray, Truman S. (1954). Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits. 21. Blackman, R. B.; Tukey, J. W. (1958). The Measurement of Power Spectra : From the Point of View of Communications Engineering (PDF). New York: Dover. 22. Black, Harold S. (1953). Modulation Theory. • Higgins, J.R.: Five short stories about the cardinal series, Bulletin of the AMS 12(1985) • Küpfmüller, Karl, "Utjämningsförlopp inom Telegraf- och Telefontekniken", ("Transients in telegraph and telephone engineering"), Teknisk Tidskrift, no. 9 pp. 153–160 and 10 pp. 178–182, 1931. • Marks, R.J.(II), Handbook of Fourier Analysis and Its Applications, Oxford University Press, (2009), Chapters 5-8. Google books Wikimedia Commons has media related to Nyquist Shannon theorem.
2022-05-19 13:26:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099124431610107, "perplexity": 1228.6681853031052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00119.warc.gz"}
https://chemistry.stackexchange.com/questions/166897/smarts-distinguish-between-alcohol-and-carboxylic-acid
# SMARTS: distinguish between alcohol and carboxylic acid I'm new to SMARTS and have to identify substructures with the Python package rdkit, so far so good. Let's say I have a SMILES with a carboxylic acid and an alcohol such as OC(C1=CC(CO)=CC=C1)=O and I want to get only the alcohol, then I would try something like this:

from rdkit import Chem

mol = Chem.MolFromSmiles("OC(C1=CC(CO)=CC=C1)=O")
toSearch = Chem.MolFromSmarts("[#6][OX2H]")
bool = mol.HasSubstructMatch(toSearch)
subStruct = mol.GetSubstructMatches(toSearch)
print(bool)
print(subStruct)

which would return:

True
((1, 0), (5, 6))

However, the substructure search also identifies the OH of the acid which I don't want. How can I exclude that? My idea was to use some kind of recursive SMARTS for the alcohol to exclude all those "alcohols" where the carbon is bonded to another oxygen like "=O". • Hello! You may also be interested in the Matter Modeling 'sister site'. (though please don't post questions twice) Aug 5 at 16:03 The simplest change to make for your SMARTS pattern would be specifying that the total degree of the attached carbon is 4 with "[#6X4][OX2H]". This is enough to discriminate against the carboxylic acid. A stricter SMARTS pattern is "[OX2H][CX4;!$(C([OX2H])[O,S,#7,#15])]", which stipulates that the carbon attached to the oxygen not also be connected to O, S, N, or P. I made a little interactive widget that takes a molecule name or SMILES in the first field and a SMARTS pattern in the second. • Awesome!!!! Thanks for the answer and the two great links! I have one more question. This SMARTS doesn't hit if the alcohol is on an aromatic carbon. In this case this SMARTS "[OX2H][CX4,c;!$(C([OX2H])[O,S,#7,#15])]" should work, shouldn't it? thx! Aug 6 at 13:37
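(Added check, not part of the original exchange: a short RDKit snippet, assuming RDKit is installed, showing that the stricter pattern from the answer matches only the benzylic alcohol in the example molecule and no longer the acid OH.)

from rdkit import Chem

mol = Chem.MolFromSmiles("OC(C1=CC(CO)=CC=C1)=O")

# original pattern: matches both the acid OH and the alcohol
loose = Chem.MolFromSmarts("[#6][OX2H]")
# sp3-carbon restriction suggested in the answer: matches only the CH2-OH
strict = Chem.MolFromSmarts("[#6X4][OX2H]")

print(mol.GetSubstructMatches(loose))    # ((1, 0), (5, 6)) as in the question
print(mol.GetSubstructMatches(strict))   # expected: only the alcohol match, ((5, 6),)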
2022-08-09 13:38:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5070772171020508, "perplexity": 2375.2748588671384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00542.warc.gz"}
https://radar.inria.fr/report/2015/polsys/uid40.html
Section: New Results

Solving Systems in Finite Fields, Applications in Cryptology and Algebraic Number Theory

Polynomial-Time Algorithms for Quadratic Isomorphism of Polynomials: The Regular Case

Let $\mathbf{f}=(f_1,\dots,f_m)$ and $\mathbf{g}=(g_1,\dots,g_m)$ be two sets of $m\ge 1$ nonlinear polynomials in $\mathbb{K}[x_1,\dots,x_n]$ ($\mathbb{K}$ being a field). In [3], we consider the computational problem of finding – if any – an invertible transformation on the variables mapping $\mathbf{f}$ to $\mathbf{g}$. The corresponding equivalence problem is known as Isomorphism of Polynomials with one Secret (IP1S) and is a fundamental problem in multivariate cryptography. Amongst its applications, we can cite Graph Isomorphism (GI), which reduces to equivalence of cubic polynomials with respect to an invertible linear change of variables, according to Agrawal and Saxena. The main result is a randomized polynomial-time algorithm for solving IP1S for quadratic instances, a particular case of importance in cryptography. To this end, we show that IP1S for quadratic polynomials can be reduced to a variant of the classical module isomorphism problem in representation theory. We show that we can essentially linearize the problem by reducing quadratic-IP1S to testing the orthogonal simultaneous similarity of symmetric matrices; this latter problem was shown by Chistov, Ivanyos and Karpinski (ISSAC 1997) to be equivalent to finding an invertible matrix in the linear space $\mathbb{K}^{n\times n}$ of $n\times n$ matrices over $\mathbb{K}$ and to computing the square root in a certain representation in a matrix algebra. While computing square roots of matrices can be done efficiently using numerical methods, it seems difficult to control the bit complexity of such methods. However, we present exact and polynomial-time algorithms for computing a representation of the square root of a matrix in $\mathbb{K}^{n\times n}$, for various fields (including finite fields), as a product of two matrices. Each coefficient of these matrices lies in an extension field of $\mathbb{K}$ of polynomial degree. We then consider #IP1S, the counting version of IP1S for quadratic instances. In particular, we provide a (complete) characterization of the automorphism group of homogeneous quadratic polynomials. Finally, we also consider the more general Isomorphism of Polynomials (IP) problem, where we allow an invertible linear transformation on the variables and on the set of polynomials. A randomized polynomial-time algorithm for solving IP when $\mathbf{f}=(x_1^d,\dots,x_n^d)$ is presented. From an algorithmic point of view, the problem boils down to factoring the determinant of a linear matrix (i.e. a matrix whose components are linear polynomials). This extends to IP a result of Kayal obtained for PolyProj.

Factoring $N=p^r q^s$ for Large $r$ and $s$

Boneh et al. showed at Crypto 99 that moduli of the form $N=p^r q$ can be factored in polynomial time when $r\simeq \log p$. Their algorithm is based on Coppersmith's technique for finding small roots of polynomial equations. In [15] we show that $N=p^r q^s$ can also be factored in polynomial time when $r$ or $s$ is at least $(\log p)^3$; therefore we identify a new class of integers that can be efficiently factored.
We also generalize our algorithm to moduli with $k$ prime factors $N=\prod_{i=1}^{k} p_i^{r_i}$; we show that a non-trivial factor of $N$ can be extracted in polynomial time if one of the exponents $r_i$ is large enough.

On the Complexity of the BKW Algorithm on LWE

This work [1] presents a study of the complexity of the Blum–Kalai–Wasserman (BKW) algorithm when applied to the Learning with Errors (LWE) problem, by providing refined estimates for the data and computational effort requirements for solving concrete instances of the LWE problem. We apply this refined analysis to suggested parameters for various LWE-based cryptographic schemes from the literature and compare with alternative approaches based on lattice reduction. As a result, we provide new upper bounds for the concrete hardness of these LWE-based schemes. Rather surprisingly, it appears that the BKW algorithm outperforms known estimates for lattice reduction algorithms starting in dimension $n\approx 250$ when LWE is reduced to SIS. However, this assumes access to an unbounded number of LWE samples.

Structural Cryptanalysis of McEliece Schemes with Compact Keys

A very popular trend in code-based cryptography is to decrease the public-key size by focusing on subclasses of alternant/Goppa codes which admit a very compact public matrix, typically quasi-cyclic (QC), quasi-dyadic (QD), or quasi-monoidic (QM) matrices. In [5], we show that the very same reason which allows one to construct a compact public key makes the key-recovery problem intrinsically much easier. The gain on the public-key size induces an important security drop, which is as large as the compression factor $p$ on the public key. The fundamental remark is that from the $k\times n$ public generator matrix of a compact McEliece, one can construct a $k/p\times n/p$ generator matrix which is - from an attacker's point of view - as good as the initial public key. We call this new smaller code the folded code. Any key-recovery attack can be deployed equivalently on this smaller generator matrix. To mount the key recovery in practice, we also improve the algebraic technique of Faugère, Otmani, Perret and Tillich (FOPT). In particular, we introduce new algebraic equations allowing us to include codes defined over any prime field in the scope of our attack. We describe a so-called “structural elimination", which is a new algebraic manipulation that simplifies the key-recovery system. As a proof of concept, we report successful attacks on many cryptographic parameters available in the literature. All the parameters of CFS signatures based on QD/QM codes that have been proposed can be broken by this approach. In most cases, our attack takes a few seconds (the hardest case requires less than 2 hours). In the encryption case, the algebraic systems are harder to solve in practice. Still, our attack succeeds against several cryptographic challenges proposed for QD and QM encryption schemes, but there are still some parameters that have been proposed which are out of reach for the methods given here. However, regardless of the key-recovery attack used against the folded code, there is an inherent weakness arising from Goppa codes with QM or QD symmetries. It is possible to derive from the public key a much smaller public key corresponding to the folding of the original QM or QD code, where the reduction factor of the code length is precisely the order of the QM or QD group used for reducing the key size.
To summarize, the security of such schemes does not rely on the bigger compact public matrix but on the small folded code, which can be efficiently broken in practice with an algebraic attack for a large set of parameters.

A Polynomial-Time Key-Recovery Attack on MQQ Cryptosystems

In [16], we investigate the security of the family of MQQ public key cryptosystems using multivariate quadratic quasigroups (MQQ). These cryptosystems show especially good performance properties. In particular, the MQQ-SIG signature scheme is the fastest scheme in the ECRYPT benchmarking of cryptographic systems (eBACS). We show that both the signature scheme MQQ-SIG and the encryption scheme MQQ-ENC, although using different types of MQQs, share a common algebraic structure that introduces a weakness in both schemes. We use this weakness to mount a successful polynomial-time key-recovery attack that finds an equivalent key using the idea of so-called good keys. In the process we need to solve a MinRank problem that, because of the structure, can be solved in polynomial time assuming some mild algebraic assumptions. We highlight that our theoretical results work in characteristic 2, which is known to be the most difficult case to address in theory for MinRank attacks, and also without any restriction on the number of polynomials removed from the public key. This was not the case for previous MinRank-like attacks against $\mathcal{MQ}$ schemes. From a practical point of view, we are able to break an MQQ-SIG instance of 80 bits security in less than 2 days, and one of the more conservative MQQ-ENC instances of 128 bits security in a little over 9 days. Altogether, our attack shows that it is very hard to design a secure public key scheme based on an easily invertible MQQ structure.

Algebraic Cryptanalysis of a Quantum Money Scheme: The Noise-Free Case

In [14], we investigate the Hidden Subspace Problem ($\mathrm{HSP}_q$) over $\mathbb{F}_q$, which is as follows:

Input: $p_1,\dots,p_m,q_1,\dots,q_m \in \mathbb{F}_q[x_1,\dots,x_n]$ of degree $d\ge 3$ (and $n\le m\le 2n$).

Find: a subspace $A\subset \mathbb{F}_q^n$ of dimension $n/2$ ($n$ is even) such that

$p_i(A)=0 \;\; \forall i\in\{1,\dots,m\} \quad \text{and} \quad q_j(A^{\perp})=0 \;\; \forall j\in\{1,\dots,m\},$

where $A^{\perp}$ denotes the orthogonal complement of $A$ with respect to the usual scalar product in $\mathbb{F}_q$. This problem underlies the security of the first public-key quantum money scheme that is proved to be cryptographically secure under a non-quantum but classical hardness assumption. This scheme was proposed by S. Aaronson and P. Christiano at STOC'12. In particular, it depends upon the hardness of $\mathrm{HSP}_2$. More generally, Aaronson and Christiano left as an open problem to study the security of the scheme for a general field $\mathbb{F}_q$. We present a randomized polynomial-time algorithm that solves $\mathrm{HSP}_q$ for $q>d$ with success probability $\approx 1-1/q$. So, the quantum money scheme extended to $\mathbb{F}_q$ is not secure for big $q$.
Finally, based on experimental results and a structural property of the polynomials that we prove, we conjecture that there is also a randomized polynomial-time algorithm solving $\mathrm{HSP}_2$ with high probability. To support our theoretical results we also present several experimental results confirming that our algorithms are very efficient in practice. We emphasize that S. Aaronson and P. Christiano propose a non-noisy and a noisy version of the public-key quantum money scheme. The noisy version of the quantum money scheme remains secure.

Folding Alternant and Goppa Codes with Non-Trivial Automorphism Groups

The main practical limitation of the McEliece public-key encryption scheme is probably the size of its key. A famous trend to overcome this issue is to focus on subclasses of alternant/Goppa codes with a non-trivial automorphism group. Such codes then display symmetries allowing compact parity-check or generator matrices. For instance, a key reduction is obtained by taking quasi-cyclic (QC) or quasi-dyadic (QD) alternant/Goppa codes. We show that the use of such symmetric alternant/Goppa codes in cryptography introduces a fundamental weakness. It is indeed possible to reduce the key recovery on the original symmetric public code to the key recovery on a (much) smaller code that no longer has symmetries. This result [4] is obtained thanks to a new operation on codes called folding that exploits the knowledge of the automorphism group. This operation consists in adding the coordinates of codewords which belong to the same orbit under the action of the automorphism group. The advantage is twofold: the reduction factor can be as large as the size of the orbits, and it preserves a fundamental property: folding the dual of an alternant (resp. Goppa) code provides the dual of an alternant (resp. Goppa) code. A key point is to show that all the existing constructions of alternant/Goppa codes with symmetries follow a common principle of taking codes whose support is globally invariant under the action of affine transformations (by building upon prior works of T. Berger and A. Dür). This enables not only to present a unified view but also to generalize the construction of QC, QD and even quasi-monoidic (QM) Goppa codes. All in all, our results can be harnessed to boost up any key-recovery attack on McEliece systems based on symmetric alternant or Goppa codes, and in particular algebraic attacks.

Improved Sieving on Algebraic Curves

The best algorithms for discrete logarithms in Jacobians of algebraic curves of small genus are based on index calculus methods coupled with large prime variations. For hyperelliptic curves, relations are obtained by looking for reduced divisors with smooth Mumford representation (Gaudry); for non-hyperelliptic curves it is faster to obtain relations using special linear systems of divisors (Diem, Diem and Kochinke). Recently, Sarkar and Singh have proposed a sieving technique, inspired by an earlier work of Joux and Vitse, to speed up the relation search in the hyperelliptic case. In [20], we give a new description of this technique, and show that this new formulation applies naturally to the non-hyperelliptic case with or without large prime variations. In particular, we obtain a speed-up by a factor of approximately 3 for the relation search in Diem and Kochinke's methods.
2023-02-08 13:54:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 50, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7352219820022583, "perplexity": 525.7594444611802}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00776.warc.gz"}
http://nrich.maths.org/10118/solution
# Alien Currency

##### Stage: 3 and 4 Short Challenge Level

Let the value of a green note and the value of a blue note be $g$ zogs and $b$ zogs respectively. Then $3g + 8b = 46$ and $8g + 3b = 31$. Adding these two equations gives $11g + 11b = 77$, so $b + g = 7$. Therefore $3g + 3b = 21$. Subtracting this equation from the original equations in turn gives $5b = 25$ and $5g = 10$ respectively. So $b = 5$, $g = 2$ and $2g + 3b = 19$.

This problem is taken from the UKMT Mathematical Challenges.
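A quick substitution check of the values found above (my own addition, not part of the original NRICH page):

# Check g = 2, b = 5 against the original equations, then evaluate 2g + 3b.
g, b = 2, 5
assert 3*g + 8*b == 46
assert 8*g + 3*b == 31
print(2*g + 3*b)   # 19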
2015-08-30 11:58:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6562186479568481, "perplexity": 1006.6369800168767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065306.42/warc/CC-MAIN-20150827025425-00111-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/Study-of-Surface-Spin-Polarized-Electron-in-Using-Tyagi-Dreyer/e03745f576022c96171d49426fc88e58676e571a
# Study of Surface Spin-Polarized Electron Accumulation in Topological Insulators Using Scanning Tunneling Microscopy

@article{Tyagi2020StudyOS,
  title={Study of Surface Spin-Polarized Electron Accumulation in Topological Insulators Using Scanning Tunneling Microscopy},
  author={Siddharth Tyagi and Michael E. Dreyer and David Bowen and Dan Hinkel and Patrick J. Taylor and Adam L. Friedman and Robert E. Butera and Charles Krafft and Isaak D. Mayergoyz},
  journal={IEEE Magnetics Letters},
  year={2020},
  volume={11},
  pages={1-4}
}

• S. Tyagi, +6 authors I. Mayergoyz • Published 1 November 2019 • Physics, Materials Science • IEEE Magnetics Letters

We report the results of scanning tunneling microscopy experiments using iron-coated tungsten tips and current-carrying bismuth selenide (Bi$_2$Se$_3$) samples. Asymmetry in tunneling currents with respect to the change in the direction of bias currents through Bi$_2$Se$_3$…

3 Citations

Experimental detection of surface spin-polarized electron accumulation in topological insulators using scanning tunneling microscopy
Spin-momentum locking in the surface mode of topological insulators (TI) leads to the surface accumulation of spin-polarized electrons caused by bias current flows through TI samples. Here, we

Scanning Tunneling Microscopy Detection of Surface Spin-Polarized Electron Accumulations in Topological Insulators
Spin-momentum locking in the surface mode of topological insulators leads to the surface accumulations of spin-polarized electrons caused by bias currents through topological insulator samples. It is

## References (showing 1–10 of 30)

Spin-polarized scanning tunneling microscopy/spectroscopy study of MnAu(001) thin films
In this work we explore by means of spin-polarized scanning tunneling microscopy/spectroscopy (SP-STM/STS) and ab initio calculations the magnetic and electronic properties of thin MnAu alloyed films.

Scanning tunneling microscopy of gate tunable topological insulator Bi$_2$Se$_3$ thin films
• 2013
Electrical-field control of the carrier density of topological insulators (TIs) has greatly expanded the possible practical use of these materials. However, the combination of low-temperature local

On Local Sensing of Spin Hall Effect in Tungsten Films by Using STM-Based Measurements
• T. Xie, +4 authors I. Mayergoyz • Materials Science, Physics • IEEE Transactions on Nanotechnology • 2018
The spin Hall effect (SHE) in tungsten films has been experimentally studied by using STM-based measurements. These measurements have been performed by using tungsten and iron-coated tungsten tips.

Absence of spin-flip transition at the Cr(001) surface: A combined spin-polarized scanning tunneling microscopy and neutron scattering study
The spin-density wave (SDW) on Cr(001) has been investigated at temperatures between 20–300 K by means of spin-polarized scanning tunneling microscopy (SP-STM).

Probing Dirac fermion dynamics in topological insulator Bi$_2$Se$_3$ films with a scanning tunneling microscope.
It is found that τ exhibits a remarkable $(E - E_F)^{-2}$ energy dependence and increases with film thickness, and is shown that the features revealed are typical for electron-electron scattering between surface and bulk states.
Detection of the Spin-Chemical Potential in Topological Insulators Using Spin-Polarized Four-Probe STM. A new method for the detection of the spin-chemical potential in topological insulators using spin-polarized four-probe scanning tunneling microscopy is demonstrated, opening a new avenue to access the intrinsic spin transport associated with pristine TSS. Scanning tunneling microscopy measurements of the spin Hall effect in tungsten films by using iron-coated tungsten tips Scanning tunneling microscopy experiments using iron-coated tungsten tips and current-carrying tungsten films have been conducted. An asymmetry of the tunneling current with respect to the change of Spin accumulation in topological insulator thin films—influence of bulk and topological surface states • Physics Journal of Physics D: Applied Physics • 2018 Current-induced spin torques in topological insulators (TIs) include both field-like torques as well as damping-like torques. In many experimental spin torque measurements, the Fermi energies are Interplay of topological surface and bulk electronic states in Bi 2Se3 In this Letter we present scanning tunneling microscopy density-of-states measurements and electronic structure calculations of the topological insulator Bi2Se3. The measurements show significant Scanning Tunneling Microscopy Study of the Spin Hall Effect in Platinum and Highly Resistive Tungsten Films The results of a nanoscale experimental study of the spin Hall effect in platinum and highly resistive tungsten films are reported. These results are obtained by using scanning tunneling microscopy
2022-01-21 23:15:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6920676827430725, "perplexity": 6124.77885786406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00513.warc.gz"}
https://tex.stackexchange.com/questions/579540/how-to-add-the-url-and-urldate-to-footcite-output
# How to add the url and urldate to \footcite output?

Here's the example that does almost everything that I need it to do. The only thing that is missing is that I need the url and urldate to show when using \footcite for online sources. I'd like to achieve something like this:

Laubheimer, Virtual Tours: High Interaction Cost, Moderate Usefulness, 30. Aug. 2020. url: https://www.nngroup.com/articles/virtual-tours/ (besucht am 18.01.2021).

I would greatly appreciate any pointers, thanks!

\documentclass[12pt]{scrbook}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\usepackage[ngerman]{babel}
\usepackage[style=authortitle-ibid,sorting=none,backend=biber,labeldateparts]{biblatex}
\usepackage{xpatch}
\xapptobibmacro{cite:title}{%
  \iffieldundef{labelyear}{}
  {\printtext[bibhyperref]{\printlabeldateextra}}}{}{}
\begin{filecontents}{\jobname.bib}
@article{einstein,
  author       = {Albert Einstein},
  title        = {the true about tree},
  journaltitle = {Annalen der Physik},
  year         = {1905},
  volume       = {322},
  number       = {10},
  pages        = {891--921}
}
@Online{laubpage,
  author  = {Laubheimer, Page},
  title   = {Virtual Tours: High Interaction Cost, Moderate Usefulness},
  date    = {2020-08-30},
  year    = {2020},
  file    = {:./references/articles-virtual-tours-.html:html},
  url     = {https://www.nngroup.com/articles/virtual-tours/},
  urldate = {2021-01-18}
}
\end{filecontents}
\begin{document}
Cite this\footcite[Vgl.][S. 32-33]{einstein} and this\footcite{laubpage} please.
\printbibliography
\end{document}

The url field is printed by the doi+eprint+url macro, which is defined in the standard.bbx file. To use it with authortitle style (and similar) it must be imported into the macro title:

\usepackage{xpatch}
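The answer is cut off at this point in the extracted text. As a rough idea of the kind of patch it is building towards (this completion is my own guess, not the answerer's actual code), one could append biblatex's stock url+urldate macro to cite:title:

% Hypothetical sketch: print url and urldate after the cited title when a url exists.
% cite:title and url+urldate are standard biblatex macro names; punctuation may need tuning.
\xapptobibmacro{cite:title}{%
  \iffieldundef{url}
    {}
    {\setunit{\addperiod\space}%
     \usebibmacro{url+urldate}}}{}{}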
2021-04-18 19:47:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6929433345794678, "perplexity": 12039.667878272088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038860318.63/warc/CC-MAIN-20210418194009-20210418224009-00429.warc.gz"}
https://socratic.org/questions/what-is-the-molar-mass-of-benzene
# What is the molar mass of benzene?

Aug 10, 2017

$78.114$ $\text{g/mol}$

#### Explanation:

To find the molar mass of a compound, we first look at a periodic table to find the molar masses of each element in the compound. The chemical formula for benzene is $\text{C}_6\text{H}_6$, so we need only look at the values for carbon and hydrogen. We find that the molar masses are

• $\text{C}:$ $12.011$ $\text{g/mol}$
• $\text{H}:$ $1.008$ $\text{g/mol}$

We need to multiply these values by however many atoms of each element there are in the compound, so we would then have

• $\text{C}:$ $12.011 \text{ g/mol} \times 6 = 72.066 \text{ g/mol}$
• $\text{H}:$ $1.008 \text{ g/mol} \times 6 = 6.048 \text{ g/mol}$

Finally, we add all the component elements up:

$72.066 \text{ g/mol} + 6.048 \text{ g/mol} = 78.114 \text{ g/mol}$
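As a quick cross-check of the arithmetic (my own addition; the atomic masses are the same tabulated values used above):

# Recompute the molar mass of benzene, C6H6.
masses = {"C": 12.011, "H": 1.008}   # g/mol
formula = {"C": 6, "H": 6}
molar_mass = sum(masses[el] * count for el, count in formula.items())
print(round(molar_mass, 3))          # 78.114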
2019-08-18 21:04:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345436453819275, "perplexity": 1895.2091394164747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00533.warc.gz"}
https://www.doubtnut.com/question-answer/if-intab-x3dx0-and-if-intab-x2dx2-3-find-a-and-b-642567093
# If int_a^b x^3dx=0, and if int_a^b x^2dx=2/3, find a and b.

Updated On: 17-04-2022

Transcript

Hello everyone, welcome to Doubtnut. The question is: the integral of x^3 dx from a to b is equal to 0, and the integral of x^2 dx from a to b is equal to 2/3; find a and b.

First take int_a^b x^3 dx = 0. We know that, according to the power rule, int x^n dx = x^(n+1)/(n+1) + C, where C is the integration constant. Putting n = 3, the integral of x^3 is x^4/4. Evaluating between the lower limit a and the upper limit b gives (b^4 - a^4)/4 = 0, so b^4 - a^4 = 0. Factorising, (b^2 - a^2)(b^2 + a^2) = 0. From the first factor, b^2 - a^2 = 0, we get b^2 = a^2, so b = ±a; this is the first possibility. If instead we took the second factor, we would need b^2 = -a^2, but a square is a non-negative quantity and cannot equal a negative one, so we cannot take this case.

Now take the second condition, int_a^b x^2 dx = 2/3. Again by the power rule, the integral of x^2 is x^3/3, so evaluating from a to b gives (b^3 - a^3)/3 = 2/3. The 3's cancel, leaving b^3 - a^3 = 2; call this equation (2).

From the first equation we have b = ±a, so we make two cases. Case one: b = a. Then b^3 - a^3 = 0, which cannot equal 2, so this is not possible. Case two: b = -a. Put this into equation (2): (-a)^3 - a^3 = 2, so -2a^3 = 2, and dividing by 2 gives a^3 + 1 = 0. Now use the formula a^3 + b^3 = (a + b)(a^2 - ab + b^2), so a^3 + 1 = (a + 1)(a^2 - a + 1) = 0. Again two cases. If a + 1 = 0 then a = -1, and putting this back gives b = -a = -(-1) = 1; so b = 1 and a = -1, and this is the answer to the question. The other factor, a^2 - a + 1 = 0, is a quadratic equation whose solutions are imaginary roots, and in this case we cannot consider imaginary roots.

So our answer is a = -1 and b = 1. Thank you so much guys, I hope you like this video.
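A quick symbolic check of that answer (my own addition, assuming SymPy is available; it is not part of the original transcript):

from sympy import symbols, integrate, Rational

x = symbols("x")
a, b = -1, 1
print(integrate(x**3, (x, a, b)) == 0)               # True
print(integrate(x**2, (x, a, b)) == Rational(2, 3))  # True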
2022-05-28 03:26:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8672693967819214, "perplexity": 516.1560105744311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00534.warc.gz"}
https://indico.cern.ch/event/839985/contributions/3985307/
# LXX International conference "NUCLEUS – 2020. Nuclear physics and elementary particle physics. Nuclear physics technologies" Oct 11 – 17, 2020 Online Europe/Moscow timezone ## APPLICATION OF THE LIQUID SCINTILLATION ALPHA AND BETA SPECTROMETER QUANTULUS 1220 FOR DATING OF NATURAL OBJECTS Oct 15, 2020, 6:15 PM 1h Online #### Online Poster report Section 3. Modern nuclear physics methods and technologies. ### Speaker Konstantin Gruzdov (FGBU «VSEGEI») ### Description The Quantulus 1220 is a liquid scintillation counting (LSC) system for the quantitative measurement of extremely low levels of alpha and beta activity. With both passive and active shielding, the Quantulus 1220 employs a universal background reduction system which is optimized according to type of analysis. In the Centre of Isotopic Research (CIR) of FGBU «VSEGEI» Quantulus 1220 is used for radiocarbon dating of various organic objects (wood, peat, soil, bottom sediments, bones), dating young bottom sediments using ${}^{210}$Pb as well as determination the tritium content in water. For radiocarbon dating the organic matter of a sample is chemically converted to benzene. The ${}^{14}$C activity is measured relative to the modern standard. Also it is necessary to measure the activity of the background sample (benzene without ${}^{14}$C). When dating young bottom sediments by ${}^{210}$Pb, all the lead (99%) in the sample is chemically extracted. Then Optiphase HiSafe 3 liquid scintillator is added to the slightly acidic solution containing the lead. When measuring the tritium content in water, a water sample is directly mixed with the Optiphase TriSafe 3 liquid scintillator. The minimum detectable concentration of tritium in water is approximately 1 Bq/L. The obtained results are presented as the decay spectra of radioactive isotopes with age calculations. ### Primary author Konstantin Gruzdov (FGBU «VSEGEI»)
2023-03-31 19:25:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4144693613052368, "perplexity": 8238.035651430351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949678.39/warc/CC-MAIN-20230331175950-20230331205950-00676.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-10-quadratic-equations-10-1-quadratic-equations-problem-set-10-1-page-442/14
## Elementary Algebra {$-\frac{3}{4},6$} Using the rules of factoring trinomials and setting the factors equal to zero, we obtain: $4y^{2}-21y-18=0$ $4y^{2}+3y-24y-18=0$ $y(4y+3)-6(4y+3)=0$ $(4y+3)(y-6)=0$ $(4y+3)=0$ and $(y-6)=0$ $y=-\frac{3}{4}$ and $y=6$ Therefore, the solution set is {$-\frac{3}{4},6$}.
2019-04-23 05:53:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.638383686542511, "perplexity": 213.2663751203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578593360.66/warc/CC-MAIN-20190423054942-20190423080942-00108.warc.gz"}
https://codegolf.stackexchange.com/questions/125109/rooting-for-trees-with-the-right-nodes
# Background

A rooted tree is an acyclic graph such that there is exactly one path from one node, called the root, to each other node. A node v is called the parent of another node u if and only if the path from the root to u goes through v and there is an edge connecting u and v. If node v is the parent of node u, node u is a child of node v.

Write a program or function that, given a positive integer number of nodes and a set of non-negative integer numbers of children each parent has, outputs the number of possible rooted trees with that number of nodes (including the root) and each vertex having a number of children in the set, not counting those trees isomorphic to trees already found. Two trees are isomorphic if one can be transformed into the other by renaming the nodes, or in other words they look the same when the nodes are unlabelled.

# Examples

We shall represent trees as a 0-indexed list of children per index, where 0 represents the root; for example [[1],[2],[]] represents that the root 0 has 1 as a child, node 1 has node 2 as a child, and node 2 has no children.

Inputs n=3 and set = [0,1,2]. This is equal to binary trees with three nodes. The two possible trees are [[1],[2],[]] and [[1,2],[],[]]. Because they are identical in structure to these two trees, we count neither [[2],[],[1]] nor [[2,1],[],[]]. There are two trees, so the output is 2 or equivalent. Here is a visualization: you can see that the second set of two trees is identical in structure to the first set of two and is thus not counted. Both sets are composed of two trees which have one of the following two structures (the root is the node at the top).

Inputs n=5 and set=[0,2]. The only possible tree is [[1,2],[3,4],[],[],[]]. Note that, for example, [[1,2],[],[3,4],[],[]] and [[1,3],[],[],[2,4],[]] are not counted again because they are isomorphic to the sole tree which is counted. The output is 1 or equivalent. Here is another visualization: clearly, all of the trees are isomorphic, so only one is counted. Here is what the trees look like unlabeled.

Input n=4, set=[0,2]. There are no possible trees because each time children are generated from a node, there are either 0 or 2 more nodes. Clearly, 4 nodes cannot be produced by adding 2 or 0 successively to 1 node, the root. Output: 0 or falsey.

# Input/Output

Input and output should be taken in a reasonable format. Input is a positive integer representing n and a list of non-negative integers representing the set of valid numbers of children. The output is a non-negative integer corresponding to how many trees can be formed.

# Test cases

n ; set ; output
3 ; [0,1,2] ; 2
5 ; [0,2] ; 1
4 ; [0,2] ; 0
3 ; [0,1] ; 1
3 ; [0] ; 0
1 ; [0] ; 1
6 ; [0,2,3] ; 2
7 ; [0,2,3] ; 3

# Rules

• The set of numbers of children will always include zero.
• The root node always counts as a node, even if it has no children (see the 1 ; [0] test case).
• This is code-golf, so shortest code wins.

• When you say two trees are isomorphic, you mean as rooted trees, right? – xnor Jun 7 '17 at 23:47
• @xnor Yes. The isomorphic definition refers to rooted trees. – fireflame241 Jun 7 '17 at 23:48

# Haskell

_#0=1
i#j=div(i#(j-1)*(i+j-1))j
s!n|let(0?1)0=1;(0?_)_=0;(m?n)c=sum[s!m#j*((m-1)?(n-j*m)$c-j)|j<-[0..c]]=sum$(n-1)?n<$>s

Try it online!

### How it works

i#j is i multichoose j.

(m?n)c is the number of n-node trees with c children at the root, each of which has a subtree of at most m nodes. It's computed by summation over j, the number of these subtrees that have exactly m nodes.

s!n is the number of n-node trees.
It's computed by summation over c ∈ s, the number of children at the root.

# Pyth, 47 bytes

.N?Nsm*/.P+tyNdd.!d:tN-T*dN-YdhY!|tTYLs:LtbbQyE

Try it online

A port of my Haskell answer, but it's much faster in Pyth thanks to Pyth's free automatic memoization. I had to work around a bug in Pyth's binomial coefficient builtin, though.

# Python, 127 bytes

lambda*x:len(g(*x))
g=lambda n,s,d=0:[[]][n-1:d in s]or[[t]+u for c in range(1,n)for t in g(c,s)for u in g(n-c,s,d+1)if[t]+u>u]

Try it online!

The function g enumerates trees, and the main function counts them. Each tree is recursively generated by splitting the required number of nodes n between c nodes in the first branch t, and n-c nodes in the remaining tree u. For a canonical form up to isomorphism, we require the branches to be sorted in decreasing order, so t must be at most the first element of u if u is non-empty. This is enforced as [t]+u>u. We count the number of children d so far of the current node. When only one node n=1 is left, it must be the current node and there is no more node for children. If the current number of children d is valid, this level can be finished successfully, so we output a singleton of the one-node tree. Otherwise, we've failed and output the empty list.
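If you want to run the Python answer as is, a small harness along the following lines reproduces the challenge's test cases (my own addition; the anonymous lambda is bound to the name f here purely for testing):

# The golfed solution from the answer above, bound to names and exercised on three test cases.
g=lambda n,s,d=0:[[]][n-1:d in s]or[[t]+u for c in range(1,n)for t in g(c,s)for u in g(n-c,s,d+1)if[t]+u>u]
f=lambda*x:len(g(*x))

print(f(3, [0, 1, 2]))  # expected 2
print(f(5, [0, 2]))     # expected 1
print(f(7, [0, 2, 3]))  # expected 3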
2020-05-30 15:20:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6702784299850464, "perplexity": 891.551966693558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409337.38/warc/CC-MAIN-20200530133926-20200530163926-00181.warc.gz"}
https://www.yannisgalanakis.com/courses/references/
# Letters of Recommendation I would be happy to support your applications for a masters/doctoral program or a funding application. Here are some instructions if you would like to ask me for a letter of recommendation. Key guidelines: • Please let me know at least three weeks before a letter is due. • If you have multiple applications, please provide me with a comprehensive list^[A comprehensive list could be an excel dashboard with the institution, the name of the program and the deadline.] ahead of time and arrange to have the letter requests sent at the same time so that they appear in one chunk in my e-mail box. • Try to keep the communication under the same subject title to avoid the loss of any email. • Half-life reminder rule: If you last reminded me about your recommendation some time $t$ before the due date, then remind me again when it is $\frac{t}{2}$ before the due date. ### STEP 1: Should you ask me for a letter of recommendation? Your strongest letters of recommendation will come from faculty who know you well and who can speak to your abilities above and beyond the standard coursework. For most applications , the ideal letter writer is a research mentor who you have worked with closely (e.g. for at least one year). A second tier of letters are those from professors of advanced coursework where you have excelled. ### STEP 2: How to ask for a letter of recommendation from me Did you answer “yes” in step 1? If so and if you still think that I am a good recommender for you, please contact me to schedule an appointment to discuss your application strategy (institutions, programs). Why? I’d like to understand what you’re applying for and how I can best support you in a letter. #### Initial email attachments 1. The evaluation criteria for your application. You may skip this for standard graduate school applications. 1. A draft of your application materials including your CV and any personal statement/cover letter. The statement does not need to be formal - not at this stage and not for me. I want to understand your next steps and why this is the right program for you. 1. A reminder of how I know you: what classes you’ve taken with me (and your grade in the course), any extracurricular activities where we have worked together, etc. 1. A list of specific achievements that I am qualified to highlight in your application. For example: class projects that you are proud of, economics discussions during office hours, achievements that may not be clear in the rest of your application. It may be useful to combine #3 and #4 into a draft letter of recommendation. Why? The purpose of this is not for me to rubber stamp the letter—I will rewrite everything. This exercise is to help you explain me how I can best support your application. If you do this, do not be bashful to sing your own praises; I’ll calibrate. #### After the first email/meeting 1. Most applications have an online system that will automatically send your recommenders an e-mail with instructions for how to upload their letter. Please arrange to have all of these e-mails sent at the same time so they appear in one chunk in my e-mailbox. If not possible, just keep reminding me about the deadlines. 2. Please e-mail me a list with all of the applications that require my letter and the due dates. 3. Remind me! I will not be annoyed if you are reminding me about a letter that I’ve promised you. A good rule of thumb is to remind me every half-life before the due date.
2022-09-25 01:47:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33982059359550476, "perplexity": 942.3212783109459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00238.warc.gz"}
https://dmtcs.episciences.org/2400
## Fu, Shishuo and Sellers, James, - Bijective Proofs of Partition Identities of MacMahon, Andrews, and Subbarao dmtcs:2400 - Discrete Mathematics & Theoretical Computer Science, January 1, 2014, DMTCS Proceedings vol. AT, 26th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2014) Bijective Proofs of Partition Identities of MacMahon, Andrews, and Subbarao Authors: Fu, Shishuo and Sellers, James, We revisit a classic partition theorem due to MacMahon that relates partitions with all parts repeated at least once and partitions with parts congruent to $2,3,4,6 \pmod{6}$, together with a generalization by Andrews and two others by Subbarao. Then we develop a unified bijective proof for all four theorems involved, and obtain a natural further generalization as a result. Source : oai:HAL:hal-01207610v1 Volume: DMTCS Proceedings vol. AT, 26th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2014) Section: Proceedings Published on: January 1, 2014 Submitted on: November 21, 2016 Keywords: bijection,generating function,residue classes,partition,[INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM],[MATH.MATH-CO] Mathematics [math]/Combinatorics [math.CO]
2018-02-25 03:57:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381176114082336, "perplexity": 5166.71699042537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816094.78/warc/CC-MAIN-20180225031153-20180225051153-00625.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-slope-of-x-7
# How do you find the slope of x=7?

Jun 5, 2018

You can't: it's not defined!

#### Explanation:

The slope is defined as the ratio between the difference of the $y$ components and the difference of the $x$ components of a given pair of points on a line. In other words, given a line, pick two points $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$; the slope $m$ is defined as

$m = \frac{y_2 - y_1}{x_2 - x_1}$

In your case, the line $x = 7$ is composed, as the equation suggests, of all the points having the $x$ component equal to $7$, and any $y$ component. So, two points on the line have the form $P_1 = (7, y_1)$ and $P_2 = (7, y_2)$.

Can you see the problem? If we compute the slope, we have

$m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{y_2 - y_1}{7 - 7} = \frac{y_2 - y_1}{0}$

And you can't divide by zero. This is the reason why all vertical lines (i.e. those with equation $x = k$, for some real number $k$) have no defined slope.
2022-09-29 13:45:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331575036048889, "perplexity": 130.11880834698133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00606.warc.gz"}
https://www.jobilize.com/online/course/12-7-molecular-transport-phenomena-diffusion-osmosis-and-related?qcr=www.quizover.com&page=1
# 12.7 Molecular transport phenomena: diffusion, osmosis, and related (Page 2/12)

Because diffusion is typically very slow, its most important effects occur over small distances. For example, the cornea of the eye gets most of its oxygen by diffusion through the thin tear layer covering it.

## The rate and direction of diffusion

If you very carefully place a drop of food coloring in a still glass of water, it will slowly diffuse into the colorless surroundings until its concentration is the same everywhere. This type of diffusion is called free diffusion, because there are no barriers inhibiting it. Let us examine its direction and rate.

Molecular motion is random in direction, and so simple chance dictates that more molecules will move out of a region of high concentration than into it. The net rate of diffusion is higher initially than after the process is partially completed. (See [link].)

The rate of diffusion is proportional to the concentration difference. Many more molecules will leave a region of high concentration than will enter it from a region of low concentration. In fact, if the concentrations were the same, there would be no net movement. The rate of diffusion is also proportional to the diffusion constant $D$, which is determined experimentally. The farther a molecule can diffuse in a given time, the more likely it is to leave the region of high concentration. Many of the factors that affect the rate are hidden in the diffusion constant $D$. For example, temperature and cohesive and adhesive forces all affect values of $D$.

Diffusion is the dominant mechanism by which the exchange of nutrients and waste products occurs between the blood and tissue, and between air and blood in the lungs. In the evolutionary process, as organisms became larger, they needed quicker methods of transportation than net diffusion, because of the larger distances involved in the transport, leading to the development of circulatory systems. Less sophisticated, single-celled organisms still rely totally on diffusion for the removal of waste products and the uptake of nutrients.

## Osmosis and dialysis—diffusion across membranes

Some of the most interesting examples of diffusion occur through barriers that affect the rates of diffusion. For example, when you soak a swollen ankle in Epsom salt, water diffuses through your skin. Many substances regularly move through cell membranes; oxygen moves in, carbon dioxide moves out, nutrients go in, and wastes go out, for example. Because membranes are thin structures (typically $6.5\times 10^{-9}$ to $10\times 10^{-9}$ m across), diffusion rates through them can be high. Diffusion through membranes is an important method of transport.

Membranes are generally selectively permeable, or semipermeable. (See [link].) One type of semipermeable membrane has small pores that allow only small molecules to pass through. In other types of membranes, the molecules may actually dissolve in the membrane or react with molecules in the membrane while moving across. Membrane function, in fact, is the subject of much current research, involving not only physiology but also chemistry and physics.
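To make the "small distances" point concrete, here is a rough back-of-the-envelope estimate (my own illustration; the diffusion constant is an assumed, typical order of magnitude for a small molecule in water, not a value taken from this text). With the random-walk relation $x_{\text{rms}} \approx \sqrt{2Dt}$, the time to diffuse a distance x scales as $x^2$:

# Rough diffusion-time estimate, t ≈ x^2 / (2 D), with an assumed D ≈ 1e-9 m^2/s.
D = 1e-9  # m^2/s, assumed order of magnitude for small molecules in water

for label, x in [("cell membrane (~1e-8 m)", 1e-8),
                 ("tear layer (~1e-5 m)", 1e-5),
                 ("one centimetre", 1e-2)]:
    t = x**2 / (2 * D)
    print(f"{label}: about {t:.2g} s")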
2019-11-22 18:28:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6262343525886536, "perplexity": 1439.7618016692957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00067.warc.gz"}
http://mathhelpforum.com/calculus/67733-new-challenging-integrals.html
# Math Help - new challenging integrals 1. ## new challenging integrals Here are two integrals I ran into that I thought y'all might like a go at. Perhaps they are cliche? Show that: This one has its own name. The Ahmed Integral. $\int_{0}^{1}\frac{\tan^{-1}(x)}{x(x^{2}+1)}\,dx=\frac{\text{Catalan}}{2}+\frac{\pi}{8}\ln(2)$ In case you need it, $\text{Catalan}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{2}}\approx .915\ldots$ The other: $\int_{0}^{\infty}\frac{\sin(x)}{x^{p}}\,dx=\frac{\sqrt{\pi}\,\Gamma\left(1-\frac{p}{2}\right)}{2^{p}\,\Gamma\left(\frac{1}{2}+\frac{p}{2}\right)}$ or some equivalent form. I thought these were cool. 2. Originally Posted by galactus Show that: This one has its own name. The Ahmed Integral. $\int_{0}^{1}\frac{\tan^{-1}(x)}{x(x^{2}+1)}\,dx=\frac{\text{Catalan}}{2}+\frac{\pi}{8}\ln(2)$ In case you need it, $\text{Catalan}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{2}}\approx .915\ldots$ Your integral equals $\frac{1}{2}\left( \int_{0}^{1}{\frac{\ln \left( 1+{{y}^{2}} \right)}{{{y}^{2}}+1}\,dy}-\ln 2\int_{0}^{1}{\frac{dy}{{{y}^{2}}+1}} \right)-\int_{0}^{1}{\frac{\ln y}{{{y}^{2}}+1}\,dy}$ after some double integration manipulations. Those integrals are not hard to find, and from there one gets the result. 3. Hey Kriz: You are better at this than me. That's for sure. I have to ask. How in the world did you get that ln thing from that integral with arctan? I can do each of those integrals you have, but I never saw it in terms of ln. I started out using the series for arctan, but the x^2+1 in there caused me a fit. Using $\tan^{-1}(x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k+1}}{2k+1}$. $\int_{0}^{1}\frac{1}{x^{2}+1}\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k}}{2k+1}\,dx$ $\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}\int_{0}^{1}\frac{x^{2k}}{x^{2}+1}\,dx$..............[3] I can see $\int x^{2k}\,dx=\frac{x^{2k+1}}{2k+1}$, which when multiplied by the existing 2k+1 will result in the Catalan, but I am getting hung up because of the x^2+1. Now, $\sum_{k=0}^{\infty}(-1)^{k}x^{2k}=\frac{1}{1+x^{2}}$. Then we can write $\int_{0}^{1}\sum_{k=0}^{\infty}(-1)^{k}x^{2k}\cdot x^{2k}$ $\int_{0}^{1}x^{2k}\sum_{k=0}^{\infty}(-1)^{k}x^{2k}$ When we integrate, we have $\frac{x^{2k+1}}{2k+1}\sum_{k=0}^{\infty}(-1)^{k}x^{2k}$ When multiplied with [3], we get the Catalan. But what about the rest? I know it's there. I can smell it. I am making a stupid mistake. I just know it. Every time I tried this thing I ended up in a dead end. You probably see it right off, though. 4. $\int_{0}^{1}{\frac{\arctan x}{x\left( {{x}^{2}}+1 \right)}}\,dx=\int_{0}^{1}{\int_{0}^{x}{\frac{dy\,dx}{x\left( {{x}^{2}}+1 \right)\left( {{y}^{2}}+1 \right)}}}=\int_{0}^{1}{\int_{y}^{1}{\frac{dx\,dy}{x\left( {{x}^{2}}+1 \right)\left( {{y}^{2}}+1 \right)}}}.$ That leads to the integrals I left before. 5. Oh, OK. Good egg. I see now what you did. Clever indeed. I also know that $\int_{0}^{\frac{\pi}{4}}\ln(1+\tan(x))\,dx=\frac{\pi}{8}\ln(2)$ which is part of our solution. May have to look into how to transform it. I bet there is a connection. Also, this integral becomes $\int_{0}^{\frac{\pi}{4}}x\cot(x)\,dx$, but the $\pi/4$ makes it more difficult than the usual $\pi/2$ for this integral.
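Neither closed form is checked numerically in the thread. As a quick sanity check (not part of the original posts, only an illustrative sketch), both can be verified to high precision with mpmath:

```python
# Numerical check of the two quoted closed forms (illustration only).
from mpmath import mp, mpf, quad, quadosc, atan, sin, log, pi, sqrt, gamma, catalan, inf

mp.dps = 30  # working precision in decimal digits

# First result: int_0^1 arctan(x) / (x (x^2 + 1)) dx = Catalan/2 + (pi/8) ln 2
lhs1 = quad(lambda x: atan(x) / (x * (x**2 + 1)), [0, 1])
rhs1 = catalan / 2 + pi / 8 * log(2)

# Second result, sampled at p = 1/2: int_0^inf sin(x) / x^p dx
p = mpf(1) / 2
lhs2 = quadosc(lambda x: sin(x) / x**p, [0, inf], period=2 * pi)
rhs2 = sqrt(pi) * gamma(1 - p / 2) / (2**p * gamma(mpf(1) / 2 + p / 2))

print(lhs1, rhs1)  # both ~0.7302...
print(lhs2, rhs2)  # both ~1.2533..., i.e. sqrt(pi/2) at p = 1/2
```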
2015-11-29 08:41:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212170600891113, "perplexity": 1425.5669893151628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456975.30/warc/CC-MAIN-20151124205416-00102-ip-10-71-132-137.ec2.internal.warc.gz"}
https://socratic.org/questions/58f374377c0149726f4c856b
# Question #c856b Apr 16, 2017 There are 16 windows on the first floor. #### Explanation: Let $x$ equal the number of windows on the second floor. Thus, the number of windows on the first floor is equal to $x - 2$, and the number of windows on the third floor is $\frac{x}{2}$. Given that there are 9 windows on the third floor, we set $\frac{x}{2}$ equal to 9 and find that: $\frac{x}{2} = 9$ $x = 18$ There are thus 18 windows on the second floor, but there are two fewer on the first floor. Therefore the first floor has 16 windows. Apr 16, 2017 $16$ windows on the first floor. #### Explanation: We have to define a variable first to be able to make an equation. The first floor has the smallest number of windows, so let the number of first-floor windows be $x$. Write the windows on the other floors in terms of $x$. The second floor has two more: $\text{ } \therefore x + 2$ windows The third floor has half as many as the second floor: The number of windows on the third floor = $\frac{x + 2}{2}$ But there are $9$ windows on the third floor. $\therefore \frac{x + 2}{2} = 9 \text{ } \leftarrow$ solve for $x$ $x + 2 = 2 \times 9$ $x = 18 - 2$ $x = 16$ $16$ windows on the first floor.
2019-01-23 09:54:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7582237124443054, "perplexity": 687.0897309410336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584328678.85/warc/CC-MAIN-20190123085337-20190123111337-00249.warc.gz"}
https://web2.0calc.com/questions/rational-expressions-of-polynomials_8
# Rational Expressions of Polynomials Find the sum of all $x$ that satisfy the equation $\frac{-9x}{x^2-1} = \frac{2x}{x+1} - \frac{6}{x-1}.$ Feb 21, 2021
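The scraped thread ends at the question, so for reference here is a worked sketch that is not part of the original post. For $x \neq \pm 1$, multiplying both sides by $(x-1)(x+1)$ gives $-9x = 2x(x-1) - 6(x+1)$, i.e. $2x^2 + x - 6 = 0$, which factors as $(2x-3)(x+2) = 0$. Both roots, $x = \tfrac{3}{2}$ and $x = -2$, leave the denominators nonzero, so the requested sum is $\tfrac{3}{2} + (-2) = -\tfrac{1}{2}$.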
2021-04-13 08:13:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999961853027344, "perplexity": 3494.707291329726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00282.warc.gz"}
http://peaceloveandcook.pl/books/change-and-language-papers-from-the-annual-meeting-of-the-british-association
# Change and Language: Papers from the Annual Meeting of the British Association for Applied Linguistics by Lynne Cameron By Lynne Cameron This volume is a collection of papers from the annual meeting of the British Association for Applied Linguistics (BAAL) held at the University of Leeds, September 1994. It investigates the relationship between change and language in the broadest sense. Read Online or Download Change and Language: Papers from the Annual Meeting of the British Association for Applied Linguistics Held at the University of Leeds, September 1994 (British Studies in Applied Linguistics, 10) PDF Similar applied books Efficient numerical methods for non-local operators Hierarchical matrices present an efficient way of treating dense matrices that arise in the context of integral equations, elliptic partial differential equations, and control theory. While a dense $n\times n$ matrix in standard representation requires $n^2$ units of storage, a hierarchical matrix can approximate the matrix in a compact representation requiring only $O(nk \log n)$ units of storage, where $k$ is a parameter controlling the accuracy. CRC Standard Mathematical Tables and Formulae, 31st Edition A perennial bestseller, the 30th edition of CRC Standard Mathematical Tables and Formulae was the first "modern" edition of the handbook - adapted to be useful in the era of personal computers and powerful handheld devices. Now this version will quickly establish itself as the "user-friendly" edition. The State of Deformation in Earthlike Self-Gravitating Objects This book presents an in-depth continuum mechanics analysis of the deformation due to self-gravitation in terrestrial objects, such as the inner planets, rocky moons and asteroids. Following a short history of the problem, modern continuum mechanics tools are presented in order to derive the underlying field equations, both for solid and fluid material models. Extra resources for Change and Language: Papers from the Annual Meeting of the British Association for Applied Linguistics Held at the University of Leeds, September 1994 (British Studies in Applied Linguistics, 10) Sample text Alternative Perspectives We have so far emphasised the division of this collection into three sub-themes. However, we must not neglect the fact that a number of other themes appear at different points. At one level, for example, it is striking how many of the contributors make use of educational contexts as the sites for their investigations (see Fairclough, Rogers, Rampton, Tonkyn, Turner & Hiraga, and Cortazzi & Jin, for example). This is understandable, for educational institutions are dedicated, par excellence, to the production of human change. Rogers' data, meanwhile, identifies a failure to change at more than a superficial level among a group of people who are potentially gatekeepers for change in language education: A-level examiners in Foreign Languages. By analysing documents issued by examining boards, she demonstrates how superficial changes in terminology are apparently not supported by underlying changes in understanding of language learning processes, and the confusion that results from this conflict.
Page 3 1 Border Crossings: Discourse and social change in contemporary societies 1 Norman Fairclough Lancaster University Language in Sociocultural Change: A field of applied linguistics John Trim in his address to the twentieth anniversary meeting of BAAL in 1987 (Trim, 1988) emphasised the broad view of applied linguistics accepted as the remit for BAAL by its founders. No language as we know it would be a communicative engineer's ideal, just as no engineer would make a thumb out of a wrist bone if he could go back and start the design from scratch. Both systems are 'making use of old junk', in Jean Aitchison's phrase (1991, p. 148). I am not going to be so bold as to set out the parameters of an ideal language, but presumably it would be that which best realises the advantages that language is recognised to confer. If that form includes regular patterning at phonological, morphological and syntactic levels, then already there is a problem.
2018-01-22 16:15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3242434561252594, "perplexity": 2970.680654490958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891485.97/warc/CC-MAIN-20180122153557-20180122173557-00272.warc.gz"}
https://www.mathscinotes.com/2015/08/sigmoid-potentiometer-taper/
# Sigmoid Potentiometer Taper By paying attention to mistakes, we invest more time and effort to correct them. The result is that you make the mistake work for you. — Jason Moser ## Introduction Figure 1: Taper Definitions as Used By State Electronics. Yesterday, I had a question from a reader on how to develop mathematical formulas for different potentiometer tapers. Normally, I would simply answer the questioner without a separate post, but my solution for this particular question provided a nice illustration of basic coordinate transformations. Since I have not shown any coordinate transformation applications in this blog before, I thought it would be worthwhile to make a post of my response. There are many different names (e.g. "M", "W", "S") assigned to the common potentiometer tapers. To my knowledge, the taper names vary by vendor. For this post, I will use the taper names as stated in Figure 1 by State Electronics, which the questioner referred to. My work here will focus on the M and W tapers, which are closely related. My analysis assumes that the potentiometer taper is an actual exponential curve. For ease of manufacture, many potentiometer suppliers approximate the exponential curve using a piecewise linear approximation. For example, Figure 1 shows a W taper that appears to be composed of three linear segments. I should also mention that I add a constant term to my exponential function to allow my curve fit to pass through zero, which is what happens with real potentiometers – a true exponential curve, i.e. $\displaystyle y={{e}^{x}}$, would not pass through zero. ## Background ### Potentiometer Construction This discussion will focus on the common, three-terminal potentiometer. Figure 2(a) shows how State Electronics defines the terminals and Figure 2(b) shows what a potentiometer looks like inside. Figure 2(a): Terminal Definitions. Figure 2(b): Physical Construction of a Three-Terminal Potentiometer (source). ### W Taper The W taper is sometimes referred to as the antilog taper because it is related to the exponential function. The specific functional form of the potentiometer resistance between terminals 1 and 2 is dictated by the following definition. The "W" taper attains 20% resistance value at 50% of clockwise rotation (left-hand). I should mention that you rarely see the W taper described in terms of an actual function – its resistance versus wiper position is almost always shown as a graph. Remember that these are physical parts and they vary quite a bit from their nominal specifications. ### M Taper The M taper has a sigmoid shape and its resistance between terminals 1 and 2 is defined in terms of the W taper as follows. The "M" taper is such that a "W" taper is attained from either the 1 or 3 terminal to the center of the element. ## Analysis ### W-Taper Characteristic Figure 3 shows my approach to developing a W taper functional relationship. Figure 3: W Taper Resistance Between Terminals 1 and 2. ### M-Taper Characteristic Figure 4 shows my approach to developing an M taper functional relationship. Figure 4: M Taper Functional Relationship Between Terminals 1 and 2. Figure 5 shows my combined plot of the M and W tapers. They are very similar to those shown in Figure 1. Figure 5: Plot of My Functions for the M and W Tapers. ## Conclusion This post demonstrated how to develop functional relationships for the resistance of two common types of potentiometer tapers.
It does seem odd that these functions are never actually stated in the vendor documentation, but hopefully I have alleviated that shortcoming here. This entry was posted in Electronics. ### 4 Responses to Sigmoid Potentiometer Taper 1. Gene Mirro says: For the W taper, what is F1(x,k)? What is del sub K? Can you explain how you knew that you needed the F1(x,k) expression? Thanks. • mathscinotes says: F1 and del sub K are just part of the setup that Mathcad requires to use its nonlinear curve fit routine. You can do the same calculation in Excel using Solver and minimizing the mean square error between the data and the curve fit. mathscinotes 2. Boyan Petrov says: I had 2 free days and nothing else to do. Inspired by this blog, I think I did it in more 'school view'. https://www.filedropper.com/audiopots http://www.filedropper.com/ratpot1 http://www.filedropper.com/potexpm20
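The reply above mentions redoing the fit in Excel with Solver. Along the same lines, here is a small Python sketch of one normalized exponential-plus-constant form that meets the W-taper definition quoted in the post (20% resistance at 50% rotation), plus the mirrored construction for the M taper. This is my own illustration, not the post's Mathcad worksheet; the function names and the specific parameterization are assumptions for the example.

```python
# Illustrative sketch: normalized W taper R(x) = (exp(k x) - 1) / (exp(k) - 1),
# which passes through R(0) = 0 and R(1) = 1 by construction.
import numpy as np
from scipy.optimize import brentq

def w_taper(x, k):
    """Normalized terminal 1-to-2 resistance of a W taper at wiper position x in [0, 1]."""
    return (np.exp(k * x) - 1.0) / (np.exp(k) - 1.0)

# Pick k so that R(0.5) = 0.2, i.e. the W-taper definition quoted above.
k = brentq(lambda kk: w_taper(0.5, kk) - 0.2, 0.1, 20.0)   # ~2.773, exactly 2*ln(4)

def m_taper(x, k):
    """M taper: a W taper from either end terminal to the center, giving a sigmoid."""
    x = np.asarray(x, dtype=float)
    half = 0.5 * w_taper(2.0 * np.minimum(x, 1.0 - x), k)
    return np.where(x <= 0.5, half, 1.0 - half)

print(round(k, 3))                      # 2.773
print(w_taper(0.5, k))                  # 0.2
print(m_taper([0.25, 0.5, 0.75], k))    # [0.1  0.5  0.9]
```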
2023-04-02 08:45:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510269403457642, "perplexity": 1742.6222803566304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00317.warc.gz"}
http://math.stackexchange.com/questions/52482/why-is-gk-more-fundamental-than-the-hilbert-waring-function-gk
# Why is $G(k)$ “more fundamental” than the Hilbert-Waring function $g(k)$? In the Wikipedia entry for Waring's problem, the section on $G(k)$ starts as: “From the work of Hardy and Littlewood, more fundamental than $g(k)$ turned out to be $G(k)$, which is defined...” There is no real justification or citations. Do you believe this? Is there a specific, objective sense in which $G(k)$ is more fundamental? My answer is that the Hardy-Littlewood method itself works only for sufficiently large $x$ (let's say you are interested in density of subsets of $\mathbf N$ in the interval $[1;x]$), so it's better suited to handle $G(k)$ than $g(k)$. Is there a better answer? - Most often in analytic number theory one is interested in asymptotic behavior. The function $G(k)$ reflects that -- i.e., it ignores a finite number of exceptions. If you actually try to compute the function $g(k)$, you find that it can be quite a bit larger than $G(k)$ but for reasons which feel somewhat "accidental": that is, you are focusing on the difficulty of representing very small numbers. [Incidentally, I surmise that both the wikipedia article and the paragraph above are influenced by the passage on this in Hardy and Wright's Introduction to the Theory of Numbers. I think that Hardy and Wright said it better than both wikipedia and me, and I recommend you look to see what they said.] Anyway, notice that whether one quantity is "more fundamental" than another is not a mathematical statement per se: it is a statement of opinion, aesthetic and experience. One would be well within their rights to be more interested in the function $g(k)$ than $G(k)$: maybe the intricate, chaotic-looking behavior of small numbers appeals to you. - The number $g(k)$ depends only on a "small" prefix of the integers, as the Wikipedia page indicates: the worst-possible number is a rather small one, whose best representation uses $k$th powers of $1,2,3,4$. So $g(k)$ is determined by some small "exceptional" number. On the contrary, $G(k)$ really depends on all integers. There are other ways to "ignore" exceptional cases, for example we can find the number of powers which suffices for almost all integers, or a positive fraction of the integers, and so on. -
2015-09-05 01:33:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7194280028343201, "perplexity": 260.9550986900707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00061-ip-10-171-96-226.ec2.internal.warc.gz"}
http://cms.math.ca/cjm/kw/Reidemeister%20torsion
Canadian Mathematical Society www.cms.math.ca Search results Search: All articles in the CJM digital archive with keyword Reidemeister torsion Results 1 - 1 of 1 1. CJM 2001 (vol 53 pp. 780) Nicolaescu, Liviu I. Seiberg-Witten Invariants of Lens Spaces We show that the Seiberg-Witten invariants of a lens space determine and are determined by its Casson-Walker invariant and its Reidemeister-Turaev torsion. Keywords: lens spaces, Seifert manifolds, Seiberg-Witten invariants, Casson-Walker invariant, Reidemeister torsion, eta invariants, Dedekind-Rademacher sums Categories: 58D27, 57Q10, 57R15, 57R19, 53C20, 53C25
2015-05-05 08:18:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388537764549255, "perplexity": 10661.329038781056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455283053.76/warc/CC-MAIN-20150501044123-00002-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/72531
## Files in this item 9305458.pdf (6MB) - application/pdf (no description provided) ## Description Title: Scaling and Interior Point Methods in Optimization Author(s): Atkinson, David Steen Doctoral Committee Chair(s): Loui, Michael C.; Vaidya, Pravin M. Department / Program: Mathematics Discipline: Mathematics Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Mathematics Operations Research Abstract: We present four algorithms that use either scaling or interior point methods for convex optimization problems; two of the algorithms use both. We first present an algorithm that uses scaling of weights to find the weighted analytic center of a polytope defined by m hyperplanes. We prove that after we solve the problem at the base level--all weights set equal to 1--we can determine the solution with original weights in $O(\sqrt{m}\log W)$ iterations, where W is the largest original weight. Our second algorithm is a companion to the first: it determines the weighted analytic center of convex bodies defined by m convex constraints. We prove that convex constraints that lead to a self-concordant logarithmic barrier function define a convex set for which Newton's method is an efficient technique for finding the weighted analytic center. When we scale the weights, we can also solve this more general case in $O(\sqrt{m}\log W)$ iterations after the base problem is solved. For both algorithms, the complexity of each iteration is dominated by the time to find the Newton direction for minimization of a function. The convex feasibility problem is a general optimization problem in which the goal is to find any point that lies in a convex set S. We present a new cutting plane algorithm for the convex feasibility problem. Our algorithm uses the analytic center of a polytope known to contain S as the test point for feasibility. We give the first analysis of the time complexity of a cutting plane algorithm using analytic centers. Our algorithm requires $O((T+n^2 L+n^3)nL^2)$ arithmetic operations, where n is the dimension of the space, L is a parameter describing the size of S, and T is the time required to check the feasibility of a test point. Finally, we present an algorithm for the transportation problem in the plane. Our algorithm synthesizes several ideas--the two most important are scaling and common data structures from computational geometry--to achieve time complexity $O(n^{2.5}\log n \log N)$, where n is the number of nodes and N is the largest supply or demand. No currently known general transportation algorithm has better than $O(n^3)$ time complexity; the plane setting allows improvement. Issue Date: 1992 Type: Text Description: 150 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1992. URI: http://hdl.handle.net/2142/72531 Other Identifier(s): (UMI)AAI9305458 Date Available in IDEALS: 2014-12-17 Date Deposited: 1992
2018-04-20 11:11:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8086022734642029, "perplexity": 716.4464416366766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937440.13/warc/CC-MAIN-20180420100911-20180420120911-00346.warc.gz"}
https://searchandrestore.com/what-does-e-mean-in-digits/
# What does e mean in digits? ## What does e mean in digits? exponent of 10 On a calculator display, E (or e) stands for exponent of 10, and it's always followed by another number, which is the value of the exponent. For example, a calculator would show the number 25 trillion as either 2.5E13 or 2.5e13. In other words, E (or e) is a short form for scientific notation. ## What is the value of e? The exponential constant is an important mathematical constant and is given the symbol e. Its value is approximately 2.718. Is e the same as x10? It means exponential, which means multiplied by 10 to the power of the number after the "e". So 2e10, for example, means 2 * 10^10. ### How do you find e? We've learned that the number e is sometimes called Euler's number and is approximately 2.71828. Like the number pi, it is an irrational number and goes on forever. The two ways to calculate this number are by evaluating (1 + 1/n)^n as n goes to infinity and by adding up the series 1 + 1/1! + 1/2! + ... ### Why is e so special? The number e is one of the most important numbers in mathematics. e is an irrational number (it cannot be written as a simple fraction). e is the base of the Natural Logarithms (invented by John Napier). e is found in many interesting areas, so it is worth learning about. What is e to the power of x? Here e (Napier's number) has the approximate value 2.718281828 and x is the exponent. The function e raised to the power of x is its own derivative. e is a famous irrational number, also called Euler's number after Leonhard Euler. #### How do you convert E to a number? The Scientific format displays a number in exponential notation, replacing part of the number with E+n, where E (which stands for Exponent) multiplies the preceding number by 10 to the nth power. For example, a 2-decimal Scientific format displays 12345678901 as 1.23E+10, which is 1.23 times 10 to the 10th power. #### Is e related to pi? 2 Answers. These two numbers are not related. At least, they were not related at inception (π is much, much older, and goes back to the beginning of geometry, while e is a relatively young number related to a theory of limits and functional analysis). Which is the correct value for the number e? The number $e$, sometimes called the natural number, or Euler's number, is an important mathematical constant approximately equal to 2.71828. When used as the base for a logarithm, the corresponding logarithm is called the natural logarithm, and is written as $\ln (x)$. Who is the creator of the number e? e (Euler's Number) 2.7182818284590452353602874713527 (and more …) It is often called Euler's number after Leonhard Euler (pronounced "Oiler"). e is the base of the Natural Logarithms (invented by John Napier). e is found in many interesting areas, so it is worth learning about. What does the E stand for on a calculator? On a calculator display, E (or e) stands for exponent of 10, and it's always followed by another number, which is the value of the exponent. For example, a calculator would show the number 25 trillion as either 2.5E13 or 2.5e13. In other words, E (or e) is a short form for scientific notation. Which is the real number e in Algebra? The number e is an important mathematical constant, approximately equal to 2.71828. When used as the base for a logarithm, we call that logarithm the natural logarithm and write it as ln x. The natural logarithm, ln(x), is the power to which e must be raised to obtain x. ### What do you get when you use the number e?
Try it with another number yourself, say 100, what do you get? 100 Decimal Digits. Here is e to 100 decimal digits: 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274… Advanced: Use of e in Compound Interest. Often the number e appears in unexpected places. Where does the number e make an appearance? Here are a few of the places where it makes an appearance:
1. It is the base of the natural logarithm.
2. In calculus, the exponential function e^x has the unique property of being its own derivative.
3. Expressions involving e^x and e^-x combine to form the hyperbolic sine and hyperbolic cosine functions.
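The two methods described earlier on the page (the limit $(1 + 1/n)^n$ and the factorial series) are easy to try out. A small sketch, not part of the original article:

```python
# Approximating e two ways, as described above (illustration only).
import math

def e_from_limit(n: int) -> float:
    """Compound-interest style approximation: (1 + 1/n)**n."""
    return (1.0 + 1.0 / n) ** n

def e_from_series(terms: int) -> float:
    """Partial sum of 1/k! for k = 0 .. terms - 1."""
    return sum(1.0 / math.factorial(k) for k in range(terms))

print(e_from_limit(1_000_000))  # ~2.7182804..., converges slowly
print(e_from_series(15))        # ~2.718281828..., converges very quickly
print(math.e)                   # 2.718281828459045 for comparison
```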
2022-07-04 13:16:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770099878311157, "perplexity": 751.3460794630198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00169.warc.gz"}