https://www.biostars.org/p/87260/
rRNA Removal In RNA-Seq Data

**Question** (jockbanan, 9.2 years ago):

Hello everyone! I have paired-end human RNA-seq data mapped with TopHat2 and I want to perform differential expression analysis. I know my data have a rather high level of ribosomal RNA contamination, so I want to deal with it. Cuffdiff has the -M/--mask-file option, which lets me supply a "filter" .gtf file; I downloaded such a file (rRNA + tRNA + mtRNA) from the UCSC genome browser and everything worked fine. Now I want to perform the same analysis using htseq-count + DESeq2, and I wonder: is there a similarly easy way to deal with rRNA contamination in this case? After a lot of googling, I have three ideas:

a) Use htseq-count with a .gtf file that simply does not contain rRNA genes.

b) Take the .fastq files, first align the reads against the regions in my "filter.gtf", then take only the unmapped reads, map them to the reference, and use the resulting .bam for the analysis.

c) Subtract reads mapping to regions in my "filter.gtf" from my sample .bam file.

Regarding a): I don't have such a .gtf file. I use the hg19 reference downloaded from here: http://tophat.cbcb.umd.edu/igenomes.shtml which seems to contain rRNA genes. Regarding b): I tried this a while ago and had some problems; in any case this would be a rather long solution, and I want to try something easier first. Regarding c): This is my favorite. I used bedtools intersectBed with the -v option to subtract reads mapping to regions in "filter.gtf" from my .bam file. But for some read pairs this removes only one read of the pair and therefore causes htseq-count to raise the famous warning: "Read ... claims to have an aligned mate which could not be found. (Is the SAM file properly sorted?)"

So, finally, my questions: 1) Do you think c) is the right way of removing rRNA contamination? 2) If so, should I just ignore the htseq-count warnings, or should I try to somehow remove these "orphan" reads without a mate? (htseq-count treats them as reads whose mate is not mapped; but when one of the mates maps to an rRNA region, don't I want to always remove both of them?)

Tags: rna-seq, deseq

**Comment:** I like doing an initial alignment to just the rRNA sequences using bowtie2 and then doing the downstream analysis with the unmapped reads. For one thing, you can see exactly what percentage of each file aligned to rRNA. Maybe this is like b), but I'm not sure. I don't usually bother to do this unless there's a lot of rRNA contamination, though.

**Comment:** Have you ever looked at what difference it makes with respect to DEGs when you remove or don't remove rRNA reads? We are using Ribo-Zero with a rather large fraction of rRNA reads (~20-25%), so I am wondering whether removing rRNA reads improves results. Note: we are using DESeq2 to detect differentially expressed genes.

**Answer** (9.2 years ago, +5):

Option (a) is the normal way to do this. If you don't count them, they're not there, and the library sizes will be adjusted accordingly. Sure, (c) would work, but I imagine it'd be a lot easier to just use GenomicRanges (or even grep) to remove rRNA from the GTF file. Then you don't have an extra step for each aligned sample. Regarding the warnings, you might quickly check that the orphaned mates aren't mapping to a gene. I would assume not, but if you remove the mates in this case then the orphaned reads would start getting counted when they really shouldn't. There's also an option (d), which is to run everything as normal and then just remove the rRNA lines from the counts files after importing into R.
If you just make a "lines to exclude" file once, you can efficiently remove the problematic counts. This might prove to be the easiest option.

**Comment:** Thanks for the reply! Actually, I've just realized that I can simply use bedtools intersectBed with -v to directly subtract one .gtf file from another.

**Comment:** Never thought about it that way; that'd certainly do the trick as well!

**Comment:** I am using a script from our lab; my previous colleagues did it this way. But I am having trouble with the BAM file produced by bedtools when I input it into picard.jar. This does not happen with the BAM file from before the bedtools rRNA-removal step. So I will try option a). I hope there is no difference between those methods.

**Answer** (9.2 years ago, +1):

I personally use option b) when doing this. The reason is that it allows me to use multiply-mapping reads in the subsequent analysis, while making sure that these definitely do not map to rRNA. But the other options you suggest also seem valid to me.

**Answer** (Dreamer, 8 months ago):

You could just remove rRNA reads before aligning your reads to your reference genome. There are a number of rRNA read removal tools available. Recently, we also developed an rRNA read detection tool named RiboDetector (https://github.com/hzi-bifo/RiboDetector). A benchmark (https://academic.oup.com/nar/advance-article/doi/10.1093/nar/gkac112/6533611) shows that RiboDetector is the most computationally efficient and most accurate software for rRNA read removal. The analysis also suggests that rRNA reads should be removed before alignment; otherwise they can be mapped to certain coding genes that share partial sequence similarity with rRNAs. RiboDetector can be used out of the box without any database.

GPU mode:

```
ribodetector -t 20 \
  -l 100 \
  -m 10 \
  -e rrna \
  --chunk_size 256 \
```

CPU mode:

```
ribodetector_cpu -t 20 \
  -l 100 \
  -e rrna \
  --chunk_size 256 \
```
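As a minimal sketch of the GenomicRanges route suggested in the top answer for option (a): the snippet below assumes an Ensembl-style GTF whose records carry a gene_biotype attribute (the iGenomes/UCSC annotation mentioned in the question may need a different filter, e.g. on gene_id patterns), and the file names are placeholders.

```r
## Sketch of option (a): drop rRNA/tRNA records from the annotation before
## counting, so htseq-count (and hence DESeq2) never sees them.
## Assumes an Ensembl-style GTF with a "gene_biotype" attribute; file names
## are placeholders.
library(rtracklayer)

gtf  <- import("genes.gtf")                       # annotation as a GRanges object
drop <- gtf$gene_biotype %in% c("rRNA", "Mt_rRNA", "Mt_tRNA")
export(gtf[!drop], "genes.norRNA.gtf", format = "gtf")
```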
https://wviechtb.github.io/metafor/reference/print.permutest.rma.uni.html
Print method for objects of class "permutest.rma.uni".

Usage

    # S3 method for permutest.rma.uni
    print(x, digits=x$digits, signif.stars=getOption("show.signif.stars"),
          signif.legend=signif.stars, ...)

Arguments

x: an object of class "permutest.rma.uni" obtained with permutest.

digits: integer to specify the number of decimal places to which the printed results should be rounded (the default is to take the value from the object).

signif.stars: logical to specify whether p-values should be encoded visually with 'significance stars'. Defaults to the show.signif.stars slot of options.

signif.legend: logical to specify whether the legend for the 'significance stars' should be printed. Defaults to the value for signif.stars.

...: other arguments.

Details

The output includes:

• the results of the omnibus test of moderators. Suppressed if the model includes only one coefficient (e.g., only an intercept, as in the equal- and random-effects models). The p-value is based on the permutation test.

• a table with the estimated coefficients, corresponding standard errors, test statistics, p-values, and confidence interval bounds. The p-values are based on permutation tests. If permci was set to TRUE, then the permutation-based CI bounds are shown.

Value

The function does not return an object.

Author

Wolfgang Viechtbauer (wvb@metafor-project.org, https://www.metafor-project.org)

References

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03

See Also

permutest for the function to create permutest.rma.uni objects.
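A brief usage sketch tying this print method to the permutest workflow (dat.bcg is the example data set shipped with metafor):

```r
## Fit a mixed-effects meta-regression, run the permutation test, and print
## the result with the arguments documented above.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, mods = ~ ablat, data = dat)    # model with one moderator
set.seed(1234)                                    # permutations are random by default
perm <- permutest(res)                            # object of class "permutest.rma.uni"
print(perm, digits = 2, signif.stars = FALSE)     # permutation-based p-values
```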
https://bodheeprep.com/circle-questions-cat
# 10 CAT Circle [Geometry] Questions with Solutions

Question 1: In the given figure, ABCD is a cyclic quadrilateral whose side AB is a diameter of the circle through A, B and C. If $\angle ADC = 130^\circ$, find $\angle CAB$.

[1] $40^\circ$ [2] $50^\circ$ [3] $30^\circ$ [4] $130^\circ$

Option # 1

Since ABCD is a cyclic quadrilateral, $\angle ADC + \angle ABC = 180^\circ$
$\Rightarrow 130^\circ + \angle ABC = 180^\circ \Rightarrow \angle ABC = 50^\circ$.
Also, $\angle ACB = 90^\circ$ (angle in a semicircle). Therefore, in $\Delta ABC$, by the angle sum property,
$\angle ACB + \angle ABC + \angle CAB = 180^\circ \Rightarrow 90^\circ + 50^\circ + \angle CAB = 180^\circ \Rightarrow \angle CAB = 40^\circ$.

Question 2: PBA and PDC are two secants. AD is the diameter of the circle with centre O. $\angle A = 40^\circ$, $\angle P = 20^\circ$. Find the measure of $\angle DBC$.

[1] $30^\circ$ [2] $45^\circ$ [3] $50^\circ$ [4] $40^\circ$

Option # 1

In $\Delta ADP$, the exterior angle $\angle ADC = 40^\circ + 20^\circ = 60^\circ$.
Therefore $\angle ABC = \angle ADC = 60^\circ$ (angles in the same segment).
Also $\angle ABD = 90^\circ$ (angle in a semicircle, since AD is a diameter).
So $\angle DBC = \angle ABD - \angle ABC = 90^\circ - 60^\circ = 30^\circ$.

Question 3: In the given figure, O is the centre of a circle. If $\angle AOD = 140^\circ$ and $\angle CAB = 50^\circ$, what is $\angle EDB$?

[1] $70^\circ$ [2] $40^\circ$ [3] $60^\circ$ [4] $50^\circ$

Option # 4

$\angle BOD = 180^\circ - \angle AOD = 180^\circ - 140^\circ = 40^\circ$.
$OB = OD \Rightarrow \angle OBD = \angle ODB = 70^\circ$.
Also $\angle CAB + \angle BDC = 180^\circ$ (because ABCD is cyclic)
$\Rightarrow 50^\circ + 70^\circ + \angle ODC = 180^\circ \Rightarrow \angle ODC = 60^\circ$.
$\angle EDB = 180^\circ - (60^\circ + 70^\circ) = 50^\circ$.

Question 4: In the following figure, the diameter of the circle is 3 cm. AB and MN are two diameters such that MN is perpendicular to AB. In addition, CG is perpendicular to AB such that AE:EB = 1:2, and DF is perpendicular to MN such that NL:LM = 1:2. The length of DH in cm is

[1] $2\sqrt{2}-1$ [2] $\frac{2\sqrt{2}-1}{2}$ [3] $\frac{3\sqrt{2}-1}{2}$ [4] $\frac{2\sqrt{2}-1}{3}$

Option # 2

Radius $= 3/2$ cm, $AB = 3$ cm, and $AE:EB = 1:2$, so $AE = 1$ cm and $OE = 3/2 - 1 = 1/2$ cm.
$HL = 1/2$; similarly $OL = 1/2$.
Let $DH = x$, and $OD = 3/2$ (the radius). In $\Delta ODL$, by the Pythagorean theorem, $OD^2 = OL^2 + DL^2$:
$\left( \frac{3}{2} \right)^2 = \left( \frac{1}{2} \right)^2 + \left( x + \frac{1}{2} \right)^2 \Rightarrow x = \frac{2\sqrt{2}-1}{2}$

Question 5: P, Q, S, and R are points on the circumference of a circle of radius r, such that PQR is an equilateral triangle and PS is a diameter of the circle. What is the perimeter of the quadrilateral PQSR?
[1] $2r(1+\sqrt{3})$ [2] $2r(2+\sqrt{3})$ [3] $r(1+\sqrt{5})$ [4] $2r+\sqrt{3}$

Option # 1

$\angle QPO = 30^\circ$, so $\angle QOS = 60^\circ$ (angle at the centre).
Then $\angle OQS = \angle OSQ = 60^\circ$, so $QS = r$ and $\angle POQ = 120^\circ$.
By the sine rule, $\frac{\sin 30^\circ}{r} = \frac{\sin 120^\circ}{PQ}$, so $\frac{1}{2r} = \frac{\sqrt{3}}{2 \, PQ} \Rightarrow PQ = r\sqrt{3}$.
Perimeter $= r\sqrt{3} + r\sqrt{3} + r + r = 2r(\sqrt{3}+1)$.

Question 6: In the figure given below (not drawn to scale), A, B and C are three points on a circle with centre O. The chord BA is extended to a point T such that CT becomes a tangent to the circle at point C. If $\angle ATC = 30^\circ$ and $\angle ACT = 50^\circ$, then the angle $\angle BOA$ is

[1] $100^\circ$ [2] $150^\circ$ [3] $80^\circ$ [4] Cannot be determined

Option # 1

In triangle ACT, $\angle C = 50^\circ$ and $\angle T = 30^\circ$, therefore $\angle A = 100^\circ$. Applying the tangent-secant (alternate segment) theorem, $\angle B = 50^\circ$, and since $\angle CAT$ is the external angle of triangle ABC, $\angle BCA = 100^\circ - 50^\circ = 50^\circ$. Hence $\angle BOA = 2\angle BCA = 100^\circ$.

Question 7: In the figure below, the rectangle at the corner measures 10 cm × 20 cm. The corner A of the rectangle is also a point on the circumference of the circle. What is the radius of the circle in cm?

[1] 10 cm [2] 40 cm [3] 50 cm [4] None of these

Option # 3

$(x-20)^2 + (x-10)^2 = x^2$
$x^2 + 400 - 40x + x^2 + 100 - 20x = x^2$
$x^2 - 60x + 500 = 0$
$x^2 - 50x - 10x + 500 = 0$
$x(x-50) - 10(x-50) = 0$
$x = 50$ or $x = 10$.
Since the radius must exceed the rectangle's 20 cm side, x cannot be 10. Therefore x = 50.

Question 8: Given below is a circle with centre O and four points, P, Q, R and S, on the circle. The chords SQ and PR intersect each other at O, and the radius of the circle is $8\sqrt{3}$ cm. Find the area (in sq. cm) of $\Delta PSQ$.

[1] $108\sqrt{3}$ [2] $54\sqrt{3}$ [3] $81\sqrt{3}$ [4] $96\sqrt{3}$

Option # 4

$\Delta SPQ$ is right-angled at P (angle in a semicircle).
$\angle POQ = 180^\circ - 120^\circ = 60^\circ$ and $OP = OQ =$ radius $= 8\sqrt{3}$ cm, so $\Delta POQ$ is equilateral and $PQ = 8\sqrt{3}$ cm.
Now in $\Delta SPQ$, $SP = \sqrt{SQ^2 - PQ^2} = \sqrt{(2 \times 8\sqrt{3})^2 - (8\sqrt{3})^2} = 24$ cm.
So the area of $\Delta SPQ$, right-angled at P, is $\frac{1}{2} SP \times PQ = \frac{1}{2} \times 24 \times 8\sqrt{3} = 96\sqrt{3}$ sq. cm.

Question 9: In the given diagram CT is tangent at C, making an angle of $\frac{\pi}{4}$ with CD. O is the centre of the circle. CD = 10 cm. What is the perimeter of the shaded region $(\Delta AOC)$, approximately?
[1] 27 cm [2] 30 cm [3] 25 cm [4] 31 cm

Option # 1

$\angle OCT = 90^\circ$ and $\angle DCT = 45^\circ$, so $\angle OCB = 45^\circ$.
Then $\angle COB = 45^\circ$ ($\Delta BOC$ is a right-angled triangle), so $\angle AOC = 180^\circ - 45^\circ = 135^\circ$.
Now, since CD = 10 cm, $BC = 5$ cm $= OB$, so $OC = 5\sqrt{2}$ cm $= OA$.
By the cosine rule,
$AC^2 = OA^2 + OC^2 - 2\,OA \cdot OC \cos 135^\circ = 2(5\sqrt{2})^2 - 2(5\sqrt{2})^2 \times \left( -\frac{1}{\sqrt{2}} \right) = 100 + \frac{100}{\sqrt{2}} \approx 170.7$
$\Rightarrow AC \approx 13$ cm.
Perimeter of $\Delta OAC = OA + OC + AC = 5\sqrt{2} + 5\sqrt{2} + 13 \approx 27$ cm.

Question 10: The radius of the incircle of a triangle is 4 cm, and the segments into which one side is divided by the point of contact are 6 cm and 8 cm. The length of the shortest side of the triangle is

[1] 12 cm [2] 15 cm [3] 13 cm [4] 14 cm

Option # 3

BD = BE = 6 cm, so AB = (x + 6) cm, BC = (6 + 8) cm = 14 cm, and AC = (x + 8) cm.
Hence $S = \frac{a+b+c}{2} = \frac{2x+28}{2} = x + 14$.
Now ar($\Delta ABC$) = ar($\Delta OBC$) + ar($\Delta OCA$) + ar($\Delta OAB$)
$\Rightarrow \sqrt{S(S-a)(S-b)(S-c)} = \frac{1}{2} r (BC + CA + AB) = rS$
$\Rightarrow 4\sqrt{3x^2 + 42x} = 4(14 + x)$
$\Rightarrow 2x^2 + 14x - 196 = 0$, i.e. $x^2 + 7x - 98 = 0$.
Therefore x = 7 (x = −14 is not possible).
So the shortest side = 6 + 7 = 13 cm. (Check: the sides are 13, 14 and 15, giving $S = 21$ and area $\sqrt{21 \times 7 \times 6 \times 8} = 84 = rS = 4 \times 21$.)
http://physics.stackexchange.com/tags/mssm/new
# Tag Info

Maybe I've understood the problem. In the minimum we have (only one scalar field for simplicity): $$\frac{dV}{d\phi}=0=\phi\left[m^2 - kgq + g^2q^2\phi^2\right]$$ If $m=0$, we are forced to choose a Mexican-hat potential with one maximum at $\phi=0$ and two degenerate minima, so we are forced to have a nonzero vev for the scalar fields. If these scalar ...
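For concreteness, one quartic potential consistent with the quoted derivative (the overall normalization is an assumption, not taken from the answer) is

$$V(\phi) = \frac{1}{2}\left(m^{2} - kgq\right)\phi^{2} + \frac{1}{4}\,g^{2}q^{2}\,\phi^{4}, \qquad \frac{dV}{d\phi} = \phi\left[m^{2} - kgq + g^{2}q^{2}\phi^{2}\right].$$

Setting the bracket to zero gives the nontrivial minima $\phi^2 = (kgq - m^2)/(g^2q^2)$, which exist precisely when $kgq > m^2$; for $m = 0$ this reduces to the Mexican-hat case above, with $\phi^2 = k/(gq)$.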
https://mathoverflow.net/questions/278888/volume-form-induced-by-a-finsler-metric
# Volume form induced by a Finsler metric

I'm interested in knowing more about the volume form canonically induced by a Finsler metric. I've found some reasoning about it in this article http://www.ams.org/journals/bull/1950-56-01/S0002-9904-1950-09332-X/home.html but I was wondering if someone could point out a more recent source with the results explained in a clearer way.

• There are in fact at least two different possible "canonical" volume forms. See library.msri.org/books/Book50/files/02AT.pdf – Deane Yang Aug 16 '17 at 14:56
• @User28341 is there a norm on $\mathbb{R}^2$ which does not satisfy the parallelogram equality, but every isometry of this norm preserves the standard volume form? – Ali Taghavi Aug 16 '17 at 17:14
• @AliTaghavi: all norms satisfy this. Every isometry for a norm on $\mathbb{R}^n$ is an isometry for the Euclidean metric associated to its John (or Legendre, or Binet) ellipsoid and therefore preserves the standard volume form. – alvarezpaiva Aug 18 '17 at 20:15
• @alvarezpaiva Thanks for this very interesting concept, the "John ellipsoid". BTW, is there a norm on $\mathbb{R}^2$ whose isometry group is the whole $\pm SL(2,\mathbb{R})$? – Ali Taghavi Aug 19 '17 at 9:47
• @AliTaghavi, no, by the same reasoning: isometries of a finite-dimensional normed space are isometries of an adapted Euclidean metric. They are all Euclidean transformations. – alvarezpaiva Aug 19 '17 at 12:50

In fact there are very many ways to provide a Finsler manifold with a "canonical" volume. Personally, I've gone from thinking that this is a nuisance and trying to pin down which one is really the best, to thinking that this is part of the landscape and should be accepted.

There is a very good notion of volume that goes by the name of "Holmes-Thompson" volume, but it was also introduced by Dazur and, in my opinion, half-heartedly studied by Busemann. You can find almost all that is known about it in the book Minkowski Geometry by Thompson and in the paper Deane Yang linked to in his comment.

In the paper by Busemann that you mention, he claims the Hausdorff measure of the Finsler manifold, viewed as a metric space, is the right notion of volume. It is a very interesting notion, of course, but it has its quirks: totally geodesic submanifolds are not minimal, integral geometry goes out the window, some volume-filling results fail, etc. It is also not great for non-reversible Finsler metrics. The Holmes-Thompson volume is nicer in this respect too, because it is more sensitive to non-reversibility.

Although it is too long to explain here, my viewpoint has changed and I think that one can and should consider, always with some measure and with a lot of good taste, all the natural notions of volume. Sometimes keeping the same questions and changing the notion of volume opens up new vistas and allows you to tie Finsler geometry to other fields, which is what I think is needed most. Check for example this paper. Behind the paper is the idea that there is a different geometry of numbers for every natural notion of volume. The classical results are for the Hausdorff measure, and this paper makes the case that for the Holmes-Thompson measure you also get an interesting theory (which was, by the way, foreseen by Mahler).
https://read.dukeupress.edu/jhppl/article/44/3/381/137546/Patterns-and-Mechanisms-of-Political-Participation
## Abstract

Context: Previous research has shown that Americans with disabilities turn out to vote at significantly lower levels than people without disabilities, even after accounting for demographic and other situational factors related to political involvement. The authors examined the potential mechanisms underlying their low turnout. They asked whether people with disabilities exhibit participatory attitudes and behaviors at levels commensurate with their other individual-level characteristics.

Methods: The present study conducted descriptive and predictive analyses on data from the 2012 and 2016 American National Election Studies.

Findings: Despite low levels of turnout in recent elections, people with disabilities were just as participatory, if not more so, when considering alternative forms of political engagement. The authors' analyses indicate that, while disability status had no bearing on political efficacy or partisan strength, those with disabilities reported being even more interested in politics than those without disabilities. Evidence is provided that depressed turnout rates among those with disabilities may be due in part to lower levels of news attentiveness and political knowledge and to negative perceptions of government.

Conclusions: The psychological impacts and behavioral consequences that emerge from possessing a disability and the broader role of disability in the American political context are multifaceted. This area of research would benefit from future studies that examine a variety of electoral contexts.

In April 2018 Tammy Duckworth, Democratic senator for the state of Illinois, made headlines as she cast a vote on the Senate floor while holding her 10-day-old baby, Maile (Stolberg 2018). Not only was Ms. Duckworth the first senator to give birth during her tenure in office, but she was also the first disabled woman in the Senate, after losing both legs in Iraq (Biography.com 2018). Although Senator Duckworth occupies a highly visible position, she represents just one of millions in the United States living with a disability. In fact, the 2010 Census estimates that 56.7 million Americans, or 18.7% of the noninstitutionalized population, live with a disability (Brault 2012). This population is important for the study and practice of American politics, particularly as the number of people with disabilities continues to rise due to aging (National Institutes of Health 2010). Due to population growth, the political status of people with disabilities is in flux not only in the United States but also in other countries around the world. The World Health Organization estimates that about 15% of the global population lives with a disability, making them members of the "world's largest minority" (United Nations 2017).

We examined what scholars know about the voting habits of those with disabilities and assessed whether these patterns are in line with other forms of political engagement. Specifically, we asked whether attitudinal measures can be leveraged to shed light on the behavioral outcomes of individuals with disabilities. The goal of this research was multifaceted, with results contributing to the study of American democracy, political behavior, minority political incorporation, and identity politics. Theoretically, by offering more evidence about the engagement of the large, diverse population of persons with disabilities, this research further refines theories of political behavior.
Empirically, this project tested how disability relates to a variety of traditional predictors of behavior and preferences in the United States. The broader scope of this research speaks to the role of health in facilitating or impeding political participation.

## Prior Literature

Existing research paints a clear, if bleak, portrait of political participation among Americans with disabilities. Scholars have documented sizable voter turnout gaps between people with and without disabilities. Schur and Kruse (2000), focusing on voters with spinal cord injuries in New Jersey, found that people with disabilities were 10% less likely to vote in 1992. In an analysis of data from the 1998 midterm election, Schur et al. (2002) found a 20% voter turnout gap between those with and without disabilities. Using data from the Current Population Survey (CPS), Hall and Alvarez (2012) report that, relative to people without a disability, people with disabilities were 7% less likely to vote in 2008 and 3% less likely in 2010. Also drawing on data from the CPS and other surveys, including the 2006 General Social Survey (GSS), Schur and Adya (2013) and Schur and Kruse (2014) corroborated these findings. Schur, Ameri, and Adya (2017) added to this line of research by presenting a 5.7% voting disparity in 2012, again noting less political involvement among those with disabilities.

When examining eight different types of political activities, Schur, Shields, and Schriner (2005) again found low participation among people with disabilities. However, as with previous studies, the authors omitted key explanatory factors. In particular, exposure to political news, strength of partisanship, and strength of ideology all tend to be stronger and better developed with age, factors that were not given much consideration. More recent scholarship on the topic from Schur and Adya (2013) examined multiple data sets to analyze the participatory practices of Americans with disabilities, concluding that people with disabilities are significantly less likely to vote. They also showed that differences between people with and without a disability diminish once education is controlled for (Schur and Adya 2013). However, as in Schur, Shields, and Schriner (2005), the Schur and Adya (2013) study omitted news exposure, partisanship, and ideology as key control variables. In their report to the US Election Assistance Commission, Hall and Alvarez (2012) also provided recent data showing people with disabilities were less likely than those without a disability to participate in politics, though regression analyses were not reported.

How do behavior scholars make sense of these findings? Motivational factors of political participation have always been of interest to political scientists. With Downs's (1957) paradox of voting, Riker and Ordeshook's (1968) addition of the "D term," and Campbell et al.'s (1960) funnel of causality, social scientists have seemingly always sought to explore the internal origins of external behavior. Yet the root cause of the turnout disparity between individuals with and without disabilities remains largely elusive to disability scholars. Earlier research identified self-sufficiency as a mechanism behind the lower voting rates of people with disabilities, emphasizing the types of employment and mobility barriers that people with disabilities face (Schur and Kruse 2000). Potential policy solutions that might help with self-sufficiency problems are ones that increase the employment of people with disabilities.
This is because "along with enhancing economic self-sufficiency and social integration, employment may also help this important segment of the population become more active citizens" (Schur and Kruse 2000: 586). Subsequent research from Schur et al. (2002) pointed in a similar direction and, related to self-sufficiency, also called for more research into how "major life-transitions" affect people with disabilities differently than people without disabilities. Indeed, more recent work from Haselswerdt and Michener (2018) indicates that large-scale changes in health insurance policy, particularly the loss of one's insurance coverage, have a negative impact on political involvement.

In addition to suggesting ways to increase self-sufficiency via increases in the employment rates of those with disabilities, scholars have also focused on election administration solutions. Schur et al. (2002) revealed that voter turnout might be depressed by actual and expected problems with polling place accessibility. In response to problems with voting technology in 2000, the Help America Vote Act was passed in 2002 to update voting machines. Notably, the Help America Vote Act contains a number of provisions relating to polling place accessibility for people with disabilities, though in practice it needs more rigorous enforcement (Schur and Adya 2013). Independent of voting accessibility on Election Day, other research has recommended better options for people with disabilities in terms of convenience voting, convenience registration, and ballot simplification (Hall and Alvarez 2012; Miller and Powell 2016; Schur and Adya 2013; Schur and Kruse 2014). These measures are particularly important because, as Schur, Ameri, and Adya (2017) found, when those with disabilities experience voting difficulties they are more likely to perceive group stigmatization or to hold negative perceptions about their group's political influence, which in turn affects willingness to vote. Although accessibility and administrative issues are outside the scope of the current project, they certainly may play a role in shaping political efficacy and/or attitudes toward government more broadly.

## Current Project

The present project drew on data from the 2012 and 2016 American National Election Studies (ANES). Analysis of the 2012 ANES showed that people without disabilities reported voting at a rate of 81%, while people with a disability reported a voter turnout rate of 69%, a 12% voter turnout gap for that year. The 2016 ANES reveals a turnout gap of 12% again, with those not reporting disabilities and those reporting disabilities voting at rates of 87% and 75%, respectively. While self-reported voter turnout rates are likely exaggerated (Harbaugh 1996), a wealth of empirical evidence in the voting behavior literature consistently suggests that people with disabilities are a group with untapped political potential.

What are the causal mechanisms that link having a disability to lower levels of voter turnout? To what extent do those with disabilities engage with politics, beyond measures of voting? Conclusive answers to these questions remain difficult to obtain. Here we explore whether, commensurate with the pattern of low voter turnout, people with disabilities also exhibit low levels of political involvement across a variety of attitudinal and engagement measures. Our hypotheses propose several paths of influence.
### Hypotheses

#### Psychological Resource Hypothesis

Ojeda (2015) argues that, beyond the traditional resource model of time, money, and civic skills (Brady, Verba, and Schlozman 1995), political participation necessitates both physical and mental exertion. That is to say, the political resources needed to successfully participate in a representative democracy are just as much (if not more so) physical and psychological as they are material. Following political events requires a significant amount of attentiveness and psychological engagement with the topic, a task that may not be easily attainable for those dealing with ill health on a daily basis. Living with a physical disability is taxing on one's mental well-being, much in the same way that living with a mental disability is taxing on one's physical health. Research has shown physical and mental health to have reciprocal effects on each other (Lenze et al. 2001; Schreurs, de Ridder, and Bensing 2002). To the extent that health-related concerns take precedence in day-to-day life, we should expect individuals with disabilities to report lower levels of political attentiveness, political interest, and political knowledge than individuals without a disability.

#### Political Conviction Hypothesis

A wealth of empirical research has demonstrated the powerful relationship between strength of political conviction and political engagement. In this project, we conceived of political conviction as strength of political ideology and strength of partisanship. Stronger ideologues, those who report a strong conservative or strong liberal leaning, are not only more likely to vote (Palfrey and Poole 1987) but are also more likely to participate in a variety of political activities (Converse 1964) than are their more moderate counterparts. The authors of The American Voter Revisited (Lewis-Beck et al. 2008: 207) noted that "ideology summarizes a person's overall stance toward the political world. . . . An ideology can also give political meaning to an enormous variety of observations, events, and experiences that fall outside the immediate realm of politics." Similarly, canonical models of voting have argued that stronger partisans experience greater levels of political engagement than those with weaker or moderate partisan inclinations (Campbell et al. 1960). In fact, more recent studies have found that partisanship not only exerts influence on participation but also structures one's political identity (Bartels 2000; Green, Palmquist, and Schickler 2004; Huddy, Mason, and Aarøe 2015).

We investigated whether these relationships hold for individuals with disabilities. Our political conviction hypothesis expects that individuals with disabilities have lower documented levels of political involvement due to weaker (i.e., more moderate) political convictions. Living with a disability and identifying as a disabled individual introduce a unique multidimensionality to one's sense of self and therefore to one's political convictions.1 If ideology is a summary judgment (Lewis-Beck et al. 2008), then ideological strength is the degree of confidence in that judgment. As a sense of self or social identity, disability status might bolster or attenuate one's political convictions depending on the salience of those conditions and how they fit with one's overall worldview.
For example, a disabled individual who generally supports small-government ideals might view providing government benefits for those with documented chronic illness, visual impairments, or limited mobility as an exception. In short, disability status presents another layer of one's social identity, a layer that may complicate political decision making. Such crosscutting identities among those with disabilities, we believe, could induce conflicting policy or candidate preferences that in turn lend themselves toward moderate political convictions (Treier and Hillygus 2009).

#### Political Efficacy Hypothesis

Political behavior scholars are well familiar with the influential nature of political efficacy. Campbell, Gurin, and Miller (1954: 187) identified efficacy as "the feeling that individual political action does have, or can have, an impact upon the political process, i.e. that it is worthwhile to perform one's civic duties. It is the feeling that the individual citizen can play a part in bringing about change." Political efficacy then evolved as conceptually two-dimensional: internal and external. Lane (1959: 149) referred to internal efficacy as "the image of the self as effective" and external efficacy as "the image of democratic government as responsive to the people." Scholars have since presented empirical evidence supporting a two-dimensional notion of efficacy (Converse 1972; Coleman and Davis 1976; Balch 1974). The more politically efficacious individuals feel, the more likely they are to engage with politics, perhaps because efficacy conveys a sense of personal control. Relatedly, Schur, Shields, and Schriner (2003) found that people with disabilities have significantly lower levels of both external and internal political efficacy compared to those without disabilities. In line with this and other previous findings (Schur 1998; Papadopoulos, Montgomery, and Chronopoulou 2013), and because disabilities are generally not bestowed on individuals by choice, we anticipated that people with disabilities would feel a minimized sense of control. One's experience with disability status, therefore, is expected to manifest in lower levels of both internal and external political efficacy.

#### Perceptions of Government Hypothesis

While lower levels of external political efficacy might indeed indicate more negative perceptions of government and public officials, we analyzed each as a separate construct. As stated, we expected political interest, attentiveness, and political knowledge to be lower among those with disabilities than among those without disabilities. We expected that this disinterest, or "exiting of the system," may be precipitated by (a) negative experiences with government and/or (b) a lack of perceived governmental representation. Throughout the process of securing legal disability status or filing for disability benefits, those with disabilities may have direct experiences with the unpleasantness of governmental red tape. Alternatively, those with disabilities might simply feel that government officials do not represent them or their interests. That is to say, elements of both descriptive representation, whereby elected officials possess physical traits similar to their constituency, and substantive representation, whereby elected officials pursue interests pertinent to their constituency, may be perceived as missing by individuals with disabilities (Wright 2016).
Consider, for example, recent work by Ojeda and Slaughter (2018) demonstrating that the negative relationship between depression and turnout is attenuated in the presence of a coethnic representative, particularly for black men. Although we are not able to examine specific causal pathways, prior research has noted lower levels of trust in government and more cynical assessments of government performance among those with disabilities, especially in the area of managing unemployment (Schur and Adya 2013). In our study, we expected individuals living with disabilities to report lower levels of presidential and congressional approval and to report higher levels of perceived government corruption than individuals living without disabilities.

### Sources of Data and Measures

The data for this project come from the 2012 and 2016 ANES, two nationally representative, cross-sectional data sets. In this article we report a series of quantitative analyses in which disability served as the primary independent variable of interest. In line with our four hypotheses, key dependent variables included news attentiveness, political knowledge, interest in politics, strength of ideology, strength of partisanship, political efficacy, and perceptions of government. Further, the data were drawn from both the pre- and postelection waves and were weighted by the full sample weight. Note that all of our analyses were restricted to US citizens 18 years of age or older. Please see appendix A for complete question wording and coding of all variables.

#### Measuring Disability

With regard to our independent variable, there is no consensus on how to define and quantify who has a disability, even among those who have made it their life's work. However, in her seminal work The Disabled State (1984), Stone noted that pressures for expanding the concept of disability have come for years from the citizens who seek aid, the workers who make eligibility decisions, and the policy makers who set standards related to disability programs. Legal definitions of disability in the United States vary by state and also between state law and federal law. Further, international organizations, such as the United Nations, have constructed yet additional ways of defining the population of people with disabilities. Laws that require a definition of the population of people with disabilities share one important commonality: the definition presents disability as a binary concept. That is, either one has a disability, and is perhaps eligible for benefits under the law, or one does not have a disability and thus does not have access to such benefits.

In attempts to measure the population of people with disabilities, surveys present a multitude of indicators of disability, both objective and subjective. To measure the population eligible for reasonable accommodation under the 1990 Americans with Disabilities Act, for example, one could ask whether or not the respondent has a record of a "physical or mental impairment that substantially limits one or more major life activities" (2019: n.p.). Other survey measures gauge disability by asking questions related to one's employment status or by inquiring how many days per month mental or physical disabilities interfered with one's routine activities. More detailed surveys may even incorporate questions that allow respondents to specify disability by type and severity. The GSS, for example, did this in a specialized module in 2006, and the CPS also regularly asks about types of disabilities.
However, the GSS has not implemented the same module since 2006, and attitudinal measures related to politics in the CPS are limited. In line with previous research (Miller and Powell 2016; Schur, Ameri, and Adya 2017), in this study we used a binary operationalization of disability as our main independent variable. In both surveys we used respondents' preelection employment status to gauge disability. Those who indicated being "permanently disabled" were coded as 1, and all other employment statuses were coded as 0.2 Using this operationalization, we found nearly 7% of the 2012 ANES sample (n = 394) and roughly 4% of the 2016 ANES (n = 182) to have a disability.

While standard in the field of disability research, identifying disability according to employment status is conceivably fraught with measurement error. To gain some leverage on this potential for error and to differentiate the effects of employment from disability status, we included three additional comparison groups in our analyses: (a) employed persons, (b) retired persons, and (c) other unemployed persons. Employed individuals were those who indicated their employment status as "working now" (n = 3,095 in 2012, n = 2,547 in 2016). Retired individuals indicated their current employment status as "retired" (n = 1,315 in 2012, n = 922 in 2016). Other unemployed individuals were those who selected "unemployed," "temporarily laid off," "homemaker," or "student" as their current employment status (n = 1,097 in 2012, n = 604 in 2016).

#### Dependent Variables

To explore the psychological resource hypothesis, measures of political interest, news attentiveness, and political knowledge were required. Political interest asks individuals how often they pay attention to what is going on in government and politics, with responses ranging from never to always. Although question wording and response options varied somewhat between the 2012 and 2016 ANES, our measures of news attentiveness generally gauged the extent to which respondents reported following politics and campaigns through four primary mediums: Internet, television, radio, and newspapers. Political knowledge was assessed using an index of seven questions in the 2012 data set and four questions in the 2016 data set related to both current and general knowledge about US government. Our political conviction hypothesis was tested based on a folded, four-point ideological scale ranging from moderate to extremely liberal/conservative and a folded four-point partisanship scale ranging from independent to strong Democrat/Republican (see appendix A).

To investigate our political efficacy hypothesis, we included internal efficacy, which measures the extent to which one understands what is going on in government, and external efficacy, which measures the extent to which one feels that one can affect government. In 2012, the ANES offered four different question wordings of both internal and external political efficacy (each asked of half the sample); the 2016 ANES efficacy measures were composed of just two questions (one internal and one external) asked of the entire sample. Finally, we relied on three measures to evaluate the perceptions of government hypothesis: approval of Obama, approval of Congress, and evaluations of government corruption. Questions related to approval asked how each is handling his or her job, whereas measures of government corruption asked respondents to assess how many people in government are corrupt, ranging from none to all (see appendix A).
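As a concrete illustration of the employment-based coding described under Measuring Disability, a minimal sketch in R follows; the data frame anes, the variable emp_status, and its labels are hypothetical stand-ins for the ANES items.

```r
## Sketch of the binary disability coding and the three comparison groups.
## "anes" and "emp_status" are hypothetical names for the survey data frame
## and its preelection employment-status item.
anes$disabled    <- as.integer(anes$emp_status == "permanently disabled")
anes$retired     <- as.integer(anes$emp_status == "retired")
anes$other_unemp <- as.integer(anes$emp_status %in%
                               c("unemployed", "temporarily laid off",
                                 "homemaker", "student"))
anes$employed    <- as.integer(anes$emp_status == "working now")  # reference category
```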
#### Demographic and Political Controls

Standard demographic controls were employed within all of our inferential models: age, gender, marital status, race, ethnicity, religiosity, education, income, and geographic location (the South). Age is widely known to be an important predictor of political behavior (Verba and Nie 1972; Rosenstone and Hansen 1993), and indeed, disability status specifically addresses some of the reasons that an elderly person may be less likely to participate. Schur and Adya (2013) found age to be a significant predictor of participation in their study of the political participation levels of people with disabilities. Research has shown that, as with age, gender influences one's relationship with politics. With regard to political efficacy in particular, Verba, Burns, and Schlozman (1997) showed that men display significantly higher levels, and Schur, Shields, and Schriner (2003) observed the same in their study of people with disabilities. Relatedly, research shows that marital status influences political engagement: married people tend to be more interested in politics than unmarried people, and married people tend to engage with politics together (Verba, Burns, and Schlozman 1997; Wolfinger and Rosenstone 1980; Leighley and Nagler 2014).

As a large body of empirical research attests, political deliberation and participation largely take place among those with higher socioeconomic status (Schattschneider 1960; Verba and Nie 1972; Wolfinger and Rosenstone 1980; Rosenstone and Hansen 1993; Verba, Schlozman, and Brady 1995; Schlozman, Verba, and Brady 2012; Leighley and Nagler 2014). Due to this effect, and particularly due to our operationalization of disability according to employment status, we included household income and education as controls. To account for race and ethnicity, we included two binary variables: whether or not the respondent identified as black, and whether or not the respondent identified as Hispanic. Previous research has shown that, on the whole, blacks tend to exhibit stable and cohesive levels of political conviction, generally identifying as Democrats and liberals (Tate 1993, 2010; Black 2004). Hispanic political behavior research uncovers a similar pattern: many Hispanic Americans identify as Democrats and liberals and tend to vote along these lines (Lopez and Taylor 2012). Relatedly, we included religiosity as a control variable (a) because of the well-documented connection between it and race/ethnicity (Cox and Jones 2017) and (b) because religiosity is also a predictor of political engagement (Smith and Walker 2012).

An additional demographic item we controlled for was whether or not the respondent lived in the South (see appendix A for coding). We included this variable due to the South's history of voting discrimination, which once necessitated special coverage under the Voting Rights Act (Overton 2006; Hasen 2012; Wang 2012); the more recent removal of such coverage (Shelby County v. Holder, 570 U.S. 529, 2013) (Blacksher and Guinier 2014); and empirical evidence demonstrating the relationship among southern states, voter identification laws, and decreased levels of voter turnout (Hajnal, Lajevardi, and Nielson 2017). Additionally, we controlled for the South due to its relationship with disability status itself: 46% of permanently disabled individuals reported living in the South in the 2012 ANES, and 28% in the 2016 ANES.
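A hedged sketch of a weighted turnout model with these demographic controls and the employment dummies follows, using the survey package; all variable names, including the weight, are hypothetical stand-ins for the ANES items, and this is an illustration of the kind of specification described, not the authors' exact code.

```r
## Minimal sketch of a weighted "demographics only" turnout model.
## All variable names are hypothetical stand-ins for the ANES items.
library(survey)

des <- svydesign(ids = ~1, weights = ~full_weight, data = anes)
m_demo <- svyglm(voted ~ disabled + retired + other_unemp +      # employment dummies
                   age + female + married + black + hispanic +
                   religiosity + education + income + south,
                 design = des, family = quasibinomial())         # weighted logit
summary(m_demo)
```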
As described earlier, party identification and ideological orientation are paramount in predicting an individual's political involvement and perceptions of government. Thus, all model specifications accounted for these two variables. Other politically relevant controls in our models included strength of partisan identification, strength of ideological orientation, political interest, and political knowledge. See appendix A for question wording and coding details.

## Results

Our results begin with a replication of extant patterns of turnout and political engagement among those with and without disabilities. As previously noted, turnout among individuals with disabilities was rather dismal in both the 2012 and 2016 general elections (fig. 1). Disabled individuals were 11% less likely to vote than employed persons in both years, but only about 3% and 7% less likely to vote than other unemployed individuals (i.e., nonemployed, nondisabled) in 2012 and 2016, respectively. With regard to voting and campaign-related activities beyond voting (e.g., attending a political rally, donating to a candidate) (fig. 2), retired persons demonstrated the most political engagement of any employment group. While patterns of campaign participation among people with disabilities were relatively on par with those of employed and other unemployed individuals (fig. 2), average engagement among disabled individuals slightly surpassed these groups in 2016.

Table 1 provides two specifications, demographics only and demographics plus political variables, for models predicting voter turnout and campaign participation. Dummy variables for disability status, retired, and other unemployed were included in each model. The various model specifications in table 1 allow us to gauge the degree to which significant effects for disability status were altered when accounting for different types of variables, such as education, age, political interest, and partisan strength. When controlling only for demographic factors, disability status was not a statistically significant predictor of turnout, though in 2016 disabled individuals were significantly more likely than employed individuals (the excluded category) to participate in other ways. The same can be said of other unemployed individuals in both 2012 and 2016. While the addition of demographic factors largely had no effect on retired persons' inclination to vote or participate, we did observe that the positive, statistically significant effect on voting in 2012 for these individuals disappears in our 2016 models. In other words, retired persons were no more or less likely to vote in 2016 than employed persons.

In contrast with the demographics-only models, the fully controlled analyses (i.e., the demographic and political models in table 1) indicated significantly lower turnout in 2012 among disabled individuals (p = 0.019). While the same relationship remains negative in 2016, the effect is only marginal (p = 0.099). With regard to campaign-related activities, however, data from the 2016 ANES show that those with disabilities were significantly more likely than employed persons to become engaged with campaign-related activities beyond turnout (p = 0.037), all else held constant. Interestingly, while retired persons remained more participatory than employed persons in both general elections, the positive and statistically significant relationship between this group and turnout dissipated as political controls were factored into the model (p = 0.081 in 2012, p = 0.492 in 2016).
Likewise, those who were unemployed were no more or less likely to vote, relative to employed individuals, though this group did report engaging in other types of electoral behavior (p = 0.005 in 2012, p = 0.017 in 2016). We aimed to identify changes in the magnitude and direction of the effect of disability status on turnout and participation, given the various model specifications outlined in table 1. Yet we believed it was most fruitful and, indeed, most accurate to analyze how disability status “works” in the context of both demographic and political variables. That is to say, we wished to explore the direct effect of disability status on political attitudes and behaviors, all else being equal. Therefore, we emphasized fully specified models as we evaluated each of our hypotheses.

### Psychological Resources

Our psychological resource hypothesis contends that the gap in voter turnout between people with and without disabilities might be explained by lower levels of attentiveness to, interest in, and knowledge about politics. The results are presented in figure 3 and table 2. Figure 3 displays the main effect of disability status on each dependent variable, controlling for the retired and other unemployed dummy variables, as well as individual differences in demographic and political factors. Across news attentiveness, political knowledge, and political interest, disability status had a more muted effect in the 2016 election than in 2012. We also observed that the effect size and directionality of disability on political knowledge changed dramatically between 2012 and 2016. Unstandardized coefficients in table 2, as well as the additional comparison categories, put these changes into perspective. In general, those with disabilities were less attentive to political news than were those who were employed (p = 0.000 in 2012, p = 0.295 in 2016), despite showing increased levels of political interest (p = 0.001 in 2012, p = 0.094 in 2016). While the coefficient directionality of disability status on news and interest remained unchanged across ANES years, results for other categories of employment showed volatility (table 2). For instance, retired individuals were significantly less likely than employed individuals to be attentive to political news in 2012 (p = 0.000), while we found no such effect for 2016. Compared with those who were presently working, individuals identifying themselves as disabled showed less knowledge of politics in 2012 (p = 0.001). Curiously, in 2016 all employment categories but one failed to present a statistically significant effect on attentiveness, knowledge, or interest. Based on these results we might speculate that in this election year employment status of any type was largely unrelated to one's psychological involvement with the election, its candidates, and politics more broadly. In total, the results of the tests of our psychological resource hypothesis are somewhat muddied. There is evidence suggesting that the voting gap between disabled and nondisabled individuals rests on lower levels of news attentiveness and, at least in 2012, lower levels of political knowledge. However, disability status was consistently predictive of higher levels of political interest, which may indeed factor into this group's involvement in campaign activities aside from voting. We added to these results by next turning to our expectations regarding disability status and political efficacy.

### Political Efficacy

Results of our analyses of our political efficacy hypothesis are shown in table 3.
(An accompanying figure containing the effect size of disability on efficacy is not presented due to a lack of statistical significance across all models.) Our political efficacy hypothesis expected disabled individuals to experience lower levels of both internal and external political efficacy, which may explain dampened turnout rates. Despite failing to reach statistical significance, the patterns of unstandardized regression coefficients indicate higher levels of internal efficacy among disabled persons, compared to employed persons, in both 2012 and 2016 (p = 0.437, p = 0.562, respectively). With regard to external efficacy (e.g., the sense that government is responsive to one's preferences), having a disability exhibited a positive relationship in 2012 (p = 0.725) and a negative relationship in 2016 (p = 0.389). As indicated in table 3, other unemployed individuals reported feeling significantly more internally efficacious (e.g., that one understands and is well qualified to participate in politics) than employed individuals in 2016 (p = 0.011). On the whole, however, these results suggest that political efficacy does not vary greatly between those with and without disabilities—or between any of the employment categories, for that matter. Self-assessments of internal and external efficacy do not appear to be a primary driver of the voting gap.

### Political Conviction

Our political conviction hypothesis suggests that people with disabilities may experience lower levels of political involvement due to weak ideological orientations and weak partisan affiliations. Figure 4 shows the main effect of disability status on partisan and ideological strength across 2012 and 2016 ANES respondents. Coefficient estimates from these fully specified models are shown in table 4. The effect of disability on partisan strength is minimal, though its effect on ideological strength is substantial and subject to fluctuation (fig. 4). Such opposing findings might not be altogether surprising, given that traditional theories of political attitudes (Converse 1964) assert that partisan affiliation and ideological orientation are not equivalent constructs. Compared with employed individuals, those who had a disability and those who were otherwise unemployed were generally less partisan in 2012 and 2016. On the other hand, despite minimal effects on strength of partisanship, retired individuals (p = 0.245 in 2012, p = 0.036 in 2016) and other unemployed individuals (p = 0.002 in 2012, p = 0.018 in 2016) showed stronger ideological convictions than did employed individuals. Of considerable importance is that disability status relates to strong liberal or conservative attitudes in 2012 and weak liberal or conservative attitudes in 2016 (fig. 4, table 4). The retired and other unemployed categories do not experience such volatility across years. Compared with those presently working, retired and other unemployed individuals consistently exhibit strong ideological orientations. A breakdown of ideological strength by disability status showed that approximately 40% of disabled persons reported moderate leanings, 24% reported weak leanings, 26% reported liberal or conservative leanings, and 10% reported strong leanings in both the 2012 and 2016 ANES data. While these raw percentages remained relatively unmoved from year to year, other demographic and political controls within our models exhibited volatility (see appendix table B1).
Specifically, the controls for female, married, black, and education—all statistically significant predictors of ideological strength in the 2012 models—were no longer significant in 2016 (p = 0.158, p = 0.746, p = 0.820, and p = 0.454, respectively). Such dramatic changes in the predictive power of our control variables may have some bearing on the opposing coefficient estimates observed in figure 4 (see note 3). Indeed, when bivariate analyses of disability status on ideological strength are performed (see appendix table B2), the coefficients in both the 2012 and 2016 models revealed a clear negative relationship, though only the 2016 results were significant (p = 0.027). In sum, when demographic and political variables are not accounted for, individuals with disabilities appear to report weaker or more moderate ideological orientations. Other explanations for our findings likely lie within the dynamics of the 2012 and 2016 campaigns and candidates themselves, discussed next. The results for our political conviction hypothesis, therefore, suggest that partisan strength was altogether not a significant component in explaining low turnout rates among those with disabilities, though ideological strength may matter depending on the electoral context.

### Perceptions of Government

Finally, we considered the relationship between disability status and negative perceptions of government. The perceptions of government hypothesis expected persons with disabilities to report lower approval ratings of government and greater perceptions of government corruption. In general, the ANES data showed that views of government among those with disabilities grew increasingly pessimistic between 2012 and 2016. While in 2012 disability status was related to greater approval of Obama (p = 0.110) and Congress (p = 0.030) and lower perceived government corruption (p = 0.038), in 2016 approval ratings fell and perceptions of corruption increased (fig. 5). In 2016, individuals with disabilities were significantly less approving of Obama (p = 0.028) and significantly more likely to view government as corrupt (p = 0.011) than those currently working. Comparisons across the other employment categories did not yield such instability (table 5). Though the results were not statistically significant, retired individuals tended to approve of Obama in both 2012 and 2016, while other unemployed individuals tended to disapprove of Obama during this time. Likewise, other unemployed individuals were generally more disapproving of the way Congress handled its job in 2012 and 2016, compared with the excluded category of employed persons. When it comes to making sense of the perceptions of government hypothesis, our results indicate that the effect of disability on all three dependent variables changed direction and statistical significance across years (table 5). As in table 4, estimates of disability status on these three variables were heavily influenced by (volatility among) the accompanying controls. With regard to perceptions of government corruption, for instance, blacks were less likely to view government as corrupt in 2012 (p = 0.009), though no relationship existed between these two variables in 2016 (p = 0.983) (see appendix table B3). Similarly, in 2012 higher-income individuals were less likely to view government as corrupt (p = 0.000), a finding that did not replicate in 2016 (p = 0.448).
As with our examination of ideological strength, bivariate analyses are somewhat useful in clarifying the relationship between disability and perceptions of government (see appendix table B4). Disability status predicted perceptions of greater government corruption in both 2012 and 2016, though this effect was significant only in the latter year (p = 0.686 in 2012, p = 0.000 in 2016). Again, we suspect that fluctuations in the predictive power of traditional demographic and control variables may be due in part to the nature of the 2016 general election. Taken as a whole, the perceptions of government hypothesis is not cleanly supported. The 2012 and 2016 ANES data clearly demonstrate increasingly negative sentiments toward government and governmental actors among those with disabilities. Still, given tenuous findings across years and model specifications, we remain hesitant to conclude that perceptions of high corruption and low governmental approval are chiefly responsible for the turnout gap among those with disabilities.

## Discussion

Our findings diverge somewhat from previous research on disability status and political behavior. Although it appears that traditional forms of engagement (i.e., voting) by and large exclude those with disabilities, this group is just as likely to display political signage, to donate money to candidates/campaigns, to advocate for political causes, and so forth. When it comes to political engagement in 2016, our fully controlled results indicate that individuals with disabilities were notably more involved than the other employment groups. While participation in rallies and other political volunteer work remains low among persons with disabilities, 48% of all disabled individuals surveyed in 2016 reported talking to another person about voting for or against a party/candidate, up from 40% in 2012. This statistic may tie in with the fact that disabled individuals were more politically interested (especially in 2012) than employed persons, despite tuning out traditional political news sources (e.g., television, newspapers, radio). Based on these results, we might further speculate that disability status influences not only the frequency with which one attends to political news but also the medium by which that information is sought. For example, physical or cognitive limitations may make it easier to get one's news from a television screen rather than a handheld newspaper. Likewise, individuals who are employed might be more likely to get their news from the radio, particularly as they commute to work. Cursory results from the present ANES data provide some support for this conjecture. People with disabilities were less likely than those without disabilities to gather political news via newspaper, internet, or radio. We believe that, given varying news quality across media and outlets, coupled with lower levels of political knowledge among those reporting disabilities, subsequent empirical study of the media habits of those with disabilities should be a priority. In contrast to Schur, Shields, and Schriner (2003), we found no differences in external or internal efficacy between those with and without disabilities. Schur, Shields, and Schriner (2003: 121) hypothesized that “people with disabilities may have lower levels of political efficacy because of discrimination, prejudice, and negative social constructions. They may perceive themselves as less able to perform various politically relevant skills . . .
and they may believe that they have less influence in politics and do not receive equal treatment from public officials.” Within the 2012 and 2016 ANES datasets there are no measures of perceived discrimination in relation to one's disability status, though we believe this theoretical rationale is a valuable starting point for future study. Specifically, perceptions of discrimination may play into the construction and maintenance of one's social identity (Johnstone 2004) and identity politics more broadly. Still, our findings here may serve as a positive indication that the efficacy levels of people with disabilities have improved overall since 1998, when Schur, Shields, and Schriner (2003) collected their data. Ideological strength is traditionally a powerful determinant of voting and thus a key factor in our analysis of low turnout rates among individuals with disabilities. With the exception of respondents reporting a disability, employment status exhibits consistent effects on ideological strength. Moreover, we found the correlation between partisanship and ideological orientation to be particularly low for those with a disability (r = 0.32 in 2012, r = 0.57 in 2016). Individuals with disabilities may not have crystallized attitudes toward politics and/or their own political identity, a finding that we believe relates to lower levels of political knowledge and attentiveness to political news, particularly in 2012. Given the inconsistent support for our political conviction hypothesis, future research must consider whether ideology and partisanship are constructed similarly for individuals with and without disabilities. Pertinent to modeling antecedents of the voting gap, we might also explore the extent to which disability status takes precedence over partisan or ideological affiliation as individuals consider their own identity. We assert that the salience of one's disability plays a large part in shaping ideological preferences and thus the relationships between disability, identity, and political involvement. Still, it is imperative to note that in models predicting participation, as well as several of our other dependent variables, we observed that those reporting a disability were not like those who were either retired or unemployed. That is to say, there is genuinely some aspect of possessing a disability, rather than possessing “free time,” that exerts an influence on political attitudes. The ability to identify such differences between these employment groups is a benefit of the present research and an advance in the study of disability and political behavior more broadly. Although partisanship was not a primary dependent variable in the current set of analyses, we did unearth several results related to disability, engagement, and partisan affiliation. Individuals with disabilities largely tend to identify as Democrats, though the degree to which they vote along party lines shows some instability. In the ANES data, of those disabled persons who cast a ballot in 2012 (n = 256), 72% voted for Obama, 22% voted for Romney, and 3% voted for a third-party candidate; of those who voted in 2016 (n = 101), 51% voted for Clinton, 30% voted for Trump, and 7% voted for a third-party candidate. Additional research confirms that physical and mental health affect not only whether one votes but also for whom one votes.
For example, in the 2016 presidential election, counties with poorer public health were significantly more likely to shift their vote in favor of Donald Trump, relative to Mitt Romney in 2012 (Wasfy, Stewart, and Bhambhani 2017). On one hand, such shifting loyalties help contextualize our somewhat inconsistent results, particularly with regard to ideological strength. On the other hand, the nature of Donald Trump's 2016 candidacy as both a celebrity and a political outsider adds potential noise to our findings. How election contexts and other exogenous factors moderate partisan affiliations, and therefore behavioral outcomes, among those with disabilities is an avenue ripe for empirical study. Despite disabled individuals' tendency to identify as Democrats, our results with regard to the perceptions of government hypothesis indicate that this group was significantly less approving of Obama than employed persons in 2016. This may in part reflect the tenuous relationship between partisan strength and disability status. It is also possible that increased perceptions of government corruption bled over to or were conflated with attitudes toward Obama and Congress. One might also speculate that those with disabilities had hoped, starting in 2008, that the Obama administration would do a better job of representing disability rights. At the outset, tenets of the 2010 Affordable Care Act (ACA) seemed promising to those with disabilities, specifically policies that bridged coverage for those with preexisting conditions (Collins et al. 2012). It is quite possible, however, that benefits afforded to disabled individuals were overlooked or “submerged” (Chattopadhyay 2018) by negative public sentiment, as private insurers left the ACA marketplace (Khazan 2017) and states continued to withdraw from ACA Medicaid expansion (Young 2017). In all, the unique nature of the 2016 election makes it difficult to draw definitive conclusions about partisan preferences and disability. Future research might explore when and under what circumstances (e.g., midterm elections, diversity of candidates) partisanship exerts influence on the voting decisions of those with and without disabilities. For scholars researching disability, the operationalization of this construct has proved difficult and subject to constant criticism (Burkhauser, Houtenville, and Tennant 2014). As such, the way we are able to measure disability in this project entails several limitations and caveats for our results. Within large-sample datasets like the ANES, disability has typically been measured according to employment status. While this operationalization potentially conflates physical or mental limitations with employment status, our present use of employed, retired, and other unemployed (e.g., laid off, homemakers, students) individuals as comparison groups seeks to disentangle the true effect of disability status on political behavior and attitudes. Still, it is quite possible that respondents who fall into one category are not precluded from another, muddying the raw effect of disability status (see note 4). We also noted compositional differences between the disabled persons surveyed in the 2012 ANES and in the 2016 ANES. We found that in 2012 nearly 7% of the ANES sample indicated living with a disability (n = 394), but in 2016 only 4% indicated the same (n = 182). Additionally, a breakdown of demographic factors (appendix table B5) indicated that these samples contain disabled individuals with varying characteristics.
Notably, blacks and Hispanics made up 30% and 18%, respectively, of the 2012 sample, whereas they made up 10% and 20%, respectively, of the 2016 sample. We also observed that a larger percentage of disabled individuals reported living in the South in the 2012 ANES sample (46%) than in the 2016 ANES sample (28%). The average income of disabled respondents, which hovered around $20,000/year, remained fairly stable across ANES samples, as did average age (approximately 50–54 years) and average education level (high school graduate or high school graduate plus some college). While these demographic differences should be considered in the present analysis, we express more broadly a concern regarding discrepancies between the 4–7% of survey respondents who report disability status and the estimated 18% of Americans who live with a disability (Brault 2012). Disabled populations are often hard to reach and may require additional assistance to participate in survey research (e.g., transcription services, assisted listening devices), a circumstance we encourage disability researchers and survey methodologists alike to contemplate. Disability is conceptually similar to the notion of pan-ethnicity in race and ethnic studies, in that not all disabilities are the same. To be sure, there is additional variance within the disabled population that cannot be addressed here. People with disabilities differ from one another in important ways, including the time since the onset of the disability and its severity. One limitation of the current operationalization is that all disabilities were aggregated into one category. Though disability type indisputably matters for predicting political outcomes, we are not able to examine the effects of different types of disabilities, such as paraplegia, multiple sclerosis, or schizophrenia. We should expect each particular disability, as well as its onset and severity, to influence (a) the extent to which one identifies as a person with a disability, (b) the effect of this identification on political attitudes and behaviors, and (c) individuals' ability to report such attitudes and behaviors within surveys. Consider, for example, that surveys are generally not able to reach the people with the most severe disabilities, as they may be institutionalized in health facilities or incarcerated. Federal efforts toward better measures are ongoing, particularly as light is shed on disparities between people with disabilities and those without them (Brucker and Houtenville 2015). Despite these limitations, shared by all disability scholars, the analytical approach presented here advances the study of people with disabilities. Beyond democratic ideals of inclusiveness, those with disabilities should be particularly encouraged to engage within the political sphere, as this type of activity carries potential healing properties. Bergstresser, Brown, and Colesante (2013) found that participation in politics is an important recovery tool for those suffering from mental illness, as engagement imparts a sense of empowerment and feelings of social connectedness. As we demonstrate here, the results of this project present implications for political inclusion, for partisan coalition building, for disability representation and policy in government, and for subsequent electoral outcomes.
For the greater part of a century, scholars of American political behavior have given precedence to individual-level demographics such as education, income, and race as predictors of political engagement. While these factors certainly remain useful in explaining gaps in attentiveness and turnout, we implore scholars to consider a more holistic approach. We, along with a handful of contemporary researchers (Ojeda 2015; Pacheco and Fletcher 2015; Burden et al. 2016; Schur, Ameri, and Adya 2017; Ojeda and Slaughter 2018), are beginning to make strides in incorporating physical and mental health into the conversation on political outcomes. It goes without saying that both the American electorate and American political institutions are composed of living beings, with varying levels of physical mobility and cognitive functioning. With this in mind, we deem it essential that the foundation for political inquiry begin with considerations of physical and mental health.

## Notes

1. For more on narrative construction and social identification among individuals with disabilities, see Galvin 2005.

2. The 2012 ANES allows for the selection of multiple employment categories (e.g., laid off and homemaker, student and working now). Our operationalization categorizes employment status by the respondent's first mention.

3. One might also suspect that ideology itself disproportionately influences coefficient estimates in these models. No changes, either in coefficient direction or statistical significance, were observed within our results when ideology was removed as a control variable.

4. Indeed, in the 2012 ANES nine respondents indicated being permanently disabled (first mention) and currently working (second mention).

## References

Americans with Disabilities Act National Network. 1990. “What Is the Definition of Disability under the ADA?” adata.org/faq/what-definition-disability-under-ada.

Balch, George I. 1974. “Multiple Indicators in Survey Research: The Concept ‘Sense of Political Efficacy.’” Political Methodology 1, no. 2: 1–43.

Bartels, Larry M. 2000. “Partisanship and Voting Behavior.” American Journal of Political Science 44, no. 1: 35–50.

Bergstresser, Sara M., Isaac S. Brown, and Amy Colesante. 2013. “Political Engagement as an Element of Social Recovery: A Qualitative Study.” Psychiatric Services 64, no. 8: 819–21.

Biography.com. 2018. “Tammy Duckworth.” April 10. www.biography.com/people/tammy-duckworth-21129571.

Black, Merle. 2004. “The Transformation of the Southern Democratic Party.” Journal of Politics 66, no. 4: 1001–17.

Blacksher, James, and Lani Guinier. 2014. “Free at Last: Rejecting Equal Sovereignty and Restoring the Constitutional Right to Vote, Shelby County v. Holder.” Harvard Law and Policy Review 8: 39–69.

Brady, Henry E., Sidney Verba, and Kay Lehman Schlozman. 1995. “Beyond SES: A Resource Model of Political Participation.” American Political Science Review 89, no. 2: 271–94.

Brault, Matthew W. 2012. “Americans with Disabilities: 2010.” Washington, DC: US Department of Commerce, Economics and Statistics Administration, US Census Bureau.

Brucker, Debra L., and Andrew J. Houtenville. 2015. “People with Disabilities in the United States.” Archives of Physical Medicine and Rehabilitation 96, no. 5: 771–74.

Burden, Barry C., Jason M. Fletcher, Pamela Herd, Bradley M. Jones, and Donald P. Moynihan. 2016. “How Different Forms of Health Matter to Political Participation.” Journal of Politics 79, no. 1: 166–78.
Burkhauser, Richard V., Andrew J. Houtenville, and Jennifer R. Tennant. 2014. “Capturing the Elusive Working Age Population with Disabilities: Reconciling Conflicting Social Success Estimates from the Current Population Survey and American Community Survey.” Journal of Disability Policy Studies 24, no. 4: 195–205.

Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American Voter. New York: Wiley.

Campbell, Angus, Gerald Gurin, and Warren E. Miller. 1954. The Voter Decides. Evanston, IL: Row, Peterson.

Chattopadhyay, Jacqueline. 2018. “Why Health Insurance Regulations Struggle to Generate Citizen Constituencies: Adding Policy Interdependence to the List of Design Features That Shape a Policy's Feedback Potential.” Paper presented at the Journal of Health Politics, Policy and Law Special Issue on Health and Political Participation Workshop, Columbia, MO, February 2.

Coleman, Kenneth M., and Charles L. Davis. 1976. “The Structural Context of Politics and Dimensions of Regime Performance: Their Importance for the Comparative Study of Political Efficacy.” Comparative Political Studies 9, no. 2: 189–206.

Collins, Sara R., Ruth Robertson, Tracy Garber, and Michelle M. Doty. 2012. “Gaps in Health Insurance: Why So Many Americans Experience Breaks in Coverage and How the Affordable Care Act Will Help.” Issue brief. New York: Commonwealth Fund.

Converse, Philip. 1964. “The Nature of Belief Systems in Mass Publics.” In Ideology and Discontent, edited by David Apter, 206–61. New York: Free Press.

Converse, Philip E. 1972. “Change in the American Electorate.” In The Human Meaning of Social Change, edited by Angus Campbell and Philip E. Converse, 263–337. New York: Russell Sage Foundation.

Cox, Daniel, and Robert P. Jones. 2017. “America's Changing Religious Identity.” Public Religion Research Institute, September 6. www.prri.org/research/american-religious-landscape-christian-religiously-unaffiliated/.

Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper.

Galvin, Rose. 2005. “Researching the Disabled Identity: Contextualising the Identity Transformations Which Accompany the Onset of Impairment.” Sociology of Health and Illness 27, no. 3: 393–413.

Green, Donald, Bradley Palmquist, and Eric Schickler. 2004. Partisan Hearts and Minds: Political Parties and the Social Identities of Voters. New Haven, CT: Yale University Press.

Hajnal, Zoltan, Nazita Lajevardi, and Lindsay Nielson. 2017. “Voter Identification Laws and the Suppression of Minority Votes.” Journal of Politics 79, no. 2: 363–79.

Hall, Thad, and R. Michael Alvarez. 2012. “Defining the Barriers to Political Participation for Individuals with Disabilities.” Information Technology and Innovation Foundation Accessible Voting Technology Initiative, Working Paper No. 1. Washington, DC: Information Technology and Innovation Foundation.

Harbaugh, W. T. 1996. “If People Vote Because They Like to, Then Why Do So Many of Them Lie?” Public Choice 89, no. 1–2: 63–76.

Haselswerdt, Jake, and Jamila Michener. 2018. “Disenrolled: Retrenchment and Voting in Health Policy.” Paper presented at the Journal of Health Politics, Policy and Law Special Issue on Health and Political Participation Workshop, Columbia, MO, February 2.

Hasen, Richard L. 2012. The Voting Wars: From Florida 2000 to the Next Election Meltdown. New Haven, CT: Yale University Press.

Huddy, Leonie, Lilliana Mason, and Lene Aarøe. 2015.
“Expressive Partisanship: Campaign Involvement, Political Emotion, and Partisan Identity.” American Political Science Review 109, no. 1: 1–17.

Johnstone, Chris. 2004. “Disability and Identity: Personal Constructions and Formalized Supports.” Disability Studies Quarterly 24, no. 4.

Khazan, Olga. 2017. “Why So Many Insurers Are Leaving Obamacare.” Atlantic, May 11. www.theatlantic.com/health/archive/2017/05/why-so-many-insurers-are-leaving-obamacare/526137/.

Lane, Robert E. 1959. Political Life: Why People Get Involved in Politics. New York: Free Press of Glencoe.

Leighley, Jan E., and Jonathan Nagler. 2014. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. Princeton, NJ: Princeton University Press.

Lenze, Eric J., Joan C. Rogers, Lynn M. Martire, Benoit H. Mulsant, Bruce L. Rollman, Mary Amanda Dew, Richard Schulz, and Charles F. Reynolds III. 2001. “The Association of Late-Life Depression and Anxiety with Physical Disability.” American Journal of Geriatric Psychiatry 9, no. 2: 113–35.

Lewis-Beck, Michael S., William G. Jacoby, Helmut Norpoth, and Herbert F. Weisberg. 2008. The American Voter Revisited. Ann Arbor: University of Michigan Press.

Lopez, Mark Hugo, and Paul Taylor. 2012. “Latino Voters in the 2012 Election.” Pew Research Center, November 7. www.pewhispanic.org/2012/11/07/latino-voters-in-the-2012-election.

Miller, Peter, and Sierra Powell. 2016. “Overcoming Voting Obstacles: Convenience Voting by People with Disabilities.” American Politics Research 44, no. 1: 28–55.

National Institutes of Health. 2010. “Disability in Older Adults Fact Sheet.” report.nih.gov/nihfactsheets/Pdfs/DisabilityinOlderAdults(NIA).pdf.

Ojeda, Christopher. 2015. “Depression and Political Participation.” Social Science Quarterly 96, no. 5: 1226–43.

Ojeda, Christopher, and Christine Slaughter. 2018. “Intersectionality, Depression, and Voter Turnout.” Paper presented at the Journal of Health Politics, Policy and Law Special Issue on Health and Political Participation Workshop, Columbia, MO, February 2.

Overton, Spencer. 2006. Stealing Democracy: The New Politics of Voter Suppression. New York: Norton.

Pacheco, Julianna, and Jason Fletcher. 2015. “Incorporating Health into Studies of Political Behavior: Evidence for Turnout and Partisanship.” Political Research Quarterly 68, no. 1: 104–16.

Palfrey, Thomas R., and Keith T. Poole. 1987. “The Relationship between Information, Ideology, and Voting Behavior.” American Journal of Political Science 31, no. 3: 511–30.

Papadopoulos, Konstantinos, Anthony J. Montgomery, and Elena Chronopoulou. 2013. “The Impact of Visual Impairments in Self-Esteem and Locus of Control.” Research in Developmental Disabilities 34, no. 12: 4565–70.

Riker, William H., and Peter C. Ordeshook. 1968. “A Theory of the Calculus of Voting.” American Political Science Review 62, no. 1: 25–42.

Rosenstone, Steven, and John Hansen. 1993. Mobilization, Participation, and Democracy in America. New York: Macmillan.

Schattschneider, E. E. 1960. The Semi-Sovereign People: A Realist's View of Democracy in America. New York: Holt, Rinehart, and Winston.

Schlozman, Kay Lehman, Sidney Verba, and Henry E. Brady. 2012. The Unheavenly Chorus: Unequal Political Voice and the Broken Promise of American Democracy. Princeton, NJ: Princeton University Press.

Schreurs, Karlein M. G., Denise T. D. de Ridder, and Jozien M. Bensing. 2002.
“Fatigue in Multiple Sclerosis: Reciprocal Relationships with Physical Disabilities and Depression.” Journal of Psychosomatic Research 53, no. 3: 775–81.

Schur, Lisa A. 1998. “Disability and the Psychology of Political Participation.” Journal of Disability Policy Studies 9, no. 2: 3–31.

Schur, Lisa, and Meera Adya. 2013. “Sidelined or Mainstreamed? Political Participation and Attitudes of People with Disabilities in the United States.” Social Science Quarterly 94, no. 3: 811–39.

Schur, Lisa, Mason Ameri, and Meera Adya. 2017. “Disability, Voter Turnout, and Polling Place Accessibility.” Social Science Quarterly 98, no. 5: 1374–90.

Schur, Lisa, and Douglas Kruse. 2000. “What Determines Voter Turnout? Lessons from Citizens with Disabilities.” Social Science Quarterly 81, no. 2: 571–87.

Schur, Lisa, and Douglas Kruse. 2014. “Disability Election Policies and Practices.” In The Measure of American Elections, edited by Barry C. Burden and Charles Stewart III, 188–222. New York: Cambridge University Press.

Schur, Lisa, Todd Shields, Douglas Kruse, and Kay Schriner. 2002. “Enabling Democracy: Disability and Voter Turnout.” Political Research Quarterly 55, no. 1: 167–90.

Schur, Lisa, Todd Shields, and Kay Schriner. 2003. “Can I Make a Difference? Efficacy, Employment, and Disability.” Political Psychology 24, no. 1: 119–49.

Schur, Lisa, Todd Shields, and Kay Schriner. 2005. “Generational Cohorts, Group Membership, and Political Participation by People with Disabilities.” Political Research Quarterly 58, no. 3: 487–96.

Smith, Lauren E., and Lee Demetrius Walker. 2012. “Belonging, Believing, and Group Behavior: Religiosity and Voting in American Presidential Elections.” Political Research Quarterly 66, no. 2: 399–413.

Stolberg, Sheryl Gay. 2018. “‘It's About Time’: A Baby Comes to the Senate Floor.” New York Times, April 19. www.nytimes.com/2018/04/19/us/politics/baby-duckworth-senate-floor.html.

Stone, Deborah A. 1984. The Disabled State. Philadelphia: Temple University Press.

Tate, Katherine. 1993. From Protest to Politics: The New Black Voters in American Politics. New York: Russell Sage Foundation.

Tate, Katherine. 2010. What's Going On? Political Incorporation and the Transformation of Black Public Opinion. Washington, DC: Georgetown University Press.

Treier, Shawn, and D. Sunshine Hillygus. 2009. “The Nature of Political Ideology in the Contemporary Electorate.” Public Opinion Quarterly 73, no. 4: 679–703.

United Nations. 2017. “Factsheet on Persons with Disabilities.” www.un.org/development/desa/disabilities/resources/factsheet-on-persons-with-disabilities.html.

Verba, Sidney, Nancy Burns, and Kay Lehman Schlozman. 1997. “Knowing and Caring about Politics: Gender and Political Engagement.” Journal of Politics 59, no. 4: 1051–72.

Verba, Sidney, and Norman H. Nie. 1972. Participation in America. New York: Harper and Row.

Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, MA: Harvard University Press.

Wang, Tova Andrea. 2012. The Politics of Voter Suppression: Defending and Expanding Americans' Right to Vote. Ithaca, NY: Cornell University Press.

Wasfy, Jason H., Charles Stewart III, and Vijeta Bhambhani. 2017. “County Community Health Associations of Net Voting Shift in the 2016 U.S. Presidential Election.” PLoS One 12, no. 10: e0185051.

Wolfinger, Raymond E., and Steven J. Rosenstone.
1980. Who Votes? New Haven, CT: Yale University Press.

Wright, Kenicia. 2016. “Power and Minority Representation.” In Global Encyclopedia of Public Administration, Public Policy, and Governance, edited by Ali Farazmand, 1–7. Switzerland: Springer International Publishing.

Young, Jeffrey. 2017. “In States That Didn't Expand Medicaid, It's as If Obamacare Doesn't Even Exist for the Poor.” Huffington Post, December 6. www.huffingtonpost.com/2014/07/09/obamacare-medicaid-uninsured_n_5572079.html.

### Appendix A: Variable Question Wording and Coding

Notes: Refused, skipped, don't know, not asked, and no data responses were recoded as missing for all variables, with the exception of political knowledge. Unless otherwise indicated, question wording and coding are identical for 2012 and 2016.

• Disability 2012: Permanent disability mentioned as first response to employment status of respondent. Coded 0 = permanent disability not mentioned first; 1 = permanent disability mentioned first.
• Disability 2016: Permanent disability mentioned in response to employment status of respondent. Coded 0 = permanent disability not mentioned; 1 = permanent disability mentioned.
• Employed 2012: Working now mentioned in response to employment status of respondent. Coded 0 = working now not mentioned; 1 = working now mentioned.
• Employed 2016: Working now mentioned as first response to employment status of respondent. Coded 0 = working now not mentioned first; 1 = working now mentioned first.
• Other unemployed 2012: Any category besides permanent disability or working now mentioned as first response to employment status of respondent. Coded 0 = working now or permanent disability; 1 = retired, unemployed, student, homemaker, or temporarily laid off.
• Other unemployed 2016: Any category besides permanent disability or working now mentioned in response to employment status of respondent. Coded 0 = working now or permanent disability; 1 = retired, unemployed, student, homemaker, or temporarily laid off.
• Political interest: Question wording: “How often do you pay attention to what's going on in government and politics?” Coded 1 = never; 2 = some of the time; 3 = about half the time; 4 = most of the time; 5 = always.
• Follows news 2012: Combines responses to four questions about following news and national politics on the internet, on television, in printed newspapers, and on the radio. Values range from 1 = none at all for each question to 20 = a great deal for each question.
• Follows news 2016: Question wording: “From which of the following sources have you heard anything about the presidential campaign?” Combines data from yes/no response options presented about the internet, television, newspaper, and radio news. Values range from 0 = none selected to 4 = all selected.

#### 2012 Political Knowledge

• Index of responses to 7 questions about American politics. Values range from 0 to 7, with 7 being most knowledgeable. Each question coded 0 = incorrect; 1 = correct. Question wordings:
• “Do you happen to know how many times an individual can be elected president of the United States under current laws?”
• “Is the U.S. federal budget deficit, the amount by which the government's spending exceeds the amount of money it collects, now bigger, about the same, or smaller than it was during most of the 1990s?”
• “For how many years is a United States senator elected, that is, how many years are there in one full term of office for a U.S. senator?”
• “What is Medicare?”
• “On which of the following does the U.S. federal government currently spend the least?”
• “Do you happen to know which party had the most members in the House of Representatives in Washington BEFORE the election [this/last] month?”
• “Do you happen to know which party had the most members in the U.S. Senate BEFORE the election [this/last] month?”

#### 2016 Political Knowledge

Index of responses to 4 questions about American politics. Values range from 0 to 4, with 4 being most knowledgeable. For each question, 0 = incorrect; 1 = correct. Question wordings:
• “For how many years is a United States senator elected—that is, how many years are there in one full term of office for a U.S. Senator?”
• “On which of the following does the U.S. federal government currently spend the least?”
• “Do you happen to know which party currently has the most members in the U.S. House of Representatives in Washington?”
• “Do you happen to know which party currently has the most members in the U.S. Senate?”

#### Internal Efficacy

The 2012 index combined four questions, each asked to half the sample, to produce values from 2 to 10, with 10 being most internally efficacious. The 2016 index had the same values, but only the latter two questions were asked, and they were asked of the entire sample. Question wordings:
• “Sometimes, politics and government seem so complicated that a person like me can't really understand what's going on. Do you agree strongly; agree somewhat; neither agree nor disagree; disagree somewhat; disagree strongly with this statement?” Coded 1 = agree strongly; 2 = agree somewhat; 3 = neither agree nor disagree; 4 = disagree somewhat; 5 = disagree strongly.
• “I feel that I have a pretty good understanding of the important political issues facing our country. Do you agree strongly; agree somewhat; neither agree nor disagree; disagree somewhat; disagree strongly with this statement?” Coded 1 = disagree strongly; 2 = disagree somewhat; 3 = neither agree nor disagree; 4 = agree somewhat; 5 = agree strongly.
• “How often do politics and government seem so complicated that you can't really understand what's going on?” Coded 1 = always; 2 = most of the time; 3 = about half of the time; 4 = some of the time; 5 = never.
• “How well do you understand the important political issues facing our country?” Coded 1 = not well at all; 2 = slightly well; 3 = moderately well; 4 = very well; 5 = extremely well.

#### External Efficacy

The 2012 index combined four questions, each asked to half the sample, to produce values from 2 to 10, with 10 being most externally efficacious. The 2016 index had the same values, but only the latter two questions were asked, and they were asked of the entire sample. Question wordings:
• “How much do public officials care what people like you think?” Coded 1 = not at all; 2 = a little; 3 = a moderate amount; 4 = a lot; 5 = a great deal.
• “How much can people like you affect what the government does?” Coded 1 = not at all; 2 = a little; 3 = a moderate amount; 4 = a lot; 5 = a great deal.
• “Public officials don't care much what people like me think. Do you agree strongly; agree somewhat; neither agree nor disagree; disagree somewhat; disagree strongly with this statement?” Coded 1 = agree strongly; 2 = agree somewhat; 3 = neither agree nor disagree; 4 = disagree somewhat; 5 = disagree strongly.
• “People like me don't have any say about what the government does. Do you agree strongly; agree somewhat; neither agree nor disagree; disagree somewhat; disagree strongly with this statement?” Coded 1 = agree strongly; 2 = agree somewhat; 3 = neither agree nor disagree; 4 = disagree somewhat; 5 = disagree strongly.
#### Campaign Activity

Index combines “Yes” responses to seven questions. Values range from 0 to 7, with 7 being most involved. Question wordings:
• “We would like to find out about some of the things people do to help a party or a candidate win an election. During the campaign, did you talk to any people and try to show them why they should vote for or against one of the parties or candidates?” (Yes, No)
• “Did you go to any political meetings, rallies, speeches, dinners, or things like that in support of a particular candidate?” (Yes, No)
• “Did you wear a campaign button, put a campaign sticker on your car, or place a sign in your window or in front of your house?” (Yes, No)
• “Did you do any (other) work for one of the parties or candidates?” (Yes, No)
• “During an election year people are often asked to make a contribution to support campaigns. Did you give money to an individual candidate running for public office?” (Yes, No)
• “Did you give money to a political party during this election year?” (Yes, No)
• “Did you give money to any other group that supported or opposed candidates?” (Yes, No)

#### Party Identification

Coded 1 = strong Democrat; 2 = not very strong Democrat; 3 = independent leans Democrat; 4 = independent; 5 = independent leans Republican; 6 = not very strong Republican; 7 = strong Republican. Question wordings:
• “Generally speaking, do you usually think of yourself as a Democrat, a Republican, an Independent, or what?”
• If responded Democrat or Republican: “Would you call yourself a strong Democrat/Republican?”
• If responded Independent, No Preference, or Don't Know: “Do you think of yourself as closer to the Republican Party or to the Democratic Party?”

#### Strength of Party Identification

• Coded 1 = independent; 2 = independent leaner; 3 = not very strong Democrat, not very strong Republican; 4 = strong Democrat, strong Republican. Question wording same as for party identification.

#### Ideology

• Question wording: “Where would you place yourself on this scale, or haven't you thought much about this?” Coded 1 = extremely liberal; 2 = liberal; 3 = slightly liberal; 4 = moderate (middle of the road); 5 = slightly conservative; 6 = conservative; 7 = extremely conservative.

#### Strength of Ideology

• Coded 1 = moderate; 2 = slightly liberal, slightly conservative; 3 = liberal, conservative; 4 = extremely liberal, extremely conservative. Question wording same as for ideology.

#### Congressional Approval

• Question wording: “Do you approve or disapprove of the way the U.S. Congress has been handling its job?” Coded 1 = approve; 2 = disapprove.

#### Presidential Approval

• Question wording: “Do you approve or disapprove of the way Barack Obama has been handling his job as President?” Coded 1 = approve; 2 = disapprove.

#### Voted

• Summary variable of whether or not the respondent voted in the November general election. Coded 0 = did not vote; 1 = voted.

#### Vote Choice

• Question wording: “How about the election for president? Did you vote for a candidate for president?” If yes, asked: “Who did you vote for?” For 2012 coded 0 = Obama; 1 = Romney; for 2016 coded 0 = Clinton; 1 = Trump.
• Question wording: “How about the election for president? Did you vote for a candidate for president?” If yes, asked: “Who did you vote for?” For 2012 coded 0 = Obama or Romney; 1 = Other; for 2016 coded 0 = Clinton or Trump; 1 = Johnson, Stein, or Other.

#### Corruption in Government: 2012

• Question wording: “How many of the people running the government are corrupt?” Coded 5 = all; 4 = most; 3 = about half; 2 = a few; 1 = none.

#### Corruption in Government: 2016

• Question wording: “How many in government are corrupt?” Coded 5 = all; 4 = most; 3 = about half; 2 = a few; 1 = none.

#### Demographic Variables

• Age 2012: Respondent age in categories of years. Coded 1 = 17–20; 2 = 21–24; 3 = 25–29; 4 = 30–34; 5 = 35–39; 6 = 40–44; 7 = 45–49; 8 = 50–54; 9 = 55–59; 10 = 60–64; 11 = 65–69; 12 = 70–74; 13 = 75+.
• Age 2016: Respondent age in years, coded 18–90.
• Female: Gender of the respondent coded 0 = male; 1 = female.
• Married: Question wording: “Are you now married, widowed, divorced, separated, or never married?” Coded 0 = widowed, divorced, separated, never married; 1 = married.
• Black: Respondent race and ethnicity coded 0 = nonblack; 1 = black.
• Hispanic: Respondent race and ethnicity coded 0 = non-Hispanic; 1 = Hispanic.
• Education 2012: Respondent's highest level of education coded 1 = less than high school; 2 = graduated high school; 3 = some college; 4 = graduated college; 5 = graduate degree.
• Education 2016: Respondent's highest level of education coded 1 = less than first grade; 2 = first, second, or third grade; 3 = fifth or sixth grade; 4 = seventh or eighth grade; 5 = ninth grade; 6 = tenth grade; 7 = eleventh grade; 8 = twelfth grade, no diploma; 9 = high school graduate; 10 = some college; 11 = associate degree in college—occupational; 12 = associate degree in college—academic; 13 = bachelor's degree; 14 = master's degree; 15 = professional school degree; 16 = doctorate.
• Household income: Family income coded 1 = under $5,000; 2 = $5,000–$9,999; 3 = $10,000–$12,499; 4 = $12,500–$14,999; 5 = $15,000–$17,499; 6 = $17,500–$19,999; 7 = $20,000–$22,499; 8 = $22,500–$24,999; 9 = $25,000–$27,499; 10 = $27,500–$29,999; 11 = $30,000–$34,999; 12 = $35,000–$39,999; 13 = $40,000–$44,999; 14 = $45,000–$49,999; 15 = $50,000–$54,999; 16 = $55,000–$59,999; 17 = $60,000–$64,999; 18 = $65,000–$69,999; 19 = $70,000–$74,999; 20 = $75,000–$79,999; 21 = $80,000–$89,999; 22 = $90,000–$99,999; 23 = $100,000–$109,999; 24 = $110,000–$124,999; 25 = $125,000–$149,999; 26 = $150,000–$174,999; 27 = $175,000–$249,999; 28 = $250,000 or more.
• Religiosity: Question wording: “Do you consider religion to be an important part of your life, or not?” Coded 0 = not important; 1 = important.
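To make the index construction and recodes described in appendix A concrete, here is a minimal sketch in Python/pandas. The column names (`talk_vote`, `pid7`, etc.) are hypothetical placeholders, not actual ANES variable names, and the missing-data handling shown is one reasonable choice among several.

```python
import pandas as pd

# Hypothetical column names standing in for the seven campaign-activity items.
CAMPAIGN_ITEMS = ["talk_vote", "attend_meeting", "display_button",
                  "other_work", "give_candidate", "give_party", "give_group"]

def campaign_activity_index(df: pd.DataFrame) -> pd.Series:
    """Sum of 'Yes' answers across the seven items, yielding values 0-7.

    Assumes each item is already coded 1 = yes, 0 = no, NaN = missing.
    min_count makes the index missing if any component item is missing.
    """
    return df[CAMPAIGN_ITEMS].sum(axis=1, min_count=len(CAMPAIGN_ITEMS))

def partisan_strength(pid7: pd.Series) -> pd.Series:
    """Fold the 7-point party ID scale into the 4-point strength measure.

    1 = strong Dem ... 7 = strong Rep becomes
    1 = independent, 2 = leaner, 3 = weak partisan, 4 = strong partisan.
    """
    mapping = {1: 4, 2: 3, 3: 2, 4: 1, 5: 2, 6: 3, 7: 4}
    return pid7.map(mapping)
```

The same folding pattern applies to the strength-of-ideology recode (1 = extremely liberal, 7 = extremely conservative mapped to 4 = extreme, ..., 1 = moderate).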
https://mathoverflow.net/questions/112224/cohomology-of-a-fiber-bundle-with-fiber-h-and-base-space-bg
# Cohomology of a fiber bundle with fiber $H$ and base space $BG$

Are there any general results on the (integral) cohomology of a fiber bundle whose fiber is a compact group $H$ (continuous or discrete) and whose base space is the classifying space $BG$ of another compact group $G$ (continuous or discrete)? Any literature references are much appreciated. Since we have two groups $G$ and $H$, I wonder if the result can be expressed in terms of the group cohomology of the two groups.

- Serre's spectral sequence? – Fernando Muro Nov 12 '12 at 21:35
- I wonder if Serre's spectral sequence can be expressed in terms of the group cohomology of $H$ and $G$. – Xiao-Gang Wen Nov 12 '12 at 21:50
- Do you have in mind a definition of group cohomology that does not involve the cohomology of the classifying space? – S. Carnahan Nov 12 '12 at 23:00
- There is an elementary definition of group cohomology that does not involve the topological cohomology of the classifying space. (See Wiki en.wikipedia.org/wiki/Group_cohomology) I stress group cohomology since we may need a $G$-module on which the group acts non-trivially. Using the classifying space to define group cohomology with a non-trivial $G$-module, we may need a "local coefficient system", which I do not understand. This is why I prefer to state the results in terms of group cohomology. – Xiao-Gang Wen Nov 12 '12 at 23:50

A typical example would be the case when $G$ is a subgroup of $H$. Then $(EH\times H)/G$ (diagonal action) is 1. homotopy equivalent to $H/G$, and 2. fibered over $BG=EH/G$ with fiber $H$. Note that this works both in the Lie case and the discrete case, but in the latter case what we get is not very interesting, since the fiber of our fibration is a potentially infinite discrete space. [upd: There is one thing one can extract from this though: the $i$-th cohomology group of $G$ with coefficients in the infinite product $\Pi_{h\in H}\mathbb{Z}_{(h)}$ is $\Pi_{h G\in H/G} \mathbb{Z}_{(hG)}$ when $i=0$ and is 0 otherwise; this may be of some use when $G$, or its index in $H$, is finite.]

On the other hand, if $G$ is normal in $H$ one can go a bit further: $BH=EH/H$ is the quotient of $BG=EH/G$ by a free action of $H/G$. So, as above, we construct a fibration over $B(H/G)$ with fiber $BG$ and total space $BH$. If we now take an $H$-module $M$ (i.e., a local system on $BH$) we get the Hochschild-Serre spectral sequence $$E_2^{pq}=H^p(H/G,H^q(G,M))\Rightarrow H^{p+q}(H,M).$$ There are lots of references where this is discussed. One could take a look e.g. at the original paper by Hochschild and Serre (Cohomology of Group Extensions, Transactions AMS 1953).

- Let $X$ be the fiber bundle with fiber $H$ and base space $BG$. I wonder whether we have $E_2^{pq}=H_g^p(H,H_g^q(G,M))\Rightarrow H^{p+q}(X,M)$? – Xiao-Gang Wen Nov 13 '12 at 0:36
- Xiao-Gang -- there are potentially lots of fiber bundles with fiber $H$ and base $BG$. If we have one that comes from an action of $G$ on $H$, then we do have a Serre spectral sequence converging to the cohomology of the total space, which is $\sim H/G$, but its $E_2$ term looks quite different from what you describe. The Hochschild-Serre sequence has a similar (but still different) $E_2$ but converges to the cohomology of $H$. – algori Nov 13 '12 at 0:53
- Indeed, we are considering fiber bundles with fiber $H$ and base $BG$ that come from an action of $G$ on $H$. But $G$ may not be a subgroup of $H$, so we do not have $H/G$.
– Xiao-Gang Wen Nov 13 '12 at 2:39
- Xiao-Gang -- re "We do not have...": yes we do, assuming the action comes from a group homomorphism $f:G\to H$, except that the action will not be free in general (and the base of the fibration will be $B\,\mathrm{Im}(f)=B(G/\ker(f))$). – algori Nov 13 '12 at 2:49
- algori: Thank you for explaining, but as a physicist, I am still confused (maybe over something simple). Do you mean $H^*(X,M)=H^*(H/G,M)$? But if $G$ is not a subgroup of $H$, we cannot compute the quotient group $H/G$. Or do you mean $H/\mathrm{Im}(f)$ instead of $H/G$? If the action is trivial, do you have $H^*(X,M)=H^*(H,M)$? (That does not look right, since $X=H\times BG$ in this case.) – Xiao-Gang Wen Nov 13 '12 at 3:17
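For concreteness, a minimal worked example of the Hochschild-Serre sequence quoted above, with trivial coefficients: take $H=\mathbb{Z}$, the normal subgroup $G=n\mathbb{Z}$, and $M=\mathbb{Z}$, so $H/G=\mathbb{Z}/n$ and the sequence reads
$$E_2^{pq}=H^p(\mathbb{Z}/n,H^q(n\mathbb{Z},\mathbb{Z}))\Rightarrow H^{p+q}(\mathbb{Z},\mathbb{Z}).$$
Only the rows $q=0,1$ are nonzero (as $n\mathbb{Z}\cong\mathbb{Z}$ has cohomological dimension 1, and the conjugation action is trivial), each row being a copy of $H^*(\mathbb{Z}/n,\mathbb{Z})$, while the abutment is $\mathbb{Z}$ in degrees 0 and 1 and vanishes above. Consistency then forces $d_2\colon E_2^{0,1}=\mathbb{Z}\to E_2^{2,0}=\mathbb{Z}/n$ to be reduction mod $n$ (its kernel $n\mathbb{Z}\cong\mathbb{Z}$ survives as $H^1$) and $d_2\colon E_2^{p,1}\to E_2^{p+2,0}$ to be an isomorphism $\mathbb{Z}/n\to\mathbb{Z}/n$ for even $p\geq 2$, killing everything in total degree $\geq 2$.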
http://answerparty.com/question/answer/why-is-marginal-utility-more-useful-than-total-utility-in-consumer-decision-making
# Why is marginal utility more useful than total utility in consumer decision making? ## Marginal utility decreases as you get more of something. A consumer is less likely to buy something he already has a lot of.
2014-03-12 19:52:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49272164702415466, "perplexity": 7187.349601307133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394023865238/warc/CC-MAIN-20140305125105-00058-ip-10-183-142-35.ec2.internal.warc.gz"}
https://quant.stackexchange.com/questions/16640/conditional-expectation-of-a-non-stochastic-process
# Conditional expectation of a non-stochastic process In an example I was working through it was shown that $W_{t}^{2} - t$ is a martingale with respect to the Brownian motion filtration $\mathcal{F}_{s}^{W}$ with $t>s$. Everything was fine except a part of the proof where the author used the fact $$E(t|\mathcal{F}_{s}^{W}) = s$$ I can't quite see the rationale for this. For example, if we take a process $X(t,\omega) = t$, then it seems that $X$ is not stochastic, and in fact is independent of $\omega$ for all $\omega$ in the sample space -- so why does the conditional expectation in the equation above make sense? The above expression was a typo by the author -- it should be evaluated as $$E(t|\mathcal{F}_{s}^{W}) = t$$ Let the Wiener process $W_{s}$ be a r.v. from $\left(\Omega,\mathcal{F}_{s}\right)\to\left(\mathbb{R},\mathcal{B}\left(\mathbb{R}\right)\right)$. The Borel $\sigma$-algebra $\mathcal{B}\left(\mathbb{R}\right)$ contains all intervals of the form $\left[x,y\right]$ for $x\neq y\in\mathbb{R}$, because you have to be able to tell at time $s\geq 0$ whether the Wiener process $W_{s}$ has its value in a given interval or not. In order for $W_{s}$ to be measurable, all the pre-images of these intervals have to be in the $\sigma$-algebra $\mathcal{F}_{s}^{W}$. So the (deterministic) random variable $X\left(t,\omega\right)=t$ is also measurable at time $s\geq 0$, because we can say in which interval its value lies. But the deterministic r.v. $X\left(t,\omega\right)=t$ does not depend on $\omega$, so the pre-image of every interval is $\Omega$ (if the interval contains $t$) or $\emptyset$ (if it does not). Every deterministic r.v. is therefore measurable with respect to the trivial $\sigma$-algebra $\mathcal{F}_{0}:=\left\{\emptyset,\Omega\right\}$, which is contained in every other $\sigma$-algebra $\mathcal{F}_{s}^{W}$. So even though $\mathcal{F}_{0}$ is coarser (smaller) than $\mathcal{F}_{s}^{W}$, a deterministic r.v. is measurable with respect to it, and conditioning only needs the trivial $\sigma$-algebra $\mathcal{F}_{0}$. But that gives $$\mathbb{E}\left[t\mid\mathcal{F}_{0}\right] = \mathbb{E}\left[t\right] = t.$$
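To see how the corrected fact fits back into the original example, here is the standard martingale check, using the independence of Brownian increments, $\mathbb{E}\left[W_t - W_s \mid \mathcal{F}_s^W\right] = 0$ and $\mathbb{E}\left[(W_t - W_s)^2 \mid \mathcal{F}_s^W\right] = t - s$: $$\mathbb{E}\left[W_t^2 - t \mid \mathcal{F}_s^W\right] = \mathbb{E}\left[(W_t - W_s)^2 \mid \mathcal{F}_s^W\right] + 2 W_s\,\mathbb{E}\left[W_t - W_s \mid \mathcal{F}_s^W\right] + W_s^2 - \mathbb{E}\left[t \mid \mathcal{F}_s^W\right] = (t-s) + 0 + W_s^2 - t = W_s^2 - s,$$ which is exactly the martingale property for $W_t^2 - t$.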
2020-02-25 04:06:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9410966038703918, "perplexity": 113.14820017778815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146004.9/warc/CC-MAIN-20200225014941-20200225044941-00101.warc.gz"}
https://stacks.math.columbia.edu/tag/01PQ
Lemma 27.25.3. Let $X$ be a quasi-compact scheme. Let $\mathcal{I} \subset \mathcal{O}_ X$ be a quasi-coherent sheaf of ideals of finite type. Let $Z \subset X$ be the closed subscheme defined by $\mathcal{I}$ and set $U = X \setminus Z$. Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_ X$-module. The canonical map $\mathop{\mathrm{colim}}\nolimits _ n \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}(\mathcal{I}^ n, \mathcal{F}) \longrightarrow \Gamma (U, \mathcal{F})$ is injective. Assume further that $X$ is quasi-separated. Let $\mathcal{F}_ n \subset \mathcal{F}$ be the subsheaf of sections annihilated by $\mathcal{I}^ n$. The canonical map $\mathop{\mathrm{colim}}\nolimits _ n \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}(\mathcal{I}^ n, \mathcal{F}/\mathcal{F}_ n) \longrightarrow \Gamma (U, \mathcal{F})$ is an isomorphism. Proof. Let $\mathop{\mathrm{Spec}}(A) = W \subset X$ be an affine open. Write $\mathcal{F}|_ W = \widetilde{M}$ for some $A$-module $M$ and $\mathcal{I}|_ W = \widetilde{I}$ for some finite type ideal $I \subset A$. Restricting the first displayed map of the lemma to $W$ we obtain the first displayed map of Lemma 27.25.1. Since we can cover $X$ by a finite number of affine opens, this proves the first displayed map of the lemma is injective. We have $\mathcal{F}_ n|_ W = \widetilde{M_ n}$ where $M_ n \subset M$ is defined as in Lemma 27.25.1 (details omitted). That lemma guarantees that we have a bijection $\mathop{\mathrm{colim}}\nolimits _ n \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ W}( \mathcal{I}^ n|_ W, (\mathcal{F}/\mathcal{F}_ n)|_ W) \longrightarrow \Gamma (U \cap W, \mathcal{F})$ for any such affine open $W$. To see that the second displayed arrow of the lemma is bijective, we choose a finite affine open covering $X = \bigcup _{j = 1, \ldots , m} W_ j$. The injectivity follows immediately from the above and the finiteness of the covering. If $X$ is quasi-separated, then for each pair $j, j'$ we choose a finite affine open covering $W_ j \cap W_{j'} = \bigcup \nolimits _{k = 1, \ldots , m_{jj'}} W_{jj'k}.$ Let $s \in \Gamma (U, \mathcal{F})$. As seen above, for each $j$ there exists an $n_ j$ and a map $\varphi _ j : \mathcal{I}^{n_ j}|_{W_ j} \to (\mathcal{F}/\mathcal{F}_{n_ j})|_{W_ j}$ which corresponds to $s|_{W_ j}$. By the same token, for each triple $(j, j', k)$ there exists an integer $n_{jj'k}$ such that the restrictions of $\varphi _ j$ and $\varphi _{j'}$ as maps $\mathcal{I}^{n_{jj'k}} \to \mathcal{F}/\mathcal{F}_{n_{jj'k}}$ agree over $W_{jj'k}$. Let $n = \max \{ n_ j, n_{jj'k}\}$ and we see that the $\varphi _ j$ glue as maps $\mathcal{I}^ n \to \mathcal{F}/\mathcal{F}_ n$ over $X$. This proves surjectivity of the map. $\square$ Comment #946 by correction_bot: In the proof, there are references to "(1)" and "(2)", but these aren't labeled in the statement of the lemma.
2019-05-20 07:10:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9908106923103333, "perplexity": 127.00132486529444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00233.warc.gz"}
https://www.physicsforums.com/threads/why-do-physical-laws-always-feature-integer-indices.317736/
Why do physical laws always feature integer indices? 1. Jun 3, 2009 parsec This may be a stupid question or have a pretty obvious answer, but I can't seem to find one so I'll just go ahead and post :) I was looking at some empirical data for relationships defining (abstracted) values for ionization and recombination coefficients in gases as a function of electric field strength and gas number density. I noticed that none of them had integer indices; rather, they featured fractional indices correct to three decimal places. I had come across similar empirical "laws" as an undergrad studying fluid dynamics, although those seemed to be a bit more acceptable because the variable that was raised to a fractional power was always non-dimensional. So for example the equation for the drag force across a flat plate (as determined from experiments) was some function of Reynolds number to the power of a fractional index. As Reynolds number is non-dimensional, the equation would always yield newtons, not newtons to the power of some arbitrary (nonsensical) fractional index. I noticed for these equations, however, that the variables raised to fractional indices indeed have units, and the resulting equation's result is defined to have the nearest integer units. For example $a/N = 3.4473 \times 10^{34}\,(E/N)^{2.985}\ \mathrm{m}^2$, where $E$ is the electric field ($\mathrm{V\,m^{-1}}$) and $N$ is the neutral gas number density ($\mathrm{m^{-3}}$). Clearly this equation should yield fractional units, but it is then redefined to yield $\mathrm{m}^2$. How is this possible? How is it that fundamental physical relationships are always defined in terms of integer indices? Do physical phenomena happen to form perfect functional laws that feature integer indices, or are these laws mere approximations that they depart from in reality? This seems absurd, so I'm guessing it has something to do with the way that our arbitrary mathematical constructs are formed, or perhaps how we define the dependent variables involved. (I'm thinking of $\frac{1}{2}kT^2$, where $T$ is defined to have a functional relationship to energy involving an integer index, but perhaps there's a better example.) As an unrelated sidenote, what makes $e$ and $\pi$ the values that they are? Physical constants are arbitrary, but these constants are ratios of abstract concepts. Changing our number system would change their values superficially, but they would still represent the same quantity. 2. Jun 3, 2009 Jame I cannot see how a quantity with dimension could be raised to an arbitrary fractional power, since this would mostly yield nonsense units. One obvious case is taking the square root of an area to get a length. I don't know the law you're giving as an example; could you clarify it a bit? The constants $\pi$ and $e$ are numbers defined to have mathematical properties which are independent of how they are represented numerically. If we were using a different numbering system everything else would have to change the same way, so that number theory still holds true. Some reading on the subject can be found at: http://en.wikipedia.org/wiki/Dimensional_analysis 3. Jun 3, 2009 Tac-Tics The units don't really belong in the laws anyway. I'm not familiar with the example you selected, but you probably just made a mistake in copying it or in usage. If a value is raised to a fractional power, it was almost certainly unitless to begin with. Pi and e are defined mathematically. They have no bearing on the real world. Pi is the ratio of a perfect circle's circumference to its diameter.
You can calculate its digits by approximating a circle with polygons having a very large number of sides. It's a calculus thing and there's a series for it. The number e is the base of the natural logarithm. It is weirder than pi. It's not actually a ratio. It's the base of the only exponential function whose derivative is itself. It's also the natural consequence of compound interest. It's actually kind of neat. Think about it like this. Suppose a radioactive pile loses 100% of its mass every year. How long will it take before ALL the mass is gone? Hint: the answer is NOT a year. Why? Because 100% / year is the SAME as 50% / 6 months. So say you start with a 100 kilo pile of radioactive stuff. After 6 months, you lose half of it, so now you only have 50 kilos. Wait 6 MORE months, now you have 25. Wait 6 MORE months, now you have 12.5 kilos. A few years down the line, you'll have 0.0001 kilos, but the damn thing will never fully disappear! The decay of the radioactive stuff obeys a special exponential curve, whose inverse is the logarithm. The number e pops up in there magically when you pick natural units for everything. 4. Jun 3, 2009 vanesch Staff Emeritus Often, people write down "empirical" laws in which non-dimensionless quantities enter in formulae, with "strange" coefficients that do the unit-correcting work. This is because practical working formulae are easiest to use when you can just plug in the numerical values of quantities in commonly-used units. Take something like your example: the "secret" lies in the numerical coefficient $3.4473 \times 10^{34}$. It will do the "unit-compensating" work. You can easily see this by telling yourself: imagine that I was living on the planet Zork, empirically deriving the same relationship, but where my "meters" are 5 earth-meters, say. How would I write down this equation, given that the physics is going to be the same? You will see that the numerical coefficient you will publish in Zork's Annals of Gas Properties will scale with the fractional power as compared to the value on earth. So this numerical coefficient has the "fractional units". You mean, why is Newton's second law $F = m a$ and not $F = m^{1.032} a^{0.997}$ or something? In a way, that's indeed a mystery. Of course, there are reasons why $F = m a$ as a function of certain symmetries of nature, but then this begs the question as to why nature has these symmetries. Yes, it is a big mystery why we seem to be able to express fundamental laws of physics by relatively simple mathematical constructs (that said, when looking at modern theoretical physics, those mathematical constructs don't seem to be so simple anymore, which renders the mystery even deeper: why do very good approximate laws turn out to be so simple?) However, as to why the units in natural laws balance (and do they balance?): they have to balance perfectly, as long as we consider that our choice of units is arbitrary and could have been different. That is, that our unit of length, the meter, has something arbitrary to it, and that we could have chosen another unit of length. If you consider that you had freedom in fixing your unit of length, then you are entitled to think that the particular relationship between quantities you are studying shouldn't depend on your choice of unit. And that comes down to having a perfect balance of units in your equations.
If you consider that you could have set up physics just as well with a different length as "meter", then all your units must have the same "power of meter" on the left hand and the right hand of your equations, because otherwise, switching from one to another wouldn't work anymore. Let's give a stupid example: suppose that you find a relationship between the mass of a cube of iron and its side (in other words, you are studying the density of iron). So you find: m_cube = 7874 kg/m^3 L^3 where L is the length of the side of the cube in meters, and m_cube is its mass in kg. Now, suppose that you do more careful measurements, and that you think that you now have a better relationship: m_cube = 7874 kg/m^3 L^3.002 (you could think of some kind of gravitational effect that makes iron more dense when you have more of it, although the effect would be way way smaller than what I have here). Is this possible? Answer: no. Because we take it as a principle that the choice of the unit "meter" is arbitrary wrt whatever property of iron, and that we could have expressed this property just as well with a different unit of length, say the zork-meter. If we had been working in zork-meters, but still in kg, then the mass of a given cube would numerically remain the same (m_cube remains the same number). But numerically, our right-hand side would not be ok anymore. Let's say, for simplicity, that a zork-meter is 10 earth-meters. 7874 kg/m^3 in earth-meters would become 7874000 kg/zm^3 of course. But then our relationship wouldn't work anymore. If we'd express L in zork-meters, then we would NOT find: m_cube = 7874000 kg/zm^3 L^3.002 In fact, if we were to calculate the mass of a cube of 1 earth-meter (so 0.1 zork-meters), we would have found with the "earth" formula: 7874 kg, and with the "zork" formula: 7837.8 kg. While we are talking about the same cube, with the same dimensions, in one case we expressed the size of the cube in "meters" and in the other case in zork-meters. So IF our property of L^3.002 is a correct physical law (which it COULD be if we think of some effect like self-gravitation and compression), then that means that we have to adapt our coefficient. It means that the numerical coefficient in zork units is not going to be 7874000, but rather 7874000 × 7874/7837.8 ≈ 7910000. With this number in the second (zork) formula, our same cube will again have its same mass. But that means that the unit of the coefficient was not kg/zm^3, but rather kg/zm^3.002. And the units balance again in the equation. As we assume the principle that the "meter" has nothing special, and that we were just as entitled to take the "zork-meter" as our unit of length as the "meter", we cannot accept the situation that when transforming all quantities into "zork units" our formula doesn't work anymore. It is only if we had assumed that "meter" has something particular to do with "iron" that we might expect this to be different, but we take it that the meter is just as good a unit to express a property of iron cubes as the zork-meter. And then, the units have to balance in order for the correctness of the formula not to depend on the specific choice of a particular unit of length. 5. Jun 3, 2009 Dr.D Parsec, another example where a fractional exponent appears is in the law governing an isentropic expansion, P*V^(gamma) = constant, where gamma is a property of the gas, typically about 1.4 for air.
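A quick numerical check of vanesch's zork-meter example above, as a sketch in Python using only the thread's own numbers (the 7874 coefficient, the 3.002 exponent, and 1 zork-meter = 10 earth-meters):

    # Re-deriving the numbers in vanesch's iron-cube example.
    C_earth = 7874.0      # coefficient in kg / m^3.002
    p = 3.002             # the fractional exponent
    scale = 10.0          # earth-meters per zork-meter

    # Coefficient that keeps the law true when L is measured in zork-meters:
    C_zork = C_earth * scale**p
    print(C_zork)               # ~7.91e6, the 7910000 quoted in the post

    # The same 1-earth-meter cube, evaluated with both formulas:
    L_m, L_zm = 1.0, 0.1
    print(C_earth * L_m**p)     # 7874.0 kg (earth formula)
    print(C_zork * L_zm**p)     # 7874.0 kg (zork formula, corrected coefficient)
    print(7874000.0 * L_zm**p)  # ~7837.8 kg (naive kg/zm^3 coefficient fails)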
6. Jun 4, 2009 parsec This is a little patronising. I didn't make a mistake copying it (copy-paste rarely makes errors). I'm well aware of the origins of e and Pi; I was more after a deeper discussion of why they are the values that they are. I have both seen the open form solution of Pi and encountered the natural logarithm... I find it strange that something abstract (I guess it is spatial) has this well defined ratio between its circumference and diameter. I guess I can deal with physical constants being arbitrary as they're determined from experiments, but why is Pi its value? Why not 3.1, or 1? I know the intrinsic values of both e and Pi transcend our number system, but to what extent? Is there a weird mathematical analog to natural units where you can define Pi and e to be 1 and use all of the same mathematical techniques? 7. Jun 4, 2009 parsec The law is taken from a paper entitled "A Survey of the Electron and Ion Transport Properties of SF6" by R. Morrow. It's not the best example, I know. 8. Jun 4, 2009 parsec Ah yes, that's a good example. 9. Jun 4, 2009 parsec Ah yes, this seems pretty obvious now that I think of it. Thanks. Yeah, that's basically what I was getting at. High-level abstractions of nature (e.g. in classical mechanics) seem to be able to be described by such simple laws. Fudgy engineering descriptions based on high-level descriptions that have no derivation (e.g. a curve fit from an experiment) seem to have fractional indices. I'm fine with this; however, this relationship should work if we were to define the coefficient (density) with units of kg/m^3.002, right? I guess this could be a linearisation of a density expressed as some function of mass, length and gravitational constants. Do fractional indices occur in any of the tendrils of modern physics? Could you give me some examples? So far I haven't encountered any, but I haven't really looked very deeply into modern physics. 10. Jun 4, 2009 vanesch Staff Emeritus This is usually because empirical laws are a kind of approximative summary of a very complicated system; a kind of "curve fitting". Not only purely empirical laws do so; you can also derive, from first principles, "system laws" which have fractional powers, or are "complicated" functions in other respects. It is because they summarize the behavior of a "complicated" system. Yes. I don't know what you call "fundamental". The adiabatic compression law, which can be derived from first principles, has a fractional power, but that's a kind of "system response" of a "complicated system" in a way (though it is not an empirical approximation, in that it is the correct solution of an ideal system). As "system solutions" there are many examples of fractional powers and otherwise complicated functions. In fact, you are going down the same road as the mathematicians since the 17th century, having to "open up" their set of "acceptable functions" in nature, from Euclidean constructions (ruler and compass), to algebraic curves, to power series, to fractional power series, to, finally, a general function concept without a specific closed-form prescription, but simply with a certain amount of analytic and topological properties, and even beyond that (in quantum field theory). However, as to the "fundamental laws" behind this, and not the "solutions" for specific systems no matter how fundamental themselves, usually their formulation is indeed "simpler". And it is indeed a mystery why this is so. 11. Jun 4, 2009 parsec Thanks, I was hoping that it wasn't a mystery, but I guess I sort of have an answer now.
I guess I was thinking of "fundamental" as any sort of low-level description of the behavior of individual particles. I wasn't really considering macrostate thermodynamic descriptions involving state variables to be fundamental, despite the fact that they're derived from first principles. Now I realise that my categorisation isn't very precise, as you could possibly consider a description of the motion and dynamics of atoms to be an abstraction we use to describe the interaction of smaller constituent particles. Sorry, I'm not very educated in modern physics so I might be completely off the ball here. I guess it would be more surprising to find out that $S = k\log_{2.546}\Omega$ or that $E = \frac{1}{2}kT^{2.12}$ than $pV^n = \mathrm{const}$. I find the elegance of relationships like this quite remarkable. Last edited: Jun 4, 2009 12. Jun 4, 2009 ZapperZ Staff Emeritus There is nothing to prevent anyone from giving such a number in "fractional" units. For example, let's look at speed. There's nothing to say that you cannot express speed as something like this: "A travels 30 m in 2.2 seconds". So you have the speed as 30 m/2.2 s. Now, is THIS what you had in mind? If it is, fine. But take a look at that fraction. The speed is 30/2.2 m/s. I could rescale this and do that fraction, i.e., take 30/2.2. What do I get? I get 13.63. But what's the units? "m/s!" So in going 30 m in 2.2 seconds, that is equivalent to saying that it goes 13.63 m in one second. The speed has been "renormalized" in units of 1 second. Or if you don't like this, normalize this in units of 1 minute, 1 hour, 1 day, etc... In other words, it is simply carrying out the fraction into more easily read numbers in whole units of the denominator. There's nothing profound about such exercises. If you don't like it, you can continue to express it as 30 m/2.2 s. But you'll soon learn that this is a tedious way to carry such numbers. Zz. 13. Jun 4, 2009 parsec Sorry, I don't think this is what I meant. The discussion is about units being raised to fractional indices, e.g. going 30 m in 2.2 $\mathrm{s}^{1.1}$. 14. Jun 4, 2009 ZapperZ Staff Emeritus Oh, then I misread the question completely! :( Zz. 15. Jun 4, 2009 vanesch Staff Emeritus The interaction of one single electron with another one is already a problem that doesn't have a "simple solution" (not even a mathematically sound solution at all!). Nope, that's just a change of value for $k$: $\log_{2.546}(X) = \log_e(X)/\log_e(2.546)$. Well, in relativistic physics, the relationship is already more complicated! 16. Jun 4, 2009 Count Iblis As John Cardy explains in one of his books on renormalization group theory (I forgot the exact title), the correct answer has to do with scaling laws. According to Cardy, ordinary dimensional analysis is a special case of using scaling arguments based on the renormalization group in statistical physics or quantum field theory. The (hidden) assumption in the detailed posting by vanesch above is the way relations between macroscopic quantities should behave under a rescaling. Now, in principle, this should follow from the laws of physics describing the microscopic degrees of freedom of the system. When you eliminate all references to the microscopic degrees of freedom and write down relations involving only macroscopic observables, you can find certain power laws. In certain cases you don't get power laws with integer exponents.
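As a numeric companion to Tac-Tics's radioactive-decay illustration of e earlier in the thread, a sketch in Python: compounding a 100%-per-year loss over ever finer steps approaches the continuous-decay factor 1/e per year.

    # Surviving fraction after one year when a 100%/year loss is compounded
    # over n equal steps: (1 - 1/n)**n tends to 1/e as n grows.
    import math

    for n in (1, 2, 12, 365, 10**6):
        print(n, (1 - 1/n)**n)   # n=2 is the "50% per 6 months" reading: 0.25

    print(math.exp(-1))          # 0.36787..., what continuous decay leaves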
2017-08-20 17:01:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7998080849647522, "perplexity": 658.0450488253456}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106779.68/warc/CC-MAIN-20170820150632-20170820170632-00086.warc.gz"}
http://mathhelpforum.com/advanced-algebra/125787-abstract-algebra-3-a.html
Thread: abstract algebra #3 1. abstract algebra #3 If p is prime and p|a^n, is it true that p^n|a^n? Justify your answer. Thank you. 2. Yes indeed: $p|a^n \Rightarrow p|a$. You can show this inductively. It follows from the prime property: $p|xy \Rightarrow p|x$ or $p|y$. Take $x= a, y= a^{n-1}$. If $p|a$ you're done; if $p|a^{n-1}$ you go on with $x= a, y= a^{n-2}$. Eventually you'll obtain $p|a$. Conclusion: $p^n|a^n$. 3. Originally Posted by Dinkydoe Yes indeed: $p|a^n \Rightarrow p|a$. You can show this inductively. It follows from the prime property: $p|xy \Rightarrow p|x$ or $p|y$. Take $x= a, y= a^{n-1}$. If $p|a$ you're done; if $p|a^{n-1}$ you go on with $x= a, y= a^{n-2}$. Eventually you'll obtain $p|a$. Conclusion: $p^n|a^n$. Thank you. You showed $p|a$, but what about $p^n$? 4. Originally Posted by Deepu Thank you. You showed $p|a$, but what about $p^n$? $p\mid a\implies a=kp$ so $a^n=k^np^n$... so
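A small brute-force sanity check of the claim, as a sketch in Python (it only tests small primes, bases and exponents; it proves nothing):

    # Check p | a^n  =>  p^n | a^n  for small primes p, bases a, exponents n.
    from itertools import product

    for p, a, n in product([2, 3, 5, 7, 11], range(1, 200), range(1, 6)):
        if a**n % p == 0:
            assert a**n % p**n == 0, (p, a, n)
    print("no counterexamples found")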
2017-12-12 06:54:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971364438533783, "perplexity": 1918.0110375032893}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00052.warc.gz"}
https://math.stackexchange.com/questions/2971119/what-is-the-maximum-value-of-n-with-average-value-must-be-an-integer
# What is the maximum value of $n$ such that the average value is always an integer? Let $M$ be a positive integer greater than $1$. All integers from $1$ to $M$ were written on a board. Each time, we erase a positive integer from the board in such a way that the average value of all numbers that have been erased is always an integer. Assume that there are $n$ numbers that have been erased ($1 \leq n \leq M$; $n$ is not a constant number). The process ends with $n$ numbers if and only if it is impossible to erase an $(n+1)$th number so that the average value of the $n+1$ erased numbers is an integer. Over all possible ways to erase the numbers, what are the maximum and the minimum values that $n$ can reach? For example, with $M=3$, the maximum of $n$ is $3$ (choose $a_1=1$, $a_2=3$, $a_3=2$), and the minimum value of $n$ is $1$ (choose $a_1=2$; then it is impossible to choose $a_2=1$ or $a_2=3$ because $\frac{2+1}{2}, \frac{2+3}{2}$ are not integers). For larger $M$, I thought that I could solve this with the Chinese Remainder Theorem, but I didn't know how to use it. Is it possible to find the minimum or maximum value of $n$? If not, what are the conditions on $M$ so that the minimum or maximum value of $n$ can be found? (Sorry, English is my second language, so the questions may be unclear for some readers.) EDIT: I've edited the post because at first I took the average as $\frac{1}{2}\sum a_i$ instead of $\frac{1}{i}\sum a_i$. Below is the corrected answer: Modeling the problem for the maximum or minimum we get: \begin{align*} \text{max/min }&\sum_{i=1}^{M}b_i\\ \text{such that }& b_1\cdot 1+b_2\cdot 2+\cdots+b_M\cdot M=Mk\\ &b_i\in\{0,1\},k\in\mathbb{N} \end{align*} Suppose $M=1$. We can erase the only value, $1$, and its average is $\frac{1}{1}=1$, an integer. Therefore max=min=$1$. Suppose $M=2$. We can only erase $2$, because $\frac{3}{2}\notin \mathbb{Z}$. Therefore max=min=$1$. Now let's suppose $M\geq 3$. Let's see for which $M$'s we can erase all numbers, that is $n=M$: $1+2+\cdots+M = \frac{M(M+1)}{2} = Mk \rightarrow M=2k-1$ Hence for $M\in\{3,5,7,9,\cdots\}$ we can erase all numbers and get an integer average. We've covered the cases where $M$ is odd and $M\geq 3$. Let's see what happens in the cases where $M$ is even, that is, $M=2q$. $1+2+\cdots+2q = \frac{2q(2q+1)}{2} = q(2q+1) = 2q^2+q$ We want that result to be equal to $Mk=2qk$ for some $k \in \mathbb{N}$. Therefore if we let $b_q=0$ we get the sum as: $1+2+\cdots + 2q - q = 2q^2+q - q = 2q^2$ And that is equal to $2qk$ for $k=q$. Therefore for $M=2q$ we get $n=M-1$. That means that when $M=2q$ we only need $b_q=0$ to guarantee that the average of the sum of the erased numbers is an integer. Finally, see that for $M\geq 1$ you can always use the same reasoning as for $M=1$ for the minimum... You remove $1$ and the average of the removed numbers is going to be $\frac{1}{1}=1$, which is an integer. Since we've covered all cases for $M$, we're done. Examples: For $M=12345$: $1 + 2 + \cdots + 12345 = 76205685 \text{ and } \frac{76205685}{12345}=6173$ For $M=124=2\cdot 62$: $1 + 2 + \cdots + 124 - 62 = 7750 - 62 = 7688 \text{ and } \frac{7688}{124}=62$ • Thanks for your answer. However, my question is that the average value, meaning the total sum of the numbers divided by the amount of numbers ($\frac{a_1+a_2+...+a_i}{i}$), must always be an integer, not half of the sum. – apple Oct 26 '18 at 15:22 • @apple Oh mate, sorry!!!
I'll try to correct it. – Bruno Reis Oct 26 '18 at 15:28 • @apple I've got the result. I just don't have time to post it now. I'll try to post it within 3 hours! – Bruno Reis Oct 26 '18 at 15:42 • @apple It's done! See if you can get it. – Bruno Reis Oct 26 '18 at 18:21 • @Bruno: I don't think you understood the problem correctly. For odd $M$ you only prove that you can get the last sum correct, and that is (comparatively) easy. The problem is to select one number $a_i$ after the other, and after each step have an average that is an integer. So for $M=5$, the first chosen number must be divisible by $1$ (easy), the sum of the first and second must be divisible by $2$, the sum of the first $3$ must be divisible by $3$, and so on, as far as you can get. Reread the description again and I think you will see what you misunderstood. For $M=5$, the max $n$ is e.g. 3. – Ingix Oct 26 '18 at 21:33
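Under the step-by-step reading Ingix describes (every prefix sum divisible by its length), a small exhaustive search settles the extremes for small $M$. A brute-force sketch in Python:

    # Enumerate all maximal erasure orders of 1..M in which every prefix sum
    # is divisible by its length, and report the min/max number erased.
    def extremes(M):
        results = []

        def dfs(remaining, total, k):
            candidates = [x for x in remaining if (total + x) % (k + 1) == 0]
            if not candidates:            # stuck after erasing k numbers
                results.append(k)
                return
            for x in candidates:
                dfs(remaining - {x}, total + x, k + 1)

        dfs(frozenset(range(1, M + 1)), 0, 0)
        return min(results), max(results)

    for M in range(2, 8):
        print(M, extremes(M))             # M=3 gives (1, 3), matching the example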
2019-05-22 19:20:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 67, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527206182479858, "perplexity": 139.6554964782593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00513.warc.gz"}
https://ateneophysicsnews.wordpress.com/tag/erasmus-mundus/
## Erasmus+ ARTIST project for science teaching innovation at AdMU: An interview with Mr. Ivan Culaba of the Physics Department The ARTIST partners during the kick-off meeting at the University of Bremen, Bremen, Germany in January 2017. In the photo are Dr. Joel Maquiling (back row, 3rd from the left) and Mr. Ivan Culaba (back row, 2nd from the right) of the Department of Physics, School of Science and Engineering, Ateneo de Manila University. Source: Action Research To Innovate Science Teaching (ARTIST) Ateneo de Manila University and De La Salle University-Manila were chosen by the European Union's Erasmus+ Program as its two partner universities in the Philippines for the ARTIST (Action Research To Innovate Science Teaching) project. The other eight partner universities are the University of Bremen (Germany), Ilia State University (Georgia), Alpe-Adria-University (Austria), the University of Limerick (Ireland), Gazi University (Turkey), Batumi Shota Rustaveli State University (Georgia), The Academic Arab College of Education (Israel), and Oranim Academic College of Education (Israel). The project coordinators are Prof. Dr. Ingo Eilks of the University of Bremen and Prof. Dr. Marika Kapanadze of Ilia State University. The ARTIST project aims to innovate science education through classroom-based and teacher-driven Action Research (a cycle of innovation, research, reflection and improvement) by forming networks of higher education institutions, schools and industry partners in each partner country. The ARTIST project allows the partner universities to acquire state-of-the-art audio-visual and science equipment for teacher training and instruction. Training materials on action research will be developed and used in workshops and courses. Below is an interview with Mr. Ivan Culaba, manager of the ARTIST project in Ateneo de Manila University. 1. What is your role in the project? Are there other AdMU faculty involved here? I am the manager of the ARTIST project in Ateneo. In the Department of Physics, Dr. Joel T. Maquiling and Ms. Johanna Mae M. Indias are also involved in the project. Joel has accompanied me in the meetings and helped in the presentations. Joel and Johanna helped in the identification of possible industry partners. Johanna also visited the high schools for evaluation as possible network partners. Ms. Via Lereinne B. Chuavon of the Office of Social Concern and Involvement assisted us in the networking with high schools and communications with the Schools Division Office of Marikina City. I also had very constructive discussions with Mr. Christopher Peabody of the Department of Chemistry. Mr. Tirso U. Raza of the Office of Facilities and Sustainability has assisted us in finding the source of the audio-visual equipment and in the preparation of the rooms for their installation. Our technicians, Mr. Numeriano Melaya, Mr. Colombo Enaje, Jr. and Mr. Ruel Agas, have been working on making the ADMU ARTIST Network Center and Physics Education Resource Center (F-230, Faura Hall) functional. Action Research for the Reflective Practitioner workshop at Ateneo de Manila University, 7 April 2017 3. How did you get involved in the project? This project was conceived by Prof. Eilks and Prof. Kapanadze after their successful implementation of the TEMPUS project SALiS. I met Prof. Kapanadze during the Active Learning in Optics and Photonics workshop at Ilia State University, Tbilisi, Georgia in 2014, where she was the organizer.
She invited me into the ARTIST project and I extended the same invitation to Dr. Lydia Roleda of De La Salle University-Manila. I became interested in the ARTIST project since we had just started with the NSTP activity wherein our Physics majors were assigned to Sta. Elena High School for the area engagements. While our students were facilitating the high school students' physics activities, we were also engaged in the Physics training of the science teachers in the same school. We thought that the high schools would immensely benefit from the ARTIST project, in line with the university's thrust for greater social involvement and service learning. The ARTIST project was approved by the EU commission in October 2016, but the first tranche of the budget was released in January 2017. The kick-off meeting was held at the University of Bremen, Bremen, Germany on 18-20 January 2017. It was the first time that we met our collaborators in the project. The objectives of the project, deliverables, work plans, and financial management, among other topics, were discussed. The second meeting was held at the Alpen-Adria-Universität Klagenfurt, Vienna, Austria on 14-15 September 2017. Progress reports on the networking with schools and industries, the financial status of each partner university, the scheduling of the workshops, the planning of the e-journal ARISE and other matters were discussed in the meeting. The EU officials were not present in the meeting. Physics Education Resource Center (PERC) and ARTIST Network office Room F-230, Faura Hall 5. How is the Physics Education Resource Center in Faura Hall? I am very happy that we now have a Physics Education Resource Center (PERC) where the Physics Education group can meet and hold meetings, and where the valuable lecture demonstration experiment set-ups can be displayed and made accessible to the faculty of the Department. A number of the demos have been transferred from F-229 and the SEC C labs to PERC. Acquisition and development of lecture demonstration experiments will be a continuing process. The next step is the documentation of the resources so that the faculty may know what demos are available and how to use them. The room will also serve as the office of the ARTIST project. The science equipment which will be purchased under this project will be placed in this room. We have ordered Physics equipment aligned with the Physics topics in Grades 7-10, although it may also be used for senior high school Physics. The list covers mechanics, heat and thermodynamics, waves and sound, optics and electromagnetism. There will also be locally fabricated materials like ticker tape timers, circuit boards and Plexiglass lenses. 6. What are the upcoming activities of the ARTIST project for this year? We have held two seminar-workshops on Action Research. The first one was held in August last year in Ateneo de Manila. Prof. Maricar S. Prudente, who is an expert in Action Research, was the main speaker. The facilitators were Dr. Lydia S. Roleda, Dr. Minie Rose C. Lapinid and Dr. Socorro C. Aguja. They are all from the Science Department, Bro. Andrew Gonzales, FSC College of Education, De La Salle University. There were about ten participants from Roosevelt College, Inc. and some graduate students. The second seminar-workshop was held recently, on 7 April 2018, at Faura Hall, Ateneo de Manila. It was organized by the ARTIST teams of Ateneo and De La Salle. The same team of speaker and facilitators from De La Salle University ran the seminar-workshop.
A total of 31 participants from the ARTIST network of high schools (Parang High School, Sta. Elena High School, Marikina High School, and Colegio de San Agustin) and graduate students in MS Science Education attended the workshop. Another workshop on Action Research will be held on 15-18 May 2018 at De La Salle University-Manila. The ARTIST partners from Germany, Ireland, Austria, Georgia and Israel will facilitate the workshop. The first three days will be spent on understanding AR and writing AR proposals by selected teacher-participants. There will be an AR symposium, open to other teachers, on the fourth day, where AR case studies will be presented. Come October 2018, the workshop on Action Research and a meeting of the collaborators will be held in Haifa, Israel. 7. Any parting thoughts? We hope that this project will have a positive impact on the way science is taught in the partner high schools, and that the lessons learned from these experiences may be adapted by other schools in the country. Participants of the Action Research for the Reflective Practitioner workshop at Ateneo de Manila University, 4 August 2017 ## Ateneo Physics faculty Artoni Ang went on a two-week internship at NAIST by Quirino Sugon Jr. Artoni Ang setting up the UHV SEM for Auger Electron Spectroscopy Artoni Ang, an Assistant Instructor and a graduate student of the Department of Physics of Ateneo de Manila University, went to the Nara Institute of Science and Technology (NAIST) in October 2012 for a two-week internship. NAIST is a graduate school for Material Science, Information Science and Biological Sciences in Nara, Japan. Since 2006, it has been holding the NAIST Project for Interns (NAPI), where qualified students from the Ateneo de Manila University are invited to the laboratory of their choice for a 2-week internship. For his internship, Artoni went to the Surface and Materials Laboratory under Professor Hiroshi Daimon. This laboratory focuses on the study of nanomaterials, surfaces, and interfaces using the 10 m long Ultra High Vacuum (UHV) total analysis system developed by the laboratory. Below is an interview of Artoni by the Ateneo Physics News: 1. How long have you been teaching in Ateneo? Less than a year. This is my second semester. I am teaching Ps 1 and 2 (Natural Science course) and various lab classes for Health Science and Biology majors. I am teaching 13 units this semester. 2. Where do you do your research in Ateneo? I work in Mr. Ivan Culaba's Vacuum Coating Laboratory at the first floor of Faura Hall. Right now I am working on thin films on elastomeric substrates. I am trying to make a stretchable diffraction grating. Specifically, I wish to reduce the cracking of the metal film as the grating is stretched. Metal films on stretchable substrates have many applications. Diffraction gratings are just one of them. Diffraction gratings are surfaces with very fine line grooves, like furrows in a field, except that the distance between furrows is on the order of the wavelength of light, which is a few hundred nanometers or a fraction of the width of a hair strand. Reducing cracking of the grating would increase the lifetime of such material. I am working on the optical properties of materials by using the grating as a beam scanner. If we have a beam incident on the grating, we can change the angle of the reflected beam by stretching the grating. Stretching would change the grating pitch, or the distance between the line grooves. 3.
How is your work in the lab related to your work in the NAIST laboratory? It is not exactly related, but similar. Here we work with thin films with thicknesses in the nanometer and micrometer range, around the wavelength of light. In NAIST we work with even thinner films, at the Angstrom level, about 10 layers of atoms thick. Here we have high vacuum systems with pressures of $10^{-5}$ torr. In NAIST they have ultra high vacuum systems at $10^{-10}$ torr. Most of the procedures in running the equipment are the same, except when the pressures reach $10^{-10}$: they have to bake the chambers. They wrap the chambers with heating blankets and bake the chambers for a month to get to $10^{-10}$ torr. In our case, to reach $10^{-5}$ torr, we only need 2 hours to pump down. We use a rotary pump and an oil diffusion pump. In NAIST they use turbo molecular pumps and titanium sublimation pumps. After they bake their chambers, they leave them at that pressure range. Then they leave all their pumps turned on 24 hours a day. In our case, we shut the system down once we are done with a specific experiment. We don't need to keep it turned on overnight, because we can regain the same pressure the next day after 2 hours. The panel that controls the substrate holders in their UHV system 3. How many interns were from Ateneo? There were 10 of us: 1 from Biology, 4 from Materials Science, and 5 from Information Science. I am part of the Material Science group. We were all assigned to different labs. We only see each other during scheduled trips or if we run into each other during the day. I am on my own from 9:00 a.m. to 5:00 p.m. 4. What was your day like in the NAIST laboratory? On my first day there, they held a welcoming tea party for me, so all of the grad students and most of the professors were there. I got to meet everyone. Since there were around 20 of them, I can't remember all their names. They opened the dried mangoes I brought. They all liked them. It wasn't a formal Japanese tea ceremony. I was there for 2 weeks, but lab work was only about 8 days. The usual day starts with me going to the laboratory at around 9:00 a.m., though I usually try to arrive a bit later. I don't like to be the first one in the laboratory alone, so I stay outside to wait for a graduate student to arrive. They actually told me where they hide the key, but I am not comfortable going inside without them. My day actually starts around 10:00 a.m. I waste an hour waiting outside. Their lab is divided into two main parts: the experimental section and the offices. On my first day they assigned me to an empty desk, and that is where I stayed. On my first day, too, I met with one of the professors: Sakura Takeda-sensei. She created a schedule for me so that I would be working with different students on their own research projects. When working with them, they perform their experiments and explain the details to me, and in some cases I get hands-on. In one particular case, we were working for two days on a scanning tunneling microscope, but it was repair and maintenance duties. We had to remove some of the main components. It was a long job. I think they finished all the maintenance work a few days before I left, and they started baking it. I guess they had to wait a month before they could even start using it. I was assigned to do analysis of the data we collected in the experiments. I did image processing on diffraction patterns from RHEED (Reflection High Energy Electron Diffraction) experiments.
I analyzed the data collected using ARPES (Angle Resolved Photoelectron Spectroscopy). From that data we were able to obtain the electron band structure of the lead monolayer on germanium. I was supposed to get the mass of the heavy hole from that data, but I did not get to finish the calculations. They had their own software which came with the equipment. And there was another program that I think one of the graduate students wrote in Java. It just converts the data collected from ARPES to the electron band diagram we are all familiar with. I worked with another student doing RHEED experiments on indium monolayers on silicon substrates. I also used the scanning electron microscopes on an iron polycrystalline sample. I was supposed to help with the experiment involving bismuth on silicon, but one of the major gauges broke down, so we had to stop. I attended study sessions, a laboratory meeting, and a laboratory colloquium. In the study session, we spent around an hour discussing the theoretical principles behind ARPES. In the colloquium, we spent the entire morning listening to two graduate students presenting papers relevant to their work. It would have been more interesting if they had been reporting in English, but they were speaking in Japanese. I sat there the entire morning looking at their slides. In the afternoon was the laboratory meeting, where every graduate student presented a slide or two about their progress since the last lab meeting. Some of the students were presenting slides whose only progress is that they attended courses or studied for their exams. Nevertheless, they still had to present those because it is a part of their process. There are also students who made a lot of progress. They presented a lot of the data they were collecting. They also made me present a brief overview of the research that I do in the Philippines. I had to leave after 4 hours. I think their meeting lasted 6 hours, the whole afternoon. Between the colloquium and the laboratory meeting is the lunch break. And there are 30 minutes of general laboratory cleanup. Everybody cleans by sweeping or mopping the floors. During the first week my sensei gave me a lot of books. After 5:00 p.m., I usually go straight to the dorm and read the books, not the entire book but only the selected chapters. I think she was surprised that I could read them overnight, because she is used to her students having difficulty reading books in English. So from their point of view, I read really fast. A group photo with the professors and students of the Surface and Material Science Laboratory 5. What did you like best during your stay in Japan? Their transportation system is very organized. If the train is scheduled to arrive at 8:02 a.m., it will actually arrive at 8:02 a.m. So if we go out for dinner or a cultural trip, our entire travel itinerary was already arranged, because they know the schedules of the trains and buses. It was easy getting around even without a car. And this was in Nara, which is not one of the big urbanized areas. But despite that, the transportation system is very good. In fact, when you go out of the gate of NAIST, the first thing that you see is a rice field, and it smells like a rice field. But then there is a bus station in front of the gate. So even if it is in the rural part of Nara, we can still get around. We can also actually walk to the closest train station, but it takes 40 minutes. It seems very safe there.
There were times we walked to the train station in the middle of the night, beside the big mall, at around 10 or 11 p.m. We were not worried about being held up. The sense of security is also visible on the campus itself. They don't have a closed gate. It is just an open road that goes toward the campus. I don't see any security guards walking around. Of course, the food was great. The organizers brought us to Japanese restaurants. We got to try sushi, yakiniku, okonomiyaki, ramen, and some other Japanese foods. Before I went there, I promised myself that I would never say no. I would eat whatever was served to me. Half of what I ate there, I don't know what it was. And then we had weekend trips to Kyoto, Osaka, and Nara. We got to visit some of the old temples and an aquarium in Osaka. During our last day there, they took us to the shopping district in Osaka, where they sold everything from electronics to anime things to clothes. 6. Any parting thoughts? Overall it was a good experience. You get to see how research is done in universities in other countries. The research culture is very different. Most of the students are full-time researchers. They don't attend courses. They only worry about their research projects. They spend the entire day in the lab, because they have a desk there. They are really focused on what they are doing in the lab. Unlike in my experience as a student, where my attention is divided between the courses I am taking and the research I am doing. Of course, it would be easier if you were only focused on research work. It was also eye-opening to me to see how disciplined the Japanese people are. After eating in the cafeteria, they clean up. We don't see people littering. They all follow traffic rules, unlike here in the Philippines where traffic is very chaotic. After I finish my Master's degree, I plan to apply for a Ph.D. degree outside the Philippines. I am now looking at the Erasmus Mundus program for Materials Science. I have already informed my professors in NAIST that I will be applying there, too. Hopefully, I get accepted to one of them. If not, I shall also apply to universities in the United States. ## Johanna Mae Indias of Ateneo Physics Department to study at Trento University through Erasmus Mundus by Quirino Sugon Jr. Johanna Mae Indias Johanna Mae M. Indias, Asst. Instructor in the Department of Physics of Ateneo de Manila University, will be taking her M.S. in Physics at the University of Trento, Italy, through a 22-month grant from Erasmus Mundus Action 2 (EMA2). "Sunshine", as she is fondly called, is interested in enrolling in the University of Trento's Biological and Medical Physics program. This program consists of 30 units of mandatory courses in Quantum Mechanics, Statistical Mechanics, and Nuclear Physics. The students then take 36 units of area courses such as Biological Physics, Optical Spectroscopy, and Photonics. There are 12 units of free electives and 42 units of thesis courses under Prof. Antolini and Prof. Scarpa. Some possible thesis topics are as follows: • Protein science and technology • Physical basis of heart and brain functions • Cell tissue and imaging by nonlinear optical microscopy • Medical and health physics and technology • Advanced technological approaches to biophysical investigation (nanodevices, biosensors, etc.) At present, Shine is part of the Photonics Laboratory. Under the supervision of Dr.
Raphael Guerrero, Shine worked in the dark room, blasting laser beams on hapless fluid-filled elastomeric lenses, placing obstacles on laser beams to see if they can reconstruct themselves unscathed (Bessel beams), or capturing 3D images of light in a crystal (holography). Soon she will leave all these and go to the place where the grass is greener and the sun shines warmer. At Trento, she'll find new laser toys to enlighten the human brain and see how it sees. Farewell, Shine! May the road rise up to meet you. May the wind be always at your back. May the sun shine warm upon your face; the rains fall soft upon your fields and until we meet again, may God hold you in the palm of His hand. ## Dr. Erees Queen Macabebe of ECCE Department to do solar cell research in Italy funded by Erasmus Mundus by Quirino Sugon Jr. Dr. Erees Queen Macabebe of ECCE Department Time flies fast. Six years ago in 2005, Reese and I played table tennis with other faculty members in the Ateneo Dorm playing area near the Cervini Cafeteria. On that night, she said goodbye to each one of us. She was leaving for South Africa then for her Ph.D. in Physics at the Nelson Mandela Metropolitan University. Fast forward to 2009. I met Reese again at Manang's Club House, at a table facing the tennis court. She had finished her Ph.D. in Physics. Her field is photovoltaics, or solar cells. She uses an algorithm based on the swarm intelligence of bees and fishes to characterize the performance of solar cells. Her work earned her the distinction of being one of the finalists for the SolarWorld Junior Einstein Award. Upon her return to Ateneo de Manila University, she joined the ECCE department and led the photovoltaic research group. Reese finished at the Philippine Science High School (1998) in Western Visayas. She was awarded a DOST scholarship to study at Ateneo de Manila University. She worked in the Vacuum Coating Laboratory under the mentorship of Mr. Ivan Culaba. Her thesis is on anti-reflection coatings using thin films. She finished BS Physics in 2002 and BS Computer Engineering in 2003. Reese then taught at the physics department and finished her Master in Physics Education in 2005. Upon the suggestion of Dr. Jerrold Garcia, who was the Physics Department chair then, Reese looked for a Ph.D. program abroad and chose South Africa. Now, Reese is saying goodbye again to us, her friends. She was accepted for an Erasmus Mundus grant, an excerpt of which reads: In the framework of the European Programme ERASMUS MUNDUS Action 2 EMMA West, coordinated by Université de Nice Sophia-Antipolis (France) with the partnership of Università degli Studi di Padova (Italy) and funded by the European Commission, Dr Macabebe, Erees Queen will spend a period of study/research of 6 months at Università degli Studi di Padova - Department of Technical Physics from October 15th 2011. Her supervisor will be Prof Davide Del Col. In her correspondence with Prof Del Col, Reese was informed that she will be working on the characterization of dye-sensitized solar cells in collaboration with Prof Vito di Noto of the Department of Chemistry. She will leave for Italy on the 17th of October. Reese has gone to many parts of Europe, Africa, and Asia, circling the world with the sun, and for a time plants herself like a sunflower in foreign soil, unfolds her photovoltaic petals, and directs them to the sun till it passes over the horizon. Farewell, Reese! Best wishes! And before you leave, let's play table tennis.
http://tex.stackexchange.com/questions/94906/subfigure-numeration-inside-the-section
# Subfigure numeration inside the section [duplicate]

Possible Duplicate: Numbers of figure references

I'm having a problem with subfigure numbering. The figures in my work are numbered within each section using the line \numberwithin{figure}{section}, so they appear as figure 2.1, 2.2, 2.3... I tried to use the subfigure command and make a reference to such a figure, so the number should be 2.1(a), but it ignores the section number, showing just "figure 1(a)".

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{subfigure}
\numberwithin{figure}{section}

\begin{document}
\begin{figure}
\subfigure[subfigure a]{\includegraphics{image1} \label{image1}}
\subfigure[subfigure b]{\includegraphics{image2} \label{image2}}
\end{figure}

According to figure \ref{image1} we ...
\end{document}
```

Is there a way I can keep this kind of numbering and still use the subfigure command? If I remove the line \numberwithin{figure}{section}, the numbering is fine.
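The thread's answers did not survive extraction, but a common fix is to switch from the long-deprecated subfigure package to subcaption, which reads the figure counter as redefined by \numberwithin. Below is a minimal sketch (not from the original thread; image1 and image2 are the asker's placeholder graphics files):

```latex
% Possible fix (not from the original thread): the subcaption package
% respects \numberwithin, so \ref{image1} prints a section-prefixed
% number such as "2.1a" once the figure sits inside a section.
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{subcaption}   % replaces the deprecated subfigure package
\numberwithin{figure}{section}

\begin{document}
\section{Example section}

\begin{figure}
  \begin{subfigure}{0.45\textwidth}
    \includegraphics[width=\linewidth]{image1}
    \caption{subfigure a}\label{image1}
  \end{subfigure}
  \hfill
  \begin{subfigure}{0.45\textwidth}
    \includegraphics[width=\linewidth]{image2}
    \caption{subfigure b}\label{image2}
  \end{subfigure}
  \caption{A figure numbered within the section}
\end{figure}

According to figure \ref{image1} we ...
\end{document}
```

Note that the outer \caption is what steps the figure counter, so it should not be omitted as it was in the original snippet.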
https://www.scienceforums.net/topic/67333-could-there-be-a-god/page/4/
# Could there be a God?

## Recommended Posts

The concept "God" is a very diffuse concept; you need to clearly define it before one can debate about it. But human anthropology makes it quite clear, I guess, that "Gods" are human inventions (and not the other way around) and just portray some idealist/fictitious ideas about ourselves. Moreover, there is no need for "inventing" a God, because the philosophy of materialism (matter as the substance - being the ground for all observable phenomena - and consciousness only being a secondary feature of the world and a later development and emergent property of matter in living organisms) has no need of there being any God.

EDIT: An argument against a God/creator would be that matter/energy itself does not get created or destroyed, and hence no "creation" event whatsoever needed to take place - just transformation from one kind of material form into other forms (and there might be different kinds of stuff than we currently know of; perhaps that kind of stuff caused the big bang, and maybe dark energy and/or dark matter are remnants of that stuff). Secondly, how could a spiritual being (i.e. non-material) "outside of space/time" have anything to do with the material, actual world? These are all just gross absurdities. The logic of why there would have to be a God is simply bad logic: like saying that the material interactions that led to the formation of the earth, life, and human beings made it possible for us to exist, and THEREFORE those material interactions must have been guided by some pre-set goal, intent, and purpose - which is the same basic flawed logic as thinking that if you win a lottery, there was some collaboration, intent, and purpose in all the material forces that caused your ticket to win.

IMO it's a pointless question. Unless someone can prove that there could not be a god, then the default answer is, "sure, a god is possible". We could of course follow this with an infinite number of other pointless questions, i.e., could there be unicorns? Could there be leprechauns? Could there be wormholes? Could there be Raelians? And so on forever. Sure, anything is possible. Why bother?

There are a lot of possibilities, but on the other hand also a lot of impossibilities. I'm sure the existence of any "Gods" is one of those impossibilities. For example, how could it be that "god" supposedly "created the world" when the mere supposed existence of a God already means that a world (which at least contains that God-entity) exists? So far, nobody has been able to explain to me how such a thing would be possible at all. And there are a lot of other problems with making the concept of God into some real, existing phenomenon that could be placed in the world; thus far nobody has succeeded in doing that, hence there is no reason to contemplate such utterly imaginary beings. Edited by robheus

#### Popular Posts

I'm frustrated that you're being so unnecessarily evasive and petulant, but I'm hardly angry. I knew up front what you likely meant when referencing Einstein. You're not the first, nor will you

The lack of evidence is not evidence against. Something you will learn in science, friend.

Sure, there "could" be a god. There "could" also be microscopic garden gnomes living in your armpits and singing songs accompanied by tiny fiddles.

The concept "God" is a very diffuse concept; you need to clearly define it before one can debate about it.
How do you clearly define God? In my approach, I took the same approach any religion might... but I defied those religions by saying their God cannot know anything beyond quantum physics. The uncertainty principle is not a back corner I retreated into; it is a matter of physical fact.

Aren't you assuming the absolute truth of quantum theory by asserting that scenario?

Can I clarify what I have just said to you? I appreciate your contributions... "God is just an amazing baggage of thoughts and information. He is not a persona." I liked your post, because you asked the right kind of question.

Although I am still at the elementary levels of science, I believe that "singularities" were created to keep the harm that human beings create "for themselves" out of the barriers of the universe... I truly believe that "something" knew that mankind would one day discover things about the atom, and humans have already created much damage in this world, as we see. Radiation is everywhere, causing people to be sick and killing wildlife and animals - "INNOCENT VICTIMS." From pollution to even an economic system ruled by numbers, greed, lies, etc. All these things were created by scientists, mathematicians, philosophers, and even "religion." Animals are tortured in scientific experiments, and there is starvation and deprivation imposed on "selected" races... From "NUCLEAR WARS" to the very creation of the internet, which now serves as a system ruled by lies - anyone can be anyone now! The point here is that "something" keeps humanity at a distance from true knowledge and from escaping their creation, in this case the harnessing of the atom... I know some may get offended when I say this, but from what I gather, science still has no clue about the scale of this earth, what the size of the universe is, nor any clue where exactly time references "everything." I was told to admit when I was wrong about anything that came out of my mouth; with the current events in the world, I think it is time for the "world" to admit that we need God now more than ever before, and perhaps science needs an entirely upgraded system in which more theories can be accepted and/or proven.

When someone says the word "God", most people think of divine, omnipotent, omnipresent, all-knowing entities. There are some problems with an all-knowing entity, such as the Uncertainty Principle. If a God truly exists, he must abide by the rules of quantum mechanics. If he didn't, it would surely cause a tremendous discharge of energy from each and every particle in the universe, due to $\Delta E \Delta t \geq \hbar/2$. There is one reason why (a) God cannot be outside the rules of quantum physics, assuming that relativity has any universal truth or precedence. Since nothing exists outside of the universe, we must assume God is contained within his own creation - indeed, assuming he even created the universe. A possibility for such an entity would be that it was created inside the bubble of the universe, entwined, if you like, in a "creation" which he (or indeed she) had no control over.
Many people have traditional views of God today, mostly evolved from scriptures and ancient proverbs - but these have been adapted by men on Earth who have created these views to suit their doctrines, ways of thought, systematic beliefs, and foibles. What does seem certain is that if a God does exist and is so superior, beyond the intellect of man, it is doubtful he or she would even find us interesting. Indeed, the God of Einstein was Spinoza's God, a God who does not care for the doings of mankind. This is likely the kind of God we can deal with in physics, or in any kind of understanding of any physical science. God is not outside of science, so long as you realize that God must be ignorant of many physical qualities we often think he is superior for.

So what is "God" if not something we associate with scripture? God, in my eyes, should be "something" which has a quantum nature about it. Usually in quantum mechanics, to encode the information about a particular system, we consider a "state function", often denoted with a $\Psi$ (a capital Psi). This is the wave function which describes, if you like, all the information of a system, which could be anything from a particle to the entire universe. The problem, however, is that, just as with a particle, you can only ever know $\frac{1}{2}$ of any conjugate pair of attributes of a system. You may know, for instance, with almost exact parameters, the position of a particle, but doing so would result in an enormous uncertainty inherent in its momentum/trajectory. The wave function, or rather the state function, therefore cannot ever really be known completely unless we were talking about systems which are "macroscopic", because such systems are devoid of quantum effects (not entirely, but enough for them to be ignored). The position of Schrodinger's cat is not smeared over space, for instance.

So, in its full form, is the universe a victim of quantum effects? It is, after all, something large and can be modeled as a macroscopic system. Well, most of the universe is made up of about 99% space. The rest of it exists as tangible, "existing out there" matter, the kind that our most functional telescopes can hone in on and take pictures of. The rest of space is made up of ghostly matter which appears to be smeared over all spacetime. Some of it is in the form of radiation; other parts are smeared over spacetime as particles or other types of matter resonating from other, distant galaxies. And even some of this matter might actually turn up in different parts of space, which a recent experiment has shown (citations can be given if asked for). I have even speculated to myself whether anomalous gravitational effects show up in the universe because matter turns up in places it shouldn't, according to this experiment, thus adding a reason why we pick up gravitational distortions where they should not be present.

God could even be some kind of "supercomputer" located in the future, sending signals back in the form of (what I will call) Cramer waves. Cramer's delayed-choice experiment has shown that actions in the future can in fact alter present conditions we see today. In relativity, we have no such thing as a "true past" or even a "true future". So maybe God is really some kind of machine on our future horizon which creates the world we see around us today, which would ultimately mean that the things we do and observe in the present are really shaping the world in the past, when the universe was young and ripe.
Edited by The Architekt

Although I am still at the elementary levels of science, I believe that "singularities" were created to keep the harm that human beings create "for themselves" out of the barriers of the universe... I truly believe that "something" knew that mankind would one day discover things about the atom, and humans have already created much damage in this world, as we see. Radiation is everywhere, causing people to be sick and killing wildlife and animals - "INNOCENT VICTIMS." From pollution to even an economic system ruled by numbers, greed, lies, etc. All these things were created by scientists, mathematicians, philosophers, and even "religion." Animals are tortured in scientific experiments, and there is starvation and deprivation imposed on "selected" races... From "NUCLEAR WARS" to the very creation of the internet, which now serves as a system ruled by lies - anyone can be anyone now! Science has led humanity into an "occultism order", now a victim of its own creation, like chaos theory. I believe God previously knew this, so yes, I believe there is a god that created singularities to keep the bad things out. The point here is that "something" keeps humanity at a distance from true knowledge and from escaping their creation, in this case the harnessing of the atom... I know some may get offended when I say this, but from what I gather, science still has no clue about the scale of this earth, what the size of the universe is, nor any clue where exactly time references "everything." I was always told to admit when I was wrong about anything that came out of my mouth; with the current events in the world, I think it is time for the "world" to admit that we need God now more than ever before, and perhaps science needs an entire upgrade.

I think you are conflating the perceived problems of the Earth with possible problems of the universe, due to your Earth-centric thinking... The Earth is not important in the grand scheme of things to anyone but us, and nuclear weapons and pollution are meaningless on a universal scale... And if anything, the concept of god or gods makes things much, much worse for us here on Earth... Much of the contention and strife on Earth is associated with god concepts and the dehumanizing influence god has on society.

Indeed, if any specific measurement were out of a specific type of order, then the universe as we know it today would have been drastically different. This is what I mean by a "Super-Order" - an underlying deterministic universe with a specific path which has led to this wonderful construction, one that allows even humans today to speak about the things they have. If it had not, we would not be here today.

But you haven't accounted for observational effects. The anthropic principle (the illusion of perfect design) can also be explained by observation selection. If the universe's physics (or evolutionary laws) had been different, then maybe other types of beings would be observing that universe and commenting on how perfect it is. Or maybe there would be no conscious beings to observe it at all. "Not being here today" could be the work of graceful design (which is pure speculation), but this hypothesis doesn't rule out the above observational biases. This is equivalent to throwing partially cooked pasta at the wall to see what sticks and calling the whole pot perfectly cooked by ignoring everything on the floor. (Wow, what a terrible metaphor!)
If a God truly exists, he must abide by the rules of quantum mechanics. If he didn't, it would surely cause a tremendous discharge of energy from each and every particle in the universe, due to $\Delta E \Delta t \geq \hbar/2$. There is one reason why (a) God cannot be outside the rules of quantum physics, assuming that relativity has any universal truth or precedence. Since nothing exists outside of the universe, we must assume God is contained within his own creation - indeed, assuming he even created the universe...

Read back over this part of your OP. The verbiage used sounds like you are asserting these things as facts, not possibilities. If you are making such assertions, then the Speculations Forum Rules clearly say, "Speculations must be backed up by evidence or some sort of proof. If your speculation is untestable, or you don't give us evidence (or a prediction that is testable), your thread will be moved to the Trash Can. If you expect any scientific input, you need to provide a case that science can measure." You should not be making complaints if you have been asked to provide some sort of proof for these assertions and you have not done so. If you have received neg rep as a result of not abiding by the rules, you shouldn't be complaining about that either. Read through your other posts and see where you have provided actual proof and not just some opinion you have proffered as proof. I've not read through them myself, but I suspect you will find some explanation as to why some members have reacted to your posts the way they have. Look particularly for any links you have posted to support your assertions from outside sources. If you can't find any, then maybe you should post some.

That is fact. Nothing is outside quantum mechanics. If anything violated the Uncertainty Principle directly, it would be disastrous in nature. Nothing would be able to exist.

But it's fact, again, that there are an infinite number of beginnings our universe could have chosen, with only a handful of kinds of universe that are sustainable today. That certainly has massive implications for the theory of statistics. Has anyone here actually read The Anthropic Cosmological Principle by Tipler and Barrow?

No, that seems to be your opinion more than an objective fact; the statement below is not a fact. This might be the reason why you're being neg-repped quite often.

Not necessarily. We still have not understood how entanglement works, and the experiments by German scientists suggest that something far more important is at work. The uncertainty principle failed to account for the results generated from the experiments; it's not that the uncertainty principle is violated, but that there is something very strange happening which we don't understand yet. There are many physicists who think that quantum mechanics is incomplete. See "Origin of quantum-mechanical complementarity probed by a 'which-way' experiment in an atom interferometer" - S. Dürr, T. Nonn & G. Rempe. Your basic assumption in this thread, that if a God exists then he must be confined to the rules of quantum mechanics, is fundamentally flawed. Could there be a God? Yes, there is still room for God, but such a hypothesis is outside of science. It is not science. "The scientist who leaves room for spirituality" - read this interview with the Templeton Prize winner Bernard d'Espagnat, who has worked under architects of modern physics like Bohr and de Broglie.

What? How can you say "not necessarily"? Read my sentence again: nothing can violate the uncertainty principle. It is a cornerstone of physics as we know it. You can't know the position and trajectory of every particle in the universe; it just won't let you! Physics 101. So the idea of an all-knowing entity is fundamentally flawed. Not my argument. Also, I believe physics is incomplete - very incomplete - but that is irrelevant, because no amount of tweaking our theories will ever prove the uncertainty principle wrong or directly violable. There are, as I have shown, some very special ways one can know the location and trajectory of a particle, but it requires making two-time measurements.

That's exactly my point: as shown in the paper which I cited, we still don't know why nature won't let us simultaneously know the position and momentum of a particle in the universe, or why it behaves that way when we make a measurement.
There is a mechanism at work at the heart of the measurement process. So the uncertainty principle might not be a fundamental law of nature from which you can draw absolute conclusions about the nature of the universe.

So the idea of an all-knowing entity is fundamentally flawed. Not my argument.

As I said, not necessarily. We still don't know a lot about how nature works.

That doesn't make sense. You either agree with my statement on the uncertainty principle or you don't - yet you then say we don't know how nature works. You can't have it both ways.

A layman's description of what that paper concludes can be found below: http://www.daviddarling.info/encyclopedia/Q/quantum_entanglement.html

The revisionist picture of the Bohr-Einstein debates stems partly from a suggestion made in 1991 by Marlan Scully, Berthold-Georg Englert, and Herbert Walther of the Max Planck Institute for Quantum Optics in Garching, Germany. These researchers proposed using atoms as quantum objects in a version of Young's two-slit experiment. Atoms have an important advantage over simpler particles, such as photons or electrons: they have a variety of internal states, including a ground state (lowest energy state) and a series of excited states. These different states, the German team reckoned, could be used to track the atom's path. Seven years later, Gerhard Rempe and his colleagues at the University of Konstanz, also in Germany, brought the experiment to life - and made a surprising discovery. Their technique involved cooling atoms of rubidium down to within a hair's breadth of absolute zero. (Cold atoms have long wavelengths, which make their interference patterns easier to observe.) Then they split a beam of the atoms using thin barriers of pure laser light. When the two beams were combined, they created the familiar double-slit interference pattern. Next, Rempe and his colleagues looked to see which path the atoms followed. The atoms going down one path were left alone, but those on the other path were nudged into a higher energy state by a pulse of microwaves (short-wavelength radio waves). Following this treatment, the atoms, in their internal states, carried a record of which way they'd gone. The crucial factor in this version of the double-slit experiment is that the microwaves have hardly any momentum of their own, so they can cause virtually no change to the atom's momentum - nowhere near enough to smear out the interference pattern. Heisenberg's uncertainty principle can't possibly play a significant hand in the outcome. Yet with the microwaves turned on, so that we can tell which way the atoms went, the interference pattern suddenly vanishes. Bohr had argued that when such a pattern is lost, it happens because a measuring device gives random kicks to the particles. But there aren't any random kicks to speak of in the rubidium atom experiment; at most, the microwaves deliver momentum taps ten thousand times too small to destroy the interference bands. Yet destroyed the bands are. It isn't that the uncertainty principle is proved wrong, but there's no way it can account for the results. The only reason momentum kicks seemed to explain the classic double-slit experiment discussed by Bohr and Einstein turns out to be a fortunate conspiracy of numbers.
There's a mechanism at work far deeper than random jolts and uncertainty. What destroys the interference pattern is the very act of trying to get information about which path is followed. The effect at work is entanglement. Do we really know how nature works now?

I agree. It started in the OP and has not stopped. As the supernatural is outside the realm of science, making scientific claims about the supernatural shows a fundamental lack of understanding of science. It is inviting trouble on a science forum. The problem is only compounded by the attitude he shows toward those who don't agree.

Can there be a God? Simple answer: no, because nobody can explain to me how the logic works that seems to dictate there must be a God. The logic goes like this:

1. There is a world.
2. It must have a cause.
3. Therefore there must be a God who created the world.

What is wrong with this logic is that as soon as you say such a being exists, it satisfies the condition that a world exists, even if that world only contained that being (i.e., a world only containing God). But then no creation act of God would have been necessary, since the world already existed, even if only in the form of God existing. And if we ask the very basic question of how it is that the world seems to need a cause of its existence while God does not, the most likely answer you get is that God exists indefinitely or eternally. But since the world exists by definition, this then just turns out to mean that the world itself is eternal and ever-existing. No need for any such God-creators. This makes the question of whether there are any such beings we can call God unnecessary by definition, since no creation event needed to have taken place if it turns out the world is eternal, and we don't need to argue whether a negative (or the absence of existence) can be proven, etc.

Prove it. Rules are rules, so let's see the proof. Then again, maybe this thread does belong in the trash can, as the rules imply. I can't see the OP actually stepping up with any evidence to support his/her opinion.

But it's fact, again, that there are an infinite number of beginnings our universe could have chosen, with only a handful of kinds of universe that are sustainable today.

I'm not sure how you reached this conclusion, nor how it presupposes that the "evolution" of universe space is directed by a deity (as opposed to anthropic selection).

Has anyone here actually read The Anthropic Cosmological Principle by Tipler and Barrow?

I have hardly had enough time to look at anything besides a paper. I know for a fact, however, that the paper won't be telling me that you can defy the uncertainty principle directly, which was my point all along - one which you side-stepped by saying we don't know everything in physics (whatever that is meant to mean in the context of things).

I'm not sure how you reached this conclusion, nor how it presupposes that the "evolution" of universe space is directed by a deity (as opposed to anthropic selection).

It's called the wave function.
When the universe was very small, we believe it was still subject to quantum effects. In other words, the rules of quantum mechanics are the same everywhere. This would mean that, just as a single particle may have several possible outcomes for any state, the universe also had many states it could have arisen in. In fact, according to current belief, the universe could have had an infinite number of possible states it could have arisen in, but only so many of those states would allow the kind of stable vacuum we observe today. Now, the reason this raises a question about God is: who made the first measurement which pulled the universe out of this superposition? We are led to this question because, if the universe had arisen out of so many states, we would effectively still see some of these states smeared over spacetime. We don't... however, this is one reason why the idea of parallel universes was proposed.

Prove it. Rules are rules, so let's see the proof. Then again, maybe this thread does belong in the trash can, as the rules imply. I can't see the OP actually stepping up with any evidence to support his/her opinion.

Start proof: $\Delta E \Delta t \geq \hbar/2$ and $\Delta x \Delta p \geq \hbar/2$ - cornerstone principles, which cannot be directly violated. End proof.

I agree. It started in the OP and has not stopped. As the supernatural is outside the realm of science, making scientific claims about the supernatural shows a fundamental lack of understanding of science. It is inviting trouble on a science forum. The problem is only compounded by the attitude he shows toward those who don't agree.

No, you don't understand. I have made suppositions in the OP based on "IF God exists"... notice the "IF". You are then, it seems, treating this as me saying "God does exist and is usually within the context of science", which is wrong. I am sick and tired of people not reading what I write; it's almost as if they are intentionally trying to twist things I say to mean other things. If a God DID exist, then he would be subject to the rules of quantum mechanics - the one named in the OP, the Uncertainty Principle. The reasons why have been explained time and time again. If anything, even a God, knew the location and momentum of every particle in the universe, it would cause a tremendous discharge of energy. Edited by Aethelwulf

Yes, you're using the uncertainty principle to assert that nothing can exist outside of quantum mechanics, but as we know, every respectable physicist knows that QM and SR are incomplete theories, and I don't think anyone of a scientific attitude would use such a theory to draw factual conclusions about an ill-defined entity like God. It's like an argument from ignorance, a logical fallacy: we don't know of any other way an entity could surpass the uncertainty principle and know everything; therefore nothing must exist outside of quantum mechanics, or an all-knowing entity must be impossible. This is a logical fallacy. First of all, you need to define God. What is your definition of God? Why should a God be subjected to such a proof,
or how can you prove that nothing can exist outside quantum mechanics? In science we don't prove anything; there are no ultimate proofs from which you can draw absolute conclusions, and looked at that way, your proof cannot be applied even to a human observer. We don't go by verification; in science we go by falsification, which means that even your axioms or assumptions can be wrong and can be overthrown.

There are inconsistencies in your statements, and we are reading you correctly. Aethelwulf, on 26 June 2012 - 08:14 PM, said: "If a God truly exists, he must abide by the rules of quantum mechanics. This is a fact." Aethelwulf, today, said: "If a God DID exist, then he would be subject to the rules of quantum mechanics." Now you're rephrasing your statements and accusing us of intentionally trying to frame you.

I never said God was not ill-defined. I have said, for the sake of this thread, "if God existed". You obviously don't seem to realize why I say it would be impossible for anything to violate the uncertainty principle... and I can only assume this comes with a certain lack of understanding of the topic. Whatever God, if he or she exists, there still cannot be such a violation. The fact that we are here, speaking and talking, is because this principle is preserved. Understand?
Everyone can see what your claims were. Now you're stating it as your opinion, which gives more support to my previous prediction that you were stating your opinions as scientific fact, and that's what brought the trouble.

I am not rephrasing anything. I said, above, "IF" God exists. Where do you see an inconsistency? (Seriously, the last one has me quite amused.) NOWHERE in those sentences have I rephrased anything. Everything said depends on the BIG "IF" question. You're now trying to reshape the argument to fit your own.

"Now you're stating it as your opinion which gives more support to my previous prediction that you were stating your opinions as a scientific fact and that's what brought the trouble." Well, no, because you seem to be having some problems reading what I write. I have explained that it is scientific fact that a universe in which every position and trajectory of every particle was known would be extremely volatile. This is scientific fact. It would cause a tremendous instability of spacetime. That is FACT. Now, what part of a "God" knowing the position and trajectory of each particle in the universe being impossible disturbs you? Is it the fact that we have presupposed the existence of God, or that I am saying nothing can know these probabilities with certainty? Say a God did exist, and he did know the trajectory and positions of every particle: what makes you think we'd still be around? As I have explained, such a notion is physically impossible. Edited by Aethelwulf

"Start proof: $\Delta E \Delta t \geq \hbar/2$ and $\Delta x \Delta p \geq \hbar/2$ - cornerstone principles, which cannot be directly violated. End proof." That's not proof that any and/or all possible gods MUST fall under the rules as we understand them. There is plenty we don't know. Please fill in those gaps for us in order to complete your proof.

What part of the Uncertainty Principle do you not understand? You do realize, in its fullest form, that it is a law of nature - an inherent law within all matter... You do realize that particles could not be sustained if such a law broke down at any time? So explain: if a God existed, why don't we see these violations? (Not that we'd be around for long if he did anyway...)

Again you're making factual claims by assuming your presupposed notions of God. Why should God be omniscient only by simultaneously knowing the position and momentum of a particle? He might acquire knowledge in ways we don't know of, or his epistemology might be different. Now, just because God cannot be omniscient in this way, you seem to conclude that God cannot be omniscient in any other way. Understand?

"First of all, you need to define God. What is your definition of God?" I took a few ways that could help define God. In my OP, I explained some traditional ways that he or she is seen. My definition of God is Einstein's God - a God of nature... but here we go again.
I have actually told you this already.

If you're arguing about God as a scientific hypothesis, I request that you give a precise, falsifiable definition of God that makes testable predictions, so that we can falsify your claims; someone else's subjective opinions are not science.

"I am not rephrasing anything. I said, above, 'IF' God exists. Where do you see an inconsistency?" Yes, you did - see my bolded part. There is a lot of difference between the words "must" and "would". You cannot use them interchangeably; it changes the meaning of your claims.

The measurement process, or what we call observation, is one way of knowing (one epistemology) used by humans to acquire knowledge. Now, my question is why a God, or even a human, should be subjected to only this way of knowing; there might be other ways of knowing the world. Why should a God use a detector or a measuring device to acquire knowledge? There might be other ways of knowing. I know the uncertainty principle is not a consequence of an imprecise detector; don't tell me that I lack understanding. Your argument seems to imply that there are absolutely no other ways of knowing the world, and that a God must bind himself to the rules of QM. God could easily come up with other ways of knowing and be omniscient (i.e., know everything that there is to know).

I can show you ways a God could exist without violating the uncertainty principle and at the same time be omniscient.

Correct me if I am wrong (I am going somewhere with this), but according to the uncertainty principle, it is the actual measuring that causes the probability wave to collapse and prevents you from accurately measuring the other property (for instance, if I measure position accurately, I cannot also measure velocity). Is that a correct summary?
http://cms.math.ca/cmb/kw/square-free%20numbers
On the Average Number of Square-Free Values of Polynomials

We obtain an asymptotic formula for the number of square-free integers in $N$ consecutive values of polynomials on average over integral polynomials of degree at most $k$ and of height at most $H$, where $H \ge N^{k-1+\varepsilon}$ for some fixed $\varepsilon \gt 0$. Individual results of this kind for polynomials of degree $k \gt 3$, due to A. Granville (1998), are only known under the $ABC$-conjecture.

Keywords: polynomials, square-free numbers. Category: 11N32
https://crypto.stackexchange.com/questions/64641/ecdsa-signing-and-verifying-signatures-between-python-and-js
# ECDSA Signing and verifying signatures between Python and JS [closed]

I create a signature in JS, here with jsrsasign. Signatures are obtained in this format:

3045022045c61e649ca9f6011a8d34ac865c4780421de08ff50ac3dad0da36043b6de478022100b19208d1ec51f6dd6f6b725342618f55f9fc90c96c5b5409998d66774749a0b

They always start with 30...

In Python, I use the python-ecdsa library. In this library, the signature format is:

a8c7dd7e9b669b1bc841ddf66bc08b10bc1112fa14fce2a5a2246edf997c577450af6b9edfe373546e17ab7363c097ab468db04ed707fb65992e20eabfd1bf40

Therefore, verification fails. How can I bring the signatures into one format?

## closed as off-topic by kelalaka, Gilles, Maeher, Maarten Bodewes♦, Ella Rose♦ Dec 14 '18 at 18:10

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "Programming questions are off-topic even if you are writing or debugging cryptographic code. Unless your question is specifically about how the cryptographic algorithm, protocol or side-channel (mitigation) works, you should look into asking on Stack Overflow instead." – kelalaka, Gilles, Maeher, Maarten Bodewes, Ella Rose

If this question can be reworded to fit the rules in the help center, please edit the question.

## 1 Answer

An ECDSA signature is, formally, a pair of integers $(r, s)$. There are two main conventions for encoding these integers into bytes:

• Encode both integers as unsigned big-endian, using the same size for both, and concatenate the values. This is the traditional way in, for instance, PKCS#11 and OpenPGP; python-ecdsa apparently uses that format.
• Encode the integers as an ASN.1/DER structure (a SEQUENCE of two INTEGER values). This is what is normally used in everything that relates to X.509 certificates, and also in SSL/TLS exchanges. jsrsasign apparently uses that format.

Conversion between these formats can be done, but it is surprisingly tricky to do correctly (it's a parser, after all). It seems that python-ecdsa can also encode and decode ASN.1-based signatures (see the functions sigencode_der() and sigdecode_der(), for instance).

• Thanks my friend, solved – Vadim Dec 10 '18 at 11:20
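To make the answer concrete, here is a minimal sketch (not from the original answer) using python-ecdsa's built-in sigencode_der/sigdecode_der helpers; the curve, key, and message are arbitrary stand-ins:

```python
# Minimal sketch (not from the original answer): sign and verify using the
# ASN.1/DER signature encoding that jsrsasign produces, via python-ecdsa's
# built-in helpers. Curve, key, and message are arbitrary stand-ins.
from ecdsa import SigningKey, NIST256p
from ecdsa.util import sigencode_der, sigdecode_der

sk = SigningKey.generate(curve=NIST256p)
vk = sk.get_verifying_key()
message = b"example message"

# DER-encoded signature: a SEQUENCE of two INTEGERs, so it starts with 0x30.
der_sig = sk.sign(message, sigencode=sigencode_der)
assert der_sig[0] == 0x30

# Verification must use the matching decoder; with the default raw r||s
# parser applied to a DER blob, verification fails.
assert vk.verify(der_sig, message, sigdecode=sigdecode_der)
```

The same sigdecode_der argument lets the Python side verify a DER signature received from the JS side, which is exactly the asker's situation.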
http://energyeducation.ca/encyclopedia/Beta_decay
# Beta decay

Figure 1. A model of beta-minus decay, showing the ejection of an electron from the nucleus and the specific transformation of a neutron.[1]

Beta decay is a nuclear decay process in which an unstable nucleus transmutes and ejects particles to become more stable. There are two different types of beta decay: beta minus and beta plus. In both of these decays, a nucleon in the nucleus is transformed into a different type of nucleon, releasing particles in the process. Both beta minus and beta plus decay are moderately penetrating (i.e., the radiation can go deep inside a solid object). There is a closely related process called electron capture, in which an electron is captured by the nucleus; it acts much like beta plus decay.

Beta minus decay occurs whenever a nucleus has too many neutrons. In this type, a neutron in the nucleus is transformed into a proton and an electron, with the electron being ejected from the nucleus. To ensure the rules of particle physics hold, a tiny particle known as an anti-neutrino is also released.[2] The general equation representing beta minus decay is:

$^A_ZX_N \rightarrow ^A_{Z+1}Y_{N-1} + e^- + \bar{\nu}$

where:

• $^A_ZX_N$ is the parent nucleus
• $^A_{Z+1}Y_{N-1}$ is the daughter nucleus
• $e^-$ is the released beta particle, an electron
• $\bar{\nu}$ is the released anti-neutrino

Beta plus decay comes from a nucleus with too many protons. In this type of decay, a proton in the nucleus is transformed into a neutron and a positron (which is simply a "positive version" of the electron). To ensure the rules of particle physics hold, a tiny particle known as a neutrino is also released.[2] The general equation representing beta plus decay is:

$^A_ZX_N \rightarrow ^A_{Z-1}Y_{N+1} + e^+ + \nu$

where:

• $^A_ZX_N$ is the parent nucleus
• $^A_{Z-1}Y_{N+1}$ is the daughter nucleus
• $e^+$ is the released beta particle, a positron
• $\nu$ is the released neutrino

In both beta minus and beta plus decay, it is the weak nuclear force that causes the change of one nucleon into another.

## Safety

Beta radiation is slightly more penetrating than alpha radiation, but still not nearly as penetrating as gamma radiation. Generally speaking, because beta radiation isn't extremely penetrating, it is mainly an issue when ingested. If a beta source enters the body, it causes tissue damage and can increase the risk of cancer.[3] Figure 2 shows the relative levels of penetration of a variety of different radiation types.

Figure 2. Different penetration levels of different products of decay, with gamma being one of the most highly penetrating and alpha being one of the least penetrating.[4][5]

Exposure to beta radiation can cause a wide variety of health effects. Generally speaking, exposures to beta decay sources are chronic in nature. Chronic effects are the result of low-level exposures to beta radiation over an extended period of time, and can take anywhere between 5 and 30 years to develop.[3] The most prominent side effect of exposure is cancer. Some beta emitters are distributed throughout the body - such as carbon-14 (which occurs naturally at levels that cause no harm to the human body) - while others accumulate in specific organs. An example of this would be iodine-131, which concentrates in the thyroid gland and increases the risk of thyroid cancer.[3]
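As a concrete instance of the beta minus equation above (a standard textbook example, added here for illustration), the carbon-14 just mentioned decays into nitrogen-14: the mass number A = 14 is unchanged, Z goes from 6 to 7, and N goes from 8 to 7:

$^{14}_{6}C_{8} \rightarrow ^{14}_{7}N_{7} + e^- + \bar{\nu}$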
## Applications and Importance

Elements that undergo beta decay can have useful medical applications. Radionuclide therapy (RNT), or radiotherapy, is a cancer treatment that uses beta decay. In this process, lutetium-177 or yttrium-90 is attached to a molecule and ingested.[6] Once inside the body, this molecule travels to the cancer cells. The radioactive atoms then undergo a decay process, releasing beta particles and killing nearby cancer cells.

Additionally, carbon dating relies on the properties of beta decay. To determine the approximate age of artifacts, wood, and animal remains, the ratio of carbon-14 to carbon-12 in an object must be determined.[6] Carbon-14 is generated in the atmosphere when cosmic rays strike nitrogen-14; plants take it in during photosynthesis, and thus there is a certain amount of carbon-14 in organic remains. Plants are eaten by animals, which acquire carbon-14 as well. When an organism dies, it stops taking in carbon, and its carbon-14 gradually becomes nitrogen-14 again (through the beta minus decay process), so over the years the amount of carbon-14 in the sample is depleted.[6] By looking at the ratio of carbon-14 to carbon-12, the approximate age of the artifact can be determined.

## PhET

The University of Colorado has graciously allowed us to use the following PhET simulation. This simulation illustrates how radioactive nuclei decay through beta decay, and shows the half-life of these atoms.

## References

1. Wikimedia Commons. (July 22, 2015). Beta Minus Decay [Online]. Available: https://upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Beta-minus_Decay.svg/1024px-Beta-minus_Decay.svg.png
2. Study Physics. (July 22, 2015). Beta Decay [Online]. Available: http://www.studyphysics.ca/2007/30/08_atomic/43_decay.pdf
3. US EPA. (July 22, 2015). Beta Particles [Online]. Available: http://www.epa.gov/radiation/understand/beta.html#healtheffects
http://spades-tools.predictiveecology.org/reference/distances.html
pointDistance2 applies the Pythagorean theorem and cbinds a new dists column onto all columns of to. It is only defined for one point (from) to many (to) points.

pointDistance3 applies the Pythagorean theorem and is to be used internally within distanceFromEachPoint as an alternative to .pointDistance, where it does many points (from) to many (to) points, one from point at a time. The results are then rbinded internally. It does not cbind extra columns from to.

pointDistance2(to, from)

pointDistance3(fromX, toX, fromY, toY, maxDistance)

.pointDistance(from, to, angles = NA, maxDistance = NA_real_, otherFromCols = FALSE)

## Arguments

to: Numeric matrix with 2 or 3 columns (or optionally more, all of which will be returned): x and y, representing x and y coordinates of "to" cells, and an optional "id" which will be matched with "id" from from. Default is all cells.

from: Numeric matrix with 2, 3 or more columns. It must include x and y, representing x and y coordinates of the "from" cell. If there is a column named "id", it will be matched with "id" from to, i.e., specific pair distances. All other columns will be included in the return value of the function.

fromX: Numeric vector of x coordinates for 'from' points.

toX: Numeric vector of x coordinates for 'to' points.

fromY: Numeric vector of y coordinates for 'from' points.

toY: Numeric vector of y coordinates for 'to' points.

maxDistance: Numeric scalar. The maximum distance cutoff for returned distances.

angles: Logical. If TRUE, then the function will return angles in radians, as well as distances.

otherFromCols: TODO: description needed

## Value

pointDistance2: a matrix with all the to columns plus one extra dists column.

pointDistance3: a matrix with the x and y columns from to plus one extra dists column.

.pointDistance: a matrix with 3 columns, x0, y0 and dists.

## Details

A slightly faster way to calculate distances.
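For readers who want the documented behaviour concretely, here is a rough Python analogue of the pointDistance3 core computation. It is a sketch of the description above, not the package's actual R code, and the function name and interface are mine:

```python
import numpy as np

def point_distance3(from_x, from_y, to_x, to_y, max_distance=None):
    # Pythagorean distances from a single 'from' point to many 'to'
    # points, mirroring the one-from-to-many case described above.
    dists = np.hypot(np.asarray(to_x) - from_x, np.asarray(to_y) - from_y)
    if max_distance is not None:
        # Drop distances beyond the cutoff, as the maxDistance argument does.
        dists = dists[dists <= max_distance]
    return dists

# One 'from' point at the origin, three 'to' points:
print(point_distance3(0.0, 0.0, [3.0, 6.0, 1.0], [4.0, 8.0, 1.0], max_distance=5.0))
# -> [5.         1.41421356]   (the distance 10.0 exceeds the cutoff)
```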
2019-07-19 15:23:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3471059203147888, "perplexity": 3341.553986661058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526254.26/warc/CC-MAIN-20190719140355-20190719162355-00466.warc.gz"}
https://www.techtrekking.com/how-to-check-if-a-file-exists-using-python/
# How to Check if a File Exists using Python

Recently I was working on file generation using Python. Before generating any file, I had to check whether the file already existed, so I needed a way to do this in the program. As usual, this task is very easy using Python.

Checking whether a file exists can be done in multiple ways in Python; here is one using the "os" module. The os.path module has functions such as isfile, isdir and exists which help us check whether a file or directory exists.

Here is the output:

$ python3.6 file_exists_01.py
file_exists : True
file_exists : False
dir_exists : False
dir_exists : True
dir_exists : False
exists : True
exists : True

If you use isfile() on a directory, the outcome will be False; use isfile or isdir as the situation requires. Alternatively, you can use the exists function, which returns True if the input file or directory path is valid. Please refer to the os.path documentation for further details.

Here is another way to check if a file exists, using the pathlib module.

Output is:

$ python3.6 file_exists_02.py
var : False
var : True
var : True
var : False
var : True
var : True

Both modules have similar features; you can choose whichever is more convenient for you.
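The post's code listings did not survive extraction, so here is a plausible reconstruction of the two scripts whose outputs are shown above. The file and directory names are illustrative assumptions; the printed values depend on what actually exists on disk.

```python
# file_exists_01.py -- os.path variant (reconstruction)
import os.path

print("file_exists :", os.path.isfile("existing_file.txt"))  # True
print("file_exists :", os.path.isfile("missing_file.txt"))   # False
print("dir_exists :", os.path.isdir("missing_dir"))          # False
print("dir_exists :", os.path.isdir("existing_dir"))         # True
print("dir_exists :", os.path.isdir("existing_file.txt"))    # False: isdir() on a file
print("exists :", os.path.exists("existing_file.txt"))       # True
print("exists :", os.path.exists("existing_dir"))            # True
```

```python
# file_exists_02.py -- pathlib variant (reconstruction)
from pathlib import Path

print("var :", Path("missing_file.txt").is_file())   # False
print("var :", Path("existing_file.txt").is_file())  # True
print("var :", Path("existing_file.txt").exists())   # True
print("var :", Path("missing_dir").is_dir())         # False
print("var :", Path("existing_dir").is_dir())        # True
print("var :", Path("existing_dir").exists())        # True
```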
2021-04-16 12:05:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5440959334373474, "perplexity": 2229.2462573181915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00132.warc.gz"}
https://mathoverflow.net/questions/188422/poincare-like-inequality-on-compact-riemannian-manifolds/188450
# Poincare-like inequality on compact Riemannian manifolds

I am looking for a Poincaré inequality on balls, but instead of Euclidean space I have a compact Riemannian manifold without boundary. The inequality I am looking for is the equivalent of

$$\int_{B_{r}(x)} |f(y) - f(z)|^{p} dy \leq c r^{n+p-1} \int_{B_{r}(x)} |Df(y)|^{p} |y-z|^{1-n} dy$$

where $f \in C^{1}(B_{r}(x))$ and $z \in B_{r}(x)$. I would be grateful for any source you can point me to. I am also interested to know if there is a version of this inequality if the manifold has boundary. Thank you.

• This inequality should not be called Poincaré. It is merely a kind of Hardy inequality. – Denis Serre Nov 30 '14 at 17:57

The same inequality holds on Riemannian manifolds, at least if you want it for small $r$. Fix a point $x\in M$ and let $r_0$ be the injectivity radius at it. If $0<r\leq\frac12r_0$, then $\exp_x:U_r\to B_r(x)$ is a diffeomorphism and bi-Lipschitz continuous with a Lipschitz constant independent of $r$. Here $B_r(x)\subset M$ is the ball in the Riemannian metric and $U_r\subset T_xM$ is the ball of radius $r$ in the tangent plane at $x$. The tangent plane is just a Euclidean space, so your original estimate holds in $U_r$, and you can apply it to $f\circ\exp_x:U_r\to\mathbb R$. Since the exponential map is a bi-Lipschitz diffeomorphism $\exp_x:U_r\to B_r(x)$, you get the same estimate (possibly with a worse constant) on $B_r(x)$. If you take $r$ very small, you can make the Lipschitz constant arbitrarily close to one and incur an arbitrarily small loss of constant in the final estimate. If you have some curvature bounds (especially if the manifold is compact), the constant $c$ in the estimate can be chosen uniformly; without any such assumptions it will depend on $x$.

A more accurate name for the inequality you are looking for is a Sobolev-type inequality, or more precisely Hardy's inequality. The proof of this inequality can be found in most books on weighted Sobolev inequalities, for example the excellent book by V. G. Maz'ya. If you just want a direct proof, see for instance the following paper: "Kinnunen, Juha; Martio, Olli, Hardy's inequalities for Sobolev functions. Math. Res. Lett. 4 (1997), no. 4, 489–500."

I would like to say more about this inequality from the point of view of capacity, or equivalently non-linear potential theory. The standard proofs of all (fractional) Sobolev-type inequalities are based on the pointwise characterization of Sobolev functions, which basically reads: for a.e. $x,y$,

$$|u(x)-u(y)|\leq c(n)|x-y|^{1-\alpha/p}(M_{\alpha/p}|Du|(x)+M_{\alpha/p}|Du|(y)),$$

where $M_\beta$ is the standard fractional maximal operator. In order to obtain global estimates, one just integrates the above inequality and uses the boundedness of the maximal operator. The equivalence of characterizations of Sobolev functions can be found in the nice book of J. Heinonen, "Lectures on Analysis on Metric Spaces, Springer Verlag, Universitext 2001", or the forthcoming book of J. Heinonen et al., "Sobolev spaces on metric measure spaces, an approach based on upper gradients."

Hardy-type inequalities are usually regarded as weighted Sobolev-type inequalities, obtained by treating the potential term $|y-z|^{k}$ or $d(y,\partial \Omega)^k$ as a general weight. I encourage you to read the nice papers by P. Hajlasz and P. Koskela: Isoperimetric inequalities and imbedding theorems in irregular domains, J. London Math. Soc. (2) 58 (1998), no. 2, 425–450, and Sobolev met Poincaré, Mem. Amer. Math. Soc. 145 (2000), no. 688, x+101 pp.

The basic capacity viewpoint is to regard the right-hand side as a certain capacity, and to characterize the Sobolev/Hardy-type inequality in terms of that capacity. Then one can use the standard techniques from potential theory to carry out capacity estimates. This point of view is quite important in proving Sobolev/Hardy-type inequalities for irregular domains or in the metric setting. As a sample, you could also read the following paper: Koskela, Pekka; Lehrbäck, Juha, Weighted pointwise Hardy inequalities, J. Lond. Math. Soc. (2) 79 (2009), no. 3, 757–779. See also the homepage of J. Lehrbäck for recent progress on Hardy-type inequalities for general domains: http://users.jyu.fi/~juhaleh/publ.html.

If you want a global inequality with a uniform constant, then you have to impose a (non-negative) lower bound on the Ricci curvature (in metric measure spaces as well). In the smooth compact Riemannian setting, the constant is of course uniform.
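To spell out the bi-Lipschitz transfer step in the first answer (implicit there; $\Lambda$ denotes the bi-Lipschitz constant of $\exp_x$, and the display below is my sketch, not text from the answer):

$$\Lambda^{-1}|u-v| \;\le\; d\big(\exp_x u,\exp_x v\big) \;\le\; \Lambda\,|u-v|, \qquad u,v\in U_r,$$

so applying the Euclidean inequality to $g=f\circ\exp_x$ on $U_r$ and changing variables $y=\exp_x u$, $z=\exp_x v$ (the Jacobian of $\exp_x$ is bounded above and below on $U_r$) gives the same estimate on $B_r(x)$, with the constant $c$ worsened by a factor depending only on $\Lambda$, $n$, and $p$.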
2019-04-23 06:48:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010865688323975, "perplexity": 205.02171313981026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578593360.66/warc/CC-MAIN-20190423054942-20190423080942-00519.warc.gz"}
https://www.physicsforums.com/threads/infinite-well-potential-going-from-0-x-l-to-0-x-2l.295061/
# Infinite Well Potential : going from 0<x<L to 0<x<2L

1. Feb 24, 2009

### lstellyl

1. The problem statement, all variables and given/known data

A particle of mass m is in the lowest energy (ground) state of the infinite potential energy well (V(x)=0 for 0<x<L, and infinite elsewhere). At time t=0 the wall located at x=L is suddenly pulled back to a position at x=2L. This change occurs so rapidly that instantaneously the wave function does not change.

Calculate the probability that a measurement of the energy will yield the ground-state energy of the new well. What is the probability that a measurement of the energy will yield the first excited energy of the new well?

2. Relevant equations

So far I am using the following relevant equations/formulas:

(1) $$\Psi (x) = \sum c_n \sqrt{\frac{2}{L}} \sin{\frac{n \pi x}{L}} \quad \text{for } 0<x<L$$

(2) $$\int {\psi_{n}}^{\ast} \Psi \, dx = c_n$$

and

(3) $$E_n = \frac{n^2 \hbar^2 \pi^2}{2 m L^2}$$

3. The attempt at a solution

This problem is tripping me up a little bit... I am not really sure what is implied/asserted by the phrase "instantaneously the wave function does not change".

So far, I have come up with the following solution. Obviously, in the new system E will be able to take on 2 values instead of just the ground state, since

$$E'_1=\frac{\hbar^2 \pi^2}{8 m L^2}$$ and $$E'_2 = E_1 = \frac{\hbar^2 \pi^2}{2 m L^2}$$

I try to use (1) and (2) to calculate $c'_1$ and $c'_2$ using:

$$c'_2 = \frac{\sqrt{2}}{L} \int_0^L \sin{\frac{\pi x}{L}} \sin{\frac{\pi x}{L}} \, dx$$

which comes out as $1/\sqrt{2}$, which seems reasonable and is nice (P = 1/2), so I left it... The reasoning I used behind this calculation is that I am multiplying the new $\psi_2$ with the first wave function, which has a $c_1$ of 1, and 0 elsewhere, which makes the limits 0 to L.

Next, I find $c'_1$ by:

$$c'_1 = \frac{\sqrt{2}}{L} \int_0^L \sin{\frac{\pi x}{L}} \sin{\frac{\pi x}{2 L}} \, dx$$

which comes out as $\frac{4 \sqrt{2}}{3 \pi}$, in which case P = .3602, and thus this must be wrong unless the particle can take the 2nd excited energy state, which doesn't seem to make sense to me (plus that answer doesn't look as clean or nice).

Can someone help me out or explain to me where it is I am going wrong? Also, if anyone can qualitatively explain to me how it is the wave function "does not change"? Any help appreciated!!!

2. Feb 24, 2009

### lanedance

When the potential instantly changes, so do the basis functions for the wavefunction, i.e. the eigenfunctions. So what you're effectively doing is decomposing the wavefunction in terms of the new eigenfunctions.

I haven't checked the integrals, but at a quick glance the reasoning looks right. The decomposition is no longer an energy eigenstate and so will evolve with time. It's an assumption in the problem that the wavefunction does not change, so you can decompose in terms of the new energy eigenstates. So by changing the potential you are changing the whole state of the system.

Note also that the energy levels in the new potential will have different (lower) values for a given n: as L has effectively doubled, each energy level will be reduced to a quarter of its former value.

3. Feb 25, 2009

### lstellyl

Right... I got that the energy levels would be reduced to a quarter, thus E2 of the new system is equal to E1 of the old system.

I guess the question I have is this: is it correct for me to find the constants for the new system, $c'_1$ and $c'_2$, by using the $\psi_n$ formula $\sqrt{2/L}\,\sin(n\pi x/L)$ with L=2L (i.e., using the $\psi_n$ of the NEW system), and then multiplying that by the wavefunction of the old system, which I have determined to be $\sqrt{2/L}\,\sin(n\pi x/L)$ with the original L, since the particle is in the ground energy state and thus the $c_1$ of the old state is 1?

I feel like there is a flaw in my reasoning here... Using this logic, I found an answer I am happy with for $c'_2$, but $c'_1$ looks a bit odd to me... PLUS, adding up $c_1'^2$ and $c_2'^2$ I do not get 1, meaning there would be some probability that the particle could be in the 2nd excited state if my $c'_1$ and $c'_2$ are correct.
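Not part of the original thread, but a quick numerical check of the overlap integrals discussed above (using scipy; setting L = 1 is an arbitrary choice of units):

```python
import numpy as np
from scipy.integrate import quad

L = 1.0  # original well width (arbitrary units)

def psi_old(x):
    # Ground state of the original well of width L (zero for x > L).
    return np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

def psi_new(n, x):
    # n-th eigenstate of the widened well of width 2L.
    return np.sqrt(1.0 / L) * np.sin(n * np.pi * x / (2.0 * L))

def c(n):
    # Overlap coefficient <psi'_n | Psi>; the old wavefunction vanishes
    # for x > L, so the integral only runs over [0, L].
    val, _ = quad(lambda x: psi_new(n, x) * psi_old(x), 0.0, L)
    return val

for n in range(1, 6):
    print(n, round(c(n) ** 2, 4))
# 1 0.3602   (= 32/(9 pi^2), i.e. the poster's 4*sqrt(2)/(3 pi), squared)
# 2 0.5
# 3 0.1297, 4 0.0, 5 0.0073, ...
# The squares need not sum to 1 over n = 1, 2 alone: the remaining
# probability sits in the higher odd-n states, which resolves the worry
# at the end of the thread.
```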
2017-10-21 21:38:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6930832862854004, "perplexity": 439.3127643321198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.43/warc/CC-MAIN-20171021205648-20171021225648-00884.warc.gz"}
https://cs.stackexchange.com/questions/44748/how-do-i-tell-if-a-comparison-network-sorts/44762
# How do I tell if a comparison network sorts?

I am presented with a comparison network. How can I determine whether the comparison network is a sorting network? In the image below there is an example of a selection sort and an insertion sort network. The intent is to have a comparison network that sorts numeric values. I could test all $2^n$ inputs, in this case $2^8$, but this is a lot of work and an inefficient way to test it. I'm looking for a mathematical model/proof to verify that this is a valid sorting network.

• What did you search, read, or try? What blocked you? The more precise the question, the better the answer. Is your question about any arbitrary comparison network, i.e. a general procedure to determine whether an arbitrary comparison network is a sorting network? Or is your question about the given diagram? – babou Jul 26 '15 at 11:23
• You can use the zero-one principle: a sorting network is correct if it works on all zero-one inputs. – Yuval Filmus Jul 26 '15 at 12:39
• @YuvalFilmus That is an exponential test (in the number of inputs). Is it an NP-complete problem? – babou Jul 26 '15 at 14:23
• @babou I have no idea. In this case it's practical. – Yuval Filmus Jul 26 '15 at 14:24
• An obvious idea is an empirical approach: just consider its action on random permutations of [1..n] and observe the result; either it will fail or succeed, and if it succeeds you have a near-proof approach... – vzn Jul 26 '15 at 15:27

In general, verifying whether a particular comparison network is indeed a correct sorting network is a co-NP-complete problem. If you want to check by testing, then you need to try exponentially many tests. In particular, there exist sorting networks that sort all but a single value correctly, so you can't hope to test whether the network is correct simply by feeding it a few inputs.

One standard method is to test whether it correctly sorts all $2^n$ inputs that are composed solely of zeros and ones. If it does, then it turns out that it will sort all inputs (even ones that aren't limited to zeros and ones). However, this requires exponentially many tests. Moreover, the number of tests cannot be reduced significantly: for zero-one inputs, it is possible to prove that at least $2^n-n-1$ tests are needed to verify that the sorting network is correct.

Alternatively, one can use tests where the inputs are permutations of $1,2,\dots,n$. This reduces the number of tests needed somewhat, but you still need exponentially many tests. In particular, $\binom{n}{\lfloor n/2 \rfloor}-1$ tests are necessary and sufficient.

For proofs of these facts, see the following papers:

On the Computational Complexity of Optimal Sorting Network Verification. Ian Parberry. Parle'91 Parallel Architectures and Languages Europe, 1991.

Bounds on the size of test sets for sorting and related networks. Moon Jung Chung and B. Ravikumar. Discrete Mathematics, vol 81, pp. 1–9, April 1990.

I'm looking for a mathematical model/proof to verify this is a valid sorting network.

While D.W.'s (excellent) answer deals with the general case, I will consider your specific example. A network of this form with $n$ inputs can be shown to be a sorting network by induction (see image for illustration):

1. $n=1$: a single input is always sorted;
2. Assume that a network of size $n-1$ of this form is a sorting network, and consider a network of size $n$.
   1. The left-most "diagonal" will always correctly bring the largest element to the $n$-th position (in your case, $b_8$);
   2. You are left with a smaller, similar network with the remaining $n-1$ elements;
   3. This smaller network will sort all the remaining elements by the inductive hypothesis.

When you look at a general sorting network, you might have no idea how to prove that it sorts every sequence of values (of the right length for the sorting network) correctly. But I've learned about a nice trick to simplify the task:

# The 0-1 principle

If a sorting network sorts every sequence (of the right length) consisting only of "0" and "1" correctly, then it sorts any sequence (of the right length) correctly.

Of course "0" and "1" are placeholders for any distinct elements in the domain of the sorting network. So you can construct a proof like this:

1. Take two distinct elements from the domain of the sorting network and call them "0" and "1", so that "0" < "1"
2. Construct all binary strings with the exact length of the sorting network
3. In these strings, substitute the 0-bit and the 1-bit with "0" and "1"
4. Apply these strings to the sorting network
5. Each string must be sorted to something like 00...011...1

# Testing $2^n$ values

For an exhaustive test of a sorting network of length $n$ you would usually have to test all input combinations. But with the 0-1 principle you can bring this down to $2^n$ tests (testing all binary strings of length $n$).

# Can we do it cheaper?

Unfortunately we probably can't get much cheaper than exhaustive testing, at least not when using a Turing machine to construct the proofs. Of course, when you look at a specific sorting network, you might have a creative idea for a simple proof. But in general, an algorithm for constructing such proofs is very likely as complex as testing all binary strings. The reason is that proving a sorting network correct is related to the NP-complete complexity class, as outlined in the other answers.

"Much cheaper" in this context means "polynomial time". It might be possible to find an algorithm that runs "slightly" faster than exponential time but still needs more than polynomial time. See the comments for an example: running in $2^{\sqrt n}$ steps is (slightly) faster than exponential time but still (much) slower than polynomial time.

# Prospect / Outlook

## Is your brain a Turing machine?

A philosophical consequence: if you believe that you can find a creative proof of correctness for every sorting network, then you also believe that your brain is very likely not a Turing machine.

## Parallel sorting

The "0-1 principle" is also used to prove the correctness of parallel sorting algorithms. I have a (hopefully) nice presentation about this on Github.

## Correcting the sorting network

If one of the strings is incorrectly sorted (so you have proven the sorting network wrong), you can use this to construct a sorting network without that bug. Just add an additional comparator at the position of the "1-0 border" in the wrong result string.

• Nitpick: even if P!=NP, (co-)NP-completeness does not preclude the existence of sub-exponential algorithms, say with a runtime in $O(2^{\sqrt{n}})$. (Such algorithms exist for many NP-complete problems.) – Raphael Jul 27 '15 at 8:36
• @Raphael If I get it right, P!=NP only prohibits "polynomial" solutions for (co-)NP-complete problems, i.e. solutions where runtime (and memory usage) are bounded by a polynomial term for arbitrarily long inputs. So I only have to check whether the term you have given is bigger than any polynomial for arbitrarily big values of $n$. Right? – stefan.schwetschke Jul 27 '15 at 8:47
• Something like that, yes. (I read your answer like "here's how to do it in exponential time, and we probably can't do better since it's co-NP-hard". That's a non-sequitur, hence my comment.) – Raphael Jul 27 '15 at 8:52
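As a concrete companion to the exhaustive 0-1 test described in the answers (my own sketch, not code from the thread; the example network is the standard 5-comparator network on 4 wires):

```python
from itertools import product

def sorts_all_zero_one_inputs(n, comparators):
    # 0-1 principle: a comparison network on n wires is a sorting
    # network iff it sorts all 2^n inputs consisting of 0s and 1s.
    # `comparators` is a list of wire pairs (i, j) with i < j,
    # applied in order.
    for bits in product((0, 1), repeat=n):
        w = list(bits)
        for i, j in comparators:
            if w[i] > w[j]:          # each comparator puts min on i, max on j
                w[i], w[j] = w[j], w[i]
        if any(w[k] > w[k + 1] for k in range(n - 1)):
            return False             # an unsorted output: not a sorting network
    return True

net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all_zero_one_inputs(4, net4))       # True
print(sorts_all_zero_one_inputs(4, net4[:-1]))  # False: drop a comparator and it fails
```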
2020-02-25 02:32:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7033255696296692, "perplexity": 678.3323790557178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146004.9/warc/CC-MAIN-20200225014941-20200225044941-00442.warc.gz"}
https://brilliant.org/problems/number-of-sets-2/
Number of Sets

Algebra Level 2

Consider the following subsets of the integers: $A=\{1, 2, 3, 4, 5, 6\} \mbox{ and } B=\{4, 5, 6, 7, 8\}.$ How many subsets $X$ of the integers satisfy $X\cap {A}^{c}=\emptyset \quad \text{and} \quad (A-B)\cup X=X?$
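A short solution sketch (an addition; the original page poses the problem without one): $X \cap A^{c} = \emptyset$ says exactly that $X \subseteq A$, while $(A-B) \cup X = X$ says exactly that $A-B = \{1,2,3\} \subseteq X$. So the admissible sets satisfy $\{1,2,3\} \subseteq X \subseteq \{1,2,3,4,5,6\}$; each of $4, 5, 6$ may be included or not, giving $2^3 = 8$ such subsets.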
2018-01-17 15:17:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9175800681114197, "perplexity": 1014.3170039998284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886946.21/warc/CC-MAIN-20180117142113-20180117162113-00638.warc.gz"}
https://primegap-list-project.github.io/faq/
# FAQ

I've only just got here, what's all this about?

It's about the computation of the first occurrence of gaps between consecutive prime numbers and is part of a wider effort researching aspects of Goldbach's conjecture, one of the oldest and best-known unsolved problems in number theory, and indeed all of mathematics. Goldbach's conjecture in modern form is "every even number larger than four is the sum of two odd prime numbers". The conjecture has been shown to hold for all integers less than $4\cdot10^{18}$ but remains unproven despite considerable effort.

The computation of the first occurrence of prime gaps of a given (even) size between consecutive primes has some theoretical interest. Richard Guy (Erdős number 1) assigns this as problem A8 ("A8 Gaps between primes. Twin primes") in chapter 1 ("Prime Numbers") of his book "Unsolved Problems in Number Theory". Guy's description of A8 is usefully available to read online at Google books (scroll down to p31).

So what's the actual problem?

To describe the problem precisely we need to establish some terms. Let $p_k$ be the $k$th prime number, i.e. $p_1=2$, $p_2=3$, $p_3=5$, ..., and let $g_k=p_{k+1}-p_k$ be the gap between the consecutive primes $p_k$ and $p_{k+1}$. The interest is in how $g_k$ (the size of the gap) grows as the size of the prime numbers grows.

In 1936, in a paper submitted to Acta Arithmetica titled "On the order of magnitude of the difference between consecutive prime numbers", Swedish mathematician Harald Cramér offered a conjecture, based on probabilistic ideas, that the large values of $g_k$ grow like $(\log p_k)^2$.

The actual problem is that our empirical data does not allow us to discriminate between the growth rate conjectured by Cramér and other conjectured possible growth rates, say $(\log \pi(p_k))^2$ for example (where $\pi(x)$ is the usual prime-counting function and $\pi(p_k)=k$). Another example is identified by Tomás Oliveira e Silva in Gaps between consecutive primes, where he observes that his empirical data suggests yet another growth rate, namely that of the square of the Lambert W function, or "omega function" (not the title of a Robert Ludlum thriller, I learn). The trouble is that these growth rates differ by very slowly growing factors (such as $\log\log p_k$) and much more data is needed to verify empirically which one is closer to the true growth rate.

The actual actual problem is that right now, we don't know of any general method more sophisticated than an exhaustive search for the determination of first occurrences and maximal prime gaps. In essence, we're limited to sieving successive blocks of positive integers for primes, recording the successive differences, and thus determining directly the first occurrences and maximal gaps. And, as the size of the prime numbers increases, so does the amount of computational effort required to do the sieving, etc.

Why the focus on gaps of "record" size?

Large (or small) gaps can be more interesting if they are of sufficient merit. A gap's merit indicates how much larger the gap is than the average gap between primes near that point (the average being $\ln(x)$ as a consequence of the Prime Number Theorem). The greater the merit, the more unusual the gap. The more unusual the gap, the more interesting it is, as an outlier, from a number theory perspective.
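To make the notion of merit concrete, here is a small illustration (my addition; the starting prime of the 1132 gap below is taken from the published first-occurrence records, since the FAQ itself does not state it):

```python
import math

def merit(gap, p):
    # Merit = gap size divided by the average local gap, ln(p),
    # per the Prime Number Theorem.
    return gap / math.log(p)

# The gap of 1132 discussed below follows the prime 1693182318746371.
p = 1693182318746371
print(merit(1132, p))           # ≈ 32.28, an exceptionally high merit
print(1132 / math.log(p) ** 2)  # ≈ 0.9206, the Cramér-Shanks ratio quoted below
```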
The following graph (taken from Tomás Oliveira e Silva's Gaps between consecutive primes) charts the available values of $P(g)$ that they were able to compute (between 2001 and 2012) and illustrates the principle of merit. The black line represents the lower bound for $P(g)$ suggested by Cramér's conjecture; the white dots are gaps between probable primes.

The noticeable outlier, the gap of 1132, is of significance to the related conjectures put forth by Cramér (1936) and Shanks (1964) concerning the ratio $g/\ln^2(p_1)$. Shanks reasoned that its limit, taken over all first occurrences, should be 1; Cramér argued that the limit superior, taken over all prime gaps, should be 1. Granville (1994), however, provides evidence that the limit superior is $\geq 2e^{-\gamma} \approx 1.1229$. For the 1132 gap, the ratio is 0.9206, the largest value observed for any $p_1 > 7$ thus far.

What's the current state of play?

Over the last few decades, exhaustive search has continued to push the envelope, courtesy of faster computers and concerted effort. All prime gaps in $0 < x < 2^{64}$ have now been analyzed, where $2^{64} = 18446744073709551616$ is the smallest positive integer requiring more than 64 bits in its binary representation, i.e. not representable in C as a uint64_t. The final push from 18446744000000000000 to $2^{64}$ was carried out by the combined efforts of members of the Prime Gap Searches (PGS) project at the Mersenne Forum: Jerry LaGrou, Dana Jacobsen, Robert Smith, and Robert Gerbicz.

Does getting high merits get harder when you get to larger gaps?

Primality tests take longer, so the whole search process takes longer. For example, searches with 11k-digit numbers are very slow. Empirically, in the 100-8000 digit range the BPSW test is about $O(\log^{2.5} n)$, i.e. a 2x larger size takes 5-6x longer. The larger size also means a longer range for a large merit, which means more tests; presumably $\log(n)$ growth. There is a complicating factor of the partial sieve, which has a dynamic $\log^2(n)$ depth.

Usually the tradeoff is that small sizes run faster but are better covered, hence need high merits to get a record. Large sizes (200k+) are slow but are so sparse that almost anything found is a record. The sweet spot this year (2015 at the time of writing) seems to be in the 70-90k range for efficiency of generating records. There are lots of gaps with merit under 10.

A little experiment looking at the time taken and the number of gaps with merit >= 5.0 found using $k \cdot p\#/30 - b$, where k = 1..10000 without multiples of 2, 3, 5:

```
p=20:  1.7s   102 found = 60/s (28-30 digits)
p=40:  4.1s   236 found = 58/s (69-71 digits)
p=80:  19.6s  515 found = 26/s (166-169 digits)
p=160: 235s   985 found = 4/s  (392-395 digits)
```

Interestingly, with this form the number we find with merit >= 5 goes up as p gets larger, but the time taken goes up quite a bit faster. This explains the shape of the graph of current records: high at the beginning and dropping off as gap size increases. It's certainly possible that a different method of selecting the search points would be more efficient, and it's also possible to improve the speed of this or other methods vs. doing prev/next prime with my GMP code. For example, with numbers larger than ~3000 digits, using gwnum would be faster than GMP. Gapcoin uses a different method, but it's not obvious how to get exact efficiency comparisons.

Where to look for gaps?
There is little point in looking for gaps < 1,352, as an exhaustive search of primes up to $4\cdot10^{18}$ has been carried out and all gaps smaller than this have been found. As of the summer of 2014, the Nicely site had early-instance prime gaps with merit > 10 listed for all possible gaps < 60,000, and an early effort by the Mersenne Forum has been to extend the early-instance list up to 100,000. At the far end of the scale, the Mersenne Forum is helping to support the largest gap search, looking at a candidate gap (4,680,156) provided by Mersenne Forum member mart_r. This has a merit > 20.

What's the best primality test that guarantees a 100% accurate result but can be done in polynomial time?

The following two are only 100% accurate within the range given.

• For 64-bit inputs, BPSW. There are also other known methods, and the optimal solution is a mix. The result is unconditionally correct for all 64-bit inputs, and is extremely fast. It's also commonly used on larger inputs as a compositeness test (sometimes called a probabilistic primality test), as it is fast and has no known counterexamples, with some good underlying reasons as to why we expect counterexamples to be rare.
• For up to about 82-bit inputs, deterministic Miller-Rabin. This is a fairly recent result.

All the following methods (ECPP, APR-CL, and AKS) are unconditionally correct for all sizes if they give an output, and all finish in polynomial time for the input sizes that are at all practical on today's computers (e.g. finishing within 100 years on a large cluster).

• For heuristic polynomial time, ECPP using Atkin-Morain methods. It is $O(\log^5 n)$ or $O(\log^4 n)$ depending on implementation. It is not guaranteed to finish in this time, but there are well-written heuristic analyses that show this complexity, and many millions of runs of practical software showing it matches those results. Primo uses ECPP. Almost all recent general-form proof records in the last 20 years have been done with ECPP. The output includes a certificate of primality which can be verified in guaranteed polynomial time (with a small exponent).
• APR-CL is another good method that is polynomial time in practice although not asymptotically so (the exponent has a factor of $\log\log\log n$ in it, which is less than a small constant for any size $n$ we would be applying it to). Pari/GP uses this. It does not output a certificate.
• AKS is deterministic and polynomial-time for general-form inputs of all sizes, and unconditionally correct like the others. It is also horrendously slow in practice. It is not used in practice because we have much better methods. If you're writing a paper or dealing with theoretical complexity, just say "AKS shows this problem is in P" and move on. That is the "best" result, considering it's short and people will nod and move on to the rest of your paper.

For small inputs such as 64-bit (numbers smaller than 18,446,744,073,709,551,616), we've known for a few years that BPSW is unconditionally correct. It is extremely fast and easy. Slightly easier to program are best-known deterministic Miller-Rabin base sets. For 32-bit, the optimal solution seems to be trial division for tiny inputs and a hashed single-base Miller-Rabin test for the rest.

For use in practice, APR-CL or ECPP. They don't check every box that AKS does (non-randomized and asymptotically polynomial), but they finish in polynomial time for numbers the size we care about, with a lower exponent and much less overhead than AKS.
If you want a certificate, then ECPP. This lets others quickly verify that the result actually is prime, rather than just taking it on trust that you ran the test. APR-CL and AKS do not have certificates.

Some questions and answers have been compiled from posts by members of the Mersenne Forum, a forum established in support of the Great Internet Mersenne Prime Search (GIMPS), but mostly they are my brutalisation of the concise and accurate writing of Drs Thomas R. Nicely and Tomás Oliveira e Silva, whose forgiveness I beg.
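As a toy version of the exhaustive search the FAQ describes (my own sketch; sympy's nextprime, which uses a BPSW-style test, is an assumed convenience, not something the FAQ prescribes):

```python
import math
from sympy import nextprime

def maximal_gaps(limit):
    # Scan primes below `limit`, recording every gap larger than any gap
    # seen so far, i.e. the maximal gaps, found by direct exhaustive search.
    records, p, best = [], 2, 0
    while p < limit:
        q = nextprime(p)
        if q - p > best:
            best = q - p
            records.append((q - p, p))  # (gap size, prime the gap follows)
        p = q
    return records

for gap, p in maximal_gaps(10**6):
    print(f"gap {gap:3d} after {p:6d}, merit {gap / math.log(p):5.2f}")
# The first few records: gap 1 after 2, gap 2 after 3, gap 4 after 7,
# gap 6 after 23, gap 8 after 89, ...
```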
2020-07-13 10:36:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6283656358718872, "perplexity": 910.6080606728995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00253.warc.gz"}
http://clubprivenaturista.it/gzuo/z-test-pdf.html
# Z Test Pdf

Z-tests are among the most basic statistical hypothesis testing methods and are often taught at an introductory level. A z-test is a hypothesis test based on the Z-statistic, which follows the standard normal distribution under the null hypothesis; it is used, for example, to determine whether two population means are different when the variances are known and the sample size is large. The z test statistic is based on the normal distribution, while the t test statistic is based on the t-distribution: a t-test is appropriate when you are handling small samples (n < 30), while a z-test is appropriate for moderate to large samples (n ≥ 30), or whenever the population is known to follow a normal distribution and σ is known.

For a one-sample test of H0: μ = μ0, the test statistic is

$Z = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$

It can take a positive or negative value. A z-score itself simply standardizes a value to a distribution with mean 0 and standard deviation 1.

For a two-sided test at α = 0.05, reject H0 if Z < −1.96 or Z > 1.96; because the test is two-tailed, the critical value is the Z-value that leaves probability α/2 = 0.025 in each tail. For a one-sided test at α = 0.05, the critical value is 1.645, and the p-value of a two-sided test is twice that of the corresponding one-sided test.

Worked one-sided example: test H0: μ = 7000 against Ha: μ > 7000 with x̄ = 7160, σ = 1200, n = 250:

$z = \frac{7160 - 7000}{1200/\sqrt{250}} = 2.11$

The sample mean has a z-score greater than the critical value of 1.645, so reject H0. Likewise, for a lower-tailed test one rejects H0 if Z < −1.645 and can then conclude, for instance, that the population mean is smaller than the hypothesized value; the p-value is P(z < z_observed).

The sign test is a related nonparametric test, used when dependent samples are ordered in pairs and the bivariate random variables are mutually independent. It is based on the direction of the plus and minus signs of the observations, not on their numerical magnitude: the test statistic is the smaller of the number of positive or negative signs, and it follows a binomial distribution with n = the number of subjects in the study and p = 0.5.

In Excel, the related t-test is available as TTEST(array1, array2, tails, type), where array1 is the first data set.
Test Name/Specific Test System Manufacturer Approved CPT Code(s) Adenovirus AdenoPlus (human eye fluid) Rapid Pathogen Screening, Inc. It checks if the difference between the means of two groups is statistically significance, based on sample averages and known standard deviations. For example 20 = (2)(2)(5) and 30 = (2)(3)(5). To create this article, volunteer authors worked to edit and improve it over time. 05 ( = NORMSINV(0. giver recipient Role Givers' Perceived and Recipients' Actual Gift Appreciations Mean appreciation 4. This practice test contains one multiple-choice question, one short-answer question, and one open-response question. 7% is outside of ±1S). The F statistic is defined as the ratio between the two independent chi square variates that are divided by their respective degree of freedom. observations in the model are ordered by the size of z. AUDIT PATIENT: Because alcohol use can affect your health and can interfere with certain medications and treatments, it is important that we ask some questions about your use of alcohol. WILSON CUSTOM GUN TEST TARGET RANGE - 100 YDS METHOD - Benchrest RIFLE TEST/SIGHT - IN LOAD AS FOLLOWS: "c R Z/ 9/(03 Job #: Serial #: Your Custom Rifle Was Built With Pride By The Following Gun. Find your exam program’s homepage in the alphabetical list below by clicking on the first letter of the test sponsor / organization and then selecting your program. Diagnostic Grammar Test. • VCE P4070-005 - IBM System z and z/OS Fundamentals Mastery objectives and course content. the past 14 days, your health care provider can test you for the virus. Nolan and Thomas Heinzen (with a few modifications). equipment required: tape measure, marking cones, stopwatch, timing gates (optional) pre-test: Explain the test procedures to the subject. a—Jfrn Watson a Robbie St. Approve job candidates eligible for employment with and get people to work faster. We have n 35, x̄ 6. =NORMSINV) 2 (α =NORMSINV) 2 0. Both cases are essential for telling a. A z-test is computationally less heavy, especially for larger sample sizes. The F-test for linear regression tests whether any of the independent variables in a multiple linear regression model are significant. 05 " One-sided right-tailed test H a:μ>μ 0! Critical value is ! iTunes library example: 14 z=1. • Normal probability tables give you the percent of the distribution that would exceed the specification limit for a given z value • Remember that 68. Stata solution. The sample is large and the population standard deviation is known. NOTE: Praxis 5018 is the Praxis Elementary Education: Content Knowledge exam, Praxis 5001 is the. - Duration: 9:33. Using Your TI-NSpire Calculator for Hypothesis Testing: The 1-Proportion z Test Dr. Unlike most statistical packages, the default assumes unequal variance and applies the Welsh df modification. Objective examinations include multiple choice,. Applicant/Lab Test name Z-identifier Test Details Checklist/Questionnaire. Split PDF files based on content. The test statistic is assumed to have. Use Excel to find the critical value of z for each hypothesis test. Worksheet Answer Key. That includes 11 percent of those age 65 and older and one-third of those 85 and older. The 1Z0-1048 test training pdf owns the most useful question training, in other words, the best materials to pass the exam. The estimated value (point estimate) for m is x, the sample mean. Dummy PDF file. Calculus II , Final (practice test) 9:00–12:00 noon, Friday, Dec. 
values, the normal distribution may be used to conduct the test of. HYPOTHESIS TEST FOR ONE POPULATION PROPORTION – STEP 3 Unit 4A - Statistical Inference Part 1 1. Prove formula (8) from the limit de nition of the derivative [Hint: use the binomial formula]. Power and Sample Size in a Nutshell Useful Properties of the Normal Distribution Brief Introduction to Z-Tests Deriving Z-Test Formulas: 1-Sample, 1-Sided. demonstrate their English language proficiency. Proportion problems are never t-test problems - always use z! However, you need to check that np_ {0} and n (1-p_ {0}) are both greater than 10, where n is your sample size and p_ {0} is your hypothesized population proportion. It can be used a) in place of a one-sample t-test b) in place of a paired t-test. Do these results imply a difference in the reliability of these two machines? (Use α = 0. Pediculus humanus capitis — see Head Lice. about hCG test. • Most Recent Latest P4070-005 Test Practice Questions and Answers. 8 Summarize the results of a one-independent sample z test in American Psychological Association (APA) format. Laura Schultz The 1-proportion z test is used to test hypotheses regarding population proportions. Contribute to CmIm/sapui5-display_smartform_pdf development by creating an account on GitHub. ttesti 8 7 1 10 5. Computing the Power of a test Consider nobservations from a normal distribution with unknown mean and known variance ˙2. Meaning of Chi-Square Test: The Chi-square (χ2) test represents a useful method of comparing experimentally obtained […]. The test statistic was -3. For a fixed confidence level, when the sample size increases, the length of the confidence interval for a population mean decreases. Do these results imply a difference in the reliability of these two machines? (Use α = 0. pdf do cwiczen chomikuj cw z fizyki z dzialu Iloraz wielomianów - PracaDomowa24. the past 14 days, your health care provider can test you for the virus. The z Test: An Example μ= 156. In simple terms, a hypothesis refers to a supposition which is to be accepted or rejected. a!0, and we cannot de ne the z-derivative of <(z). It gives the probability of a normal random variable not being more than z standard deviations above its mean. Sittinger February 19, 2010 1 Introduction. Approve job candidates eligible for employment with and get people to work faster. This handout deals with using Wilcoxon with small sample sizes. Split PDFs using personal criteria, such as blank pages, text lines, bookmarks, offered by the fle. The formula for the z test is given on the next slide. • Please wait 28 days to do the test after your last dose after your last dose of antibiotic therapy*, bismuth products, antimicrobial herbals* (i. The z value indicates the number of standard deviation units of the sample from the population mean. The test statistic is a z-score (z) defined by the following equation. You know you use PDFs to make your most important work happen. The distance between W and X is 2: W X 2 The distance between X and Y is 4. Standard normal table. > ncp <- 1. The appropriate test statistic is. 7: Two-Sample Problems Paired t-test (Section 4. Decision Rule Critical value approach: Compare the test statistic with the critical values defined by significance level α, usually α= 0. To select the z-test tool, click the Data tab's Data Analysis command button. Click here to begin the test. Carry out an appropriate statistical test and interpret your findings. 
8169381693 GRE Pract General Test cs4 MAC dr01 038,1010 lg edits dr01 031610 lg edits dr01 031810 lg r02 5510 w r02Edits 51410 w dr02 51710 mc r03 5 2810 w r03edits 6210 w rft04 6910 db Preflight 65110 db dr01 12910 mc dr01revs 122210 mc pdf 122210 mc dr02 11011 mc pdf 11911 mc dr03 012511 lg edits dr03 012511 lg dr05. Dolby® Digital Dolby Digital is the universal standard for 5. The Normal distribution can be used to compute the probability of obtaining a certain z score. Prove formula (8) from the limit de nition of the derivative [Hint: use the binomial formula]. The partial sums Sm = Xm n=0 zn = 1. APP Synthetic Monitor. A T-test is appropriate when you are handling small samples (n < 30) while a Z-test is appropriate when you are handling moderate to large samples (n > 30). Prepare forms and record basic information such as age, height, body weight, gender. NAPLAN 2008, final test – language conventions, Year 7 (PDF 618 kb) NAPLAN 2008, final test – language conventions, Year 9 (PDF 733 kb) NAPLAN 2008, final test – numeracy, Year 3 (PDF 3. Math test activities for students and teachers of all grade levels. In statistic tests, the probability distribution of the statistics is important. Download free » Order » Learn more » A-PDF Image to PDF (Scan to PDF) Convert photos, drawings, scans and faxes into Acrobat PDF documents. Pdf compliance. Yes, a paired t-test suggests that the average difference in hours slept (Dalmane - Halcion) = 0. : Some Basic Concepts of hemistry, Structure of Atom Bot. Analysts agree: the industry is destined for. ${z = \frac{(p - P)}{\sigma}}$ where P is the hypothesized value of population proportion in the null hypothesis, p is the sample proportion, and ${\sigma}$ is the standard deviation of the sampling distribution. 3 Likes Authority 4 Enthusiastic 2 Sensitive Feelings 1 Likes Instructions L O G B 1. The Practice Test can be used to familiarize students with the ELPAC test questions and tasks they will be asked to complete to. Education ministries, universities and schools around the world are using the EF SET to survey thousands of students. Using the z-chart, like the t-table, we see what percentage of. The information on this questionnaire is also helpful for your primary care provider to. As in Figure 1, 68% of the distribution is within one standard deviation of the mean. We denote the mean of the population by „ and its variance by ¾2: Z 1 ¡1 xp(x)dx = E[X] ¾2 = Z 1 1 (x¡„)2p(x)dx = E[X2]¡E[X]2: (1. Split PDFs using personal criteria, such as blank pages, text lines, bookmarks, offered by the fle. In our previous article Power and Sample Size in a Nutshell we gave a broad overview of power and sample size calculations. 8 Graphical Methods for Comparing Means 44 IV χ2-TESTS 44 4. For example, the social security number is a number, but not something that one can add or subtract. A test statistic is a random variable used to determine how close a specific sample result falls to one of the hypotheses being tested. MOTORCYCLE RIDER SKILL TEST INSTRUCTIONS This test consists of four riding exercises that measure your motorcycle control and hazard response skills. n 1 ≥30 and n 2 ≥30. (As such, it’s usually easy to guess how these formulas generalise for arbitrary n. Corrected Sum of Squares for Model: SSM = Σ i=1 n. Here is an example of how a z-score applies to a real life situation and how it can be calculated using a z-table. 
An instant test kit will save your company dollars and time by using disposable on-site instant screening devices that display results in 5 to 10 minutes. 1 has the probability distribution given by f(˜2) = 1 2 =2( =2) e ˜ 2=2(˜2)( =2) 1 (2) This is known as the ˜2-distribution with degrees of freedom. Encrypt your PDF with a password to prevent unauthorized access to the file content, especially for file sharing or archiving. Reference to the test methods in this specification should specifically state the particular test or tests desired. We consider a Bell test in the form proposed by Clauser, Horne, Shimony and Holt (CHSH)18 (Fig. Returns the probability associated with a Student’s t-Test. Inserting the data into the formula for the test statistic gives. a—Jfrn Watson a Robbie St. 3 Test for Homogeneity 50 V MAXIMUM LIKELIHOOD ESTIMATION 50 5. Note: although the con dence interval produced by the z-test is fairly accurate when compared to the t-test for the same problem if n>30, the p-value produced by a z-test can be very much smaller than the p-value computed by the corresponding t-test, especially when the p-. I want to use this video to kind of make sure we intuitively and otherwise and understand the difference between a Z-statistic-- something I have trouble saying-- and a T-statistic. Laura Schultz Statistics I The 1-proportion z test is used to test hypotheses regarding population proportions. doesn't have a trend) and potentially slow- turning around zero, use the following test equation: Δ =θt t − α+ Δ t− +αΔz z z z t−1 1 1 2 2 +L+αΔ − +z a p t p t where the number of augmenting lags (p) is determined by minimizing the. The appropriate test statistic is. A limitation of kappa is that it is affected by the prevalence of the finding under observation. The testbook test series is all you would need to get through. data from an observational study - the typical case when using a chi-squared test for independence. 0mm LED that enables never before seen color consistency, luminance, flux density and design flexibility for lighting solutions. Critical value of z: ±1. Clinilog Blood Sugar Logbook Download. These Wald tests are not always optimal, so other methods are preferred, particularly for small sample sizes. test(y~x) # where y is numeric and x is a binary factor # independent 2-group t-test t. Access Our Products. A random sample of 29 were weighed and had gained an. These tests yield identical p-values but the z-test approach allows you to compute a confidence interval for the difference between the proportions. Multiplication Facts to 10: How Many Can You Do In 1 Minute? 1. , a coin flip), can take one of two values. Of these 100 doctors, 82 indicate that they recommend aspirin. Rejection & Acceptance Regions Type I and Type II Errors (S&W Sec 7. The p-value would be the area to the left of the test statistic. Do these sample results support the bottler's claim? (Use a level of. New Product Wing Union/Hammer Union Pressure Sensors, Models 434, 435, & 437. Here is an example of how z-scores can translate into grades. Two-Sample Z-Tests Assuming Equal Variance Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample z-tests when the variances of the two groups (populations) are assumed to be known and equal. 4 Features Demonstrated: • Primary bookmarks in a PDF file. The testbook test series is all you would need to get through. Deriving Z-Test Formulas: 1-Sample, 1-Sided. 
The DEX Z-Code™ Identifier is a unique 5-character alpha-numeric code associated with certain molecular diagnostics (MDx) tests and is used by certain payers as an adjunct to non-specific CPT codes. 37 males were randomly selected and the mean number of calories burned per hour playing squash was 534. Negative value should be indicated by a minus sign) (a) 8 percent level of significance, two-tailed test. No test will be administered to. for colonoscopy) or barium radiography. This test compares a sample observation against a predicted value which is assumed to be Binomially. (For 95%, z = 1:96. If a Z-score is 0, it represents the score as identical to the mean score. The general framework of Hypothesis testing will be covered in future lessons. P a vector of p-values corresponding to Z. TTEST (array1, array2, tails, type) Array1 is the first data set. 'Student's' t Test is one of the most commonly used techniques for testing a hypothesis on the basis of a difference between sample means. The sample is large and the population standard deviation is known. 909QT/LF909QT shown no. (As such, it’s usually easy to guess how these formulas generalise for arbitrary n. Test whether or not the plane 2x+ 4y + 3z = 0 is a subspace of R3. A health researcher read that a 200-pound male can burn an average of 524 calories per hour playing tennis. (Try the confirmatory test for "abstract nouns. The first type assesses a child's ability to associate a sound with a given symbol, and the second type assesses a child's ability to decode nonsense words. 1, 2 and 3 on reduced pressure assembly then close test cock No. For this one sample z test, you want the area in the right tail, so subtract from 1: 1 – 0. 3 An actual quote from Tolstoy (1893): "The most difficult subjects can be explained to the most slow-witted man if. The Five Love Languages Test By Dr. In this case p is greater than 0. • In terms of the z-distribution (or t. 96, with alpha = 0. For a more accurate assessment, choose the 5 Minute Test. The following table is the result. Background Information Location Tests Definition of “Location Test” Allow us to test hypotheses about mean or median of a population. This test compares a sample observation against a predicted value which is assumed to be Binomially. Comparison of the means of two paired. a!0, and we cannot de ne the z-derivative of <(z). This means that:. exe file to pop up the main window of the SDS test tool. two sample z-test: A hypothesis test that is used to compare two sample groups to determine if they have originated from the same population. 2 and a real-valued positive reference impedance Z 0 as follows: a 1 = V 1 + Z 0 I 1 2 Z 0 b 1 = V 1 −Z 0I 1 2 Z 0 a 2 = 2 − 0 2 2 Z 0 b 2 = V 2 +Z 0I 2 2 Z 0 (traveling waves) (14. For questions 5 through 7, solve the given proportions for x: On a map, 1 inch represents 100 feet, How many inches would represent 350 feet? 1 meter equals 100 centimeters. This Stage Z Test Report describes the independently administered and transparent test bed process established to develop and validate a proposed Z-axis (vertical) metric for indoor wireless 9-1-1 calls, as required by the Federal ommunications ommissions (F s) 9-1-1. 95) numeric vector; NA s and Inf s are allowed but will be removed. For example, a z-score of 1. 50) × (1 - 0. In essence for this table a z-score of 10 is off the charts, we could use 10 to "act like" infinity. 
Test statistic: z= x 0 ˙x Test statistic: z= x 0 ˙x Rejection region: z< z Rejection region: z< z =2 or z>z =2 (or z>z when H a: > 0) where z is chosen so that where z =2 is chosen so that P(z< z ) = P(z>z =2) = =2 Note: 0 is the symbol for the numerical value assigned to under the null hypothesis. 05, the null hypothesis of equal means could. A Review of Backtesting and Backtesting Procedures Sean D. Tails specifies the number of distribution tails. The general framework of Hypothesis testing will be covered in future lessons. Request information. 35 questions will be asked in the NZ road code test, Learners have to answer at least 32 questions correctly (33 for heavy vehicle driving licence) within 30 minutes to pass the road code exam. Prove formula (8) from the limit de nition of the derivative [Hint: use the binomial formula]. 1Z0-1003-20 pass4sure dumps are highly recommended as a good study material for the preparation of 1Z0-1003-20 actual test. It can be used to automate different application types, such as. They must complete and return the exam within 90 minutes. Wald test is based on the very intuitive idea that we are willing to accept the null hypothesis when θ is close to θ0. Statistics: The Standard Normal Probability Distribution 10 Questions | 1212 Attempts normal distribution, statistics, math, tutoring, z-score, probability, normal curve, Tammy the Tutor, MathRoom Contributed By: Tammy the Tutor. ) theAJS9 test sockets using suitable screwdriver (isolated Dia. Geometry Chapter 6 Practice Test Multiple Choice Identify the choice that best completes the statement or answers the question. The LRT of mixed models is only approximately $$\chi^2$$ distributed. A one-sample t-test is designed to answer a null hypothesis that concerns the data set's mean when the data are from independent observation and follow a normal distribution. • the distribution of R 1 under H 0 doesn't depend on the distribution of X or Y - it is a fixed distribution (which does, however, depend on n 1 and n 2). z z = 1 2 N N. Thus for binomial population, the hypothesis we want to test is whether the sample proportion is representative of the Population proportion P = P 0 against H 1: P≠P 0 or H 1: P>P 0 or H 1: P. Multisyllabic Words Administer these items only if the student is able to read six of the eight items in Task G. n 1 ≥30 and n 2 ≥30. LUXEON Z ES is undomed, a feature that provides unmatched optical flexibility for precise beam angle control. data an optional data frame containing the variables in the model. 8169381693 GRE Pract General Test cs4 MAC dr01 038,1010 lg edits dr01 031610 lg edits dr01 031810 lg r02 5510 w r02Edits 51410 w dr02 51710 mc r03 5 2810 w r03edits 6210 w rft04 6910 db Preflight 65110 db dr01 12910 mc dr01revs 122210 mc pdf 122210 mc dr02 11011 mc pdf 11911 mc dr03 012511 lg edits dr03 012511 lg dr05. The product is undomed for precise. about International normalised ratio (INR) test. It checks if the difference between the means of two groups is statistically significance, based on sample averages and known standard deviations. I like it when you hug me. Calculus II , Final (practice test) 9:00–12:00 noon, Friday, Dec. This handout will take you through one of the examples we will be considering during class. From its unique two-tone exterior to its Alcantara®-wrapped steering wheel, we proudly present the 2020 50th Anniversary Edition. 31, and the two sided test yields a p-value equal to 2*(1-. Pelvic Inflammatory Disease (PID) People at High Risk. 
PDF of April 2019 ACT Form Z15 - Special Testing TIR (Test Information Release) As far as I know, this is the first time that ACT Inc. For each significance level, the z-test has a single critical value. Does this test result in a report/information that is. We will use the following as a running example. The test statistic is assumed to have. small amount as a test to ensure proper angles on both the Mountain Goat and the Desert Fox- Once I know that my recovery is as expected, I'll sit down in my comfy garage chair and begin feeding the Mountain Goat, watching as the gold and a very small amount of black sands drop onto the flume and into the leads Of the Desert Fox. Multiplication Facts to 10: How Many Can You Do In 1 Minute? 1. IBM® Z® Development and Test Environment is a platform for mainframe application development, testing, demonstration and education. Two Sample z-test in Excel - Duration: 6:45. ) Power Proportions 3 / 31 Proportionsand hypothesis tests. You may be offline or with limited connectivity. • Updated VCE P4070-005 - IBM System z and z/OS Fundamentals Mastery. Application Delivery Analysis. Sprawdzian Fizyka 2 Lo Zamkor listy plików PDF sprawdzian Polecany. What is hypothesis testing?(cont. Combinations with z Scores Example 1: The Chapin Social insight test evaluates how accurately the subject appraises other people. Each level has at least 1 fiction-nonfiction. On the other hand, a statistical test, which determines the equality of the variances of the two normal datasets, is known as f-test. A Test Variable (s): The variable whose mean will be compared to. 8 Graphical Methods for Comparing Means 44 IV χ2-TESTS 44 4. So, there are two possible outcomes: Reject H 0 and accept 1 because of su cient evidence in the sample in favor or H 1; Do not reject H 0 because of insu cient evidence to support H 1. Does this test result in a report/information that is. The test is considered robust for violations of normal distribution and it. Text Book : Basic Concepts and Methodology for the Health Sciences 21 n X Z V-Po 10 20 27 30. Please input numbers in the required fields and click CALCULATE. End-of-the-Year Test - Grade 3 This test is quite long, so I do not recommend having your child/student do it in one sitting. For example, the value for 1. teachyourselfalesson. At the end of your monthly term, you will be automatically renewed at the promotional monthly subscription rate until the end of the promo period, unless you elect to. •The intent of hypothesis testing is formally examine two opposing conjectures (hypotheses), H 0 and H A •These two hypotheses are mutually exclusive and Two sided test: z "/2 (i. (from table D) 6. This is a complete online exam preparation hub for all the competitive Exams in India. The 2-sample t-test takes your sample data from two groups and boils it down to the t-value. More than 50,000 Satisfied Customers. ReadyToTest. Command Direct: Commander directs an individual test for fitness for duty. It is a best choice to improve your professional skills and ability to face the challenge of 1z1-997 Actual Test. Objective examinations include multiple choice,. 2014 (d) Construct a 95 percent confidence interval on the mean breaking strength. This stanadrd Z-score helps to decide whether to keep or reject the null hypothesis. Using Table A-2, the area to the left of the test statistic is 0. 
Salah satu metode untuk menguji hipotesis adalah sample t-Test, dimana metode sample t-Test dibagi menjadi tiga, yaitu one sample t-Test, paired sample t-Test dan independent sample t-Test. Set up decision rule. see also Influenza. Mann-Whitney U test (Non-parametric equivalent to independent samples t-test) The Mann-Whitney U test is used to compare whether there is a difference in the dependent variable for two independent groups. Rieke, Manhattan Wood Project CALIBRATION TEST PATTERN DESCRIPTION The X-Carve is an affordable hobby-sized CNC, with an accuracy of up to. <(z) is continuous everywhere, but nowhere z-di erentiable! Exercises: 1. As in the one sample case, the Wald iterval and test performs poorly relative to the score interval and test For testing, always use the score test For intervals, inverting the score test is hard and not o ered in standard software A simple x is the Agresti/Ca o interval which is obtained by calculating ~p. The 1-Proportion z Test Dr. The following table is the result. Practice Tests and Answer Keys Practice Test Name Date 1 Which group of individuals has a higher risk of foodborne illness? A Teenagers B Elderly people C Women D Vegetarians 2 Parasites are commonly associated with A seafood. We can't compare an observed value for t to a critical value for Z. , a time series). The test statistic is assumed to have. Z-test คือ การทดสอบค่าซี ในการทดสอบเกี่ยวกับค่าเฉลี่ย ของกลุ่มตัวอย่างในกรณีที่กลุ่มตัวอย่างมีจ านวนมาก n ≥30 โดย. How This Practice Test Differs From an Actual LSAT This practice test is made up of the scored sections from the actual disclosed LSAT administered in June 2007 as well as the writing sample topic. CompareCorrCoeff. Sprawdzian Fizyka 2 Lo Zamkor listy plików PDF sprawdzian Polecany. The first attempt of the On-Site exam should be scheduled within a 6 month period from the date that the candidate receives the FDNY Z-59 letter indicating a. overall, and both are common stressors for Gen Z as well. Introductory Statistics Hypothesis Testing Review( Critical Value Approach) MULTIPLE CHOICE. The samples are independent. , z or t) 2. Test Card We believe that step 1: hypothesis And measure step 3: metric To verify that, we will. The quadric surface given by equation x2 +y2 +z2 = 10 is a sphere of radius √ 10. Statistics: The Standard Normal Probability Distribution 10 Questions | 1212 Attempts normal distribution, statistics, math, tutoring, z-score, probability, normal curve, Tammy the Tutor, MathRoom Contributed By: Tammy the Tutor. Using the z-chart, like the t-table, we see what percentage of. CONTOUR DIABETES app. Driver Safety Test Answers Page 2 of 2 16. Home > Test Information > A-Z Test List > # # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z. In statistic tests, the probability distribution of the statistics is important. Each level has at least 1 fiction-nonfiction. "p" is the probability the variables are. The test can be used for paired or unpaired groups. Z Series (Z Foil) FIGURE 5 – STANDARD IMPRINTING AND DIMENSIONS W Lead Material #22 AWG Round Solder Coated Copper Optional Customer Part Number Print Specification, etc. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. 
This Stage Z Test Report describes the independently administered and transparent test bed process established to develop and validate a proposed Z-axis (vertical) metric for indoor wireless 9-1-1 calls, as required by the Federal Communications Commission’s (FCC’s) 9-1-1 Location Accuracy Fourth Report & Order. Test statistics: (Step 3) Hypothesis testing for a mean (σ is known, and the variable is normally distributed in the population or n > 30 ) z x n = − µ σ 0 (TI-83: STAT TESTS 1:Z-Test) Hypothesis testing for a mean (σ is unknown, and the variable is normally distributed in the population or n > 30 ) 0 t x s n = − µ. All needle valves must be closed on test kit. 96, or z > zα/2 = z 0. SECURE THE WORKFORCE. fl Writing each z in the polar form z = re|˚, on p. More than 50,000 Satisfied Customers. Hypothesis Testing The idea of hypothesis testing is: Ask a question with two possible answers Design a test, or calculation of data Base the decision (answer) on the test Example: In 2010, 24% of children were dressed as Justin Bieber for Halloween. eggPlant is a GUI test automation tool for professional software applications and enterprise teams. All of the variables in your dataset appear in the list on the left side. We denote the mean of the population by „ and its variance by ¾2: Z 1 ¡1 xp(x)dx = E[X] ¾2 = Z 1 1 (x¡„)2p(x)dx = E[X2]¡E[X]2: (1. Now let’s look at STEP 3 for the z-test for the population proportion \⠀瀀尩. • Z is the number of standard deviations that the specification is from the mean. xhg3s428lfja, vevsv6giz2291, fix5s1whnxryel5, w3moydbtibvw4a, m95rbdg52ss3, vt8ty6xmf1qilnn, rs30e2jiiibx, gw3t4hh3cdy51d, rr6hhiof3se, fsjgdyyfasmpw45, qm5p8a4ssh9, 0wez4sbifj2c, bprqvt39cqvltz, bn9dlhfrtipg3, t2neccpolcct8sb, xmnvitnw2qvl, dr9p97rccv28n4n, h555x8kbvj6ti, 6m2lsjjl0ud18k, tinspxma5bv8, c6fkycxl10roky, eenkbdlmw2, ldtuecmua8ecw4h, 9dhs3y89rpq0g, 6benzoke8ilxd8
2020-07-10 02:47:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5367485880851746, "perplexity": 2492.2937418063048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00561.warc.gz"}
http://mlweb.loria.fr/book/en/unsupervisedlearning.html
# Unsupervised Learning

## In words...

In unsupervised learning, the computer is given a data set of unlabeled patterns from which it should extract "knowledge". This knowledge can take different forms, each giving rise to a different learning problem. Clustering aims at classifying the data into a number of groups containing similar patterns. Density estimation aims at learning a generative model of the data, which could be used for instance to generate new data instances or to estimate the probability that a pattern falls within a given region. Dimensionality reduction methods can also be seen as unsupervised learning methods.

## In pictures...

### Abstract view of unsupervised learning

## In maths...

The goal of unsupervised learning is to extract knowledge from a data set of $N$ unlabeled input vectors,
$$\{ \mathbf{x}_1, \dots, \mathbf{x}_N \} \in \mathcal{X}^N .$$
This knowledge can take different forms, such as groups of similar patterns or a probabilistic model of the data, typically a probability density function $$p_X(x).$$
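To make the clustering task above concrete, here is a minimal sketch (not part of the original page; the toy two-blob data set and the choice of $k = 2$ clusters are illustrative assumptions) of grouping unlabeled patterns with k-means:

```python
# Minimal k-means sketch: cluster N unlabeled 2-D patterns into 2 groups.
# The synthetic two-blob data set is an assumption made for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two Gaussian blobs play the role of the unlabeled inputs x_1, ..., x_N.
X = np.vstack([rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
               rng.normal(loc=3.0, scale=0.5, size=(50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])  # cluster index assigned to the first ten patterns
```

A density-estimation analogue would instead fit, e.g., a Gaussian mixture to the same data and read off an estimate of $p_X(x)$; the cluster labels above are the "groups of similar patterns" form of knowledge mentioned in the text.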
2017-11-23 07:10:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3029208481311798, "perplexity": 531.8106531763331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00140.warc.gz"}
https://deepai.org/publication/mutants-and-residents-with-different-connection-graphs-in-the-moran-process
# Mutants and Residents with Different Connection Graphs in the Moran Process

The Moran process, as studied by Lieberman et al. [L05], is a stochastic process modeling the spread of genetic mutations in populations. In this process, agents of a two-type population (i.e. mutants and residents) are associated with the vertices of a graph. Initially, only one vertex chosen u.a.r. is a mutant, with fitness r > 0, while all other individuals are residents, with fitness 1. In every step, an individual is chosen with probability proportional to its fitness, and its state (mutant or resident) is passed on to a neighbor which is chosen u.a.r. In this paper, we introduce and study for the first time a generalization of the model of [L05] by assuming that different types of individuals perceive the population through different graphs, namely G_R(V,E_R) for residents and G_M(V,E_M) for mutants. In this model, we study the fixation probability, i.e. the probability that eventually only mutants remain in the population, for various pairs of graphs. First, we transfer known results from the original single-graph model of [L05] to our 2-graph model. Among them, we provide a generalization of the Isothermal Theorem of [L05], which gives sufficient conditions for a pair of graphs to have the same fixation probability as a pair of cliques. Next, we give a 2-player strategic game view of the process where player payoffs correspond to fixation and/or extinction probabilities. In this setting, we attempt to identify best responses for each player and give evidence that the clique is the most beneficial graph for both players. Finally, we examine the possibility of efficient approximation of the fixation probability. We show that the fixation probability in the general case of an arbitrary pair of graphs cannot be approximated via a method similar to [D14]. Nevertheless, we provide an FPRAS for the special case where the mutant graph is complete.

## 1 Introduction

The Moran process [14] models antagonism between two species whose critical difference in terms of adaptation is their relative fitness. A resident has relative fitness 1 and a mutant relative fitness $r$. Many settings in Evolutionary Game Theory consider fitness as a measure of reproductive success; for examples see [15, 7, 3].
A generalization of the Moran process by Lieberman et al. [10] considered the situation where the replication of an individual's fitness depends on some given structure, i.e. a directed graph. This model gave rise to an extensive line of works in Computer Science, initiated by Mertzios et al. in [12]. In this work we further extend the model of [10] to capture the situation where, instead of one given underlying graph, each species has its own graph that determines the way it spreads its offspring. As we will show, due to the process' restrictions, only one species will eventually remain in the population.

Our setting is by definition an interaction between two players (species) that want to maximize their probability of occupying the whole population. This strategic interaction is described by a 1-sum bimatrix game, where each player (resident or mutant) has all the strongly connected digraphs on $n$ nodes as her pure strategies. The resident's payoff is the extinction probability and the mutant's payoff is the fixation probability. The general question that interests us is: what are the pure Nash equilibria of this game (if any)? To gain a better understanding of the behaviour of the competing graphs, we investigate the best responses of the resident to the clique graph of the mutant.

This model and question is motivated by many interesting problems from various, seemingly unrelated scientific areas. Some of them are: idea/rumor spreading, where the probability of spreading depends on the kind of idea/rumor; computer networks, where the probability that a message/malware will cover a set of terminals depends on the message/malware; and the spread of mutations, where the probability of a mutation occupying the whole population of cells depends on the mutation. Using the latter application as an analogue for the rest, we give the following example to elaborate on the natural meaning of this process.

Imagine a population of identical somatic resident cells (e.g. biological tissue) that carry out a specific function (e.g. an organ). The cells connect with each other in a certain way; i.e., when a cell reproduces it replaces another from a specified set of candidates, namely the set of cells connected to it. Reproduction here is the replication of the genetic code to the descendant, i.e. the hardwired commands which determine how well the cell will adapt to its environment, what its chances of reproduction are, and which candidate cells it will be able to reproduce on. The changes in the information carried by the genetic code, i.e. mutations, give or take away survival or reproductive abilities. A bad case of mutation is a cancer cell whose genes force it to reproduce relentlessly, whereas a good one could be a cell with enhanced functionality. A mutation can affect the cell's ability to adapt to the environment, which translates to chances of reproduction, and/or change the set of candidates in the population that should pay the price for its reproduction. Now back to our population of resident cells which, as we said, connect with each other in a particular way. After lots of reproductions a mutant version of a cell shows up due to replication mistakes, environmental conditions, etc. This mutant has the ability to reproduce at a different rate, and also to be connected with a set of cells different from the one of its resident version. For the sake of argument, we study the most pessimistic case, i.e.
our mutant is an extremely aggressive type of cancer with increased reproduction rate and maximum unpredictability; it can replicate on any other cell and do that faster than a resident cell. We consider the following motivating question: supposing this single mutant will appear at some point in time on a cell chosen uniformly at random, what is the best structure (network) of our resident cells such that the probability of the mutant taking over the whole population is minimized?

The above process that we informally described captures the real-life process remarkably well. As a matter of fact, a mutation that affects the aforementioned characteristics in a real population of somatic cells occurs rarely compared to the time it needs to conquer the population or get extinct. Therefore, a second mutation is extremely unlikely to happen before the first one has reached one of those two outcomes, and this allows us to study only one type of mutant per process. In addition, apart from the different reproduction rate, a mutation can lead to a different "expansionary policy" of the cell, something that has been overlooked so far.

## 2 Definitions

Each of the population's individuals is represented by a label $i \in V$ and can have one of two possible types: $R$ (resident) and $M$ (mutant). We denote the set of nodes by $V$, with $|V| = n$, and the set of resident (mutant) edges by $E_R$ ($E_M$). The node connections are represented by directed edges; a node $i$ has a type $R$ ($M$) directed edge $(i,j)$ towards node $j$ if and only if, when $i$ is chosen and is of type $R$ ($M$), it can reproduce on $j$ with positive probability. The aforementioned components define two directed graphs: the resident graph $G_R(V, E_R)$ and the mutant graph $G_M(V, E_M)$. A node's type determines its fitness: residents have relative fitness 1, while mutants have relative fitness $r > 0$.

Our process works as follows. We start with the whole population as residents, except for one node which is selected uniformly at random to be mutant. We consider discrete time, and in each time-step an individual is picked with probability proportional to its fitness, and copies itself on an individual connected to it in the corresponding graph ($G_R$ or $G_M$) with probability determined by the (weight of the) connection. The probability of $i$ (given that it is chosen) reproducing on $j$ when $i$ is resident (mutant) is by definition equal to some weight $w_{ij}^R$ ($w_{ij}^M$); thus $\sum_{j} w_{ij}^R = \sum_{j} w_{ij}^M = 1$ for every $i \in V$. For $G_R$, every edge $(i,j)$ has weight $w_{ij}^R > 0$ if $(i,j) \in E_R$, and $w_{ij}^R = 0$ otherwise. Similarly for $G_M$. For each graph we then define the weight matrices $W^R = [w_{ij}^R]$ and $W^M = [w_{ij}^M]$, which contain all the information of the two graphs' structure.

After each time-step three outcomes can occur: (i) a node is added to the mutant set $S$, (ii) a node is deleted from $S$, or (iii) $S$ remains the same. If both graphs are strongly connected, the process ends with probability 1 when either $S = \emptyset$ (extinction) or $S = V$ (fixation). An example is shown in Figure 1.

We denote by $f(S)$ the probability of fixation given that we start with the mutant set $S$. We define the fixation probability to be $f = \frac{1}{n} \sum_{i \in V} f(\{i\})$ for a fixed relative fitness $r$. We also define the extinction probability to be equal to $1 - f$. In the case of only one graph (i.e. $G_R = G_M$), which has been the standard setting so far, the point of reference for a graph's behaviour is the fixation probability of the complete graph (called the Moran fixation probability), $\rho = \frac{1 - 1/r}{1 - 1/r^n}$. A graph is an amplifier of selection if $f > \rho$ for $r > 1$, or $f < \rho$ for $r < 1$, because it favors advantageous mutants and discourages disadvantageous ones. It is a suppressor of selection if $f < \rho$ for $r > 1$, or $f > \rho$ for $r < 1$, because it discourages advantageous mutants and favors disadvantageous ones. An undirected graph is a graph for which $(i,j) \in E$ if and only if $(j,i) \in E$.
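To make the process definition concrete, the following is a small Monte-Carlo sketch (not from the paper; the weight-matrix encoding and the trial count are implementation choices made here) that estimates the fixation probability $f$ of the two-graph process:

```python
# Monte-Carlo estimate of the fixation probability f for the 2-graph
# Moran process: W_R, W_M are row-stochastic weight matrices, where
# W[i, j] is the probability that i reproduces on j when i is chosen.
import numpy as np

def fixation_probability(W_R, W_M, r, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    n = W_R.shape[0]
    fixations = 0
    for _ in range(trials):
        mutant = np.zeros(n, dtype=bool)
        mutant[rng.integers(n)] = True                    # initial mutant u.a.r.
        while 0 < mutant.sum() < n:
            fitness = np.where(mutant, r, 1.0)
            i = rng.choice(n, p=fitness / fitness.sum())  # parent ~ fitness
            W = W_M if mutant[i] else W_R                 # parent's own graph
            j = rng.choice(n, p=W[i])                     # offspring location
            mutant[j] = mutant[i]                         # copy parent's type
        fixations += int(mutant.all())
    return fixations / trials

# Sanity check: G_R = G_M = clique on 4 vertices should give the Moran
# fixation probability (1 - 1/r) / (1 - 1/r**n).
n, r = 4, 2.0
K = (np.ones((n, n)) - np.eye(n)) / (n - 1)
print(fixation_probability(K, K, r))   # ~ 0.533
```

The sketch follows the definitions literally; for anything beyond small populations and rough estimates, the Markov-chain formulation of Section 5 or the approximation scheme discussed in Section 7 is the right tool.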
An unweighted graph is a graph with the property that, for every $i \in V$, $w_{ij} = 1/\deg(i)$ for every $j$ with an incoming edge from $i$, where $\deg(i)$ is the outdegree of node $i$. In the sequel we will abuse the term undirected graph to refer to an undirected unweighted graph.

In what follows we will use special names to refer to some specific graph classes. The following graphs have $n$ vertices, which we omit from the notation for simplicity.

• $CL$, as a shorthand for the Clique or complete graph.
• $ST$, as a shorthand for the Undirected Star graph.
• $CY$, as a shorthand for the Undirected Cycle or 2-regular graph.
• $CI(k)$, as a shorthand for the Circulant graph of even degree $k$. Briefly, this subclass of circulant graphs is defined as follows. For even degree $k$, the graph $CI(k)$ (see Fig. 2) has vertex set $V = \{0, 1, \dots, n-1\}$, and each vertex $i$ is connected to vertices $i \pm 1, i \pm 2, \dots, i \pm k/2 \pmod{n}$.

By "Resident Graph vs Mutant Graph" we refer to the process with resident graph $G_R$ and mutant graph $G_M$, and by $f(G_R, G_M)$ we refer to the fixation probability of that process. We note that in this paper we are interested in the asymptotic behavior of the fixation probability in the case where the population size is large. Therefore, we employ the standard asymptotic notation with respect to $n$; in particular, $r$ is almost always treated as a variable independent of $n$. Furthermore, in the rest of the paper, by $G_R$ and $G_M$ we mean graph classes rather than individual graphs, and we will omit the parameter $n$, since we only care about the fixation probability when $n \to \infty$.

## 3 Our Results

In this paper, we introduce and study for the first time a generalization of the model of [10] by assuming that different types of individuals perceive the population through different graphs defined on the same vertex set, namely $G_R(V, E_R)$ for residents and $G_M(V, E_M)$ for mutants. In this model, we study the fixation probability, i.e. the probability that eventually only mutants remain in the population, for various pairs of graphs. In particular, in Section 5 we initially prove a tight upper bound (Theorem 5.1) on the fixation probability for the general case of an arbitrary pair of digraphs. Next, we prove a generalization of the Isothermal Theorem of [10], which provides sufficient conditions for a pair of graphs to have fixation probability equal to the fixation probability of a clique pair, namely $\rho = \frac{1 - 1/r}{1 - 1/r^n}$; this corresponds to the absorption probability of a simple birth-death process with forward bias $r$. It is worth noting that it is easy to find small counterexamples of pairs of graphs for which at least one of the two conditions of Theorem 2 does not hold and yet the fixation probability is equal to $\rho$; hence we do not prove necessity.

In Section 6 we give a 2-player strategic game view of the process where player payoffs correspond to fixation and/or extinction probabilities. In this setting, we give an extensive study of the fixation probability when one of the two underlying graphs is complete, providing several insightful results. In particular, we prove that the fixation probability when the mutant graph is the clique on $n$ vertices (i.e. $CL$) and the resident graph is the undirected star on $n$ vertices (i.e. $ST$) is $1 - O(1/n)$, and thus tends to 1 as the number of vertices grows, for any constant $r > 0$. By using a translation result (Lemma 1), we can show that, when the two graphs are exchanged, the fixation probability is bounded away from 1. However, using a direct proof, in Theorem 6.2 we show that in fact $f(CL, ST)$ is exponentially small in $n$, for any constant $r$. In Theorem 6.4, we also provide a lower bound on the fixation probability in the special case where the resident graph is any undirected graph and the mutant graph is a clique.
Furthermore, in Subsection 6.3, we find bounds on the fixation probability when the mutant graph is the clique and the resident graph belongs to various classes of regular graphs. In particular, we show that when the mutant graph is the clique and the resident graph is the undirected cycle, the fixation probability is bounded away from 1 by a quantity depending only on $r$, for any constant $r$. A looser lower bound holds for smaller values of $r$. This in particular implies that the undirected cycle is quite resistant to the clique. Then, we analyze the fixation probability by replacing the undirected cycle by 3 increasingly denser circulant graphs and find that, the denser the graph, the smaller the fitness $r$ required to achieve a given asymptotic lower bound. We also find that the asymptotic upper bound stays the same when the resident graphs become denser with constant degree, but it goes to 1 when the degree is $\omega(1)$. In addition, by running simulations (which we do not analyse here) for the case where the resident graph is the strongest known suppressor, i.e. the one in [5], and the mutant graph is the clique, we get a fixation probability significantly greater than the one the suppressor achieves in the single-graph setting, for the population sizes and fitness values $r$ that we tested. All of our results seem to indicate that the clique is the most beneficial graph (in terms of player payoff in the game-theoretic formulation). However, we leave this fact as an open problem for future research.

Finally, in Section 7 we consider the problem of efficiently approximating the fixation probability in our model. We point out that Theorem 6.2 implies that the fixation probability cannot be approximated via a method similar to [2]. However, when we restrict the mutant graph to be complete, we prove a polynomial (in $n$) upper bound for the absorption time of the generalized Moran process when $r \geq \delta$, where $\delta$ is the maximum ratio of degrees of adjacent nodes in the resident graph. The latter allows us to give a fully polynomial randomized approximation scheme (FPRAS) for the problem of computing the fixation probability in this case.

## 4 Previous Work

So far the bibliography consists of works that consider the same structure for both residents and mutants. This 1-graph setting was initiated by P. A. P. Moran [14], where the case of the complete graph was examined. Many years later, the setting was extended to structured populations on general directed graphs by Lieberman et al. [10]. They introduced the notions of amplifiers and suppressors of selection, a categorization of graphs based on the comparison of their fixation probabilities with that of the complete graph. They also found a sufficient condition (in fact, [4] corrects the claim in [10] that the condition is also necessary) for a digraph to have the fixation probability of the complete graph, but a necessary condition is yet to be found. Since the generalized 1-graph model in [10] was proposed, a great number of works have tried to answer some very intriguing questions in this framework. One of them is the following: which are the best unweighted amplifiers and suppressors that exist? Díaz et al. [2] give the following bounds on the fixation probability of strongly connected digraphs: an upper bound of $1 - \frac{1}{n+r}$ and a lower bound of $\frac{1}{n}$ for $r \geq 1$, and they show that there is no positive polynomial lower bound when $0 < r < 1$. An interesting problem that was set in [10] is whether there are graph families that are strong amplifiers or strong suppressors of selection, i.e. families of graphs with fixation probability tending to 1 or to 0, respectively, as the order of the graph tends to infinity and for $r > 1$. Galanis et al.
[4] find an infinite family of strongly-amplifying directed graphs, namely the "megastar", with fixation probability $1 - \tilde{O}(n^{-1/2})$, which was later proved to be optimal up to logarithmic factors [6]. While the search for optimal directed strong amplifiers was still on, a restricted version of the problem had been drawing a lot of attention: which are the tight bounds on the fixation probability of undirected graphs? The lower bound in the undirected case remained $\frac{1}{n}$, but the upper bound was significantly improved, first by Mertzios et al. [13] for $r$ independent of $n$, then by Giakkoupis [5] for a suitable range of $r$, and finally by Goldberg et al. [6], who also find a graph which shows that their bound is tight. While the general belief was that there are no undirected strong suppressors, Giakkoupis [5] showed that there is a class of graphs with polynomially small fixation probability, opening the way for a potentially optimal strong suppressor to be discovered.

Extensions of [10] where the interaction between individuals includes a bimatrix game have also been studied. Ohtsuki et al. in [16] considered the generalized Moran process with two distinct graphs, where one of them determines possible pairs that will play a bimatrix game and yield a total payoff for each individual, and the other determines which individual will be replaced by the process in each step. Two similar settings, where a bimatrix game determines the individuals' fitness, were studied by Ibsen-Jensen et al. in [8]. In that work they prove NP-completeness and #P-completeness of the computation of the fixation probabilities for each setting.

## 5 Markov Chain Abstraction and the Generalized Isothermal Theorem

This generalized process with two graphs that we propose can be modelled as an absorbing Markov chain [15]. The states of the chain are the possible mutant sets ($2^n$ different mutant sets) and there are two absorbing states, namely $\emptyset$ and $V$. In this setting, the fixation probability is the average absorption probability to $V$, starting from a state with one mutant. Since our Markov chain contains only two absorbing states, the sum of the fixation and extinction probabilities is equal to 1.

Transition probabilities. In the sequel we will denote by $S + j$ the set $S \cup \{j\}$ and by $S - i$ the set $S \setminus \{i\}$. We can easily deduce the boundary conditions from the definition: $f(\emptyset) = 0$ and $f(V) = 1$. For any other arbitrary state $S$ of the process we have:

$$f(S) = \sum_{i \in S, j \notin S} \frac{r}{F(S)} w_{ij}^M \cdot f(S+j) + \sum_{j \notin S, i \in S} \frac{1}{F(S)} w_{ji}^R \cdot f(S-i) + \left( \sum_{i \in S, j \in S} \frac{r}{F(S)} w_{ij}^M + \sum_{i \notin S, j \notin S} \frac{1}{F(S)} w_{ij}^R \right) \cdot f(S), \qquad (1)$$

where $F(S) = r|S| + n - |S|$ is the total fitness of the population in state $S$. By eliminating self-loops, we get

$$f(S) = \frac{\sum_{i \in S, j \notin S} r \cdot w_{ij}^M \cdot f(S+j) + \sum_{j \notin S, i \in S} w_{ji}^R \cdot f(S-i)}{\sum_{i \in S, j \notin S} r \cdot w_{ij}^M + \sum_{j \notin S, i \in S} w_{ji}^R}. \qquad (2)$$

We should note here that, in the general case, the fixation probability can be computed by solving a system of linear equations using this latter relation. However, bounds are usually easier to find, and special cases of resident and mutant graphs may have efficient exact solutions. Using the above Markov chain abstraction and stochastic domination arguments, we can prove the following general upper bound on the fixation probability.

###### Theorem 5.1

For any pair of digraphs $G_R$ and $G_M$ on $n$ vertices, the fixation probability is upper bounded by $1 - \frac{1}{n+r}$, for any $r > 0$. This bound is tight for $r$ independent of $n$.

###### Proof

We refer to the proof of Lemma 4 of [2], as our proof is essentially the same. Briefly, we find an upper bound on the fixation probability of a relaxed Moran process that favors the mutants, where we assume that fixation is achieved when two mutants appear in the population.
In their work the resident and mutant graphs are the same and undirected, but this does not change the probabilities of the first mutant, placed u.a.r., being extinct or replicated in our model. Finally, we note that this result is tight, by Theorem 6.1. We now prove a generalization of the Isothermal Theorem of [10].

###### Theorem 5.2 (Generalized Isothermal Theorem)

Let $G_R=(V,E_R)$ and $G_M=(V,E_M)$ be two directed graphs with vertex set $V$ and edge sets $E_R$ and $E_M$ respectively. The generalized Moran process with 2 graphs as described above has the Moran fixation probability if:

1. $W^R$ and $W^M$ are doubly stochastic, i.e. $G_R$ and $G_M$ are isothermal (actually, one of them being isothermal is redundant, as it follows from the second condition), and
2. for every pair of nodes $i,j$: $w^{R}_{ij}+w^{R}_{ji}=w^{M}_{ij}+w^{M}_{ji}$.

###### Proof

It suffices to show that in every state $S$ of the Markov chain of the process with $|S|$ mutants, the probability to go to a state with $|S|+1$ mutants is $r$ times the probability to go to a state with $|S|-1$ mutants (ch. 6 in [15]). In our setting, by (1) these probabilities are $\sum_{i\in S,\,j\notin S}\frac{r}{F(S)}w^{M}_{ij}$ and $\sum_{i\in S,\,j\notin S}\frac{1}{F(S)}w^{R}_{ji}$ respectively. So, to establish the theorem, it suffices to show that its hypotheses hold if and only if relation (3) holds:

$$\sum_{i\notin S}\sum_{j\in S} w^{R}_{ij}=\sum_{i\notin S}\sum_{j\in S} w^{M}_{ji},\qquad \forall\,\emptyset\subset S\subset V. \tag{3}$$

Consider all the states where only one node $i$ is resident, i.e. $S=V\setminus\{i\}$. Then from relation (3) we get the following set of equations that must hold:

$$\sum_{j\neq i} w^{M}_{ji}=\sum_{j\neq i} w^{R}_{ij}=1,\qquad\forall\, i\in V. \tag{4}$$

Similarly, for all the states where $S=\{i\}$, we get from relation (3):

$$\sum_{j\neq i} w^{R}_{ji}=\sum_{j\neq i} w^{M}_{ij}=1,\qquad\forall\, i\in V. \tag{5}$$

Now, for general $S$, the two parts of (3) are:

$$\sum_{i\notin S}\sum_{j\in S} w^{R}_{ij}=|V|-|S|-\sum_{i\notin S}\sum_{j\notin S} w^{R}_{ij} \tag{6}$$

and

$$\sum_{i\notin S}\sum_{j\in S} w^{M}_{ji}=|V|-|S|-\sum_{i\notin S}\sum_{j\notin S} w^{M}_{ji}\qquad(\text{using }(4)). \tag{7}$$

Thus, by relation (3) it must be:

$$\sum_{i\notin S}\sum_{j\notin S} w^{R}_{ij}=\sum_{i\notin S}\sum_{j\notin S} w^{M}_{ji},\qquad \forall\,\emptyset\subset S\subset V. \tag{8}$$

Now, consider all the states where only two nodes $i$ and $j$ are resident, i.e. $S=V\setminus\{i,j\}$. Then from relation (8) we get the following set of relations that must hold:

$$w^{R}_{ij}+w^{R}_{ji}=w^{M}_{ij}+w^{M}_{ji},\qquad\forall\, i,j\in V. \tag{9}$$

To prove the other direction of the equivalence, we show that the sets of relations (4) and (9) suffice to make (3) true. If (9) is true, then (8) is obviously true. And, by using (4), the left-hand sides of (6) and (7) are equal, thus (3) is true. Observe that when $G_R=G_M$ we obtain the isothermal theorem for the special case of the generalized Moran process that has been studied so far.

## 6 A Strategic Game View

In this section we study the aforementioned process from a game-theoretic point of view. Consider the strategic game with 2 players: residents (type R) and mutants (type M), so the player set is $\{R,M\}$. The action set of a player consists of all possible strongly connected graphs (we assume strong connectivity in order to avoid problematic cases where there is neither fixation nor extinction) that she can construct with the available vertex set . The payoff for the residents (player R) is the probability of extinction, and the payoff for the mutants (player M) is the probability of fixation. Of course, the sum of payoffs equals 1, so the game can be reduced to a zero-sum game. The natural question that emerges is: what are the pure Nash equilibria of this game (if any)? For example, for fixed , if we only consider two actions for every player, namely the graphs and , then from our results in Subsection 6.1, when , we get and from [15, 1], and . Therefore, we get the following bimatrix game, which has a pure Nash equilibrium, namely . Trying to understand better the behaviour of the two conflicting graphs, we put some pairs of them to the test. The main question we ask in this work is: what is the best response graph of the residents to the Clique graph of the mutants?
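To make the dynamics of relations (1)–(2) concrete, here is a minimal Monte Carlo sketch (not from the paper; the function name is hypothetical, and we assume unweighted graphs, so that $w_{ij}=1/\deg(i)$) that estimates the fixation probability for any resident/mutant graph pair:

```python
import random

def fixation_probability(res_adj, mut_adj, r, trials=10_000):
    """Monte Carlo estimate of the fixation probability of the generalized
    Moran process with a resident graph and a mutant graph.

    res_adj, mut_adj: adjacency lists {node: list of out-neighbours},
    assumed unweighted so that w_ij = 1/outdeg(i); r: relative fitness.
    """
    nodes = list(res_adj)
    n = len(nodes)
    fixations = 0
    for _ in range(trials):
        mutants = {random.choice(nodes)}  # one mutant placed u.a.r.
        while 0 < len(mutants) < n:
            # an individual reproduces with probability proportional to fitness
            weights = [r if v in mutants else 1.0 for v in nodes]
            v = random.choices(nodes, weights=weights)[0]
            # mutants spread along the mutant graph, residents along the resident graph
            u = random.choice(mut_adj[v] if v in mutants else res_adj[v])
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        fixations += (len(mutants) == n)
    return fixations / trials

# Example: residents on the undirected star, mutants on the clique (Subsection 6.1).
n = 20
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}
clique = {i: [j for j in range(n) if j != i] for i in range(n)}
print(fixation_probability(star, clique, r=1.1))  # close to 1 for large n
```

Such simulations are only an estimate, of course; the point of Sections 5 and 6 is that for structured pairs of graphs the chain collapses to a birth-death process whose absorption probability has a closed form.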
In the sequel, we will use the abbreviations pl-R and pl-M for the resident and the mutant population, respectively. In the proofs of this paper we shall use the following fact from [15]:

###### Fact 1

In a birth-death process with state space $\{0,1,\ldots,n\}$, absorbing states $0$ and $n$, and backward bias at state $k$ equal to $\gamma_k$ (the ratio of the backward to the forward transition probability), the probability of absorption at $n$, given that we start at $i$, is

$$f_i=\frac{1+\sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_k}{1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\gamma_k}.$$

### 6.1 Star vs Clique

The following result implies (since the lower bound tends to 1 as $n\to\infty$) that when the mutant graph is complete and the resident graph is the undirected star, the fixation probability tends to 1 as $n$ goes to infinity.

###### Theorem 6.1

If pl-R has the undirected star graph and pl-M has the clique graph, for , then the payoff of pl-M (fixation probability) is lower bounded by .

###### Proof

We will find a lower bound on the fixation probability of our process , by finding the fixation probability of a process that is dominated by (has at most the fixation probability of) . Here is : Have the undirected star graph for the residents and the clique graph for the mutants. We start with a single mutant on a node chosen uniformly at random from the vertex set. If that node is the central one of the star, then at the next time step it is attacked by a resident with probability 1 and the process ends with the residents occupying the vertex set. If the initial mutant node is a leaf, then the process continues with the following restriction: whenever a mutant node is selected to reproduce on the central node of the star, it instead reproduces on itself, unless all leaves of the star are mutants.

This process can be modelled as the following Markov chain. In Figure 3 we denote by $(i,j)$ the state of the process that has $i$ mutants at the center of the star and $j$ mutants at its leaves. We also denote by $f_{1,0}$ the fixation probability given that the initial mutant node of the process is the center of the star, and by $f_{0,1}$ the fixation probability given that the initial mutant node is a leaf. Now, the exact fixation probability of the process is:

$$f'=\frac{1}{n}f_{1,0}+\left(1-\frac{1}{n}\right)f_{0,1}=\left(1-\frac{1}{n}\right)f_{0,1},\qquad\text{since } f_{1,0}=0.$$

Now, for a state $(0,i)$ where $1\le i\le n-1$, the probability of going to state $(0,i-1)$ in the next step is:

$$p^{0,i-1}_{0,i}=\frac{1}{ir+n-i}\cdot\frac{i}{n-1}.$$

For a state $(0,i)$, the probability of going to state $(0,i+1)$ in the next step is:

$$p^{0,i+1}_{0,i}=\frac{ir}{ir+n-i}\cdot\frac{n-i-1}{n-1},\qquad\text{and}\qquad p^{0,i+1}_{0,i}=\frac{(n-1)r}{(n-1)r+1}\cdot\frac{1}{n-1}\quad\text{when } i=n-1,$$

and the probability of remaining at state $(0,i)$ is the complementary one. In our case, where we want the fixation probability given that we start from state $(0,1)$, by using Fact 1 we get:

$$f_{0,1}=\frac{1}{1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\gamma_k}. \tag{10}$$

From the transition probabilities of our Markov chain, we can see that:

$$\gamma_k=\frac{1}{r}\cdot\frac{1}{n-k-1}\ \text{ for } 1\le k\le n-2,\qquad\text{and}\qquad \gamma_{n-1}=\frac{1}{r}.$$

So, from (10) we get:

$$f_{0,1}=\frac{1}{1+\frac{1}{r(n-2)}+\frac{1}{r^{2}(n-2)(n-3)}+\frac{1}{r^{3}(n-2)(n-3)(n-4)}+\cdots+\frac{1}{r^{n-2}(n-2)(n-3)\cdots 1}+\frac{1}{r^{n-1}(n-2)(n-3)\cdots 1}}$$

$$\ge\frac{1}{1+\frac{1}{r(n-2)}+\frac{n-2}{r^{2}(n-2)(n-3)}},\qquad\text{for } r>(n-4)!^{-1/(n-2)}$$

$$=\frac{1}{1+\frac{1}{r(n-2)}+\frac{1}{r^{2}(n-3)}},$$

and for the required fixation probability we get:

$$f'=\frac{1-\frac{1}{n}}{1+\frac{1}{r(n-2)}+\frac{1}{r^{2}(n-3)}}\ \longrightarrow\ 1\quad\text{as } n\to\infty.$$

This completes the proof of Theorem 6.1.

It is worth noting that, since the game we defined in Section 6 is 1-sum, we can immediately get upper (resp. lower) bounds on the payoff of pl-R, given lower (resp. upper) bounds on the payoff of pl-M. We now give the following lemma, which connects the fixation probability of a process with given relative fitness, resident and mutant graphs, to the fixation probability of a “mirror” process where the roles of residents and mutants are exchanged.

###### Lemma 1

###### Proof

We denote by the probability of fixation when our population has a set of mutants with relative fitness , resident graph and mutant graph . We first prove the following claim.
###### Proof

The probability of fixation for a mutant set and mutant graph is the same as the probability of extinction for the resident set , i.e. one minus the probability of that set conquering the graph. Thus, if we exchange the labels of residents and mutants, the relative fitness of the new residents is 1 and the relative fitness of the new mutants is , the new resident graph is , the new mutant graph is , and the new mutant set is .

We can now prove Lemma 1 as follows. By the above Claim we have for every . Since for every , we get that . Averaging over all nodes in we get the required inequality.

This result easily provides an upper bound on the fixation probability of a given process when a lower bound on the fixation probability is known for its “mirror” process. For example, using Theorem 6.1 and Lemma 1 we get an upper bound, for , on the fixation probability of the clique (for pl-R) vs the star (for pl-M); this immediately implies that the probability of fixation in this case tends to 0. However, as we subsequently explain, a more precise lower bound is necessary to reveal the approximation restrictions of the particular process.

###### Theorem 6.2

If pl-R has the clique graph and pl-M has the undirected star graph, for , then the payoff of pl-M (fixation probability) is upper bounded by .

###### Proof

In order to show this, we give a pair of graphs that yields a fixation probability upper bounded by an $o(1/a^{n})$ function. Have the Clique graph for the residents and the Undirected Star graph for the mutants; we will call this process . We will find an upper bound on its fixation probability by considering the following process that favors the mutants. Here is : Have the aforementioned graphs. We start with a single mutant on the central node of the star. If a mutant is selected to reproduce on a mutant, it reproduces according to the exact same rules of . If a resident is selected to reproduce on a resident, it also reproduces according to the exact same rules of . If a resident is selected to reproduce on a mutant, it reproduces according to the exact same rules of , unless that mutant is the central one; then the resident reproduces on itself, unless all leaves of the star are residents.

The corresponding Markov chain has states $i$, where $i$ is the number of mutants, and the only absorbing states are $0$ and $n$. For state $1$, the probability of going to state $0$ in the next step is:

$$p^{0}_{1}=\frac{n-1}{r+n-1}\cdot\frac{1}{n-1}=\frac{1}{r+n-1}.$$

For a state $i$ with $1<i<n$, the probability of going to state $i-1$ in the next step is:

$$p^{i-1}_{i}=\frac{n-i}{ir+n-i}\cdot\frac{i-1}{n-1}.$$

For a state $i$ with $1\le i<n$, the probability of going to state $i+1$ in the next step is:

$$p^{i+1}_{i}=\frac{ir}{ir+n-i}\cdot\frac{n-i}{n-1},$$

and the probability of staying at state $i$ in the next step is the complementary one. In our case, where we want the fixation probability given that we start from state $1$, by using Fact 1 we get:

$$f_{1}=\frac{1}{1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\gamma_{k}}. \tag{11}$$

From the transition probabilities of our Markov chain, we can see that:

$$\gamma_{1}=\frac{1}{r}\qquad\text{and}\qquad\gamma_{k}=\frac{k-1}{r}\ \text{ for } 2\le k\le n-1.$$

So, from (11) we get:

$$f_{1}=\frac{1}{1+\frac{1}{r}+\frac{1}{r^{2}}+\frac{2}{r^{3}}+\frac{3!}{r^{4}}+\cdots+\frac{(n-2)!}{r^{n-1}}}\ \le\ \frac{r^{n-1}}{(n-2)!}\ \in\ o\!\left(\frac{1}{a^{n}}\right),\qquad\text{where } a>1 \text{ is constant.}$$

This completes the proof of Theorem 6.2.

This bound shows that not only does there exist a graph that suppresses selection against the star (which is an amplifier in the 1-graph setting), but it does so with great success. In fact, for any mutant with constant, arbitrarily large fitness, its fixation probability is less than exponentially small. In view of the above, the following result implies that the fixation probability in our model cannot be approximated via a method similar to [2].
###### Theorem 6.3 (Bounds on the 2-graphs Moran process)

There is a pair of graphs such that the fixation probability is , for some constant , when the relative fitness is constant. Furthermore, there is a pair of graphs such that the fixation probability is at least , for constant .

###### Proof

See Theorem 6.1 and the proof of Theorem 6.2.

### 6.2 Arbitrary Undirected Graphs vs Clique

The following result is a lower bound on the fixation probability.

###### Theorem 6.4

When pl-R has an undirected graph for which for every , and pl-M has the clique graph, the payoff of pl-M (fixation probability) is lower bounded by , for . In particular, for the lower bound tends to as .

###### Proof

Notice that, given that the number of mutants at a time-step is , the probability that a resident becomes a mutant is , and the probability that a mutant becomes a resident is upper bounded by . That is because the maximum possible number of resident-to-mutant edges at a step with mutants is achieved when either every mutant has edges only towards residents, or every resident has edges only towards mutants; and the most extreme case is when every one of these nodes has sum of weights of incoming edges equal to the maximum ratio of degrees of adjacent nodes in the resident graph, i.e. . This means that the number of mutants in our given process of an undirected graph vs the clique stochastically dominates a birth-death process described by the following Markov chain: a state is the number of mutants on the vertex set, and the only absorbing states are and . Using Fact 1, we get , where . From the aforementioned transition probabilities of our Markov chain we have the corresponding backward biases. Now we can calculate a lower bound on the fixation probability using the fact that :

$$f_1\ \ge\ \frac{1}{\dfrac{1-\left(\frac{c}{r}\right)^{\log n}}{1-\frac{c}{r}}\,\bigl(1+o(1)\bigr)+\left(\frac{2c}{r}\right)^{\log n}\displaystyle\sum_{j=0}^{n-\log n-1}\left(\frac{2c}{r}\right)^{j}}\qquad\left(\gamma_k\text{ is upper bounded by }\tfrac{2c}{r}\right)$$

$$=\ \frac{1}{\dfrac{1-\left(\frac{c}{r}\right)^{\log n}}{1-\frac{c}{r}}\,\bigl(1+o(1)\bigr)+\left(\frac{2c}{r}\right)^{\log n}\dfrac{1-\left(\frac{2c}{r}\right)^{n-\log n}}{1-\frac{2c}{r}}}.$$

From the theorem above it follows that if the resident graph is undirected regular then the fixation probability of vs is lower bounded by for and , which equals (defined in Section 2). Also, by Lemma 1 and the above theorem, when , is an undirected graph with for every , and the relative fitness is , then the upper bound on the fixation probability tends to as .

### 6.3 Circulant Graphs vs Clique

In this subsection we give bounds for the fixation probability of vs . We first prove the following result, which gives an upper bound on the fixation probability when the resident graph is the graph described in Section 2 and the mutant graph is the complete graph on vertices.

###### Theorem 6.5

When the mutants have the clique graph, if the residents have a graph and , then the payoff of pl-M (fixation probability) is upper bounded by for and for . In particular, for constant the upper bound tends to . If , then the upper bound is , for , where is a function of such that and . The bound improves as is picked closer to and, in particular, for it tends to .

###### Proof

We will bound from above the payoff of the mutant (i.e. the fixation probability) of our process , by finding the fixation probability of a process that dominates (has at least the fixation probability of) . The dominating process is the least favorable for the residents. Here is : Have the graph of Section 2 for the residents, in the more general case where the number of its vertices does not concern us, and the clique graph for the mutants. We start with a single mutant on a node (w.l.o.g. we fix its label) chosen uniformly at random from the vertex set.
Throughout the process, if a resident is selected to reproduce on a resident, it reproduces according to the exact same rules of . If a mutant is selected to reproduce on a mutant, it reproduces according to the exact same rules of . However, if a mutant is selected to reproduce on a resident, it obeys the following restriction: it can only reproduce on a resident that is connected to the maximum possible number of mutants (equiprobably, though this does not really matter, due to the symmetry of the produced population). If a resident is selected to reproduce on a mutant when the number of mutants is , then the last among the mutants that was inserted becomes a resident, thus preserving the minimality of the probability of the residents to hit the mutants (see Figure 4). It is easy to see that the process allocates the mutants in a chain-like formation that allows residents to “hit” the mutants with the smallest possible number of resident edges. In other words, if we consider the mutant set and the resident set , in every step of the process the number of resident edges on the cut is minimum. This process is the worst the residents could face.

Due to the symmetry that our process brings to the population instances, the corresponding Markov chain has states, as all states with the same number of mutants can be reduced to a single one. A state is the number of mutants, and the only absorbing states are and . After careful calculations we get that, for a state $i$, where $1\le i\le n-1$, the probability of going to state $i-1$ in the next step is:

$$p^{i-1}_{i}=\begin{cases}\dfrac{1}{ir+n-i}\cdot i\left(1-\dfrac{i-1}{d}\right), & \text{for } i\in\{1,2,\ldots,\tfrac{d}{2}+1\}\\[2mm] \dfrac{1}{ir+n-i}\cdot \dfrac{1}{2}\left(\dfrac{d}{2}+1\right), & \text{for } i\in\{\tfrac{d}{2}+2,\ldots,n-\tfrac{d}{2}\}\\[2mm] \dfrac{1}{ir+n-i}\cdot\left[\dfrac{1}{2}\left(\dfrac{d}{2}+1\right)-\dfrac{1}{d}\left(i-n+\dfrac{d}{2}\right)\left(i-n+\dfrac{d}{2}+1\right)\right], & \text{for } i\in\{n-\tfrac{d}{2}+1,\ldots,n-1\}\end{cases}$$

the probability of going to state $i+1$ in the next step is:

$$p^{i+1}_{i}=\frac{ir}{ir+n-i}\cdot\frac{n-i}{n-1},$$

and the probability of staying at state $i$ in the next step is the complementary one. In our case, where we want the fixation probability given that we start from state $1$, by using Fact 1 we get:

$$f_{1}=\frac{1}{1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\gamma_{k}}. \tag{12}$$

If is constant: from the transition probabilities of our Markov chain we can see that

$$\gamma_{k}\ \ge\ \frac{1}{r}\cdot\frac{n-1}{k(n-k)},\qquad\text{for } 1\le k\le n-1.$$

So, from (12) we get:

$$f_{1}\ \le\ \frac{1}{1+\frac{1}{r}\frac{n-1}{n-1}+\frac{1}{r^{2}}\frac{(n-1)^{2}}{2(n-1)(n-2)}+\frac{1}{r^{3}}\frac{(n-1)^{3}}{3!(n-1)(n-2)(n-3)}+\cdots+\frac{1}{r^{n-1}}\frac{(n-1)^{n-1}}{[(n-1)!]^{2}}}$$

$$\le\ \frac{1}{1+\frac{1}{r}+\frac{1}{r^{2}}\frac{1}{2}+\frac{1}{r^{3}}\frac{1}{3!}+\cdots+\frac{1}{r^{n-1}}\frac{1}{(n-1)!}}\ =\ \frac{1}{e^{1/r}-\left[\frac{1}{r^{n}}\frac{1}{n!}+\frac{1}{r^{n+1}}\frac{1}{(n+1)!}+\cdots\right]}\ =\ \frac{1}{e^{1/r}-\frac{1}{r^{n}}\frac{1}{n!}\left[1+\frac{1}{r}\frac{1}{n+1}+\frac{1}{r^{2}}\frac{1}{(n+1)(n+2)}+\cdots\right]}$$
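The closed forms above are easy to evaluate via Fact 1. The following sketch (our own illustration; the helper name is hypothetical) computes $f_1 = 1/(1+\sum_j \prod_{k\le j}\gamma_k)$ for the birth-death chains of Theorems 6.1 and 6.2, showing one fixation probability approaching 1 and the other collapsing to 0:

```python
def fixation_from_gammas(gammas):
    """Fact 1 with i = 1: f_1 = 1 / (1 + sum_j prod_{k<=j} gamma_k)."""
    total, prod = 1.0, 1.0
    for g in gammas:
        prod *= g
        total += prod
    return 1.0 / total

n, r = 100, 1.2

# Star (pl-R) vs clique (pl-M), Theorem 6.1:
# gamma_k = 1/(r(n-k-1)) for k <= n-2, gamma_{n-1} = 1/r; f' = (1 - 1/n) f_{0,1}.
g_star = [1 / (r * (n - k - 1)) for k in range(1, n - 1)] + [1 / r]
print((1 - 1 / n) * fixation_from_gammas(g_star))  # about 0.98, tends to 1

# Clique (pl-R) vs star (pl-M), Theorem 6.2:
# gamma_1 = 1/r, gamma_k = (k-1)/r; fixation is super-exponentially small.
g_clique = [1 / r] + [(k - 1) / r for k in range(2, n)]
print(fixation_from_gammas(g_clique))              # about 1e-146, essentially 0
```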
https://motls.blogspot.com/2018/09/le-pen-and-psychichiatrist-alex-jones.html?m=1
## Friday, September 21, 2018 ### Le Pen and psychiatrist, Alex Jones and PayPal France and the U.S. are turning into full-blown totalitarian countries Our prime minister is a former communist rat and an unfixable Bolshevik and criminal (and today, we learned about the numbers showing that his "EET" online cash registers to harass the small businesses were indeed the kind of utter failure that all sensible people were predicting – just 1.2% increased collection of the value-added tax) but I am still grateful to live in Czechia. It's becoming a paradise, relatively speaking. Pôle emploi, a French government agency, is luring the unemployed French people to Czechia, promising them €1,500 monthly wage before taxation (some 25% above the Czech average), great castles, and super cheap pubs everywhere. Some years in Czechia are surely not a way for a generic Western European to become rich after you return home (and many French get shocked by the "low" number when they see it) but the life expenses are correspondingly lower so that things may indeed be more relaxed in Czechia. The unemployment in Czechia approaches 2% according to some methodologies so the country does need workers. But I mean "workers", not any "people". Muslim migrants wouldn't be OK because most of them couldn't become "workers". Most Czechs were stunned by the newest story about Marine Le Pen. The formidable presidential candidate has posted some pictures of the violence done by ISIS, in order to point out that ISIS was violent – only a complete loon could disagree. There are loons in France, however, and she was sued by someone for spreading hate or insulting the Islamic State or something like that. That was already extreme enough. Now, the judge has ordered a psychiatric evaluation of Marine Le Pen, to determine whether she's mentally capable of sitting in the courtroom or something along these lines. Le Pen has gained some 7-11 million votes in various elections... and you dare to suggest that this smart woman is mentally ill? Needless to say, this is a theme that we remember extremely well from the totalitarian communism. The communists were routinely abusing the courts and the psychiatric asylums that were used as a form of prison for the dissidents. Just to be sure, I do think that some of these dissidents were "partly mentally ill" but even for those, it's still true that the institutions were abused. A civilized country simply has to guarantee the freedom from psychiatrists for all the citizens who don't want to see psychiatrists and who are living safely according to themselves and according to the people in their immediate environment. I was careful while I was articulating this principle. I mean it. And that's why France is no longer a civilized country. By the way, don't they have an ombudsman or someone like that who would defend the French citizen (and lawmaker!) against this self-evident abuse of the institutions and power? Alex Jones' payments Last month, we began to talk about the Big Tech companies' censorship – their ban on Alex Jones' and Infowars' accounts was a great example. Apple, Facebook, Spotify, YouTube, and later Twitter almost simultaneously banned the influential opinion maker. Some apologists for fascism claimed that it was just OK because Alex Jones or his friends could create his own competitors of the companies etc. Those are just private companies that have the right to terminate their relationships with customers such as Alex Jones. 
This defense no longer works because the fascist Internet companies' crackdown has escalated to a new level: PayPal, the global monopoly in smaller online payments, has refused to serve Alex Jones and Infowars, too. PayPal is how they were collecting donations and selling the supplements, I guess. I am rather shocked that the police weren't immediately acting. Isn't a financial institution prohibited from harassing a client for some legal acts that the institution (or its manager) doesn't like and that have nothing to do with their business? According to the law, PayPal is probably not a financial institution in the U.S. at all, which is why they may do such things legally, right? That's too bad. PayPal obviously is a bank using modern technological tools to achieve similar things as classical banks are still doing in more conventional ways. It should have all the duties and restrictions that the normal banks have. Just to be sure, exactly the same rules should apply to cryptocurrency exchanges and other entities that are doing "the same thing". (Ebay and PayPal keep on helping terrorist organizations and despicable extreme left-wing pressure groups to fund their business.) Feel free to disagree with Alex Jones – I would surely disagree with a big part of what he's saying. But your disagreement just doesn't give you the right to rob him of his basic civic rights, to cut him off from the basic infrastructure of the civilization. Maybe you have the right to verbally defend these despicable fascist policies but I have the duty to say that the wise people will have to find a method to stop these growing and increasingly synchronized cases of harassment; or find a legally impeccable way to physically eliminate such fascists and their apologists from the surface of Earth because things are getting very serious, indeed. If we fail to act, they may strip all genuine conservatives of their banking accounts, rob them of their assets (like when Nazis were confiscating Jews' assets), or send us to Gulags. We simply cannot allow the situation to deteriorate so badly that we would believe that it's irreversible and a win is impossible. I still believe that a win is totally possible. But I would surely prefer a giant civil war or a global war as soon as possible. Basic freedom for the people on Earth who love freedom is vastly more valuable than the lives of 30 million leftists that would probably have to be subtracted in our efforts to preserve the basic tenets of the Western civilization.
https://jeopardylabs.com/play/writing-linear-equations-word-problems
Bank Account

- 100: Mike's bank account had $1,260 at the start of the year. He withdraws $180 a month. Write a linear equation to represent the situation. Answer: y = -180x + 1260
- 200: Janet deposits $80 a month into her savings account. How much will she have in her savings account after 3 years of saving if she started with $540? Answer: $3,420
- 300: At the end of March, David's bank account had $7,325. At the end of September, his account had $9,377. How much money does David save each month? Answer: $342
- 400: After 5 months of saving, Karen had $7,830 in her savings account. After 14 months of saving, she had $9,252 in her account. How much money did Karen have in her account before she began saving? Answer: $7,040
- 500: Quinton started the year with $5,302 in his bank account. Halfway through the year he has $1,584 left. Write an equation representing the balance of his account. Answer: y = -264x + 5302

Rocket Launching

- 100: A rocket is launched from a 200 foot cliff and gains altitude at a rate of 15 feet per second. Write an equation to represent the height (y) after x seconds. Answer: y = 15x + 200
- 200: The height of a rocket launched from a cliff can be represented by y = 18x + 240. How high is the rocket after being in the air for 8 seconds? Answer: 384 feet
- 300: The height of a rocket launched from a cliff can be represented by y = 18x + 240. How high is the rocket after being in the air for 8 seconds? Answer: 384 feet
- 400: The height of a rocket launched from a cliff can be represented by y = 18x + 240. After how many seconds is the rocket at a height of 317.4 feet? Answer: 4.3 seconds
- 500: A rocket is launched from a cliff. After 3.5 seconds, the rocket is at a height of 392 feet. After 4.5 seconds, the rocket is at a height of 414 feet. How high was the cliff that the rocket was launched from? Answer: 315 feet

Uber Ride

- 100: An Uber driver charges a base fee of $3 plus an additional $0.50 per mile. Write an equation to represent the cost (y) in terms of x miles. Answer: y = 0.5x + 3
- 200: An Uber driver charges an initial fee plus $0.25 per mile. Jill pays $3.50 for a 4 mile ride. What is the initial fee the Uber driver charges? Answer: $2.50
- 300: An Uber driver charges an initial fee of $3.25. Lisa pays $7.75 for a 6 mile ride. How much does the Uber driver charge per mile? Answer: $0.75 per mile
- 400: Taryn owes her Uber driver $4.60 for a two mile ride. Heidi owes the same Uber driver $7 for a five mile ride. How much does the Uber driver charge per mile? Answer: $0.80 per mile
- 500: Bev owes her Uber driver $4.42 for a 1.8 mile ride. Alvin owes the same Uber driver $5.33 for a 3.2 mile ride. Write an equation that can be used to predict the varying costs for this particular Uber driver. Answer: y = 0.65x + 3.25

Egg Drop

- 100: An egg is dropped from 48 foot tall bleachers. It falls to the ground at a rate of 16 feet per second. Write an equation to represent the situation. Answer: y = -16x + 48
- 200: An egg falls to the ground at a rate of 17 feet per second from a height of 65 feet. How many seconds will it take the egg to hit the ground? Answer: 3.8 seconds
- 300: An egg is dropped from the top of the 75 foot bleachers. After falling for 1.3 seconds, it is at a height of 55.5 feet. How many seconds will it take for the egg to hit the ground? Answer: 5 seconds
- 400: After 1.2 seconds an egg is at a height of 67.2 feet. After 1.8 seconds the same egg is at a height of 58.8 feet. What is the speed at which the egg is falling? Answer: 14 feet/second
- 500: After 0.7 seconds, an egg is at a height of 84.6 feet. After 2.1 seconds, the same egg is at a height of 67.8 feet. How high was the egg dropped from? Answer: 93 feet

Graphs (each question refers to a graph shown on the original board)

- 100: Write an equation for the line above. Answer: y = -2/3x + 5
- 200: Write an equation to represent the line above. Answer: y = 2/5x - 1
- 300: Write an equation for the line above. Answer: y = x + 5
- 400: What is the equation of a line that goes through the point and has a slope of -3? Answer: y = -3x - 13
- 500: Write the equation of a line that goes through the two points in the graph. Answer: y = -3/4x + 1.75
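All of these problems reduce to finding the slope m and intercept b of y = mx + b from two data points. A small Python sketch (our own illustration, not part of the original board) that checks the Uber 500 answer:

```python
def line_from_two_points(x1, y1, x2, y2):
    """Return (m, b) for the line y = m*x + b through two points."""
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

# Bev: (1.8 miles, $4.42); Alvin: (3.2 miles, $5.33)
m, b = line_from_two_points(1.8, 4.42, 3.2, 5.33)
print(f"y = {m:.2f}x + {b:.2f}")  # y = 0.65x + 3.25
```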
https://derivativedribble.wordpress.com/tag/collateralized-debt-obligation/
# Synthetic CDOs, Ratings, And Super Senior Tranches: Part 2

#### Bait And Switch

My apologies, but this is going to be a three-part article. I have come to the conclusion that each topic warrants separate treatment. In this article, I will discuss the rating of CDO tranches. In the next, I will discuss the rating of Synthetic CDOs and those elusive “Super Senior” tranches.

#### Portfolio Loss Versus Tranche Loss

In the previous article, we discussed how rating agencies model the expected losses on the portfolio of bonds underlying a CDO. The end result was a chart that plotted losses against a scale of probabilities. This chart purports to answer the question, “how likely is it that the portfolio will lose more than X?” So if our CDO has a single tranche, that is, if the payment waterfall simply passes the cash flows onto investors, then this chart would presumably contain all the information we need about the default risks associated with the CDO. But payment waterfalls can be used to distribute default risk differently among different tranches. So, if our CDO has multiple tranches, then we need to know the payment priorities of each tranche before we can make any statements about the expected losses of any tranche. After we know the payment priorities, we will return to our chart and rate the tranches.

#### Subordination And Default Risk

Payment waterfalls can be used to distribute default risk among different tranches by imposing payment priorities on cash flows. But in the absence of payment priorities, cash flows are shared equally among investors. For example, if each of 10 investors had equal claims on an investment that generated $500, each investor would receive $50. Assuming each made the same initial investment, each would have equal gains/losses. However, by subordinating the rights of certain investors to others, we can insulate the senior investors. For example, continuing with our 10 investors, assume there are 2 tranches, A and B, where the A notes are paid only the first $500 generated by the investment and the B notes are paid the remainder. Assume that 5 investors hold A notes and that 5 investors hold B notes. If the investment generates only $500, the A investors will receive $100 each while the B investors will receive nothing. If however the investment generates $1,500, the A investors will receive $100 each and the B investors will receive $200 each. This is just one example. In reality, the payment waterfall can assign cash flows under any set of rules that the investors will agree to.

#### Synthetic CDOs

In reality, if D is a swap dealer, D probably sold protection on more than just ABC bonds. Let’s say that D sold protection on k different entities, $E_1, ... , E_k$, where the notional amount of protection sold on each is $n_1, ..., n_k$ and the total notional amount is $N = \sum_{i=1}^k n_i$. Rather than maintain exposure to all of these swaps, D could pass the exposure onto investors by issuing notes keyed to the performance of the swaps. The transaction that facilitates this is called a synthetic collateralized debt obligation, or synthetic CDO for short. There are many transactions that could be categorized fairly as synthetic CDOs, and these transactions can be quite complex. However, we will explore only a very basic example for illustrative purposes. So, after selling protection to the swap market as described above, D asks investors for a total of $N$ dollars.
D sets up an SPV, funds it with the money from the investors, and buys $n_i$ dollars worth of protection on $E_i$ for each $i \leq k$ from the SPV. That is, D hedges all of his positions with the SPV. The SPV takes the money from the investors and invests it. For simplicity’s sake, assume that the SPV invests in the same Treasuries mentioned above. The SPV then issues notes that promise to: (i) pay investors their share of $N - L$ dollars after all underlying swaps have expired, where L is the total notional amount of protection sold by the SPV on entities that triggered an event of default; and (ii) pay investors their share of annual interest, in an amount equal to $(R + F - \Delta) \cdot (N - L)$, where $F$ is the sum of all swap fees received by D. So, if every entity on which the SPV sold protection defaults, the investors get no principal back, but may have earned some interest depending on when the defaults occurred. If none of the entities default, then the investors get all of their principal back plus interest. So each investor has synthetic exposure to a basket of synthetic bonds. That is, if any single synthetic bond defaults, they still receive money. Thus, the process allows investors to achieve exposure to a broad base of credit risk, something that would be very difficult and expensive to do in the bond market.
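To see the note mechanics end to end, here is a small sketch (our own illustration with hypothetical numbers; the function name and the year-by-year convention, where each year's interest accrues on the surviving notional N - L(t), are assumptions consistent with the description above):

```python
def synthetic_cdo_cashflows(notionals, default_year, R, F, delta, maturity):
    """Cash flows on the SPV notes: principal of N - L at maturity, plus
    annual interest of (R + F - delta) times the surviving notional.

    notionals[i]   : protection sold on entity E_i
    default_year[i]: year in which E_i defaults, or None if it never does
    """
    N = sum(notionals)
    interest = 0.0
    for year in range(1, maturity + 1):
        # L(t): notional written on entities that have defaulted by this year
        L_t = sum(n for n, d in zip(notionals, default_year)
                  if d is not None and d <= year)
        interest += (R + F - delta) * (N - L_t)
    L = sum(n for n, d in zip(notionals, default_year) if d is not None)
    return N - L, interest  # (principal returned, total interest paid)

# Three $100 notionals; the second entity defaults in year 2.
principal, interest = synthetic_cdo_cashflows(
    [100, 100, 100], [None, 2, None], R=0.02, F=0.01, delta=0.005, maturity=5)
print(principal, round(interest, 2))  # 200 27.5: interest shrinks after the default
```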
http://www.math.unl.edu/~s-acroll1/KUMUNUjr/2012/schedule.html
Schedule, Abstracts and Slides

Saturday, April 21, 2012

- 10:30 – 10:50 Registration
- 10:50 – 11:00 Opening Remarks
- 11:00 – 11:40 Alexandra Seceleanu, Algebras with good nilpotent actions and possible homological applications
- 11:50 – 12:30 Ben Anderson [Slides], NAK for Ext and the Blindness of $M$
- 12:30 – 2:00 Lunch
- 2:00 – 2:20 Ela Celikbas [Slides], Fiber Products and Connected Sums of Local Rings
- 2:30 – 3:10 Alessandro De Stefani, Artinian level algebras of low socle degree
- 3:20 – 3:40 Zheng Yang [Slides], Decomposing a Gorenstein Artin ring as a Connected Sum
- 3:50 – 4:10 20 Minute Break
- 4:20 – 5:00 Sarang Sane [Slides], Projective Modules! Keep your Witts about! $C$ how its done!
- 5:10 – 5:30 Arindam Banerjee, Bounds on Castelnuovo-Mumford Regularity

Sunday, April 22, 2012

- 9:30 – 10:10 Saeed Nasseh [Slides], Factorizations of local homomorphisms
- 10:20 – 10:40 Jack Jeffries, Finite $F$-Representation Type and $F$-Signature
- 10:50 – 11:30 Brian Johnson [Slides], Graded rings
- 11:40 – 12:00 Billy Sanders, Semidualizing Modules

## Abstracts

Speaker: Alexandra Seceleanu
Title: Algebras with good nilpotent actions and possible homological applications
Abstract: Every Artinian algebra A comes naturally equipped with a class of nilpotent (vector space) homomorphisms. I will describe various combinatorial invariants arising from such nilpotent actions. Finally, I will try to convince the audience that properties of these invariants, and the decompositions that give rise to them, can become useful tools in studying the homological properties of A.

Speaker: Ben Anderson
Title: NAK for Ext and the Blindness of $M$
Abstract: Let $\varphi\colon (R,\mathfrak{m},k)\to (S,\mathfrak{m}S,k)$ be a flat local ring homomorphism, and let $M$ be a finitely generated $R$-module. The following are equivalent:
1. $M$ has an $S$-module structure compatible with its $R$-module structure;
2. $\operatorname{Ext}^i_R(S,M)=0$ for $i\geq 1$;
3. $\operatorname{Ext}^i_R(S,M)$ is finitely generated over $R$ for $i=1,\ldots,\dim_R(M)$;
4. $\operatorname{Ext}^i_R(S,M)$ is finitely generated over $S$ for $i=1,\ldots,\dim_R(M)$;
5. $\operatorname{Ext}^i_R(S,M)$ satisfies Nakayama’s Lemma over $R$ for $i=1,\ldots,\dim_R(M)$.
This improves upon recent results of Frankild, Sather-Wagstaff, and Wiegand, and results of Christensen and Sather-Wagstaff. We will discuss this result, some generalizations, and equalities between some invariants over $R$ and $S$.

Speaker: Ela Celikbas
Title: Fiber Products and Connected Sums of Local Rings
Abstract: In this talk, we will introduce fiber products and connected sums of local rings. We will give examples, describe their properties, and set up questions arising in this scenario.

Speaker: Alessandro De Stefani
Title: Artinian level algebras of low socle degree
Abstract: Macaulay’s inverse system gives a one-to-one correspondence between finitely generated modules over $S=k[[x_1,\ldots, x_n]]$, under a particular action, and ideals $I\subseteq S$ such that $S/I$ is Artinian. This correspondence can be restricted to suitable finitely generated $S$-modules and ideals $I$ such that $S/I$ is level, i.e. such that all socle elements in $S/I$ have maximal order. We use this tool to characterize Hilbert functions of level local algebras $(S/I,\mathfrak{m},k)$ such that $\mathfrak{m}^4= 0$, and to prove that level local algebras with maximal Hilbert function and $\mathfrak{m}^4=0$ are in fact graded.
Speaker: Zheng Yang
Title: Decomposing a Gorenstein Artin ring as a Connected Sum
Abstract: We will see more examples of connected sums and fiber products of rings. I will also discuss some new results in joint work with H. Ananthnarayan and E. Celikbas.

Speaker: Sarang Sane
Title: Projective Modules! Keep your Witts about! $C$ how its done!
Abstract: The aim of this talk will be to explain the title! While doing so, we might tour the world of projective modules and quadratic forms, and observe similarities between other worlds outside the realm of algebra.

Speaker: Arindam Banerjee
Title: Bounds on Castelnuovo-Mumford Regularity
Abstract: I’ll discuss bounds on Castelnuovo-Mumford regularity in some special cases. When working over a polynomial ring, under some conditions on dimension, the regularities of Tor modules have an upper bound which satisfies nice convexity properties. On the other hand, edge ideals of simple graphs whose complement graphs are chordal have linear minimal free resolutions; that is, their regularities attain the minimal possible value in that case. However, for many other types of simple graphs much less is known about the regularities of edge ideals.

Speaker: Saeed Nasseh
Title: Factorizations of local homomorphisms
Abstract: Let $f\colon R \to S$ be a homomorphism of commutative rings. Many techniques for studying $R$-modules focus on finitely generated modules. As a consequence, these techniques are not well-suited for studying $S$ as an $R$-module. However, a technique of Avramov, Foxby, and Herzog sometimes allows one to replace the original homomorphism with a surjective one $R'\to S$, where $R$ and $R'$ are tightly connected. In this setting, $S$ is a cyclic $R'$-module, so one can study it using finitely generated techniques. I will give a general introduction to such factorizations, followed by a discussion of some new results on “weakly functorial properties” of such factorizations and applications. The new results are joint with Sean Sather-Wagstaff.

Speaker: Jack Jeffries
Title: Finite $F$-Representation Type and $F$-Signature
Abstract: In 1999, an investigation of differential operators in positive characteristic led Smith and Van den Bergh to define the notion of rings of finite $F$-representation type. This property has many interesting connections with current research topics, but many open questions remain. We discuss some of these consequences of finite $F$-representation type, including a new result on the $F$-signature of such rings.

Speaker: Brian Johnson
Title: Graded rings
Abstract: We will briefly discuss (commutative) rings graded by $\mathbb Z^d$ and then consider rings graded by any abelian group. Looking at properties defined strictly in terms of homogeneous objects, we examine the relationships between them under gradings induced by a quotient of the grading group.

Speaker: Billy Sanders
Title: Semidualizing Modules
Abstract: An $R$-module $M$ is semidualizing if it is finitely generated, $\text{Hom}(M,M) = R$ and $\text{Ext}_R^i(M,M) = 0$ for all $i \geqslant 1$. These modules have similar properties to the canonical module of a ring. I will give examples of semidualizing modules and also talk about their properties and applications.
https://solvedlib.com/n/electromagnetism-question-4-advanced-an-electromagnelic,21304749
# Electromagnetism Question 4 (Advanced)

###### Question:

An electromagnetic plane wave travels through space in the positive y direction. It is polarised in the y–z plane, has a frequency of 30 MHz and an intensity of 0.25 W/m². Suppose we want to use this wave to induce an emf in a square loop of wire. The sides of the square loop of wire are 20 cm in length. (a) Describe how the loop should be oriented to obtain the maximum induced emf. (b) Give the mathematical form for how this emf depends on time. (c) Calculate the maximum emf induced in the loop.
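A hedged worked solution (ours, not from the source): for maximum emf the loop's plane should contain the propagation direction and the electric field, so that its normal is parallel to B; since the 0.2 m loop is much smaller than the 10 m wavelength, B is treated as uniform over the loop, giving emf(t) = ω B₀ A sin(ωt).

```python
import math

c, eps0 = 2.998e8, 8.854e-12        # speed of light (m/s), vacuum permittivity (F/m)
I, f, side = 0.25, 30e6, 0.20       # intensity (W/m^2), frequency (Hz), loop side (m)

E0 = math.sqrt(2 * I / (c * eps0))  # peak E field from I = (1/2) c eps0 E0^2
B0 = E0 / c                         # peak B field
omega = 2 * math.pi * f
A = side ** 2                       # loop area

emf_max = omega * B0 * A            # emf(t) = omega * B0 * A * sin(omega t)
print(f"E0 = {E0:.1f} V/m, B0 = {B0:.2e} T, max emf = {emf_max:.2f} V")
# -> E0 ~ 13.7 V/m, B0 ~ 4.6e-8 T, max emf ~ 0.35 V
```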
2022-08-08 01:20:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37803786993026733, "perplexity": 4017.008350506373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00554.warc.gz"}
https://byjus.com/jee/differentiability-of-composite-functions/
# Differentiability of Composite Functions

The function f(x) is said to be differentiable at a point P if and only if a unique tangent exists at P. Equivalently, f(x) is differentiable at P if and only if P is not a corner point of the curve. This article explains the differentiability of composite functions.

• A function is said to be differentiable in an interval (a, b) if it is differentiable at every point of (a, b).
• A function is said to be differentiable in an interval [a, b] if it is differentiable at every point of [a, b].

Composite Function

Consider three non-empty sets A, B and C. Let f: A → B and g: B → C be two functions. Then gof: A → C, and this function is called the composition of f and g.

Properties of the composition of functions

• f is even and g is even: fog is an even function.
• f is odd and g is odd: fog is an odd function.
• f is even and g is odd: fog is an even function.
• f is odd and g is even: fog is an even function.
• Composition of functions is not commutative, i.e., $fog \ne gof$.
• Composition of functions is associative, i.e., $(fog)oh = fo(goh)$.
• If $f: A \to B$ is a bijection and $g: B \to A$ is the inverse of f, then $fog = I_B$ and $gof = I_A$, where $I_A$ and $I_B$ are the identity functions on the sets A and B respectively.
• If $f: A \to B$ and $g: B \to C$ are two bijections, then $gof: A \to C$ is a bijection and $(gof)^{-1} = f^{-1}og^{-1}$.
• In general $fog \ne gof$, but if $fog = gof$ then either $f^{-1} = g$ or $g^{-1} = f$; also, $(fog)(x) = (gof)(x) = x$.
• $gof(x)$ is simply the g-image of f(x), where f(x) is the f-image of an element x ∈ A.
• The function $gof$ exists only when the range of f is a subset of the domain of g.
• $fog$ does not exist if the range of g is not a subset of the domain of f.
• $fog$ and $gof$ may not always be defined.
• If both f and g are one-one, then $fog$ and $gof$ are also one-one.
• If both f and g are onto, then $gof$ is onto.

Composite Function Differentiation

Let g and h be two functions, where y = g(u) and u = h(x). The function defined by y = g[h(x)] = (goh)(x) is called a composite function. If g(x) and h(x) are two differentiable functions, then goh is also differentiable, and

(goh)′(x) = g′(h(x))·h′(x)

Equivalently, if y is a differentiable function of u and u is a differentiable function of x, then dy/dx = (dy/du) × (du/dx).

Proof sketch: Let y = g(u) and u = f(x). Let Δx be an increment in x, and let Δu and Δy be the corresponding increments in u and y respectively, so that y + Δy = g(u + Δu) and u + Δu = f(x + Δx). Then

Δy = g(u + Δu) − g(u) and Δu = f(x + Δx) − f(x)
Δy/Δu = [g(u + Δu) − g(u)]/Δu and Δu/Δx = [f(x + Δx) − f(x)]/Δx
Δy/Δx = (Δy/Δu)·(Δu/Δx)

On applying the limits as Δx → 0,

dy/dx = (dy/du) × (du/dx) = [d/du g(u)] × [d/dx f(x)]

## Differentiability of Composite Functions Examples

Example 1: Check whether the function f(x) = |x| is differentiable at x = 0.

Solution: The function is continuous at x = 0. For differentiability, note f(0) = 0 and f(0 + h) = f(h) = |h|, so

lim_{h→0−} [f(0+h) − f(0)]/h = lim_{h→0−} |h|/h = −1, and
lim_{h→0+} [f(0+h) − f(0)]/h = lim_{h→0+} |h|/h = 1.

The one-sided derivatives differ, so f is continuous but not differentiable at x = 0.

Example 2: If f(x) = x/[1 + |x|] for x ∈ R, then find f′(0).

Solution: For x < 0, |x| = −x, so f(x) = x/(1 − x) and f′(x) = 1/(1 − x)², giving [f′(x)]_{x=0} = 1. For x > 0, |x| = x, so f(x) = x/(1 + x) and f′(x) = 1/(1 + x)², giving [f′(x)]_{x=0} = 1. Hence f′(0) = 1.

Example 3: The set of all points where the function f(x) = x/[1 + |x|] is differentiable is _______________.

Solution: Let h(x) = x and g(x) = 1 + |x| for x ∈ (−∞, ∞). Here h is differentiable on (−∞, ∞), but |x| is not differentiable at x = 0, so g is differentiable on (−∞, 0) ∪ (0, ∞); also g(x) ≠ 0 for all x. Therefore f(x) = h(x)/g(x) = x/[1 + |x|] is differentiable on (−∞, 0) ∪ (0, ∞). At x = 0,

lim_{h→0} [f(h) − f(0)]/(h − 0) = lim_{h→0} [h/(1 + |h|)]/h = lim_{h→0} 1/(1 + |h|) = 1.

Therefore f is also differentiable at x = 0, so f is differentiable on (−∞, ∞).

Example 4: The function f(x) = (x² − 1)|x² − 3x + 2| + cos(|x|) is not differentiable at

A) −1 B) 0 C) 1 D) 2

Solution: Since cos(|x|) = cos x is differentiable everywhere, only the absolute-value factor matters. Now |x² − 3x + 2| = |(x − 1)(x − 2)| is not differentiable at x = 1 and x = 2. At x = 1 the factor (x² − 1) vanishes, which smooths that corner, so the only candidate is x = 2. For 1 < x < 2, f(x) = −(x² − 1)(x² − 3x + 2) + cos x; for 2 < x < 3, f(x) = +(x² − 1)(x² − 3x + 2) + cos x. Then

Lf′(x) = −(x² − 1)(2x − 3) − 2x(x² − 3x + 2) − sin x, so Lf′(2) = −3 − sin 2;
Rf′(x) = (x² − 1)(2x − 3) + 2x(x² − 3x + 2) − sin x, so Rf′(2) = (4 − 1)(4 − 3) + 0 − sin 2 = 3 − sin 2.

Hence Lf′(2) ≠ Rf′(2), and f is not differentiable at x = 2.

Example 5: If $f(x)=\left\{ \begin{matrix} e^{x}+ax, & x<0 \\ b(x-1)^{2}, & x\ge 0 \end{matrix} \right.$ is differentiable at x = 0, then find the values of a and b.

Solution: Since f(x) is differentiable at x = 0, it is continuous at x = 0:
$\lim_{x\rightarrow 0^{-}}(e^{x}+ax)=\lim_{x\rightarrow 0^{+}}b(x-1)^{2} \Rightarrow e^0 = b(0-1)^2 \Rightarrow b = 1.$
Differentiability at x = 0 requires Lf′(0) = Rf′(0):
$\frac{d}{dx}(e^{x}+ax) = \frac{d}{dx}b(x-1)^{2} \Rightarrow e^{x}+a = 2b(x-1) \text{ at } x=0 \Rightarrow 1 + a = -2b \Rightarrow a = -3.$
Hence (a, b) = (−3, 1).

Example 6: The function $f(x)=\left\{ \begin{matrix} e^{2x}-1, & x\le 0 \\ ax+\frac{bx^{2}}{2}-1, & x>0 \end{matrix} \right.$ is continuous and differentiable for

A) [option missing] B) a = 2, b = 4 C) a = 2, any b D) any a, b = 4

Solution: Because f is continuous at x = 0, f(0⁻) = f(0⁺) = f(0). Also, Lf′(0) = Rf′(0):
$\lim_{h\to 0}\frac{f(0-h)-f(0)}{-h}=\lim_{h\to 0}\frac{f(0+h)-f(0)}{h}$
$\lim_{h\to 0}\left( \frac{e^{-2h}-1+1}{-h} \right)=\lim_{h\to 0}\left( \frac{ah+\frac{bh^{2}}{2}-1+1}{h} \right)$
$\lim_{h\to 0}\left( \frac{-2e^{-2h}}{-1} \right)=\lim_{h\to 0}\left( a+\frac{bh}{2} \right)$
so 2 = a + 0, i.e., a = 2 and b can be any number (option C).
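As an added cross-check (not part of the original article), Example 5 can be verified symbolically in a few lines of Python using sympy; the two equations below are exactly the continuity and differentiability conditions derived above.

```python
# Symbolic check of Example 5: f(x) = e^x + a*x for x < 0 and b*(x-1)^2
# for x >= 0 is differentiable at 0 iff (a, b) = (-3, 1).
import sympy as sp

x, a, b = sp.symbols('x a b')
left = sp.exp(x) + a * x        # branch for x < 0
right = b * (x - 1) ** 2        # branch for x >= 0

# Continuity at 0: the two branch values at 0 must agree.
cont = sp.Eq(left.subs(x, 0), right.subs(x, 0))          # 1 = b

# Differentiability at 0: the one-sided derivatives must agree.
diff_eq = sp.Eq(sp.diff(left, x).subs(x, 0),
                sp.diff(right, x).subs(x, 0))            # 1 + a = -2b

print(sp.solve([cont, diff_eq], [a, b]))                 # {a: -3, b: 1}
```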
2020-10-27 18:22:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410433530807495, "perplexity": 1300.0144585202984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00381.warc.gz"}
http://mindymallory.com/PriceAnalysis/ending-stocks-and-price.html
# Chapter 15 Ending Stocks and Price

Over the course of the last several chapters we have covered each category of supply and use. In tables 1 and 2 below, that literally means we covered how to forecast the numbers in each row of the USDA WASDE balance sheet. Subtracting total use from total supply gives an estimate of marketing year ending stocks. For example,

Corn: Supply, Total − Use, Total = 16,909 − 14,500 = 2,409 million bushels = Ending Stocks
Soybeans: Supply, Total − Use, Total = 4,426 − 4,061 = 365 million bushels = Ending Stocks

Table 1. September 2016 USDA WASDE Balance Sheet for Corn

| Corn | Marketing Year 2014/2015 | Marketing Year 2015/2016 Est. | Marketing Year 2016/2017 July Projection | Marketing Year 2016/2017 August Projection |
|---|---|---|---|---|
| Area Planted (million acres) | 90.6 | 88 | 94.1* | 94.1 |
| Area Harvested (million acres) | 83.1 | 80.7 | 86.6* | 86.6 |
| Yield per Harvested Acre (bushels) | 171 | 168.4 | 168.0* | 175.1 |
| Beginning Stocks (million bushels, as are all rows below through Ending Stocks) | 1232 | 1731 | 1701 | 1706 |
| Production | 14216 | 13601 | 14540 | 15153 |
| Imports | 32 | 65 | 40 | 50 |
| Supply, Total | 15479 | 15397 | 16281 | 16909 |
| Feed and Residual | 5314 | 5200 | 5500 | 5675 |
| Food, Seed & Industrial | 6567 | 6567 | 6650 | 6650 |
| Ethanol & by-products | 5200 | 5200 | 5275 | 5275 |
| Domestic, Total | 11881 | 11767 | 12150 | 12325 |
| Exports | 1867 | 1925 | 2050 | 2175 |
| Use, Total | 13748 | 13692 | 14200 | 14500 |
| Ending Stocks | 1731 | 1706 | 2081 | 2409 |
| Avg. Farm Price ($/bu) | 3.7 | 3.55 - 3.65 | 3.10 - 3.70 | 2.85 - 3.45 |

and

Table 2. September 2016 USDA WASDE Balance Sheet for Soybeans

| Soybeans | Marketing Year 2014/2015 | Marketing Year 2015/2016 Est. | Marketing Year 2016/2017 July Projection | Marketing Year 2016/2017 August Projection |
|---|---|---|---|---|
| Area Planted (million acres) | 83.3 | 82.7 | 83.7* | 83.7 |
| Area Harvested (million acres) | 82.6 | 81.8 | 83* | 83 |
| Yield per Harvested Acre (bushels) | 47.5 | 48 | 48.9* | 50.6 |
| Beginning Stocks (million bushels, as are all rows below through Ending Stocks) | 92 | 191 | 255 | 195 |
| Production | 3927 | 3929 | 4060 | 4201 |
| Imports | 33 | 25 | 30 | 30 |
| Supply, Total | 4052 | 4145 | 4346 | 4426 |
| Crushings | 1873 | 1900 | 1940 | 1950 |
| Exports | 1842 | 1880 | 1950 | 1985 |
| Seed | 96 | 97 | 95 | 95 |
| Residual | 50 | 12 | 31 | 31 |
| Use, Total | 3862 | 3889 | 4016 | 4061 |
| Ending Stocks | 191 | 255 | 330 | 365 |
| Avg. Farm Price ($/bu) | 10.1 | 8.95 | 8.35 - 9.85 | 8.30 - 9.80 |

However, this still leaves a lot to be desired, because the most compelling reason to keep a detailed balance sheet and forecast future supply and use is to come up with a reasonable expectation for price. After all our work on forecasting the components of the balance sheet, we have not made much headway in that regard. In this chapter, we cover some approaches for taking a forecast of ending stocks and translating it into a forecast of price.

## 15.1 Forecasting Price

Arriving at an estimate of ending stocks gives one a sense of the degree of scarcity (or lack thereof) in the market. It is still difficult to infer the marketing year average price from that, because the prevailing price that should coincide with the forecasted ending stocks is a function of the elasticities of demand for the different use categories. These can be difficult to estimate, and we are not guaranteed that elasticity is constant from one year to the next. Figure 1 below is reproduced from the farmdoc daily (fdd) article by Good and Irwin, "The Relationship between Stocks-to-Use and Corn Prices Revisited" (Source: farmdoc daily). Since the supply curve shifts from year to year and the demand curve shifts from year to year due to a myriad of factors, one cannot count on estimating a single supply or demand curve from a series of price and quantity pairs.
However, once we have entered a marketing year, i.e., we have harvested the domestic supply in the balance sheet, we can count on total supply being quite inelastic. We can be confident of this because imports are historically a very small part of domestic total supply for corn, and after the domestic harvest, imports would be the only way to shift the supply curve. Further, if one had some confidence that the demand curve was more or less constant through time, a time series of prices and quantities would approximately trace out the demand curve.

## 15.2 Examining the Data

This section continues to draw heavily on the Good and Irwin fdd article referenced above. First let us take a look at the average price received for corn over time and the stocks-to-use of corn over time in figure 2. These data can both be obtained from the USDA ERS Feed Grains Database, although you have to download the stocks and use separately and create your own stocks-to-use variable. Perhaps the first thing that one notices in this figure is the pronounced stocks-to-use spikes that occurred in the 1982/1983 and 1985/1986, 1986/1987, and 1987/1988 marketing years. Those exceptionally high stocks relative to use were a result of government commodity programs designed to keep prices from falling too far. Specifically, the stocks were held primarily in the Farmer-Owned Reserve or by the Commodity Credit Corporation. Both programs were designed to keep bushels off the market, thus buoying prices. During periods of prolonged excess, however, it becomes very costly for the government to procure and store large quantities of the commodity, and it has a continuing depressing effect on market prices because the market knows the government holds large stockpiles. Farm legislation ('The Farm Bill' is re-negotiated every four years by Congress) has trended toward more market-oriented approaches to supporting agriculture, and one can observe a marked decline in stocks-to-use over time. Aside from the wild swings in the 1980s, the series still seem to show a negative relationship between stocks-to-use and prices, as one would expect. Figure 3 graphs these two series as a scatter plot with stocks-to-use on the x-axis. A clear negative relationship emerges, but the relationship when stocks are less than 20% of use is less clear. To help clarify, figure 4 highlights years before and after 2006. Highlighting the data as pre- or post-2006 clearly shows a wide range of prices over a relatively narrow range of stocks-to-use realizations. Given that 2006 is the beginning of the ramp-up in ethanol production, this should not be surprising. Suddenly there was a large and very inelastic demander of corn in the market. This ensured supply would have to be rationed by price to keep stocks from falling to low levels. Also in figure 4, trendlines are fitted for the two subsets of the data. Since both scatterplots appear to display curvature, the price data are regressed on the log of the stocks-to-use data. Also, this specification provided the highest $$R^2$$ of the regression specifications available in the default Excel options. The regression in the post-2006 period explains 80% of the variation in the price data, which suggests it is a reasonable starting point for forecasting price using ending stocks and the balance sheet approach.

## 15.3 References

Good, D., and S. Irwin.
"The Relationship between Stocks-to-Use and Corn Prices Revisited." farmdoc daily (5):65, Department of Agricultural and Consumer Economics, University of Illinois at Urbana-Champaign, April 9, 2015.

## 15.4 Exercises

1. Go to www.agmanager.info, the extension website of Kansas State University's Agricultural Economics department.
• Navigate through the Grain Marketing menu to the Grain Supply and Demand (WASDE) page.
• Click on Spreadsheets with WASDE data, or scroll to the bottom of the page.
• This data is also available from various USDA websites (like https://quickstats.nass.usda.gov/), but the www.agmanager.info site is particularly thorough and well organized for historical WASDE data, so it is a good resource to know about.
2. Open all three Excel tables for corn, soybeans, and wheat.
• Also open a new Excel spreadsheet.
• For each of the commodities, corn, soybeans, and wheat, go to the Annual sheet, then copy and paste the Year, Stock %, and Average Farm Price columns into your new Excel spreadsheet. Be sure to label the columns by commodity.
3. For each commodity, recreate figure 2 (a time-series chart of prices received by farmers and stocks/use %) and figure 3 (an x-y scatterplot of prices received by farmers and stocks/use %) from chapter 11.
• Fit an appropriate trendline through each of the x-y scatterplots (a sketch of this fit in Python is given below).
• Make an educated forecast of the average farm price received for each commodity.

### References

Westcott, Paul C, and Linwood A Hoffman. 1999. "Price Determination for Corn and Wheat: The Role of Market Factors and Government Programs." United States Department of Agriculture, Economic Research Service.

1. The year 2006 here is assumed to be a transition year and is dropped from the figure. In 2006, stocks-to-use was 11.63% and the average price was \$3.04. Examining this data point on Figure 4 suggests 2006 does not fit either regime well.↩︎
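For Exercise 3, here is a minimal Python sketch of the logarithmic trendline fit from section 15.2. The file name and column names are hypothetical placeholders for the spreadsheet you assemble from agmanager.info; the point is the functional form price = b0 + b1·ln(stocks/use).

```python
# Minimal sketch of the Exercise 3 trendline: regress average farm price on
# the log of stocks-to-use. "corn_wasde.csv" and its column names are
# hypothetical; adjust them to match the spreadsheet you assembled.
import numpy as np
import pandas as pd

df = pd.read_csv("corn_wasde.csv")      # columns: year, stocks_use_pct, avg_farm_price
x = np.log(df["stocks_use_pct"])
y = df["avg_farm_price"]

# Fit price = b0 + b1 * ln(stocks/use), the same functional form as the
# logarithmic trendline option in Excel.
b1, b0 = np.polyfit(x, y, 1)

# R^2 of the fit, for comparison with the ~0.80 reported post-2006.
resid = y - (b0 + b1 * x)
r2 = 1 - resid.var() / y.var()
print(f"price = {b0:.2f} + {b1:.2f} * ln(stocks/use), R^2 = {r2:.2f}")
```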
2022-07-03 21:08:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32692885398864746, "perplexity": 2866.205475199931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104249664.70/warc/CC-MAIN-20220703195118-20220703225118-00586.warc.gz"}
https://madaboutparis.com/forum/uqbve.php?id=0ac082-theoretical-vs-empirical-personality
Looking at Fig. 5 and mentally projecting the sequential ROC curve to the right, it seems fairly safe to assume it would still fall below the simultaneous ROC. This empirical measure of discriminability is not based on any theoretical assumptions about memory. If, for some reason, policymakers preferred a FAR of approximately .06 because of the higher HR that could be achieved, the fact that pAUC_SIM > pAUC_SEQ over the tested FAR range (0 to FAR_max) would be only part of the picture; and what would the policy implications be in a case like that? Such research is often focused on testing theories that may have applied significance (Gronlund, Carlson, Neuschatz, Goodsell, Wetmore, Wooten, & Graham, 2012). Under those assumptions, d' can be safely compared for Condition A vs. Condition B. [Figure caption: The same receiver operating characteristic (ROC) data as in Fig. 5, except that the smooth curves generated by a theoretical (signal detection) model are drawn through the ROC data points.] On this account, underlying discriminability is equal for the three procedures, and the difference in pAUC arises because the simultaneous procedure is less susceptible to the deleterious effects of criterion variability than showups and sequential lineups. In both cases, it is the structural constraints of the testing procedure itself, not a difference in underlying latent variables, that results in a difference in the empirical ROC curves. A photo lineup consists of a picture of one suspect (the person who the police believe may have committed the crime) plus several additional photos of physically similar foils (i.e., fillers) who are known to be innocent. ... and variance σ ... is the same in both cases. Finally, criterion variance (σ_c) ... In Fig. 5, it is visually obvious that pAUC for the simultaneous procedure is greater than the pAUC for the sequential procedure over the FAR range of 0 to .038 (the maximum FAR for the sequential procedure). Instead, empirical discriminability refers to the degree to which participants correctly sort target and foil stimuli into their true categories. Figure 4 presents the ROC data computed from the values shown in Table 1. ... would be estimated to be about 1.4. ... = 2.0, empirical discriminability for the showup procedure is impaired to a greater extent than empirical discriminability for the simultaneous lineup procedure. If, instead of using confidence ratings, instructions were used to induce conservative responding from the outset such that only IDs made with high confidence were obtained in the first place, the police would lose the potentially useful investigatory information that a suspect ID made with low or medium confidence might provide. Viewed in this light, the "controversy" over ROC analysis of lineup performance actually consists of a normal scientific debate about which theory of underlying latent variables better accounts for the empirical data.
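To make the empirical side of this distinction concrete, the sketch below (my addition) computes ROC points and a partial AUC from confidence-binned counts by simple cumulation and trapezoids. All counts are made up for illustration; the article itself points to the R package pROC for the real computation.

```python
# Empirical ROC points and partial AUC (pAUC) from confidence-binned data.
import numpy as np

hits         = np.array([40, 25, 15, 10, 10])   # suspect IDs, target-present, high -> low confidence
false_alarms = np.array([ 2,  3,  5,  8, 12])   # suspect IDs, target-absent, high -> low confidence
n_tp, n_ta = 500, 500                            # lineups of each type

# Cumulating from the strictest to the most lenient criterion traces the ROC;
# prepend the (0, 0) point corresponding to infinitely conservative responding.
hr  = np.concatenate(([0.0], np.cumsum(hits) / n_tp))
far = np.concatenate(([0.0], np.cumsum(false_alarms) / n_ta))

# Partial AUC from FAR = 0 to the rightmost observed FAR, via trapezoids.
pauc = np.sum(np.diff(far) * (hr[1:] + hr[:-1]) / 2)
print(list(zip(far.round(3), hr.round(3))), round(pauc, 4))
```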
Empirical here refers to what is observed and measured; theoretical refers to what a model posits. Then compute d', not the diagnosticity ratio. However, unlike a filler, an innocent suspect is not known to be innocent and will be imperiled (and perhaps wrongfully convicted) if mistakenly identified. To appreciate why the two measures can go in opposite directions without contradiction, it is important to consider how d' and pAUC are estimated (Wells, Yang, & Smalarz, 2015). Their simulations showed that, in the absence of criterion variability and with d'_m equated, ... However, because this "hand waving" analysis of the effect of sequential lineups on the HR and FAR is clearly insufficient, a quantitative assessment of some kind is needed. For example, if the prior odds of guilt are even (i.e., half target-present lineups, half target-absent lineups), one can ask about the posterior odds of guilt for the subset of lineups that resulted in a filler ID or no ID. Computer software is needed to precisely measure the size of the shaded area, and the tutorial videos associated with Gronlund, Wixted, and Mickes (2014) explain how to use one such R program, called pROC (Robin et al., 2011), to do that. The (2012) study actually estimated pAUC, not d'. The data in Table 1 allow one to compute not only the overall HR and FAR but also a HR and FAR separately for varying degrees of response bias specified by the different confidence ratings. Criterion variability was set to 0 for the simultaneous lineup and to 0.75 for the sequential lineup, which is why the sequential lineup, despite its higher d', yields a lower ROC than the simultaneous lineup. Moreover, no model of memory would be needed to reinforce the decision as to which of the two procedures is diagnostically superior. A stickler might contend that a minimum FAR greater than 0 should also be specified, one that is equal to the FAR associated with the leftmost ROC point from the condition with the larger minimum FAR. Nevertheless, to be sure about that, one would have to actually perform the experiment, because it is at least theoretically possible that the ROC curves would cross and the sequential procedure would become superior in that higher FAR range. In addition, a slight variation of an earlier model [1] is presented to test the sensitivity of empirical models. [Figure caption: Hypothetical receiver operating characteristic (ROC) curve for a lineup procedure in which a 5-point confidence scale was used.] Thus, the DR (i.e., the likelihood ratio) is equal to the correct ID rate divided by the false ID rate (HR/FAR). The likelihood of a filler ID (i.e., over all possible memory-strength values for a filler) is given by the corresponding expression; again, this is the likelihood of observing a filler ID from a target-absent lineup. Other studies have reported no significant difference between the two procedures, but with a trend still favoring the simultaneous procedure (e.g., Andersen, Carlson, Carlson, & Gronlund, 2014; Exp. ...). Here, we describe how to write the likelihood function for that probability and then describe the similar approach used to write the likelihood functions for the probability of observing a filler ID from a target-present and then from a target-absent lineup. The diagnostic feature-detection theory attributes the difference to a d' advantage enjoyed by simultaneous lineups compared to the other two procedures. However, in the presence of criterion variability (equated across the two procedures), simultaneous lineups yielded higher empirical discriminability (measured by pAUC) than showups (Wixted, J. T., & Mickes, L., Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification).
[Figure caption: Simulated receiver operating characteristic (ROC) data generated by a simultaneous lineup using the MAX decision rule, a sequential lineup using the "first-above-criterion" decision rule, and a showup.] The absolute/relative distinction was originally advanced as a theory of response bias, with a relative judgment strategy corresponding to increased pressure to choose someone from the lineup. For example, this is argued in the latest critique of ROC analysis by Smith et al. Alternatively, as noted earlier, confidence in No IDs could be collected in such a way as to allow one to project the ROC further to the right (i.e., by collecting a confidence rating in connection with the face that the witness believes is most likely to be the perpetrator). In MATLAB notation, the target-ID likelihood and its integral are:

f = @(x) normpdf(x, mu_t, sigma_t) .* normcdf(x, mu_d, sigma_d).^(k-1) .* (1 - normcdf(c, x, sigma_c));
p_t = integral(@(x) f(x), -15, 15);

Retrieved 29 Mar 2016, from http://www.policeforum.org/. Only recently, however, has signal detection theory been brought to bear on this issue. Their current model policy states: "This policy recognizes that the sequential and simultaneous approaches are both valid methods of conducting an identification procedure and does not recommend one over the other." (International Association of Chiefs of Police, 2016, p. 1). According to this model, different eyewitness identification procedures are differentially susceptible to the deleterious effects of criterion variability. If these data were fit by a model to estimate d', ... is usually set to 0 because no specific theory is relied upon to justify the seemingly safe assumption that if responding were infinitely conservative, both the HR and the FAR would be 0. ... even though underlying theoretical discriminability (d') ... This function corresponds to the probability of observing a target ID from a target-present lineup made with a particular level of confidence associated with criterion, c.
If there are five confidence criteria for making a positive ID (as in Fig. 4), the corresponding likelihood is integrated from −∞ to +∞. Again, this is the likelihood of observing a filler ID from a target-present lineup. Finally, the area beneath the curve was estimated from a FAR of 0 to FAR_max. In summary, according to this theory, pAUC_SIM > pAUC_SEQ and pAUC_SIM > pAUC_SHOWUP because d'_m-SIM > d'_m-SEQ and d'_m-SIM > d'_m-SHOWUP, respectively. An understanding of that distinction is important for both theoreticians and policymakers because the two measures need not agree.
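The MATLAB-style likelihood quoted earlier in this article can be translated into Python as follows (my sketch, using SciPy). The parameter values in the usage line are illustrative placeholders, not the article's fitted values.

```python
# Python translation (sketch) of the MATLAB likelihood shown above:
# probability of a target ID made with confidence criterion c from a
# k-member target-present lineup, with criterion variability sigma_c.
from scipy.stats import norm
from scipy.integrate import quad

def p_target_id(mu_t, sigma_t, mu_d, sigma_d, sigma_c, c, k):
    # Integrand: the target's memory strength x beats all k-1 fillers and
    # exceeds a normally distributed criterion centered at c.
    f = lambda x: (norm.pdf(x, mu_t, sigma_t)
                   * norm.cdf(x, mu_d, sigma_d) ** (k - 1)
                   * (1 - norm.cdf(c, x, sigma_c)))
    return quad(f, -15, 15)[0]

# Illustrative parameter values only (not taken from the article's fits).
print(p_target_id(mu_t=1.4, sigma_t=1, mu_d=0, sigma_d=1,
                  sigma_c=0.75, c=1.0, k=6))
```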
2021-08-05 07:54:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.566800057888031, "perplexity": 2005.1712756397299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00139.warc.gz"}
https://mmb.irbbarcelona.org/MCDNA/help/analysis/circular
# MC DNA Help - Analysis: Circular

## Circular

This analysis is available when the tool "Circular MC DNA" is chosen. The analysis parameters for circular DNA are Twist, Writhe and Radius of gyration. Twist Tw reflects the number of helical turns ($Tw = \sum_{i=1}^{N - 1} tw_i / 360$; N is the length of the sequence, $tw_i$ is the Twist value in degrees of base-pair step i) and writhe Wr is the number of times the double helix crosses over on itself (supercoils). The relaxed structure for the circle is defined as the structure with Wr = 0, and its twist values are the values of the relaxed twist state. Thus the total linking number $Lk_0$ of the relaxed circle is $Lk_0 = Tw_0$. To induce additional stress, the twist value of each base-pair step of the circle can be changed, which results in a new value of Tw. Over- or under-twisting of the relaxed structure results in a different linking number $\Delta Lk = Lk - Lk_0 = Tw - Tw_0$ and thus a different starting structure with $\Delta Lk \ne 0$. $\Delta Lk$ can only take integer values. The linking number $Lk = Tw + Wr$ will stay constant throughout the whole simulation; however, Tw will change throughout the simulation due to the Monte Carlo moves, and Wr becomes non-zero. The values of Tw and Wr of the final structure of the simulation are plotted. Another parameter to analyze the compactness of the circle is the radius of gyration $R_g$. We define the position $r_i$ of base-pair i as the midpoint between the C6 and C8 atoms. The radius of gyration $R_g$ is then calculated as follows: $R_g = \sqrt{ \frac{1}{N} \sum_{i=1}^N (r_i - r_{mean})^2}$ where $r_{mean}$ is the mean position of the base-pairs and N is the total number of base-pairs.

### For "Structure Flexibility Analysis"

The values of Twist, Writhe and Radius of gyration are given for the relaxed circular structure.

### For "Trajectory Flexibility Analysis"

Radius of gyration (in nm), Twist (in turns) and Writhe (in turns) are plotted against the index of the snapshot of the trajectory.
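For readers who want to reproduce these quantities outside the web tool, here is a minimal Python sketch of the two formulas above. It assumes you have already extracted the per-step twist values $tw_i$ (in degrees) and the base-pair positions $r_i$ (in nm) from a snapshot; the toy data at the bottom are made up for illustration.

```python
# Total twist (in turns) and radius of gyration, as defined above.
import numpy as np

def total_twist(tw_deg):
    """Tw = sum_i tw_i / 360, in turns, from per-step twists in degrees."""
    return np.sum(tw_deg) / 360.0

def radius_of_gyration(r):
    """R_g = sqrt(mean_i |r_i - r_mean|^2) for an (N, 3) position array."""
    d = r - r.mean(axis=0)
    return np.sqrt((d ** 2).sum(axis=1).mean())

# Toy example: 100 base pairs on a circle of radius 5 nm, ~10.5 bp/turn.
n = 100
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
pos = np.c_[5 * np.cos(theta), 5 * np.sin(theta), np.zeros(n)]
print(total_twist(np.full(n, 360 / 10.5)))   # ~9.52 turns
print(radius_of_gyration(pos))               # 5.0 nm for a perfect circle
```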
2021-09-16 19:32:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8781641721725464, "perplexity": 946.490734352901}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00621.warc.gz"}
https://math.stackexchange.com/questions/3378034/expected-number-of-coin-flips-to-see-3-heads
# Expected number of coin flips to see $3$ heads

You toss a coin until you see $3$ (not necessarily consecutive) heads. What's the expected number of coin tosses you make? I tried a lot of things, and I've seen the solution for three consecutive heads, but I'm not so sure how to do it if they are non-consecutive. With probability $1/8$, we stop after the first three coin tosses (if we get HHH). With probability $3/16$, we will terminate after the first four coin tosses (we can get THHH, HTHH, HHTH). It gets really messy for the rest of them, so I don't think this approach is quite right. Can anyone please help me solve this problem?

• Consecutive is a lot harder than this... Just show inductively that the expected number of tosses needed to see $n$ H's is $n$ times the expected number it takes to see $1$. – lulu Oct 2 '19 at 13:53
• Oh. So it's just $3 \cdot 2 = 6$ ? – user709945 Oct 2 '19 at 14:00
• Yep, that's all it is. – lulu Oct 2 '19 at 14:06
• Can you write it explicitly, please @lulu – Aqua Oct 2 '19 at 14:09
• @Aqua Let $E_n$ be the expected time it takes to see exactly $n$ Heads. Then we have the recursion: $E_n=E_{n-1}+E_1$. Why? Well, to see $n$, you need to first see $n-1$, which you expect to take $E_{n-1}$ trials, and then you need to see one more. Easy to show, inductively, that this implies $E_n=n\times E_1$. – lulu Oct 2 '19 at 16:14

No need for infinite sums here, the answer is just $3\times E_1=6$. More broadly, the expected number for $n$ Heads is $2n$. To see this, let $E_n$ be the expected number of tosses for $n$ Heads. We note that to see $n$ Heads requires that you first see $n-1$, which you expect to take $E_{n-1}$ tosses. Then you need to see one more, which you expect to take $E_1$ tosses. Thus we have the recursion $E_n=E_{n-1}+E_1$. It follows, inductively, that $E_n=n\times E_1$. Since $E_1=2$, the claim follows. For completeness, here is a proof that $E_1=2$: Consider the first toss. Either it is $H$ or $T$. If it is $H$, you stop. If it is $T$, you restart (but you've added $1$ to the count). Thus $E_1=\frac 12\times 1+\frac 12\times (E_1+1)\implies E_1=2$. May be worth remarking that this gives another approach to the original question. Say we want to compute $E_n$. Then we consider one toss. Either it is $H$, in which case you want $E_{n-1}+1$, or it is $T$, in which case you want $E_n+1$. Thus $E_n=\frac 12\times (E_{n-1}+1)+\frac 12\times (E_n+1)\implies E_n=E_{n-1}+2$

• You still need an infinite sum, else how would one calculate $E_1$? – Aqua Oct 2 '19 at 16:55
• @Aqua $E_1$ is easy to compute, I'll add it to my post. – lulu Oct 2 '19 at 17:05
• @lulu Nice answer. But it doesn't seem that the approach is much easier than calculating the expected value by sum. – callculus Oct 2 '19 at 17:52
• @callculus This method calculates the value for all $n$ simultaneously... but, more importantly, recursive methods work even when series don't (or at least, when series are badly impractical). That's the basis behind, say, Markov theory... in which some process moves between known states with known probabilities. – lulu Oct 2 '19 at 17:56
• @lulu Sure, the calculation is much easier. But it might be a problem (for the OP) to understand why the equations look like this. It looks easy at first view, but really understanding it is another thing. – callculus Oct 2 '19 at 18:00

Hint: Let X be the number of tosses needed. For the third head to appear on toss $n$, we need to see exactly $2$ heads and $n-1-2=n-3$ tails among the first $n-1$ tosses.
The probability of that arrangement of the first $n-1$ tosses is $$\binom{n-1}{2}\cdot 0.5^2\cdot 0.5^{n-3}.$$ The last toss must be a head, so multiplying by $0.5$, the probability that the third head appears on toss $n$ is $$P(X=n)=\binom{n-1}{2}\cdot 0.5^2\cdot 0.5^{n-3}\cdot 0.5=\binom{n-1}{2}\cdot 0.5^n$$ Now calculate the expected value: $$\mathbb E(X)=\sum_{n=3}^{\infty} n\cdot \binom{n-1}{2}\cdot 0.5^n$$ Remark: If you have problems calculating the sum, see the answer here $(k=3)$ from Arash. Hint: Assume the third head appears on toss $k$: the last toss must be a head, and among the first $k-1$ tosses exactly $2$ are heads. The expectation is $$\sum_{k=3}^{\infty}\binom{k-1}{2}\cdot{1 \over 2^k}\cdot k$$
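Both answers are easy to check numerically; the short Python script below (my addition) estimates the expectation by simulation and also evaluates a partial sum of the series, and both come out at 6.

```python
# Numerical check: the recursion gives E_3 = 6, and the series
# sum_{n>=3} n * C(n-1, 2) * 0.5^n also converges to 6.
import random
from math import comb

def tosses_until_3_heads():
    heads = tosses = 0
    while heads < 3:
        tosses += 1
        heads += random.randint(0, 1)   # fair coin: 1 = head, 0 = tail
    return tosses

trials = 100_000
print(sum(tosses_until_3_heads() for _ in range(trials)) / trials)  # ~6.0

# Partial sum of the exact series; the tail vanishes quickly.
print(sum(n * comb(n - 1, 2) * 0.5 ** n for n in range(3, 200)))    # 6.0
```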
2020-09-28 15:51:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 36, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9271823167800903, "perplexity": 240.98291273460558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00561.warc.gz"}
http://mathhelpforum.com/number-theory/149892-area-right-triangle-whose-sides-integers-not-square-implies.html
# Math Help - The area of a right triangle whose sides are integers is not a square implies ===>

1. ## The area of a right triangle whose sides are integers is not a square implies ===>

Hello, I have been trying to show that the statement S1 = "The area of a right triangle whose sides are integers is not a square" implies the statement S2 = "There is no triangle whose sides are rational and whose area is 1".

So far I have that if we assume S1, and also consider an arbitrary triangle whose sides are rational and whose area is 1, then we have that: $\frac{1}{2}\frac{x}{y}\cdot h=1\Rightarrow h=\frac{2y}{x},$ which shows that the height of any such triangle must be rational. Then, if we let $h=a/b$, we have that: $\frac{1}{2}\frac{x}{y}\frac{a}{b}=1\Rightarrow \frac{1}{2}ax=by$

I have also considered the fact that our triangle is two right triangles put together, and that our triangle may be enclosed within a rectangle, but so far nothing has panned out.

The book treats this implication as trivial, saying that "if such a triangle existed, we would be able to obtain, by multiplying all three sides by a suitable integer, a triangle whose sides are integers and whose area is a square..." And I can see this, but I cannot see that this is a right triangle, as is needed to contradict S1.

2. Originally Posted by Dark Sun: "Hello, I have been trying to show that the statement S1 implies the statement S2 ..."

So you already have a triangle ABC whose sides are integers and whose area is a square; now form the right triangle with legs A and H (H = the height to A in ABC) and voila!

Tonio

3. Hi Tonio, assuming that ABC has Area = x^2, for x an integer, how do we know that the right triangle formed in this way (with the height of ABC as one of the legs) also has an area that is a square?

4. Originally Posted by Dark Sun: "Assuming that ABC has Area = x^2 ..."

Because you're forming the right triangle with one of the triangle's sides and its height as legs...! The area of a right triangle is leg times leg times 1/2, and this ensures the right triangle has the same area as the first one, and there you have your contradiction.

Tonio

5. I'm sorry, perhaps I am missing something. I drew some figures to illustrate my confusion: since in order to have a contradiction, we need all three sides of our triangle to be integers.

6. I talked to my professor, and after much deliberation, he found a counter-example on Wikipedia, confirming that there was a typo in the book. This theorem only applies to right triangles. The word "right" is missing from the book.
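The book's scaling step can be checked concretely in a few lines of Python (my addition, not from the thread). No rational right triangle has area 1, which is exactly what is being proved, so the demo uses the classic area-5 rational right triangle (3/2, 20/3, 41/6): scaling all sides by a common denominator d multiplies the area by d^2, so a rational right triangle of area 1 would scale to an integer right triangle whose area is the perfect square d^2.

```python
# Checking the scaling step on a concrete rational right triangle.
from fractions import Fraction
from math import lcm

a, b, c = Fraction(3, 2), Fraction(20, 3), Fraction(41, 6)
assert a**2 + b**2 == c**2                     # it is a right triangle
print("rational area:", a * b / 2)             # 5

d = lcm(a.denominator, b.denominator, c.denominator)   # d = 6
A, B, C = a * d, b * d, c * d                  # 9, 40, 41: all integers
assert A**2 + B**2 == C**2
print("scaled sides:", A, B, C, "scaled area:", A * B / 2)  # 180 = 5 * 6^2
```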
2015-05-04 04:31:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.874013364315033, "perplexity": 232.7892748769378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453690104.75/warc/CC-MAIN-20150501041450-00097-ip-10-235-10-82.ec2.internal.warc.gz"}
https://kintali.wordpress.com/category/complexity/page/2/
# Graph Isomorphism, Tree Width, Path Width and LogSpace

Every once in a while, I can't help thinking about "the complexity of graph isomorphism for bounded treewidth graphs". Today has been one of those days again. See my earlier post to get the context.

Theorem ([Das, Toran and Wagner'10]) : Graph isomorphism of bounded treewidth graphs is in LogCFL.

The proof of the above theorem is as follows:

1. Graph isomorphism of bounded tree-distance width graphs is in L.
2. Given two graphs and their tree decompositions, computing the isomorphism respecting these tree decompositions is reducible to (1).
3. Given the tree decomposition of only one graph, we can guess the tree decomposition of the other, guess the isomorphism (respecting the tree bags), and verify them using a non-deterministic auxiliary pushdown automaton (a.k.a. LogCFL).
4. Since a tree decomposition of a graph can be computed in LogCFL, the above theorem follows.

One of the bottlenecks, finding a tree decomposition of bounded treewidth graphs in logspace, is resolved by [Elberfeld, Jakoby and Tantau'10]. The following seems to be another major bottleneck. Given a graph $G$ and a decomposition $D$, how fast can we verify that $D$ is a valid tree decomposition of $G$ ? The upper bound of LogDCFL (the deterministic version of LogCFL) is clear from the above mentioned results. Can this verification be done in logspace ? The answer is frustratingly unknown. An even more frustrating realization I had today is that "it is not clear how to beat the LogDCFL upper bound for the more restricted path decomposition". Even though the underlying tree in a path decomposition is just a path, verifying the connectivity conditions of a path decomposition does seem to require recursion. It is not clear how to avoid recursion. I thought a logspace upper bound was possible. Now I am much less confident about a logspace upper bound. I cannot waste more time on this. The truth is "this is a cute problem". I need to do something to take my mind off this problem and move on. Easy enough, except I need an idea.

Update (Oct 12 2011) : Noticed that verification of path decompositions is easy.

# Graph Isomorphism and Bounded Tree Width

If you read my earlier post, you know that I am a fan of treewidth. Who isn't !! The complexity of Graph Isomorphism (earlier post) is one of the long-standing open problems. Intersecting these two with one of my research interests (space-bounded computation), we get the following open problem :

Open Problem : What is the complexity of graph isomorphism for graphs with bounded treewidth ?

Graphs with treewidth at most $k$ are also called partial $k$-trees. In 1992 Lindell proved that trees (graphs with treewidth = 1) can be canonized in logspace [Lindell'92]. What about canonization for $k=2$ ? Recently Datta et al. [DLNTW'09] proved that canonization of planar graphs is logspace-complete. The following simple exercise shows that partial 2-trees are planar graphs. Hence the result of [DLNTW'09] implies that partial 2-trees can be canonized in logspace.

Exercise : Every partial 2-tree is planar.

In fact, canonization of partial 2-trees was settled earlier by [ADK'08]. What about $k=3$ ? Partial 3-trees may not be planar. An example is $K_{3,3}$ itself. The treewidth of $K_{3,3}$ is three (a quick computational check of these facts about $K_{3,3}$ is sketched below). I wanted to work on the case of $k=3$ and realized the following simple fact.

Exercise : Partial 3-trees are $K_{5}$-free.
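As an aside, the two facts just used about $K_{3,3}$ can be sanity-checked computationally. The sketch below is my addition and uses the networkx library; note that treewidth_min_degree is a heuristic that returns an upper bound on treewidth (for $K_{3,3}$ it happens to match the true value, 3).

```python
# Sanity check: K_{3,3} is non-planar, and its treewidth is 3.
import networkx as nx
from networkx.algorithms import approximation as approx

G = nx.complete_bipartite_graph(3, 3)          # K_{3,3}
is_planar, _ = nx.check_planarity(G)
print("planar:", is_planar)                     # False

width, decomposition = approx.treewidth_min_degree(G)
print("treewidth upper bound:", width)          # 3
```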
In a follow-up paper to [DLNTW'09], Datta et al. [DNTW'10] proved that canonization of $K_{3,3}$-free and $K_{5}$-free graphs is in logspace. Hence we get the following corollary :

Corollary : Partial 3-trees can be canonized in logspace.

Since the above result is not explicitly mentioned in any paper, I wanted to make it clear in this post. Hence the open problem is for $k \geq 4$. LogCFL is the best known upper bound for graph isomorphism of partial k-trees [DTW'10]. One of the bottlenecks, finding a tree decomposition of a partial k-tree in logspace, was resolved recently [EJT'10]. The above mentioned papers make use of a decomposition of the input graph into two- or three-connected subgraphs, constructing an appropriate tree of these subgraphs, using the known structural properties of two- and three-connected graphs to canonize these subgraphs, and using Lindell's result to canonize the entire graph. Unfortunately, no clean characterization exists for graphs with connectivity at least four. Many long-standing open problems in graph theory are trivial for 2- and 3-connected graphs and open for higher connectivity. A clean characterization of 4-connected graphs seems to be a major bottleneck in improving the space complexity of canonization of partial 4-trees. I am lost 😦

Open Problems

• Is graph isomorphism of partial k-trees (for $k \geq 4$) in logspace ?
• Is canonization of partial k-trees in LogCFL ? The paper of [DTW'10] solves isomorphism only.

References :

• [Lindell'92] Steven Lindell: A Logspace Algorithm for Tree Canonization. STOC 1992: 400-404
• [DLNTW'09] Samir Datta, Nutan Limaye, Prajakta Nimbhorkar, Thomas Thierauf, Fabian Wagner: Planar Graph Isomorphism is in Log-Space. IEEE Conference on Computational Complexity 2009: 203-214
• [DNTW'10] Samir Datta, Prajakta Nimbhorkar, Thomas Thierauf, Fabian Wagner: Graph Isomorphism for K{3,3}-free and K5-free graphs is in Log-space. Electronic Colloquium on Computational Complexity (ECCC) 17: 50 (2010)
• [ADK'08] Vikraman Arvind, Bireswar Das, Johannes Köbler: A Logspace Algorithm for Partial 2-Tree Canonization. CSR 2008: 40-51
• [DTW'10] Bireswar Das, Jacobo Torán, Fabian Wagner: Restricted Space Algorithms for Isomorphism on Bounded Treewidth Graphs. STACS 2010: 227-238
• [EJT'10] Michael Elberfeld, Andreas Jakoby, Till Tantau: Logspace Versions of the Theorems of Bodlaender and Courcelle. FOCS 2010: 143-152

# Type Sensitive Depth and Karchmer Wigderson Games

Throughout this post, we will be considering circuits over the basis $\{\vee,\wedge,\neg\}$, where $\{\vee,\wedge\}$-gates have fanin 2 and $\neg$-gates are only applied to input variables. Let $f : \{0,1\}^n \rightarrow \{0,1\}$ be a boolean function on $n$ variables and $G_n$ be a circuit computing $f$. For a gate $g$, let $g_l$ and $g_r$ be the sub-circuits whose outputs are inputs to $g$. Let $d(G_n)$ be the depth of the circuit $G_n$ and $d(f)$ be the minimum depth of a circuit computing $f$. Karchmer and Wigderson [KW'90] showed an equivalence between circuit depth and a related problem in communication complexity. It is a simple observation that we can designate the two players as an "and-player" and an "or-player". Let $S_0, S_1 \subseteq \{0,1\}^n$ such that $S_0 \cap S_1 = \emptyset$. Consider the communication game between two players ($P_{\wedge}$ and $P_{\vee}$), where $P_{\wedge}$ gets $x \in S_1$ and $P_{\vee}$ gets $y \in S_0$. The goal of the players is to find a coordinate $i$ such that $x_i \neq y_i$.
Let $C(S_1,S_0)$ represent the minimum number of bits the players have to communicate in order for both to agree on such a coordinate.

Karchmer-Wigderson Theorem : For every function $f : \{0,1\}^n \rightarrow \{0,1\}$ we have $d(f) = C(f^{-1}(1),f^{-1}(0))$.

Karchmer and Wigderson used the above theorem to prove that 'monotone circuits for connectivity require super-logarithmic depth'. Let $C_{\wedge}(S_1,S_0)$ (resp. $C_{\vee}(S_1,S_0)$) represent the minimum number of bits that $P_{\wedge}$ (resp. $P_{\vee}$) has to communicate. We can define type-sensitive depths of a circuit as follows. Let $d_{\wedge}(G_n)$ (resp. $d_{\vee}(G_n)$) represent the AND-depth (resp. OR-depth) of $G_n$.

AND-depth : The AND-depth of an input gate is defined to be zero. The AND-depth of an AND gate $g$ is max($d_{\wedge}(g_l)$, $d_{\wedge}(g_r)$) + 1. The AND-depth of an OR gate $g$ is max($d_{\wedge}(g_l)$, $d_{\wedge}(g_r)$). The AND-depth of a circuit $G_n$ is the AND-depth of its output gate. OR-depth is defined analogously. Let $d_{\wedge}(f)$ (resp. $d_{\vee}(f)$) be the minimum AND-depth (resp. OR-depth) of a circuit computing $f$.

Observation : For every function $f : \{0,1\}^n \rightarrow \{0,1\}$, we have that $C_{\wedge}(f^{-1}(1),f^{-1}(0))$ corresponds to the AND-depth and $C_{\vee}(f^{-1}(1),f^{-1}(0))$ corresponds to the OR-depth of the circuit constructed by Karchmer-Wigderson.

Open Problems :

• Can we prove explicit non-trivial lower bounds on $d_{\wedge}(f)$ (or $d_{\vee}(f)$) for a given function $f$ ? This sort of "asymmetric" communication complexity is partially addressed in [MNSW'98].
• A suitable notion of uniformity in communication games is to be defined to address such lower bounds. More on this in future posts.

References :

• [KW'90] Mauricio Karchmer and Avi Wigderson : Monotone circuits for connectivity require super-logarithmic depth. SIAM Journal on Discrete Mathematics, 3(2):255–265, 1990.
• [MNSW'98] Peter Bro Miltersen, Noam Nisan, Shmuel Safra, Avi Wigderson: On Data Structures and Asymmetric Communication Complexity. J. Comput. Syst. Sci. 57(1): 37-49 (1998)

# Balanced ST-Connectivity

Today's post is about a new open problem arising from my recent paper (available on ECCC). The problem is as follows : Let $G(V,E)$ be a directed graph. Let $G'(V,E')$ be the underlying undirected graph of $G$. Let $P$ be a path in $G'$. Let $e = (u,v)$ be an edge along the path $P$. Edge $e$ is called a neutral edge if both $(u,v)$ and $(v,u)$ are in $E$. Edge $e$ is called a forward edge if $(u,v) \in E$ and $(v,u) \notin E$. Edge $e$ is called a backward edge if $(u,v) \notin E$ and $(v,u) \in E$. A path (say $P$) from $s \in V$ to $t \in V$ in $G'(V,E')$ is called balanced if the number of forward edges along $P$ is equal to the number of backward edges along $P$. A balanced path might have any number of neutral edges. By definition, if there is a balanced path from $s$ to $t$ then there is a balanced path from $t$ to $s$. The path $P$ may not be a simple path. We are concerned with balanced paths of length at most $n$.

Balanced ST-Connectivity : Given a directed graph $G(V,E)$ and two distinguished nodes $s$ and $t$, decide if there is a balanced path (of length at most $n$) between $s$ and $t$. (A brute-force polynomial-time check, which simply makes this definition concrete, is sketched below.)

In my paper, I proved that SGSLOGCFL, a generalization of Balanced ST-Connectivity, is contained in DSPACE($\log n \cdot \log\log n$). Details about SGSLOGCFL are in my paper.

Theorem 1 : SGSLOGCFL is in DSPACE($\log n \cdot \log\log n$).

Open Problem : Is $SGSLOGCFL \in L$ ?

Cash Prize : I will offer $100 for a proof of $SGSLOGCFL \in L$.
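Here is the brute-force check referred to above (my sketch; it merely makes the definition concrete and sheds no light on the logspace question). It runs dynamic programming over (vertex, imbalance) states, where imbalance is the number of forward minus backward edges used so far along a walk of length at most n.

```python
# Brute-force polynomial-time check for Balanced ST-Connectivity.
from collections import defaultdict

def balanced_st(n, edges, s, t):
    """edges: iterable of directed pairs (u, v); n: max walk length."""
    E = set(edges)
    adj = defaultdict(list)
    for u, v in E:
        # u -> v is neutral (0) if both directions exist, else forward (+1)
        adj[u].append((v, 0 if (v, u) in E else 1))
        # v -> u is neutral (0) if both directions exist, else backward (-1)
        adj[v].append((u, 0 if (u, v) in E else -1))
    reach = {(s, 0)}                       # states reachable so far
    for _ in range(n):                     # walks of length at most n
        reach |= {(w, b + d) for (v, b) in reach
                  for (w, d) in adj[v] if abs(b + d) <= n}
    return (t, 0) in reach                 # balanced walk reaching t?
```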
I have spent enough sleepless nights trying to prove that $SGSLOGCFL \in L$. In fact, an alternate proof of Theorem 1 (or even any upper bound better than $O({\log}^2n)$) using the zig-zag graph product seems to be a challenging task. Usually people offer cash prizes for a mathematical problem when they are convinced that :

• it is a hard problem.
• it is an important problem worth advertising.
• the solution would be beautiful, requires new techniques and sheds new light on our understanding of related problems.

My reason is "All the above". Have Fun solving it !!

A cute puzzle : In Balanced ST-Connectivity we are only looking for paths of length at most $n$. There are directed graphs where the only balanced st-path has super-linear length. The example in the following figure shows an instance of Balanced ST-Connectivity where the only balanced path between $s$ and $t$ is of length $\Theta(n^2)$. The directed simple path from $s$ to $t$ is of length $n/2$. There is a cycle of length $n/2$ at the vertex $v$. All the edges (except $(v,u)$) on this cycle are undirected. The balanced path from $s$ to $t$ is obtained by traversing from $s$ to $v$, traversing the cycle clockwise $n/2$ times and then traversing from $v$ to $t$.

Puzzle : Are there directed graphs where every balanced st-path is of super-polynomial size ?

Update : The above puzzle is now solved.

Open Problems

• Is $SGSLOGCFL \in L$ ?
• Are there directed graphs where every balanced st-path is of super-polynomial size ? (solved)
• More open problems are mentioned in my paper.

# Hardness of Graph Isomorphism

The complexity of Graph Isomorphism (GI) is one of the major open problems. It is easy to see that $GI \in NP$. It is known that $GI \in NP \cap coAM$. The following theorem states that it is unlikely that GI is NP-complete.

Theorem [Schöning'87, BHZ'87] : If GI is NP-complete then the polynomial hierarchy collapses to its second level.

The counting version of GI is known to be reducible to its decisional version. A polynomial time algorithm solving GI would be a major breakthrough. The best known algorithm runs in time $2^{O(\sqrt{n{\log}n})}$ for graphs with $n$ vertices. Several special cases are known to be in P. Several problems are known to be GI-hard. See this wikipedia article for details. GI is widely believed to be an NP-intermediate problem.

Conjecture : If $P \neq NP$, then GI is neither NP-complete nor in P.

Note that if the above conjecture is true then GI is P-hard. Is GI known to be P-hard ? What is the best known hardness of GI ? Well… we know very little about the hardness of GI. The following exercises show that GI is L-hard.

Exercise : Consider the following restricted automorphism problem: Given a graph $G = (V,E)$ and two lists of nodes $(x_1, \dots, x_k)$, $(y_1,\dots, y_k)$, is there an automorphism of G mapping $x_i$ to $y_i$ for 1 ≤ i ≤ k ? Show that this problem is reducible to GI.

Exercise : Show that Undirected ST-Connectivity is reducible to the above mentioned automorphism problem.

Torán [Torán'00] proved the following hardness theorem. Informally speaking, GI is hard for all complexity classes defined in terms of the number of accepting computations of a nondeterministic logarithmic space machine. These are the best known hardness results for GI.

Theorem [Torán'00] : GI is hard for $NL$, $PL$, $Mod_k{L}$ and $DET$.

All these hardness results are under DLOGTIME-uniform $AC^0$ many-one reductions. $DET$ is the class of problems $NC^1$ Turing reducible to the determinant [Cook'85].
It is known that $Mod_k{L} \subseteq DET$ and $NL \subseteq C_{=}L \subseteq PL \subseteq DET$. Hence the best known hardness of GI is DET-hardness. However, we do not know the exact complexity of $DET$, i.e., we don’t know where $DET$ lies in terms of the known complexity classes between $NL$ and $NC^2$. In particular, what is the relation between $LogCFL = SAC^1$ and $DET$ ? Torán also showed a randomized logarithmic space reduction from the perfect matching problem to graph isomorphism. More details about the complexity of perfect matching in a future blog post. Open Problems: • Is GI LogCFL-hard ? • Is DET LogCFL-hard ? What is the relation between LogCFL and DET ? This is an independent long-standing open problem. It deserves a separate blog post. • Is $GI \in coNP$ ? A proof of this would imply that “if GI is NP-complete then $NP = coNP$“, improving the above mentioned theorem. • Is GI in P for strongly regular graphs ? The best known algorithm for strongly regular graphs, given by Spielman [Spielman’96], runs in time $n^{O(n^{1/3}{\log}n)}$. References : • [BHZ’87] R. Boppana, J. Håstad, and S. Zachos, “Does co-NP have short interactive proofs?”, Information Processing Letters 25(2), pages 127-132, (1987). • [Schöning’87] Uwe Schöning, Graph isomorphism is in the low hierarchy, Proceedings of the 4th Annual Symposium on Theoretical Aspects of Computer Science, 1987, 114–124; also: Journal of Computer and System Sciences, vol. 37 (1988), 312–323. • [Cook’85] Stephen A. Cook, A Taxonomy of Problems with Fast Parallel Algorithms. Information and Control 64(1-3): 2-21 (1985). • [Spielman’96] Daniel A. Spielman, Faster isomorphism testing of strongly regular graphs. STOC ’96: Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, ACM, pp. 576–584. • [Torán’00] Jacobo Torán, On the Hardness of Graph Isomorphism. FOCS 2000; also: SIAM J. Comput. 33(5): 1093-1108 (2004). # NP intersect coNP NP is the set of languages that have short proofs. coNP is the set of languages that have short refutations. Note that coNP is not the complement of NP. NP $\cap$ coNP is non-empty. It is easy to see that all languages in P are in NP $\cap$ coNP, i.e., P $\subseteq$ NP $\cap$ coNP. It is conjectured that P $\subsetneq$ NP $\cap$ coNP, i.e., there are problems in NP $\cap$ coNP that are not in P. The following are some problems in NP $\cap$ coNP that are not known to be in P. Factoring : Given an integer, what is the complexity of finding its factors ? Every integer always has a unique factorization. Hence, Factoring is very different from the NP-complete problems. The following exercise states that it is highly unlikely that Factoring is NP-complete. On the other hand, if Factoring is in P then the world as we know it today will be in chaos !! Factoring is conjectured to be an intermediate problem. Exercise : If Factoring is NP-complete then NP = coNP. The first step to solve the above exercise is to show that Factoring is in NP $\cap$ coNP. In fact it is also in UP $\cap$ coUP. Perhaps this is the strongest evidence that P $\subsetneq$ NP $\cap$ coNP. Parity Games : Deciding which of the two players has a winning strategy in parity games is in NP $\cap$ coNP, as well as in UP $\cap$ coUP. Stochastic Games : The problem of deciding which player has the greatest chance of winning a stochastic game is in NP $\cap$ coNP [Condon’92]. Lattice Problems : The problems of approximating the shortest and closest vector in a lattice to within a factor of $\sqrt{n}$ are in NP $\cap$ coNP [AR’05].
All the above problems are not known to be in P. Open Problems : • Are there other problems in NP $\cap$ coNP that are not known to be in P ? • PPAD and PLS have been defined to understand problems whose structure is different from that of NP-complete problems. Can we define new complexity classes to study the complexity of the above mentioned problems (and related problems, if any) ? • Graph Isomorphism (GI) is also conjectured to be an intermediate problem. It is known that GI is not NP-complete unless the Polynomial Hierarchy collapses to its second level. Can we improve this result by showing that GI is in coNP ? Whether GI is in coNP is an interesting open problem for a very different reason also. More on this in a future post. References : • [Condon’92] Anne Condon: The Complexity of Stochastic Games. Inf. Comput. 96(2): 203-224 (1992). • [AR’05] Dorit Aharonov, Oded Regev: Lattice problems in NP $\cap$ coNP. J. ACM 52(5): 749-765 (2005). # Logspace vs Polynomial time One of the primary goals of complexity theory is separating complexity classes, a.k.a. proving lower bounds. Embarrassingly, we have only a handful of unconditional separation results. Separating P from NP is of course the mother of all such goals. Anybody who understands the philosophical underpinnings of the P vs NP problem would love to LIVE to see its resolution. Towards resolving this, we made some (“anti”)-progress (e.g. Relativization, Natural Proofs, Algebrization) and have a new geometric complexity theory approach which relies on an Extended-Extended-Extended-Extended-Riemann-Hypothesis !! For more information about the history and status of the P vs NP problem read Sipser’s paper [Sipser’92], Allender’s status report [Allender’09] or Fortnow’s article [Fortnow’09]. Today’s post is about the Logspace (L) vs Polynomial time (P) problem, which (in my opinion) is right next to the P vs NP problem in its theoretical importance. I guess many researchers believe that $L \neq P$. Did we make any progress/anti-progress towards resolving the $L \neq P$ conjecture ? Here are two attempts, both based on branching programs, that appeared in MFCS with a gap of 20 years !! 1) A conjecture by Barrington and McKenzie [BM’89] : The problem $GEN$ is defined as follows : $GEN$ : Given an $n \times n$ table filled with entries from $\{1,2,\dots,n\}$, which we interpret as the multiplication table of an $n$-element groupoid, and a subset $S$ of $\{1,2,\dots,n\}$ which includes element 1, determine whether the subgroupoid $\langle S \rangle$, defined as the closure of $S$ under the groupoid product, includes element $n$. Barrington-McKenzie Conjecture : For each $n > 1$, a branching program in which each node can only evaluate a binary product within an $n$-element groupoid, branching $n$ ways according to the $n$ possible outcomes, must have at least $2^{n-2}$ nodes to solve all $n \times n$ $GEN$ instances with singleton starting set $S$. The problem $GEN$ is known to be P-complete [JL’76]. The Barrington-McKenzie Conjecture would imply that $GEN \notin DSPACE({\log}^k n)$ for any $k$. In particular, it would imply that $L \neq P$. I don’t know if there is any partial progress towards resolving this conjecture. 2) Thrifty Hypothesis : This is a recent approach by Braverman et al. [BCMSW’09] towards proving a stronger theorem $L \neq LogDCFL$. Stephen Cook presented this approach at Valiant’s 60th birthday celebration and the Barriers Workshop. He also announced a $100 prize for solving an intermediate open problem mentioned in his slides.
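Before turning to the Tree Evaluation Problem, it may help to see GEN operationally. The following naive closure computation is a sketch of my own (the names are hypothetical); GEN itself is easy to compute in polynomial time – it is P-complete, after all – and the conjecture is about branching-program size, not about algorithms.

```python
def gen(table, S, target):
    """Does the closure of S under the groupoid product contain target?

    table[a][b] is the product of a and b; elements are 0-indexed here,
    unlike the 1-indexed statement above.
    """
    closure = set(S)
    worklist = list(closure)
    while worklist:
        a = worklist.pop()
        for b in list(closure):
            for c in (table[a][b], table[b][a]):   # products in both orders
                if c not in closure:
                    closure.add(c)
                    worklist.append(c)
    return target in closure
```

Each of at most $n$ popped elements is paired with at most $n$ closure members, so the whole computation is quadratic in $n$ at worst.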
Tree Evaluation Problem (TEP) : The input to the problem is a rooted, balanced $d$-ary tree of height $h$, whose internal nodes are labeled with $d$-ary functions on $[k] = \{1, \dots, k\}$, and whose leaves are labeled with elements of $[k]$. Each node obtains a value in $[k]$ equal to its $d$-ary function applied to the values of its $d$ children. The output is the value of the root. In their paper they show that $TEP \in LogDCFL$ and conjecture that $TEP \notin L$. They introduce Thrifty Branching Programs and prove that TEP can be solved by a thrifty branching program. A proof of the following conjecture implies that $L \neq LogDCFL$. For more details, read this paper. Thrifty Hypothesis : Thrifty Branching Programs are optimal among deterministic branching programs solving TEP. Open Problems : • My knowledge about the history of the L vs P problem is limited. Are there other approaches/attempts in the last four decades to separate L from P ? • An intermediate open problem is mentioned in the last slide of these slides. The authors announced a $100 prize for the first correct proof. Read their paper for more open problems. References : • [BM’89] David A. Mix Barrington, Pierre McKenzie: Oracle Branching Programs and Logspace versus P. MFCS 1989: 370-379. • [BCMSW’09] Mark Braverman, Stephen A. Cook, Pierre McKenzie, Rahul Santhanam, Dustin Wehr: Branching Programs for Tree Evaluation. MFCS 2009: 175-186. • [Sipser’92] Michael Sipser: The History and Status of the P versus NP Question. STOC 1992: 603-618. • [Allender’09] Eric Allender: A Status Report on the P Versus NP Question. Advances in Computers 77: 117-147 (2009). [pdf] • [Fortnow’09] Lance Fortnow: The status of the P versus NP problem. Commun. ACM 52(9): 78-86 (2009). [pdf] • [JL’76] Neil D. Jones, William T. Laaser: Complete Problems for Deterministic Polynomial Time. Theor. Comput. Sci. 3(1): 105-117 (1976).
2018-03-20 21:37:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 203, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9134778380393982, "perplexity": 1206.4901892064406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.54/warc/CC-MAIN-20180320205242-20180320225242-00086.warc.gz"}
https://www.math10.com/problems/polynomial-vocabulary/easy/
# Polynomial Vocabulary: Problems with Solutions Polynomial: $a_nx^n+a_{n-1}x^{n-1}+a_{n-2}x^{n-2} + ... + a_1x^1+a_0$ Terms: $a_nx^n, a_{n-1}x^{n-1}, a_{n-2}x^{n-2}, a_1x^1, a_0$ Coefficients: $a_n, a_{n-1}, a_{n-2}, ... ,a_1, a_0$ Leading coefficient: $a_n$ Degree: $n$ Variable: $x$ Constant term: $a_0$ Problem 1 What is the degree of this polynomial? $5x^{2}-3x^{5}+2x-5$ Problem 2 How many terms does the polynomial have? $2y^{6}-\frac{3}{2}y^{2}+1$ Problem 3 What is the degree of this polynomial? $\frac{5}{2}-x+5x^{4}-3x^{2}+\frac{1}{2}$ Problem 4 This polynomial has 4 terms and its degree is 3. $4x^{3}-4x^{5}+4x+4$ Problem 5 A polynomial that has degree 3, constant term 3 and leading coefficient 3, is called a trinomial. Problem 6 The following polynomial is a trinomial, it has degree 4, the variable is x and its constant term is -2. $5x^{2}-x^{4}+(3-5)$ Problem 7 Write a polynomial of degree 1, with leading coefficient 1, constant term 0, and variable y. Problem 8 How many terms must a binomial have? Problem 9 Which of the following algebraic expressions is a polynomial? Problem 10 Which of the following algebraic expressions is a polynomial?
2019-09-22 03:39:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2071557343006134, "perplexity": 4254.273567049814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575076.30/warc/CC-MAIN-20190922032904-20190922054904-00144.warc.gz"}
https://illustrativemathematics.blog/tag/equations/
## Truth and consequences: talking about solving equations By William McCallum The language we use when we talk about solving equations can be a bit of a minefield. It seems obvious to talk about an equation such as $3x + 2 = x + 5$ as saying that $3x+2$ is equal to $x + 5$, and that’s probably a good place to start....
2021-04-13 07:26:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3134271502494812, "perplexity": 293.4718655729763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00130.warc.gz"}
https://eight2late.wordpress.com/2018/03/27/a-gentle-introduction-to-monte-carlo-simulation-for-project-managers/?like_comment=96037&_wpnonce=462a79eb9e
# Eight to Late Sensemaking and Analytics for Organizations ## A gentle introduction to Monte Carlo simulation for project managers This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like to say a few words about the tool I’m going to use. In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic. ### Introduction Estimation is a key part of a project manager’s role. The most frequent (and consequential) estimates they are asked to deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it’s most likely that we’ll finish on day 3. We’ll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background. It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it. ### The problem The best way to explain Monte Carlo is through a simple worked example. So, let’s consider the four-task project shown in Figure 1. In the project, the second task is dependent on the first, and the third and fourth are dependent on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially and the last two can be done at the same time, but can only be started after the second task is completed. To summarise: the first two tasks must be done in series and the last two can be done in parallel. Figure 1: A project with 4 tasks. Figure 1 also shows the three point estimates for each task – that is, the minimum, maximum and most likely completion times. For completeness I’ve listed them below: • Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days • Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days • Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days • Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days OK, so that’s the situation as it is given to us. The first step to developing an estimate is to formulate the problem in a way that it can be tackled using Monte Carlo simulation. This brings us to the important topic of the shape of uncertainty, aka probability distributions. ### The shape of uncertainty Consider the data for Task 1. You have been told that it most often finishes on day 4.
However, if things go well, it could take as little as 2 days; but if things go badly it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is from 2 to 8 days. Clearly, each of these outcomes is not equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2. Figure 2: Likelihood of finishing on day 2, day 4 and day 8. Figure 2 raises a couple of questions: 1. What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 to 4 days and 4 to 8 days? 2. How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 and 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days. The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value. Since we don’t know anything about intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 3. This gives us the well-known triangular distribution. Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time. Figure 3: Triangular distribution fitted to points in Figure 1 Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example. Figure 4: Another distribution that fits the points in Figure 2. Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 5. Figure 5: A distribution that allows for a very long (potentially infinite) completion time The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages. [Aside: you may have noticed that all the distributions shown above are skewed to the right – that is, they have a long tail. This is a general feature of distributions that describe time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you’re interested you may want to check out my post on the inherent uncertainty of project task estimates.] ### From likelihood to probability Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have? Consider the following: 1. If an event is impossible, its likelihood should be zero. 2. The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant.
As this constant can be anything, let us define it to be 1. In terms of the example above, if we denote time by $t$ and the likelihood by $P(t)$ then: $P(t) = 0$ for $t< 2$ and $t> 8$ and $\sum_{t}P(t) = 1$ where $2\leq t\leq 8$, where $\sum_{t}$ denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms this is the area enclosed by the likelihood curves and the x-axis in figures 2 to 5. (Technical Note: Since $t$ is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here.) $P(t)$ is, in fact, what mathematicians call probability – which explains why I have used the symbol $P$ rather than $L$. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article. With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next. Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article. ### The triangular distribution Let’s look at the estimate for Task 1. We have three numbers corresponding to a minimum, most likely and maximum time. To keep the discussion general, we’ll call these $t_{min}$, $t_{ml}$ and $t_{max}$ respectively (we’ll get back to our estimator’s specific numbers later). Now, what about the probabilities associated with each of these times? Since $t_{min}$ and $t_{max}$ correspond to the minimum and maximum times, the probability associated with these is zero. Why? Because if it wasn’t zero, then there would be a non-zero probability of completion for a time less than $t_{min}$ or greater than $t_{max}$ – which isn’t possible. [Note: this is a consequence of the assumption that the probability varies continuously – so if it takes on a non-zero value, $p_{0}$, at $t_{min}$ then it must take on a value slightly less than $p_{0}$ – but greater than 0 – at $t$ slightly smaller than $t_{min}$.] As far as the most likely time, $t_{ml}$, is concerned: by definition, the probability attains its highest value at time $t_{ml}$. So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below. Figure 6: Triangular distribution redux. For the simulation, we need to know the equation describing the above distribution. Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to 1 because the task must finish at some time between $t_{min}$ and $t_{max}$. As a consequence we have: $\frac{1}{2}\times\text{base}\times\text{altitude}=\frac{1}{2}\times(t_{max}-t_{min})\times p(t_{ml})=1\ldots\ldots{(1)}$ where $p(t_{ml})$ is the probability corresponding to time $t_{ml}$.
With a bit of rearranging we get, $p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)$ To derive the probability for any time $t$ lying between $t_{min}$ and $t_{ml}$, we note that: $\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)$ This is a consequence of the fact that the ratios on either side of equation (3) are both equal to the inverse of the slope of the line joining the points $(t_{min},0)$ and $(t_{ml}, p(t_{ml}))$. Figure 7 Substituting (2) in (3) and simplifying a bit, we obtain: $p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\dots\ldots(4)$ for $t_{min}\leq t \leq t_{ml}$ In a similar fashion one can show that the probability for times lying between $t_{ml}$ and $t_{max}$ is given by: $p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\dots\ldots(5)$ for $t_{ml}\leq t \leq t_{max}$ Equations 4 and 5 together describe the probability distribution function (or PDF) for all times between $t_{min}$ and $t_{max}$. As it turns out, in Monte Carlo simulations, we don’t directly work with the probability distribution function. Instead we work with the cumulative distribution function (or CDF), which is the probability, $P$, that the task is completed by time $t$. To reiterate, the PDF, $p(t)$, is the probability of the task finishing at time $t$ whereas the CDF, $P(t)$, is the probability of the task completing by time $t$. The CDF, $P(t)$, is essentially a sum of all probabilities between $t_{min}$ and $t$. For $t_{min}\leq t \leq t_{ml}$ this is the area of the triangle with apexes at $(t_{min}, 0)$, $(t, 0)$ and $(t, p(t))$. Using the formula for the area of a triangle (1/2 base times height) and equation (4) we get: $P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6)$ for $t_{min}\leq t \leq t_{ml}$ Noting that for $t \geq t_{ml}$ the area under the curve equals the total area minus the area enclosed by the triangle with base between $t$ and $t_{max}$, we have: $P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7)$ for $t_{ml}\leq t \leq t_{max}$ As expected, $P(t)$ starts out with a value 0 at $t_{min}$ and then increases monotonically, attaining a value of 1 at $t_{max}$. To end this section, let’s plug in the numbers quoted by our estimator at the start of this section: $t_{min}=2$, $t_{ml}=4$ and $t_{max}=8$. The resulting PDF and CDF are shown in figures 8 and 9. Figure 8: PDF for triangular distribution (tmin=2, tml=4, tmax=8) Figure 9: CDF for triangular distribution (tmin=2, tml=4, tmax=8) ### Monte Carlo in a minute Now with all that conceptual work done, we can get to the main topic of this post: Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number of times, N (say 10,000), and thus obtain N overall completion times. In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial. The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times. In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as: • How likely is it that the project will be completed within 17 days? • What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.
“OK, that sounds great”, you say, “but how exactly do we simulate a single task”? Good question, and I was just about to get to that… ### Simulating a single task using the CDF As we saw earlier, the CDF for the triangular distribution has an S shape and ranges from 0 to 1 in value. It turns out that the S shape is characteristic of all CDFs, regardless of the details of the underlying PDF. Why? Because the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative). OK, so to simulate a task, we: • generate a random number between 0 and 1; this corresponds to the probability that the task will finish at time $t$. • find the time, $t$, that corresponds to this value of probability. This is the completion time for the task for this trial. Incidentally, this method is called inverse transform sampling. An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion time for this trial (see Figure 10). Simple! Figure 10: Illustrating inverse transform sampling In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable math expression that gives us the time corresponding to the probability directly. This can be obtained by solving equations (6) and (7) for $t$. Some straightforward algebra yields the following two expressions for $t$: $t = t_{min} + \sqrt{P(t)(t_{ml} - t_{min})(t_{max} - t_{min})} \ldots\ldots(8)$ for $t_{min}\leq t \leq t_{ml}$ And $t = t_{max} - \sqrt{[1-P(t)](t_{max} - t_{ml})(t_{max} - t_{min})} \ldots\ldots(9)$ for $t_{ml}\leq t \leq t_{max}$ These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already. ### The simulation Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo. Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of the times. For $t_{min}$ the probability is 0 and for $t_{max}$ it is 1. The probability at $t_{ml}$ can be calculated using equation (6) which, for $t=t_{ml}$, reduces to $P(t_{ml}) =\frac{(t_{ml}-t_{min})}{t_{max}-t_{min}}\ldots\ldots(10)$ Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1. This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1. The 10,000 probabilities need to be translated into completion times for the task. This is done using equations (8) or (9) depending on whether the simulated probability is less or greater than $P(t_{ml})$, which is in cell C3 (and given by Equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.
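For readers who prefer code to spreadsheet formulas, the same sampling step can be written as a small Python function. This is a sketch of equations (8), (9) and (10), not a transcription of the workbook; Python's standard library also ships an equivalent random.triangular, for what it's worth.

```python
import random

def triangular_sample(t_min, t_ml, t_max, u=None):
    """Inverse transform sampling for the triangular distribution.

    u is a uniform random number in [0, 1); equations (8) and (9)
    invert the CDF of equations (6) and (7), with the branch chosen
    by comparing u against P(t_ml) from equation (10).
    """
    if u is None:
        u = random.random()
    p_ml = (t_ml - t_min) / (t_max - t_min)              # equation (10)
    if u <= p_ml:                                        # equation (8)
        return t_min + (u * (t_ml - t_min) * (t_max - t_min)) ** 0.5
    else:                                                # equation (9)
        return t_max - ((1 - u) * (t_max - t_ml) * (t_max - t_min)) ** 0.5
```

As a check, triangular_sample(2, 4, 8, u=0.4905) returns approximately 4.503 days, matching the worked example of Figure 10.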
Tasks 2-4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below: • Task 2 – probabilities in column D; times in column F • Task 3 – probabilities in column H; times in column I • Task 4 – probabilities in column K; times in column L That’s basically it for the simulation of individual tasks. Now let’s see how to combine them. For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks. This is what’s shown in rows 6 through 10005 of column G. For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N. Finally, the overall project completion time for each simulation is then simply the sum of columns G and N (shown in column O). Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section. ### Discussion – probabilities and estimates The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials that have a completion time lying in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in Figure 11, there were 882 trials (out of 10,000) that had a completion time lying between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial number that is reasonably close to the one I got. Figure 11: Probability distribution of completion times (N=10,000) I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of the workbook which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative probability function (CDF), which is the sum of the probabilities of completion for all days from the earliest possible completion day up to the particular day. Figure 12: Probability of completion by a particular day (N=10,000) To reiterate a point made earlier, the reason we work with the CDF rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are: • How likely is it that the project will be completed within 17 days? • What’s the 90% likely completion time? Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12). Reading the relevant numbers from the chart, I see that: • There’s a 60% chance that the project will be completed within 17 days. • The 90% likely completion time is 19.5 days. How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving Equation 9 for $t$, with appropriate values for the parameters $t_{min}$, $t_{max}$ and $t_{ml}$ plugged in, and $P(t)$ set to 0.9, as in the short calculation shown below.
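Here is that short calculation, reusing the inverse-CDF expression from the earlier sketch (equation (9) applies because 0.9 exceeds $P(t_{ml})$ for all four tasks):

```python
def time_at_probability(p, t_min, t_ml, t_max):
    # equation (9), valid for p >= P(t_ml)
    return t_max - ((1 - p) * (t_max - t_ml) * (t_max - t_min)) ** 0.5

tasks = {"Task 1": (2, 4, 8), "Task 2": (3, 5, 10),
         "Task 3": (3, 6, 9), "Task 4": (2, 4, 7)}
for name, (lo, ml, hi) in tasks.items():
    print(name, round(time_at_probability(0.9, lo, ml, hi), 1))
```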
This gives the following values for the 90% likely completion times: • Task 1 – 6.5 days • Task 2 – 8.1 days • Task 3 – 7.7 days • Task 4 – 5.8 days Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimation. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks all take as long as their (individual) 90% likely completion times is highly unlikely. Exercise: Use the worksheet to estimate how likely this is. There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section. To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to non-technical people like your manager. ### Closing remarks We’ve covered a fair bit of ground: starting from general observations about how long a task takes, we saw how to construct simple probability distributions and then combine these using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as a warning. First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data. Second, you may have noted from the probability distribution curve (Figure 11) that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric, with a minimum of ~9 days, most likely time of ~16 days and maximum of 24 days. It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution. The assumption of independence merits a closer look. In the case at hand, it implies that the completion times for each task are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks.
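Pulling the pieces together, here is an end-to-end version of the whole simulation in Python – a sketch under the same independence assumption just discussed, not the Excel workbook itself, and the values it prints will wobble slightly from run to run:

```python
import random

TASKS = {"t1": (2, 4, 8), "t2": (3, 5, 10), "t3": (3, 6, 9), "t4": (2, 4, 7)}
N = 10_000

totals = []
for _ in range(N):
    # random.triangular takes (low, high, mode)
    d = {name: random.triangular(lo, hi, ml)
         for name, (lo, ml, hi) in TASKS.items()}
    # tasks 1 and 2 in series, then tasks 3 and 4 in parallel
    totals.append(d["t1"] + d["t2"] + max(d["t3"], d["t4"]))

totals.sort()
print("P(completion within 17 days):", sum(t <= 17 for t in totals) / N)
print("90% likely completion time:", round(totals[int(0.9 * N)], 1))
```

On typical runs this prints values close to the 60% and 19.5-day figures read off the charts above.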
One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a non-zero correlation coefficient between tasks as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies. As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time. Thirdly, you may have wondered why I used 10,000 trials: why not 100, 1,000 or 20,000? This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless. Operationally, convergence means that any predicted quantity based on aggregates should not vary with the number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cells Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry! Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly-readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom DeMarco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time, cost etc. at the project level as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base these on. There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful. Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention. Written by K March 27, 2018 at 4:11 pm ### 10 Responses 1. Awesome! Thank you so much for this introduction! I am a first year bioinformatics student, and your post is by far the best explanation of MCS I have read. Myra March 27, 2018 at 5:34 pm • Hi Myra, Regards, Kailash. K March 27, 2018 at 7:56 pm 2. […] This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like say a few words about the tool I’m going to use. Despite the bad rap spreadsheets get from… Read more: A gentle introduction to Monte Carlo simulation for project managers […] 3. […] Kailash Awati provides a very detailed tutorial on using a Monte Carlo simulation to calculate a distribution of probable completion times, using a simple project with four tasks and three-point estimates.
20 minutes to read, but well worth it. […] 4. […] A Gentle Introduction To Monte Carlo Simulation For Project Managers […] 5. […] when teaching sensemaking, I begin with quantitative techniques to deal with uncertainty, such as Monte Carlo simulation, and then gradually introduce examples of uncertainties that are hard if not impossible to […] 6. hi, can you suggest whether the same formula can be applied for the financials related like example: I have some 4 claims with client and I want to verify the probable realization. Priyanka February 5, 2019 at 9:30 pm • Hi Priyanka, Yes you can – but keep in mind that the reliability of your results depends on how well the distribution you use models the underlying reality. A triangular distribution may be too simplistic for your problem. It is worth looking at what kinds of distributions are used in your domain. Regards, Kailash. K February 6, 2019 at 7:38 am 7. how do have 0.5 as a width of x axis? thanks for all the good explanation, but i don’t get how 0.5 become your standard distribution on x axis. Vattanak TV May 4, 2019 at 4:45 am • That’s a choice I made in this case. It is really up to you – you can choose bins of any width that will give you a reasonably accurate representation of the distribution. Regards, Kailash. K May 29, 2019 at 9:16 pm
2020-02-18 19:50:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 85, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049173951148987, "perplexity": 472.33445097462277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143805.13/warc/CC-MAIN-20200218180919-20200218210919-00488.warc.gz"}
http://theinfolist.com/html/ALL/s/particle_horizon.html
The particle horizon (also called the cosmological horizon, the comoving horizon (in Dodelson's text), or the cosmic light horizon) is the maximum distance from which light from particles could have traveled to the observer in the age of the universe. Much like the concept of a terrestrial horizon, it represents the boundary between the observable and the unobservable regions of the universe, so its distance at the present epoch defines the size of the observable universe. Due to the expansion of the universe, it is not simply the age of the universe times the speed of light (approximately 13.8 billion light-years), but rather the speed of light times the conformal time. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model. # Conformal time and the particle horizon In terms of comoving distance, the particle horizon is equal to the conformal time $\eta$ that has passed since the Big Bang, times the speed of light $c$. In general, the conformal time at a certain time $t$ is given by :$\eta = \int_{0}^{t} \frac{dt'}{a(t')},$ where $a(t)$ is the scale factor of the Friedmann–Lemaître–Robertson–Walker metric, and we have taken the Big Bang to be at $t=0$. By convention, a subscript 0 indicates "today", so that the conformal time today is $\eta(t_0) = \eta_0 = 1.48 \times 10^{18}\,\mathrm{s}$. Note that the conformal time is not the age of the universe, which is estimated around $4.35 \times 10^{17}\,\mathrm{s}$. Rather, the conformal time is the amount of time it would take a photon to travel from where we are located to the furthest observable distance, provided the universe ceased expanding. As such, $\eta_0$ is not a physically meaningful time (this much time has not yet actually passed); though, as we will see, the particle horizon with which it is associated is a conceptually meaningful distance. The particle horizon recedes constantly as time passes and the conformal time grows. As such, the observed size of the universe always increases.
Since proper distance at a given time is just comoving distance times the scale factor (with comoving distance normally defined to be equal to proper distance at the present time, so $a(t_0) = 1$ at present), the proper distance to the particle horizon at time $t$ is given by :$H_p(t) = a(t) \int_{0}^{t} \frac{c\,dt'}{a(t')}$ and for today $t = t_0$ :$H_p(t_0) = c\eta_0 = 14.4\,\mathrm{Gpc} = 46.9$ billion light-years. # Evolution of the particle horizon In this section we consider the FLRW cosmological model. In that context, the universe can be approximated as composed of non-interacting constituents, each one being a perfect fluid with density $\rho_i$, partial pressure $p_i$ and state equation $p_i=\omega_i \rho_i$, such that they add up to the total density $\rho$ and total pressure $p$. Let us now define the following functions: * Hubble function $H=\frac{\dot{a}}{a}$ * The critical density $\rho_c=\frac{3}{8\pi G}H^2$ * The $i$-th dimensionless energy density $\Omega_i=\frac{\rho_i}{\rho_c}$ * The dimensionless energy density $\Omega=\frac{\rho}{\rho_c}=\sum_i \Omega_i$ * The redshift $z$ given by the formula $1+z=\frac{a_0}{a(t)}$ Any function with a zero subscript denotes the function evaluated at the present time $t_0$ (or equivalently $z=0$). The sum $\Omega$ can be taken to be $1$ by including the curvature state equation. It can be proved that the Hubble function is given by :$H(z)=H_0\sqrt{\sum_i \Omega_{i0}(1+z)^{n_i}}$ where $n_i=3(1+\omega_i)$. Notice that the addition ranges over all possible partial constituents and in particular there can be countably infinitely many. With this notation we have: the particle horizon $H_p$ exists if and only if $N>2$, where $N$ is the largest $n_i$ (possibly infinite). The evolution of the particle horizon for an expanding universe ($\dot{a}>0$) is: :$\frac{dH_p}{dt}=H_p(z)H(z)+c$ where $c$ is the speed of light and can be taken to be $1$ (natural units). Notice that the derivative is taken with respect to the FLRW time $t$, while the functions are evaluated at the redshift $z$, which are related as stated before. We have an analogous but slightly different result for the event horizon. # Horizon problem The concept of a particle horizon can be used to illustrate the famous horizon problem, which is an unresolved issue associated with the Big Bang model. Extrapolating back to the time of recombination, when the cosmic microwave background (CMB) was emitted, we obtain a comoving particle horizon of about $284\,\mathrm{Mpc}$; the corresponding proper size at that time is smaller by the factor $1+z_{rec}$. Since we observe the CMB to be emitted essentially from our particle horizon ($284\,\mathrm{Mpc} \ll 14.4\,\mathrm{Gpc}$), our expectation is that parts of the cosmic microwave background (CMB) that are separated by more than the angle this horizon subtends on the sky (an angular size of $\theta \sim 1.7^\circ$) should be out of causal contact with each other. That the entire CMB is in thermal equilibrium and approximates a blackbody so well is therefore not explained by the standard expansion history alone. The most popular resolution to this problem is cosmic inflation.
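As a numerical illustration of the figures quoted above, one can integrate the comoving horizon for a flat matter-plus-Λ model. This is a sketch: the Planck-like parameter values are assumptions of mine, and radiation is neglected, which slightly underestimates the result.

```python
from math import sqrt

H0 = 67.7            # Hubble constant in km/s/Mpc (assumed value)
Om, OL = 0.31, 0.69  # assumed matter and Lambda densities, flat universe
c = 299792.458       # speed of light in km/s

# comoving particle horizon: c * integral_0^1 da / (a^2 H(a)),
# with H(a) = H0 * sqrt(Om / a^3 + OL)
def integrand(a):
    return 1.0 / (a * a * sqrt(Om / a**3 + OL))

n = 1_000_000
h = 1.0 / n
integral = sum(integrand((i + 0.5) * h) for i in range(n)) * h  # midpoint rule
D = (c / H0) * integral          # in Mpc
print(D / 1000.0, "Gpc")         # roughly 14 Gpc, i.e. ~46 billion light-years
```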
2022-08-15 03:02:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 41, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429591059684753, "perplexity": 464.7969651604422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00153.warc.gz"}
https://mizugadro.mydns.jp/t/index.php/Maps_of_tetration
# Maps of tetration $$y\!=\!\mathrm{tet}_b(x)$$ versus $$x$$ for various $$b$$ Base $$b=\sqrt{2}\approx 1.41$$ Henryk base, $$b=\exp(1/\mathrm e)\approx 1.44$$ Base $$b=1.5$$ Binary tetration, $$b=2$$ Natural base, $$b=\mathrm e \approx 2.71$$ Sheldon base, $$b=1.52598338517+0.0178411853321 \,\mathrm i$$ The article Maps of tetration collects some complex maps of tetration $$\mathrm{tet}_b$$ for different values of the base $$b$$. For real values of the base $$b$$, the real-real plots $$y\!=\!\mathrm{tet}_b(x)$$ are shown in the upper figure at right. In the complex maps, the following cases are represented: $$b=\sqrt{2} \approx 1.41$$ $$b=\exp(1/\mathrm e) \approx 1.44$$ $$b=1.5$$ $$b=2$$ $$b=\mathrm e \approx 2.71$$ $$b=1.52598338517+0.0178411853321 \,\mathrm i$$ Tetration is shown with lines of constant real part $$u$$ and lines of constant imaginary part $$v$$; $$u\!+\!\mathrm i v=\mathrm {tet}_b(x\!+\!\mathrm i y)$$ ## $$b=\sqrt{2}$$ For this case, the regular iteration at the fixed point $$L=2$$ is used. The evaluation is described in Mathematics of Computation [1]. ## $$b=\exp(1/\mathrm e)\approx 1.44$$ For $$b=\exp(1/\mathrm e)\approx 1.44$$, the exotic iteration at the fixed point $$L=\mathrm e\approx 2.71$$ is used. The evaluation is described in Mathematics of Computation [2]. ## $$b>\exp(1/\mathrm e)$$ For $$b>\exp(1/\mathrm e)\approx 1.44$$, the Cauchy integral is used for evaluation. It is described in Mathematics of Computation [3]. Historically, evaluation for the case $$b=\mathrm e$$ was the first to be reported. For this case in particular, the special algorithm fsexp.cin is used; it is described in Vladikavkaz Mathematical Journal [4]. ## Sheldon base $$b=1.52598338517+0.0178411853321\,\mathrm i.$$ Tetration to the Sheldon base $$b\!=\!1.52598338517\!+\!0.0178411853321\mathrm i$$ is considered at the special request of Sheldon Levenstein. For this base, tetration was believed to be especially difficult to evaluate. The evaluation uses almost the same algorithm of the Cauchy integral [3]. A small modification was applied to the original algorithm: the condition $$F(z^*)=F(z)^*$$ is suppressed in the numerical solving of the corresponding integral equation for values of the superfunction along $$\Im(z)=\mathrm{const}$$. No difficulties specific to this complex value of the base $$b$$ were revealed. ## Book The maps are plotted using the conto.cin code in C++. LaTeX code is used to add the labels. All the maps at right are supplied with generators; colleagues may download the code and reproduce them. If some generator does not work as expected, let me know and let us correct it. The algorithms used to evaluate tetration are described in the book Суперфункции, in Russian [5]. As of 2014, the English version is not yet ready. ## References 1. http://www.ams.org/journals/mcom/2010-79-271/S0025-5718-10-02342-2/home.html http://mizugadro.mydns.jp/PAPERS/2010q2.pdf D.Kouznetsov, H.Trappmann. Portrait of the four regular super-exponentials to base sqrt(2). Mathematics of Computation, 2010, v.79, p.1727-1756. 2. http://www.ams.org/journals/mcom/0000-000-00/S0025-5718-2012-02590-7/S0025-5718-2012-02590-7.pdf
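None of the machinery above is needed at integer heights, where tetration reduces to iterated exponentiation; the following trivial sketch (my code, using the usual normalization $$\mathrm{tet}_b(0)=1$$) may help fix the definition that the maps extend to complex arguments:

```python
def tet(b, n):
    """Tetration of base b at non-negative integer height n:
    tet(b, 0) = 1 and tet(b, n + 1) = b ** tet(b, n)."""
    y = 1.0
    for _ in range(n):
        y = b ** y
    return y

# for b = sqrt(2) the iterates climb toward the fixed point L = 2
print([round(tet(2 ** 0.5, k), 6) for k in range(8)])
```

The monotone approach to $$L=2$$ visible in the printed list is the fixed point at which the regular iteration for $$b=\sqrt{2}$$ is anchored.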
2020-01-21 23:15:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8782622814178467, "perplexity": 1688.8519557725883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00062.warc.gz"}
http://math.stackexchange.com/questions/462544/homework-closed-1-forms-on-s2-are-exact
# Homework: closed 1-forms on $S^2$ are exact. From the 2008 UCLA Geometry-Topology qualifying exam: let $\theta$ be a $1$-form on $S^2$ with $d \theta = 0$. Construct a function $f$ on $S^2$ with $d f = \theta$. I'm not very confident in my ability to answer even a basic problem like this properly, and I'd appreciate someone telling me if I'm mistaken in my reasoning. I argued as follows: let $U$ be the subset $S^2\setminus\{\text{south pole}\}$ and $L=S^2\setminus\{\text{north pole}\}$. Since these subsets are diffeomorphic to $\mathbb{R}^2$ via stereographic projection, the restriction of $\theta$ to either one of $U$ or $L$ is exact. Thus there exist $f_U$ and $f_L$ so that $d f_U = \theta , d f_L = \theta$ on $U,L$ respectively. On the intersection $U\cap L$ we have $d f_U = d f_L$, that is $d(f_U-f_L) = 0$. Since $U \cap L$ is connected, this forces $f_U = f_L + c$ for some constant $c$ on their common intersection. The existence and choice for $f$ are now apparent: let $f=f_U$ on $U$ and $f(\text{south pole}) = f_L(\text{south pole})+c$. - This is correct. –  Branimir Ćaćić Aug 8 '13 at 8:29 If $U$ and $V$ are open subsets of a manifold $M$ with $H^1(U)=0$, $H^1(V)=0$ and $U\cap V$ connected, then $H^1(U\cup V)=0$. It suffices to show that every closed $1$-form on $U\cup V$ is exact. To this end, let $\omega$ be a closed $1$-form on $U\cup V$. Let $\iota_V$ and $\iota_U$ denote the canonical inclusions of $V$ and $U$ into $U\cup V$, respectively. Since the exterior differential commutes with pullback, it follows that $d\iota_{U}^*\omega=\iota_{U}^*d\omega=0$ and, likewise, $d\iota_{V}^*\omega=\iota_{V}^*d\omega=0$. So $\iota_{U}^*\omega$ and $\iota_{V}^*\omega$ are closed. But $H^1(U)$ and $H^1(V)$ are trivial, and hence every closed $1$-form on $U$ and $V$, respectively, is exact. That is to say, there exist functions $f_1:U\to \mathbb{R}$ and $f_2:V\to \mathbb{R}$ so that $df_1=\iota_{U}^*\omega$ and $df_2=\iota_{V}^*\omega$. Now, on $U\cap V$ we have $d(f_1-f_2)=df_1-df_2=0$. Since $U\cap V$ is connected, it follows that $f_1-f_2=c$ for some constant $c$. Thus, the map $F:U\cup V\to \mathbb{R}$ given by $F(x)=\left\{\begin{array}{ll}f_1(x)&\text{ if }x\in U\\ f_2(x)+c&\text{ if }x\in V\end{array}\right.$ is smooth on $U\cup V$, and $dF=\omega$ by construction. So $\omega$ is exact on $U\cup V$, as desired.
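For what it's worth, the answer above is the Mayer–Vietoris argument in disguise (a remark of mine, not part of the original thread). The relevant stretch of the sequence is $H^0(U)\oplus H^0(V) \rightarrow H^0(U\cap V) \rightarrow H^1(U\cup V) \rightarrow H^1(U)\oplus H^1(V)$: when $U\cap V$ is connected, $H^0(U\cap V)\cong\mathbb{R}$ and the difference-of-restrictions map out of $H^0(U)\oplus H^0(V)$ is surjective, since constants on $U$ already hit every constant on $U\cap V$; exactness then forces $H^1(U\cup V)$ to inject into $H^1(U)\oplus H^1(V)=0$.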
2015-01-27 01:16:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9772131443023682, "perplexity": 47.923436896379236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122222204.92/warc/CC-MAIN-20150124175702-00184-ip-10-180-212-252.ec2.internal.warc.gz"}
https://pixel-druid.com/axiom-of-choice-and-zorns-lemma.html
## § Axiom of Choice and Zorn's Lemma

I have not seen this "style" of proof of AoC from Zorn's lemma before, which works by thinking of partial functions $A \rightharpoonup B$ as (monotone) total functions $A \to B \cup \{\bot\}$.

#### § Zorn's Lemma implies Axiom of Choice

If we are given Zorn's lemma and nonempty sets $A_i$, to build a choice function, we consider the collection of partial choice functions $f : \{A_i\}_i \to \bigcup_i A_i \cup \{\bot\}$ such that either $f(A_i) = \bot$ or $f(A_i) \in A_i$. This can be endowed with a partial order / join semilattice structure using the "flat" lattice on values, where $\bot < x$ for all $x$, and $\bot \sqcup x = x$. Every chain of such functions has a least upper bound, since a chain is basically a collection of functions $f_i$ where each function $f_{i+1}$ is "more defined" than $f_i$; the least upper bound is the function whose graph is the union of their graphs. Hence, by Zorn's lemma, we can always get a maximal element $F$, which has a value defined at each $A_i$. Otherwise, if we had $F(A_i) = \bot$ for some $i$, the element would not be maximal, since it is dominated by a strictly larger function that additionally picks some element of the nonempty set $A_i$. Hence, we've constructed a choice function by applying Zorn's Lemma. Thus, Zorn's Lemma implies the Axiom of Choice.
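To make the order-theoretic argument concrete, here is a minimal Python sketch (not from the original post; the sets, names, and helper functions are illustrative assumptions). Partial choice functions are modelled as dicts, being absent from the dict plays the role of $\bot$, and the least upper bound of a chain is the union of graphs:

```python
# Minimal sketch: dict-as-partial-function model over a finite family of sets.

def less_defined(f, g):
    """f <= g in the 'more defined' order: g agrees with f wherever f is defined."""
    return all(k in g and g[k] == f[k] for k in f)

def chain_lub(chain):
    """Least upper bound of a chain of partial functions: union of graphs.
    On a chain the dicts agree on shared keys, so the union is a function."""
    lub = {}
    for f in chain:
        lub.update(f)
    return lub

# A toy family of nonempty sets A_0, A_1, A_2 and a chain of partial choices.
A = {0: {"a", "b"}, 1: {"c"}, 2: {"d", "e"}}
chain = [{}, {0: "a"}, {0: "a", 1: "c"}, {0: "a", 1: "c", 2: "d"}]

assert all(less_defined(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
F = chain_lub(chain)
assert all(F[i] in A[i] for i in A)  # F is a total choice function
print(F)  # {0: 'a', 1: 'c', 2: 'd'}
```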
2022-12-02 22:20:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777806401252747, "perplexity": 165.52717227024394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.70/warc/CC-MAIN-20221202215443-20221203005443-00742.warc.gz"}
https://www.reproducibility.org/RSF/book/tccs/npm/paper_html/node14.html
## Non-stationary Prony method

Equation 27 can be written in matrix form (equation 36), relating the time-shifted copies of the input signal to the time-dependent coefficients. We solve this under-determined linear system using the shaping regularization method; the solution (equation 37) is expressed through a vector and a matrix whose elements (equations 38 and 39) involve the regularization parameter, a shaping operator, and the complex conjugate of the operator. [Equations 36–39 were images in the original page and are not recoverable here.] We can use the conjugate gradient method to find the solution of the linear system.

The NPM (Fomel, 2013) can be summarized as follows: after we decompose the input signal into narrow-band components, we compute the time-frequency distribution of the input signal using the Hilbert transform of the intrinsic mode functions.
2022-08-07 22:45:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8669986128807068, "perplexity": 567.5763602083182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570730.59/warc/CC-MAIN-20220807211157-20220808001157-00737.warc.gz"}
https://www.physicsforums.com/threads/using-snells-law-to-find-the-entering-exiting-rays-angles.681367/
# Using Snell's law to find the entering/exiting rays' angles

1. Mar 27, 2013

### anthesco

1. The problem statement, all variables and given/known data

This is one of my homework questions. I think I have the right answer, but I don't understand how to figure out the "given" angle, so if someone could explain it to me, that would be great! Be sure to look at the picture attached; it's what's tricking me.

A prism whose cross section is shaped like an isosceles right triangle is made from a material with index of refraction n = 1.31. Find the angle θ of the entering/exiting rays that travel parallel to the lower side (in degrees).

2. Relevant equations

Snell's law: $n_1 \sin\theta_1 = n_2 \sin\theta_2$

3. The attempt at a solution

$n_1 = 1.31$, $n_2 = 1$, $\theta_1 = 45°$
$1.31 \sin\theta_1 = 1 \cdot \sin\theta_2$
$\theta_2 = 67.867°$

What I need to know is how to figure out that the first angle is 45°.

Attached: prism 2.png

Last edited: Mar 27, 2013

2. Mar 27, 2013

### invertioN

I don't see any attached pictures.

3. Mar 28, 2013

### Sunil Simha

If θ1 is given and you have to find the angle of the exiting ray w.r.t. the normal, then you only require some geometry here, as the ray travels parallel to the base. (Hint: construct the normals on the faces until they meet.)

4. Mar 28, 2013

### rude man

What are the base angles? Then use plane geometry to figure out theta1, the angle between the normal to the left side of the prism and the horizontal ray inside said prism.

5. Mar 28, 2013

### anthesco

I'm unsure of the base angles; they aren't given to us. I now realize that I use 45° because you're supposed to use the top angle of the triangle (where the right angle is) and divide that by two, but I don't understand why you use that angle over all the other ones...

6. Mar 28, 2013

### rude man

The sum of the angles is 180° and you're given the top one = 90°. Considering the two sides are of equal length, don't you think you can come up with the base angles? Then look at the figure you provided us and you should be able to figure out the angle between the normal to the left side and the flat beam section inside. If n1, n2 and sin(theta2) are given, you know how to compute theta1. Look at the angle between the flat inside beam and the left side of the prism. Look like the left base angle? So what is theta2, since the angle between the prism's left side and its normal is by definition 90 deg? BTW I'm using "1" for air and "2" for inside the prism. It's the logical choice since a beam is coming FROM the air (1) TO the glass (2).

7. Mar 28, 2013

### anthesco

That makes a lot more sense now! The issue I was having was figuring out which angle I needed to designate theta2. Man, it's been a long time since I've used geometry. Thank you!
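For anyone who wants to reproduce the arithmetic, here is a small Python sketch (not part of the original thread) applying Snell's law at the exit face; the 45° internal angle is the one established by the geometry discussed above:

```python
import math

# Snell's law at the exit face of the prism: n1*sin(theta1) = n2*sin(theta2).
n1 = 1.31                   # prism material
n2 = 1.0                    # air
theta1 = math.radians(45)   # angle inside the prism, from the geometry above

theta2 = math.degrees(math.asin(n1 * math.sin(theta1) / n2))
print(f"exit angle: {theta2:.3f} degrees")  # ~67.867
```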
2017-10-20 20:04:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6088799238204956, "perplexity": 650.8227328652486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824325.29/warc/CC-MAIN-20171020192317-20171020212317-00348.warc.gz"}
https://quantnet.com/threads/tutorial-boost-1-37-0-quantlib-0-9-6-and-visual-studio-2008.1084/
# TUTORIAL: Boost 1.37.0 + QuantLib 0.9.6 and Visual Studio 2008

#### David Palmer

I am about 12 years past my last (full-time) programming adventures. Back then it was in C, and I am now looking to jump into Boost (www.boost.org) and QuantLib (www.quantlib.org) using MS Visual C++ Express Edition 2008. I am using this on a Vista machine and have run into some linking issues. I thought that I might start a thread so I and other interested people can go through this together. If you haven't any knowledge of Boost and QuantLib, please take a look.

Dave

#### Andy Nguyen

##### Member

Dave,
Do you have any problem on XP? Do you use the latest releases of Boost and QuantLib? The best way would be to create a simple project and post it here along with the compiling errors.

#### David Palmer

The issue is less XP vs. Vista than that the compiler release has yet to be integrated into BJAM, which is used to build Boost. So as a workaround I edited the config files to build with the default MSVC. All of that eventually worked fine, but any time I try to link using the regex libraries (regular expressions) it throws (MSVC) link errors. Now I could use Cygwin or g++, but then I have to keep using g++. My hope was to use the Express edition. Once Boost gets built and runs fine, I will build QuantLib. Most of the development of these open-source projects is done on Linux or Unix by nature. My understanding and experience with previous versions was that with a lot of digging around you get them built and running in MSVC, so I am sure it will get resolved. The community for both is vibrant and responsive. In the case of C++ 2008, it is just very new. So I have to get past these link errors to move on. I browsed the forum here and did not see anything posted on QuantLib or Boost, though fair to say I did not dig too deep. Hoping to hook up with other interested parties to get through the learning curve together.

#### Vic_Siqiao

##### Member

Do you have any guide on QuantLib? I once tried to use it, but encountered some compiler errors and didn't really know how to proceed; I hope someone develops a guide book or help file like the one in Matlab.

#### David Palmer

The online documentation is fairly robust, but the Windows guide requires some digging to find all the quirks. To use QuantLib, you have to build Boost and then build QuantLib. Boost is an extension to the STL. Some parts of Boost are being adopted by the STL committee for use in upcoming releases. Boost is very robust and constantly evolving. QuantLib is robust and constantly evolving too; I think a Windows group devoted to using QuantLib is in order. I recently purchased "Beyond the C++ Standard Library", a great introduction to the power of Boost.

Dave

#### alain

##### Older and Wiser

I have used Boost in the past. I know there is a way to obtain the full version without the need to build it. I remember I used BJAM two years ago to build Boost, but last year I was able to find an installer for VS2005 somewhere on the Boost website. I think this is the link. I haven't used QuantLib or VS2008 though. Boost might have issues with VS2008. Check the mailing lists and forums of Boost. They are pretty helpful.

#### David Palmer

Thanks Alain, the latest is good for the 2005 compiler, but needs to be built for 2008. Usually there is backward compatibility, so with some simple configuration changes to the BJAM build you are off and running. I was able to build them OK, but for some reason the linker is unhappy.
Code:
libboost_regex-vc80-mt-gd-1_34_1.lib(usinstances.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) "BLAH BLAH BLAH"

The -gd- in the library name signifies a debug library, so it has problems with the debug library. Both the release and debug libraries are there. Not a pro at C++ 2008 yet, so I am stuck.

Dave

#### alain

##### Older and Wiser

(quoting David Palmer's previous post)

Don't assume backward compatibility of MSVC++ with libraries that don't come from MS. There is probably something in there that is breaking the whole thing. I remember in another thread somebody was having problems compiling an Excel add-in, and there was a switch in VC++ to produce some kind of old code. I think you should hit the forums and try to find out. Also, is VS2008 out officially? I haven't checked lately. If it is not, you might be getting into uncharted territory.

#### DaveCompton

##### New Member

Building QuantLib and Boost with MSVC Express Edition 2008

I've been able to build both Boost 1.34.1 and QuantLib 0.9.0 using MSVC Express Edition 2008. It took a few tweaks to both the Boost and QuantLib code. If this is still of interest, let me know and I can post more details.

- Dave

#### alain

##### Older and Wiser

(quoting DaveCompton's previous post)

Any detail is going to be well received.

#### Peter Toke Heden Ahlgren

##### New Member

Hey Dave,

I'm right now trying to make QuantLib (0.8.1) work with Boost on an XP machine, with VS C++ 2008. I am really interested in hearing about your experience. Boost is working fine, but I have some trouble with the compiler version and QuantLib. The file ql/config.msvc.hpp does not "know" about VS C++ versions as new as 2008 (version 9). To solve this I simply copied the code snippet concerning version 9 to the statement producing the error message "unknown compiler". So far, so good. The next step was to update all file paths in the project properties so that all projects in the QuantLib solution could find the relevant headers etc. Now I'm stuck, though. All projects fail to link to the library file QuantLib-vc80-mt-gd-0_8_1.lib. No wonder, because the file does not appear on my hard drive. I suppose I have done something not ideal in the steps described above. Have you any thoughts about what could be my problem? How have you overcome these difficulties?

Best regards and thanks in advance,
Peter

#### DaveCompton

##### New Member

Hi Peter,

I ran into similar problems with QuantLib 0.9.0. The solution to your "fail to link" errors might be as simple as just changing the name of the library generated in the QuantLib project to QuantLib-vc80-mt-gd-0_8_1.lib. This is something of a hack, but since you've gotten this far, it is worth a try.
The Visual Studio setting for the library name is under "Project/Properties/Configuration_Properties/Librarian/General".

In more detail, there are files in both QuantLib and Boost that are compiler-version related. In QuantLib (0.9.0 and 0.8.1) the relevant file is ql/config.msvc.hpp (you saw this one already). I also had to modify QuantLib-0.9.0/ql/utilities/tracing.hpp to get it to compile under VS 2008. The same change may or may not be necessary for QuantLib-0.8.1/ql/utilities/tracing.hpp.

In Boost 1.34.1, the compiler-version-related changes are in:

'boost_1_34_1/boost/config/compiler/visualc.hpp'
'boost_1_34_1/tools/build/v2/tools/msvc.jam'
'boost_1_34_1/boost/signals/detail/named_slot_map.hpp'
'boost_1_34_1/libs/signals/src/named_slot_map.cpp'

to get them to compile under VS 2008. Without knowing which version of Boost you are using, or whether it came pre-compiled or you built it yourself, I can't say too much more about your specific situation. I am going to follow this post shortly (in the next hour or so) with another post with links to much more detailed instructions on how to build both Boost 1.34.1 and QuantLib 0.9.0 using VS 2008. My intention is for these to be instructions that anyone can use to build QuantLib 0.9.0 using VS 2008, starting with just a Windows installation and an internet connection. If you get a chance to try out these links, would you please give me any feedback you might have? Thanks!

- Dave

#### Peter Toke Heden Ahlgren

##### New Member

Hey Dave,

Thanks a lot. I have been away for some days but will now try out your suggestions. Furthermore I will have a look at your links. Give me a couple of hours...

Peter

#### DaveCompton

##### New Member

Hi Peter,

I don't know if using QuantLib version 0.8.1 in particular is important to you or not, but the patch I linked to in my blog will not work for that version. However, the *changes* caused by that patch in version 0.9.0 are the same changes needed for 0.8.1. You can probably just look at the text of the patch and see what needs to be done. I built 0.8.1 last week without any problems.

- Dave

#### John Donahue

##### New Member

I followed Dave's programming blog to get Boost and QuantLib running on XP with VS Express C++ and it works great! Thanks so much for the help! Just make sure that, to get projects built, you have the project properties point to the correct directories and libraries.

Anyone using GNU GSL for calculations?

#### tkhin

##### New Member

Thank you Dave for your guide. I managed to make QuantLib work in Visual Studio 2005; it seems to be OK now.

#### Andy Nguyen

##### Member

I tried this the other day and couldn't get it to work on Visual Studio 2008 Team edition.

Code:
C:\boost_1_34_1>boost_1_34_1___vs2008___patch.txt | patch -p0

didn't seem to be the correct syntax, so I changed it to

Code:
C:\boost_1_34_1>patch -p0 <boost_1_34_1___vs2008___patch.txt
can't find file to patch at input line 25
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:

It points to lines like this: @@ -168,8 +168,8 @@

#### DaveCompton

##### New Member

Hi Andy,

You're right -

Code:
C:\boost_1_34_1___vs2008___patch.txt | patch -p0

is not the correct syntax. The alternative that you tried will not work either. The correct syntax (as specified in the text of the patch) is:

Code:
C:\type boost_1_34_1___vs2008___patch.txt | patch -p0

Also, you may not be getting the correct patch command.
I installed the patch.exe program from Patch for Windows (also specified in the text of the patch) and then invoked it by full path name as follows:

Code:
C:\type boost_1_34_1___vs2008___patch.txt | "c:\Program Files\GnuWin32\bin\patch.exe" -p0

I have not tried Visual Studio Team Edition, but I have tried Visual Studio Express Edition and Visual Studio Professional Edition and found that the instructions I specify work for both.

- Dave
2019-06-26 04:35:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5340445041656494, "perplexity": 2618.5714623218278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000164.31/warc/CC-MAIN-20190626033520-20190626055520-00166.warc.gz"}
http://accesspharmacy.mhmedical.com/content.aspx?bookid=389&sectionid=40142519
Chapter 40

After studying this chapter, you should be able to:

• Know that biological membranes are mainly composed of a lipid bilayer and associated proteins and glycoproteins. The major lipids are phospholipids, cholesterol, and glycosphingolipids.
• Appreciate that membranes are asymmetric, dynamic structures containing a mixture of integral and peripheral proteins.
• Know the fluid mosaic model of membrane structure and that it is widely accepted, with lipid rafts, caveolae, and tight junctions being specialized features.
• Understand the concepts of passive diffusion, facilitated diffusion, active transport, endocytosis, and exocytosis.
• Recognize that transporters, ion channels, the Na+–K+-ATPase, receptors, and gap junctions are important participants in membrane function.
• Know that a variety of disorders result from abnormalities of membrane structure and function, including familial hypercholesterolemia, cystic fibrosis, hereditary spherocytosis, and many others.

Membranes are highly fluid, dynamic structures consisting of a lipid bilayer and associated proteins. Plasma membranes form closed compartments around the cytoplasm to define cell boundaries. The plasma membrane has selective permeabilities and acts as a barrier, thereby maintaining differences in composition between the inside and outside of the cell. The selective permeabilities for substrates and ions are provided mainly by specific proteins named transporters and ion channels. The plasma membrane also exchanges material with the extracellular environment by exocytosis and endocytosis, and there are special areas of membrane structure, gap junctions, through which adjacent cells exchange material. In addition, the plasma membrane plays key roles in cell–cell interactions and in transmembrane signaling.

Membranes also form specialized compartments within the cell. Such intracellular membranes help shape many of the morphologically distinguishable structures (organelles), eg, mitochondria, ER, Golgi, secretory granules, lysosomes, and the nucleus. Membranes localize enzymes, function as integral elements in excitation-response coupling, and provide sites of energy transduction, such as in photosynthesis and oxidative phosphorylation. Changes in membrane components can affect water balance and ion flux, and therefore many processes within the cell. Specific deficiencies or alterations of certain membrane components (eg, caused by mutations in genes encoding membrane proteins) lead to a variety of diseases (see Table 40–7). In short, normal cellular function depends on normal membranes.

Life originated in an aqueous environment; enzyme reactions, cellular and subcellular processes, and so forth have therefore evolved to work in this milieu, encapsulated within a cell.

### The Body's Internal Water Is Compartmentalized

Water makes up about 60% of the lean body mass of the human body and is distributed in two large compartments.

#### Intracellular Fluid (ICF)

This compartment constitutes two-thirds of total body water and provides a specialized environment for the cell (1) to make, store, and utilize energy; (2) to repair itself; (3) to replicate; and (4) to perform cell-specific functions.

#### Extracellular Fluid (ECF)

This compartment contains about one-third of total body water and is distributed between the plasma and ...
2017-03-23 20:07:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20429298281669617, "perplexity": 6728.186161168355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187206.64/warc/CC-MAIN-20170322212947-00100-ip-10-233-31-227.ec2.internal.warc.gz"}
https://socratic.org/questions/given-a-perimeter-of-180-how-do-you-find-the-length-and-the-width-of-the-rectang
# Given a perimeter of 180, how do you find the length and the width of the rectangle of maximum area?

Sep 24, 2016

Given a perimeter of 180, the length and width of the rectangle with maximum area are both 45.

#### Explanation:

Let $x =$ the length and $y =$ the width of the rectangle.

The area of the rectangle is $A = x y$.

$2 x + 2 y = 180$ because the perimeter is $180$.

Solve for $y$:

$2 y = 180 - 2 x$
$y = 90 - x$

Substitute for $y$ in the area equation:

$A = x \left(90 - x\right)$
$A = 90 x - x^2$

This equation represents a parabola that opens down, so the maximum value of the area is at the vertex. Rewriting the area equation in the form $a x^2 + b x + c$:

$A = - x^2 + 90 x$, so $a = -1$, $b = 90$, $c = 0$.

The formula for the $x$ coordinate of the vertex gives

$x = \frac{- b}{2 a} = \frac{- 90}{2 \cdot (- 1)} = 45$

The maximum area is found at $x = 45$ and $y = 90 - x = 90 - 45 = 45$, giving an area of $45 \cdot 45 = 2025$.

Given a perimeter of 180, the dimensions of the rectangle with maximum area are 45 × 45.
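As a quick sanity check (an addition, not part of the original answer), a few lines of Python confirm the vertex result numerically:

```python
# Perimeter 180 => y = 90 - x; area A(x) = x * (90 - x).
# The vertex formula gives x = -b/(2a) = 45; brute force over integers agrees.
best_x = max(range(1, 90), key=lambda x: x * (90 - x))
print(best_x, 90 - best_x, best_x * (90 - best_x))  # 45 45 2025
```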
2022-05-29 05:04:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8172007203102112, "perplexity": 253.52453907480776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00044.warc.gz"}
http://edwardlib.org/api/ed/MAP
# ed.MAP

## Class MAP

Inherits From: VariationalInference

### Aliases:

• Class ed.MAP
• Class ed.inferences.MAP

Defined in edward/inferences/map.py.

Maximum a posteriori. This class implements gradient-based optimization to solve the optimization problem,

$$\min_{z} - \log p(z \mid x).$$

This is equivalent to using a PointMass variational distribution and minimizing the unnormalized objective,

$$- \mathbb{E}_{q(z; \lambda)} [ \log p(x, z) ].$$

#### Notes

This class is currently restricted to optimization over differentiable latent variables. For example, it does not solve discrete optimization. This class also minimizes the loss with respect to any model parameters $$p(z \mid x; \theta)$$.

In conditional inference, we infer $$z$$ in $$p(z, \beta \mid x)$$ while fixing inference over $$\beta$$ using another distribution $$q(\beta)$$. MAP optimizes $$\mathbb{E}_{q(\beta)} [ \log p(x, z, \beta) ]$$, leveraging a single Monte Carlo sample, $$\log p(x, z, \beta^*)$$, where $$\beta^* \sim q(\beta)$$. This is a lower bound to the marginal density $$\log p(x, z)$$, and it is exact if $$q(\beta) = p(\beta \mid x)$$ (up to stochasticity).

#### Examples

Most explicitly, MAP is specified via a dictionary:

```python
qpi = PointMass(params=ed.to_simplex(tf.Variable(tf.zeros(K-1))))
qmu = PointMass(params=tf.Variable(tf.zeros(K*D)))
qsigma = PointMass(params=tf.nn.softplus(tf.Variable(tf.zeros(K*D))))
ed.MAP({pi: qpi, mu: qmu, sigma: qsigma}, data)
```

We also automate the specification of PointMass distributions, so one can pass in a list of latent variables instead:

```python
ed.MAP([beta], data)
ed.MAP([pi, mu, sigma], data)
```

Note that for MAP to optimize over latent variables with constrained continuous support, the point mass must be constrained to have the same support while its free parameters are unconstrained; see, e.g., qsigma above. This is different than performing MAP on the unconstrained space: in general, the MAP of the transform is not the transform of the MAP. The objective function also adds to itself a summation over all tensors in the REGULARIZATION_LOSSES collection.

## Methods

### __init__

__init__(
    latent_vars=None,
    data=None
)

Create an inference algorithm.

#### Args:

• latent_vars: list of RandomVariable or dict of RandomVariable to RandomVariable. Collection of random variables to perform inference on. If list, each random variable will be implicitly optimized using a PointMass random variable that is defined internally with constrained support, has unconstrained free parameters, and is initialized using standard normal draws. If dictionary, each value in the dictionary must be a PointMass random variable with the same support as the key.

### build_loss_and_gradients

build_loss_and_gradients(var_list)

Build loss function. Its automatic differentiation is the gradient of

$$- \log p(x,z).$$

### finalize

finalize()

Function to call after convergence.

### initialize

initialize(
    optimizer=None,
    var_list=None,
    use_prettytensor=False,
    global_step=None,
    *args,
    **kwargs
)

Initialize inference algorithm. It initializes hyperparameters and builds ops for the algorithm's computation graph.

#### Args:

• optimizer: str or tf.train.Optimizer. A TensorFlow optimizer, to use for optimizing the variational objective. Alternatively, one can pass in the name of a TensorFlow optimizer, and default parameters for the optimizer will be used.
• var_list: list of tf.Variable. List of TensorFlow variables to optimize over.
Default is all trainable variables that latent_vars and data depend on, excluding those that are only used in conditionals in data.
• use_prettytensor: bool. True if aim to use PrettyTensor optimizer (when using PrettyTensor) or False if aim to use TensorFlow optimizer. Defaults to TensorFlow.
• global_step: tf.Variable. A TensorFlow variable to hold the global step.

### print_progress

print_progress(info_dict)

Print progress to output.

### run

run(
    variables=None,
    use_coordinator=True,
    *args,
    **kwargs
)

A simple wrapper to run inference.

1. Initialize algorithm via initialize.
2. (Optional) Build a TensorFlow summary writer for TensorBoard.
3. (Optional) Initialize TensorFlow variables.
4. (Optional) Start queue runners.
5. Run update for self.n_iter iterations.
6. While running, print_progress.
7. Finalize algorithm via finalize.
8. (Optional) Stop queue runners.

To customize the way inference is run, run these steps individually.

#### Args:

• variables: list. A list of TensorFlow variables to initialize during inference. Default is to initialize all variables (this includes reinitializing variables that were already initialized). To avoid initializing any variables, pass in an empty list.
• use_coordinator: bool. Whether to start and stop queue runners during inference using a TensorFlow coordinator. For example, queue runners are necessary for batch training with file readers.
• *args, **kwargs: Passed into initialize.

### update

update(feed_dict=None)

Run one iteration of optimization.

#### Args:

• feed_dict: dict. Feed dictionary for a TensorFlow session run. It is used to feed placeholders that are not fed during initialization.

#### Returns:

dict. Dictionary of algorithm-specific information. In this case, the loss function value after one iteration.
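As a hedged illustration of the customization note above, here is a sketch of running the documented steps individually instead of calling run. The latent variable beta and the data dict are assumed to be defined elsewhere, and n_iter is the iteration-count option accepted by the parent class's initialize:

```python
import tensorflow as tf
import edward as ed

# Assumes `beta` (a latent RandomVariable) and `data` (a dict mapping observed
# random variables to realized values) are defined elsewhere.
inference = ed.MAP([beta], data)
inference.initialize(n_iter=500)

tf.global_variables_initializer().run()  # within Edward's interactive session
for _ in range(inference.n_iter):
    info_dict = inference.update()       # one optimization step
    inference.print_progress(info_dict)  # reports the current loss value
inference.finalize()
```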
2020-04-02 06:20:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3982234299182892, "perplexity": 6165.382024986117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00033.warc.gz"}
https://www.physicsforums.com/threads/if-a-i-j-k-and-b-i-j-k-what-will-be-an-angle.951611/
# If A = i + j + k and B = -i + -j + -k, what angle will (A-B) make with A?

## Homework Statement

If A = i + j + k and B = -i + -j + -k, what angle will (A-B) make with A? What is the concept behind it? Could you please explain with a diagram? (This is from the part on scalars and vectors.)

## The Attempt at a Solution

If we subtract (A-B) we get '0' because 1-1 = 0. Am I right? Please check.

jbriggs444 (Homework Helper): What is 1 - (-1)?

Einj: When you figure out where you made a mistake by answering the question jbriggs posted, use the dot product between (A-B) and A to figure out the angle: $(\vec{A}-\vec{B})\cdot\vec{A}=|\vec{A}-\vec{B}|\,|\vec{A}|\cos\phi$, where $\phi$ is the angle.

OP: 1 - (-1) = 2

jbriggs444: Good. So if A = i + j + k and B = -i + -j + -k, what does that make A - B?

OP: If we add A + (-B) = A - B, so A + (-B) = i + j + k + (-i + -j + -k) = 0

Ray Vickson (Homework Helper, Dearly Missed): You cannot possibly get A - B = 0 unless A = B. Do you have A = B?

OP: No, I don't have A = B.

jbriggs444: If B = -i + -j + -k, what is (-B)?

OP: -B = -(-i + -j + -k) = i + j + k. So what to do next?

jbriggs444: So work that last bit again. A - B = A + (-B). What is A - B?

FactChecker (Gold Member): Just double-check your signs and account for double negatives correctly.

OP: A + (-B) = i + j + k + i + j + k = 2i + 2j + 2k. What to do next?

(another member): You asked for a geometrical representation; do you understand how to graph a vector? A vector is different from a scalar because, unlike a scalar, vectors have both a magnitude (length) and a direction (angle). Graphing the point A(1,2) is simple enough: A lies a distance 1 in the positive x direction and 2 in the positive y direction. With the vector <1,2>, the values 1 and 2 act as weights on the standard unit vectors i = <1,0> and j = <0,1>, so A would be the vector sum 1*<1,0> + 2*<0,1>. If you can begin by drawing these two vectors in the x-y plane, then you will have a better understanding of what the geometrical representation of a vector is.

OP: I know how to draw, but this is not my question. My question is: if A = i + j + k and B = -i + -j + -k, then what angle will (A-B) make with A? I have done so far as directed above.

haruspex (Homework Helper, Gold Member): Do you know how to find the angle between two vectors? What do you know about dot products?

FactChecker (Gold Member): Can you express that answer (2i + 2j + 2k) in terms of A? That should tell you something about the angle between it and A without needing a diagram.

Einj: If you know how to draw these vectors, then I don't see how you can't find the angle. Also, I already answered your question about the angle in the second post. All you have to do is plug in the numbers.

Ray Vickson: You say you know how to draw the vectors A and (A-B). Have you actually done the drawings? If you had done that (correctly), the answer would be obvious.

(another reply): You need to use vector addition, not arithmetic addition. You need to find the directions of vectors A and B in order to solve the problem. The magnitudes are irrelevant.
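To verify the conclusion numerically, here is a small NumPy sketch (an addition, not from the thread): since A - B = 2A, the dot-product formula gives cos φ = 1, i.e. an angle of 0°.

```python
import numpy as np

A = np.array([1.0, 1.0, 1.0])
B = np.array([-1.0, -1.0, -1.0])

D = A - B  # = 2A = [2, 2, 2]
cos_phi = np.dot(D, A) / (np.linalg.norm(D) * np.linalg.norm(A))
phi = np.degrees(np.arccos(np.clip(cos_phi, -1.0, 1.0)))
print(D, phi)  # [2. 2. 2.] 0.0
```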
2020-03-31 19:24:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.651924729347229, "perplexity": 586.1814632560182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370503664.38/warc/CC-MAIN-20200331181930-20200331211930-00492.warc.gz"}
http://mathhelpforum.com/advanced-statistics/141637-probability-lottery.html
1. ## Probability, lottery...

A lottery ticket (coupon?) with a row of 3 numbers. Those three numbers can vary from 0-9. A ticket (coupon) might be 5, 6, 7. A number is picked randomly from among the numbers 0-9. The number that was picked is then put back into the bowl. The lottery ends when it has three unequal numbers; for instance, it can pull out 5 numbers like this: "1, 3, 3, 1, 5", which leaves the lottery result with 1, 3, 5 as the winning numbers. What is the probability of winning when you have delivered a coupon consisting of the numbers 2, 3, 4?

My shot at it:

The first number is always a 1/9 chance. The second number is a 1/9 * 1/9 chance. The third number is a 1/9 * 1/9 * 1/9 chance. This probability would be right if the lottery could have 3 numbers, no matter if a number got picked twice or "thrice". Need a bit of help.

2. I believe it's 1/720. There are 10 possible numbers. You need to match all three. The odds of the first match are 1/10. However, after that they will draw a new number if the first number comes up again. So for the second drawing there are basically only 9 possibilities. So the odds of hitting the second number are 1/9. Similarly for the third it is 1/8. So your chances of hitting all three are 1/10 * 1/9 * 1/8 = 1/720.

3. Haha, yes, actually, I did dream about this answer: that I should just multiply the latter(?) by reducing the 10 to 9, then to 8... And yes, my bad: 1/10 possibilities on the first, 1/9 on the second one... Thanks
2017-03-28 20:26:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8096883296966553, "perplexity": 766.8461171141075}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189884.21/warc/CC-MAIN-20170322212949-00567-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/about-degeneracy-in-wikipedia.620152/
1. Jul 11, 2012

### hokhani

In Wikipedia this sentence is written about degeneracy: "In physics, two or more different quantum states are said to be degenerate if they are all at the same energy level." Statistically this means that they are all equally probable of being filled. Do you agree with the bold statement?

2. Jul 11, 2012

### Einj

I think this could be correct, since the Boltzmann distribution is given by:
$$p(E_i)=\frac{1}{Z}e^{-E_i/kT}$$
so the probability density of finding a particle in two states with the same energy should be the same.

3. Jul 11, 2012

### Bill_K

If they are mutually accessible, and the system is in thermodynamic equilibrium. This is the "Postulate of Equal a Priori Probability."

4. Jul 11, 2012

### Amok

To me this makes complete sense if you consider that the system is in thermodynamic equilibrium. Think of how it is less likely that particles occupy higher energy levels than lower ones.
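As a tiny numerical illustration (an addition, not from the thread): two states with the same energy receive identical Boltzmann weights, whatever the temperature, so their equilibrium occupation probabilities are equal.

```python
import math

kT = 1.0  # arbitrary units
energies = {"state A": 2.0, "state B": 2.0, "state C": 5.0}  # A and B degenerate

weights = {s: math.exp(-E / kT) for s, E in energies.items()}
Z = sum(weights.values())                      # partition function
probs = {s: w / Z for s, w in weights.items()}
print(probs)  # A and B have equal probability; C is less likely
```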
2018-01-23 20:01:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7188238501548767, "perplexity": 426.0704881258296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892238.78/warc/CC-MAIN-20180123191341-20180123211341-00504.warc.gz"}
https://gamedev.stackexchange.com/questions/191295/photon-transforms-syncing-other-transforms
# Photon Transforms syncing other transforms

I have a problem wherein this happens (screenshot in the original post): player objects end up matched to the wrong transforms across clients. It's problematic because I'm unfamiliar with how to set the same player value on the same transform over the network. Is there a way that, when I get the list of transforms, it comes out in the same order on other clients?

My current setup is that each client creates its own My Player Prefab with a Player Info script that stores the Photon Player. The Spawn Manager listens until both player prefabs have been created. From there, they run this code:

```
public void PlayerObjectReady(){
    playerObjectReady++;
    int totalPlayerCount = PhotonNetwork.PlayerList.Length;
    // check if we've readied all player objects in all clients
    if(playerObjectReady == totalPlayerCount){
        photonView.RPC("SpawnPlayer", RpcTarget.AllBufferedViaServer);
    }
}

[PunRPC]
void SpawnPlayer(){
    // update local list for all clients
    infos = FindObjectsOfType<PlayerInfo>(); // this line causes problems: the order is not deterministic
    // update player references so that each character is owned by their player
    for (int i = 0; i < infos.Length; i++){
        infos[i].Player = PhotonNetwork.PlayerList[i];
        if(PhotonNetwork.IsMasterClient)
            infos[i].transform.position = spawnPoints[i].position;
    }
    TurnManager.Instance.AddPlayers(infos);
    // tell everyone i'm ready
    photonView.RPC("ImReady", RpcTarget.AllBufferedViaServer, PhotonNetwork.NickName);
}
```

The main problem here is that the player prefabs are not always in the order that I want them to be. Plus, their positions often clash together whenever Photon Transform View is enabled. And yes, the players here are right beside each other when they're actually spread across a small map. Let me know a possible solution, because I intend to use Photon Transform View. For now, disabling Photon Transform View is my current band-aid solution, which only works when spawning and not when moving the players.

## 1 Answer

I got my own solution after testing for hours. Essentially, each My Player prefab has its own PhotonView and is instantiated at the start by the player who should own it, so I can access the PhotonView's Owner.ActorNumber, which provides the ID needed to sort my own list of Transforms/GameObjects/scripts consistently on every client. See my code below for reference:

```
[PunRPC]
void SpawnPlayer(){
    // update local list for all clients
    infos = FindObjectsOfType<PlayerInfo>();
    // sort by the owning player's actor number so every client ends up with the same order
    Array.Sort(infos, (PlayerInfo a, PlayerInfo b) => a.photonView.Owner.ActorNumber.CompareTo(b.photonView.Owner.ActorNumber));
    // update player references so that each character is owned by their player
    for (int i = 0; i < infos.Length; i++){
        infos[i].Player = PhotonNetwork.PlayerList[i];
        if(PhotonNetwork.IsMasterClient)
            infos[i].transform.position = spawnPoints[i].position;
    }
}
```
2021-09-18 11:56:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22467228770256042, "perplexity": 3434.7988774500295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056392.79/warc/CC-MAIN-20210918093220-20210918123220-00251.warc.gz"}
https://www.bartleby.com/questions-and-answers/find-the-missing-numerator-that-will-make-the-rational-expressions-equivalent.-8-2x4-16x4x4-82x416x4/bf3653a7-b0f5-4ae2-92b3-6e5482d3999b
Question

Find the missing numerator that will make the rational expressions equivalent.

$$\frac{8}{2(x-4)} = \frac{?}{16(x-4)(x+4)}$$
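For readers who want to verify the equivalence, a short SymPy sketch (an addition, not part of the original page) recovers the missing numerator by multiplying the left side by the target denominator:

```python
import sympy as sp

x = sp.symbols("x")
lhs = sp.Rational(8) / (2 * (x - 4))
target_denominator = 16 * (x - 4) * (x + 4)

numerator = sp.simplify(lhs * target_denominator)
print(sp.factor(numerator))  # 64*(x + 4)
```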
2020-11-27 00:29:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688761830329895, "perplexity": 5339.128801694623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189030.27/warc/CC-MAIN-20201126230216-20201127020216-00597.warc.gz"}
http://www.caiag.kg/phocadownload/Dep3/IntrodactionPython/4.IntroductionToDataCubes.html
# Introduction to Data Cubes

In this notebook, accessing and manipulating Data Cube data will be explained. This will give you a basic understanding of generic functions within the Data Cube before more useful use cases are explored in the next notebook. Hopefully at that point you will be able to write your own functions to use the Data Cube to help with analysis of your own.

In a Python script, the datacube module is imported in the same way as other modules, using the import statement. The database which organises and stores all of the data is set up before any data is put into the Data Cube, so importing the datacube module is enough to connect to it. Once imported, we need to actually initialise the Data Cube, and this is done by constructing a Datacube object (see the sketch after this section).

The variable dc is the initialised Data Cube instance, and this is a class from which many methods can be run, allowing us to load data from the Data Cube into Python in order to do useful things with it. Additionally, we can see what data is in the Data Cube, or which satellites/sources of data are present. We will look at some of these functions to understand the data structures of the Data Cube, so that when it comes to loading data into the Python editor, we are more prepared to use it.

The first thing to look at is the "products" contained within the Data Cube. A product relates to a type of data contained within the Data Cube. For example, a product could be Sentinel-2 Level 2 data (atmospherically corrected data), or it could be decadal indices data, such as NDVI, NDSI or VHI. The list_products command (also shown in the sketch below) lists all the product types in the Data Cube, as well as some information about these products.

Each of these data products will have one or more data bands included in the Data Cube as well, and we can check what bands, or measurements, each product type has using the list_measurements command (again, see the sketch below). This lists a lot of useful information, including the names of each band, which is useful for loading the data from the Data Cube. Additional information, such as the data type of the bands and their no-data values, is also listed.

Knowing all this, it is now possible to load some data from the Data Cube into Python. This is done using the dc.load() function. There are a number of things which must be specified in order to load data in, as well as some things which are useful to specify. Required arguments are the product type (for example s2_10m for Sentinel-2 10m data), the output_crs of the data (the projection it is loaded in; the standard projection in this Data Cube is WGS84 UTM 43N, or EPSG:32643, but many different projections can be used) and the resolution of the data, in terms of the size of the x and y pixels. If you are loading data in a UTM projection, this is in metres, so to load in Sentinel data at 10m resolution, the argument would be resolution=[10,10]. Additional information which can be supplied is the latitude and longitude of the data area you are interested in loading, as well as the time period you are interested in. While it is not required to enter this information when loading data, it is highly recommended, as there is a lot of data in the Data Cube, so trying to load large amounts of data this way will either cause your program to run very slowly or, more likely, crash completely.
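The notebook's code cells did not survive extraction, so the following is a hedged reconstruction of the commands described above, using the standard Open Data Cube API; the app label is an illustrative assumption:

```python
import datacube

# Initialise the Data Cube instance; the app string is just a session label.
dc = datacube.Datacube(app="introduction-to-data-cubes")

# List all product types in the Data Cube, with information about each.
print(dc.list_products())

# List the bands/measurements of each product, including their names,
# data types, and no-data values.
print(dc.list_measurements())
```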
Sometimes you might want to load a lot of data in for processing, and there are ways to do this which will be explained in later notebooks. It has to be done very carefully, being mindful of the memory and processing requirements of the data at all times. However, a small amount of data can be loaded directly, as in the first form of the sketch below.

Passing every argument separately like this is a very long and difficult-to-read way of loading the data into Python. There is a more readable way, which is to define many of the arguments beforehand in something called a dictionary. The entire contents of this dictionary can be passed as arguments into the dc.load() function by putting two asterisks ** before the dictionary name, which unpacks them all; the second form of the sketch below shows this. It is much easier to read than filling in all the arguments separately. It also makes it easier to change the area you are interested in getting data for, or the data bands if you are loading data from multiple products, as you only need to change values in the dictionary rather than in every dc.load() call.
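Again, the original cells are missing; this sketch reconstructs both forms of the call. The product name s2_10m, projection, and pixel size come from the text above, while the band names, coordinates, and dates are illustrative assumptions:

```python
# Long form: every argument passed directly to dc.load().
data = dc.load(product="s2_10m",
               measurements=["red", "green", "blue"],      # assumed band names
               latitude=(42.8, 42.9),                      # illustrative area
               longitude=(74.5, 74.7),
               time=("2020-06-01", "2020-06-30"),          # illustrative period
               output_crs="EPSG:32643",
               resolution=(-10, 10))  # many ODC installs expect a negative y step

# More readable form: collect the arguments in a dictionary, then unpack
# them into dc.load() with **.
query = {
    "product": "s2_10m",
    "measurements": ["red", "green", "blue"],
    "latitude": (42.8, 42.9),
    "longitude": (74.5, 74.7),
    "time": ("2020-06-01", "2020-06-30"),
    "output_crs": "EPSG:32643",
    "resolution": (-10, 10),
}
data = dc.load(**query)
```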
2022-01-18 20:31:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33497780561447144, "perplexity": 450.4083502239869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00510.warc.gz"}
https://physics.stackexchange.com/questions/627411/physics-models-using-non-lorentzian-indefinite-metrics
Physics models using (non-Lorentzian) indefinite metrics [duplicate] I would like to understand better the role of indefinite metrics in physics. As far as I know, a Lorentzian metric is the natural setting for Einstein's Relativity Theory. Somewhere I read about theories modelling reality using more than one time direction, i.e. a metric tensor of signature $(p,q)$ where $p,q\geq2$. The question is the following: which theories actually need an indefinite metric of non-Lorentzian signature? • Possible duplicates: physics.stackexchange.com/q/43322/2451 , physics.stackexchange.com/q/43630/2451 and links therein. Apr 6 at 12:06 • @Qmechanic thanks, but my question is more about a list of theories that need a signature $(p,q)$ with $p,q\geq2$. One of your questions deals with the "intuition of more times" and the other with a signature $(2,2)$ or something related to general relativity. I would like to have a more general list, without specifying "why" more time directions are needed, nor "what their meaning is". I could not find a "full" list in those questions. Apr 6 at 12:37
2021-11-30 00:25:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6659890413284302, "perplexity": 707.6626616825698}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00006.warc.gz"}
http://mathhelpforum.com/math-topics/147749-single-vectors.html
1. ## Single Vectors Use the rectangular prism to determine a single vector equivalent to each of the following vector combinations: Can you check my work please? 1) HE - GA + FC = HC 2) EA + HE - HC - CA = 0 vector 3) AE + AD + AE=AG 4) -AE + DA = HA 5) AB + BC + CG = AG 2. Originally Posted by john-1 Use the rectangular prism to determine a single vector equivalent to each of the following vector combinations: Can you check my work please? 1) HE - GA + FC = HC ... I get AC 2) EA + HE - HC - CA = 0 vector ... agree 3) AE + AD + AE=AG ... 2AE + AD ? check the problem statement again 4) -AE + DA = HA ... agree 5) AB + BC + CG = AG ... agree ... 3. Hi Skeeter! Thanks for the input. Corrections: 3) AB + AD + AE = AG .. is this correct? Also, how did you get 1) as AC? I'm having trouble seeing that as the answer 4. Originally Posted by john-1 Hi Skeeter! Thanks for the input. Corrections: 3) AB + AD + AE = AG .. is this correct? Also, how did you get 1) as AC? I'm having trouble seeing that as the answer (1) HE - GA + FC = HE + AG + FC = DA + AG + FC = DG + FC = AF + FC = AC (3) agree
2018-04-25 13:17:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071635365486145, "perplexity": 3960.0921447112532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00292.warc.gz"}
https://www.jiskha.com/questions/1203160/how-do-i-rewrite-this-equation-using-radicals-instead-of-rational-exponents-i-rewrote-the
# algebra

How do I rewrite this equation using radicals instead of rational exponents? I rewrote the whole equation in simplest form: 25x^(11/10)·4√y. How do I rewrite it with radical exponents?

1. what is that 4 doing there? is 4√y supposed to mean the 4th root of y? x^(11/10) is 10th-root(x^11)
2021-09-21 17:58:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194164633750916, "perplexity": 3293.7259492824724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00039.warc.gz"}
https://mathematica.stackexchange.com/questions/103960/gradient-of-interpolated-3d-data-in-mathematica-9
# Gradient of interpolated 3D data in mathematica 9

tt = Flatten[Table[{x, y, z, btot[x, y, z]}, {x, -1, 1, 0.1}, {y, -1, 1, 0.1}, {z, -1, 1, 0.1}], 2]; ff = Interpolation[tt]

Up to here it works fine, returning the values of the interpolated function at various {x,y,z} points. Then I want to find the gradient of this interpolated function. But when I use ffd[x_,y_,z_]:= D[ff[x,y,z],{{x,y,z}}] I am not getting the gradient.

• closely related: mathematica.stackexchange.com/q/102812/5478 – Kuba Jan 13 '16 at 11:21 • All you have to do is replace the := in your code with = and it should work – Jason B. Jan 13 '16 at 11:26 • It didn't even work by replacing := with =. – Hippo Jan 13 '16 at 11:31 • @SamridhiGambhir "didn't work" or "not getting the gradient" are vague statements which won't help you getting the answer fast. – Kuba Jan 13 '16 at 11:34 • @SamridhiGambhir Try Remove[ffd] before trying with =. – Coolwater Jan 13 '16 at 11:36

With ffd[x_,y_,z_]:= D[ff[x,y,z],{{x,y,z}}] the numerical values of x, y and z are substituted as arguments first, causing differentiation with respect to numbers, i.e. nonsense. Moreover, you are using SetDelayed, which differentiates on every call, when it should really differentiate once and for all. The solution to both problems is replacing SetDelayed with Set: ffd[x_,y_,z_]= D[ff[x,y,z],{{x,y,z}}]. When you define your function with SetDelayed, then e.g. ffd[.5, .5, .5] is really D[ff[.5, .5, .5], {{.5, .5, .5}}].

To avoid scoping issues, e.g. with x = 5; ffd[x_, y_, z_] = D[ff[x, y, z], {{x, y, z}}], you can use ffd = Evaluate[D[ff[#, #2, #3], {{#, #2, #3}}]] &. The & creates a Function whose body is held, so we have to use Evaluate to force the gradient to be computed once, rather than recomputed on each call.
2020-02-20 19:09:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33455538749694824, "perplexity": 4008.175287561605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145260.40/warc/CC-MAIN-20200220162309-20200220192309-00509.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Ahossain.md-anwar
# zbMATH — the first resource for mathematics ## Hossain, Md. Anwar Compute Distance To: Author ID: hossain.md-anwar Published as: Hossain, M. A.; Hossain, Md. Anwar; Hossain, M.; Hossain, M. Anwar; Hossain, Md Anwar; Hossain, Md. A.; Hossain Documents Indexed: 84 Publications since 1981, including 1 Book all top 5 #### Co-Authors 1 single-authored 13 Pop, Ioan 11 Siddiqa, Sadia 7 Molla, Md. Mamun 7 Rees, D. Andrew S. 7 Saha, Suvash Chandra 6 Asghar, Saleem 6 Gorla, Rama Subba Reddy 4 Begum, Naheed 4 Mandal, A. C. 3 Alim, Md Abdul 3 Chakraborty, Subenoy 3 Chowdhury, Mustafa Kamal 3 Das, Shyam 3 Hussain, Sharmina 3 Paul, Sreebash C. 3 Rahaman, Farook 3 Roy, Nepal Chandra 3 Takhar, H. S. 3 Tokhi, M. Osman 2 Abrar, M. N. 2 Alam, K. C. A. 2 Anghel, Marian 2 Baxter, M. J. 2 Begam, N. 2 Begum, R. A. 2 Munir, M. S. 2 Taher, Muhamad Asgher 1 Akhter, C. 1 Al-Mamun, Mohammed A. 1 Al-Mdallal, Qasem M. 1 Angel, Mauricio 1 Arbad, O. 1 Banu, F. 1 Bass, Rosemary 1 Bhowmick, Sidhartha 1 Brown, Louise J. 1 Chen, Nan 1 Chowdhury, Md. M. K. 1 Fall, Charles 1 Farid, Dewan Md 1 Gu, Yuantong 1 Kabir, Sohag 1 Kawahashi, M. 1 Kutubuddin, M. 1 Mahfooz, M. 1 Mahfooz, S. M. 1 Mahmud, Shohel 1 Mustafa, Naeem 1 Na, Tsung-Yen 1 Nag, Preetom 1 Noor, Saima 1 Paul, Manosh C. 1 Rahman, A. F. M. A. 1 Ravenhil, L. 1 S., Mustapha 1 Saha, Litan Kumar 1 Shaheed, M. Hasan 1 Shayo, L. K. 1 Wagstaff, L. 1 Wang, Fanglin 1 Zeb, Salman all top 5 #### Serials 9 Acta Mechanica 8 International Journal of Numerical Methods for Heat & Fluid Flow 5 Astrophysics and Space Science 5 Applied Mechanics and Engineering 3 International Journal of Heat and Mass Transfer 3 Applied Mathematics and Computation 3 Applied Mathematics and Mechanics. (English Edition) 3 Ganit 3 Mathematical and Computer Modelling 3 Mathematical Problems in Engineering 3 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 3 International Journal of Nonlinear Sciences and Numerical Simulation 3 Nonlinear Analysis. Modelling and Control 2 Archives of Mechanics 2 International Journal of Engineering Science 2 Engineering Computations 2 Journal of Theoretical Biology 1 Applied Scientific Research 1 Computers & Mathematics with Applications 1 International Journal of Non-Linear Mechanics 1 Indian Journal of Pure & Applied Mathematics 1 Magnetohydrodynamics 1 Wave Motion 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 1 Bulletin of the Calcutta Mathematical Society 1 Statistica 1 Bulgarian Journal of Physics 1 Journal of Parallel and Distributed Computing 1 Applied Mathematical Modelling 1 IEE Proceedings. Control Theory and Applications 1 European Journal of Mechanics. B. Fluids 1 International Journal of Applied Mechanics and Engineering 1 Applications and Applied Mathematics 1 Applied and Computational Mathematics 1 Malaysian Journal of Mathematical Sciences 1 AMM. Applied Mathematics and Mechanics. 
(English Edition) 1 Advanced Textbooks in Control and Signal Processing all top 5 #### Fields 72 Fluid mechanics (76-XX) 43 Classical thermodynamics, heat transfer (80-XX) 4 Numerical analysis (65-XX) 3 Computer science (68-XX) 3 Optics, electromagnetic theory (78-XX) 3 Relativity and gravitational theory (83-XX) 3 Systems theory; control (93-XX) 2 Partial differential equations (35-XX) 2 Mechanics of deformable solids (74-XX) 2 Biology and other natural sciences (92-XX) 1 Functions of a complex variable (30-XX) 1 Statistics (62-XX) 1 Astronomy and astrophysics (85-XX) #### Citations contained in zbMATH Open 58 Publications have been cited 221 times in 142 Documents Cited by Year The effect of radiation on free convection from a porous vertical plate. Zbl 0953.76083 Hossain, M. A.; Alim, M. A.; Rees, D. A. S. 1999 Combined heat and mass transfer in natural convection flow from a vertical wavy surface. Zbl 0934.76085 Hossain, M. A.; Rees, D. A. S. 1999 Natural convection of fluid with variable viscosity from a heated vertical wavy surface. Zbl 0996.76081 Hossain, M. A.; Kabir, S.; Rees, D. A. S. 2002 Free convection from a vertical permeable circular cone with non-uniform surface temperature. Zbl 0995.76085 Hossain, M. A.; Paul, S. C. 2001 Natural convection flow of a viscous fluid about a truncated cone with temperature-dependent viscosity. Zbl 0959.76083 Hossain, M. A.; Munir, M. S.; Takhar, H. S. 2000 Natural convection flow from a vertical permeable flat plate with variable surface temperature and species concentration. Zbl 0982.76082 Hussain, S.; Hossain, M. A.; Wilson, M. 2000 Magnetohydrodynamic boundary layer flow and heat transfer on a continuous moving wavy surface. Zbl 0877.76079 Hossain, M. A.; Pop, I. 1996 Natural convection flow from an isothermal horizontal circular cylinder in presence of heat generation. Zbl 1213.76176 Molla, Md. Mamun; Hossain, Md. Anwar; Paul, Manosh C. 2006 Surface-radiation effect on natural convection flow in a fluid-saturated non-Darcy porous medium enclosed by non-isothermal walls. Zbl 1356.76344 Hossain, M. A.; Saleem, M.; Gorla, R. S. R. 2013 Free convection-radiation interaction from an isothermal plate inclined at a small angle to the horizontal. Zbl 0910.76081 Hossain, M. A.; Rees, D. A. S.; Pop, I. 1998 Joule heating effect on magnetohydrodynamic mixed convection boundary layer flow with variable electrical conductivity. Zbl 1356.76432 Hossain, Md Anwar; Gorla, Rama Subba Reddy 2013 Natural convection flow of a viscous fluid about a truncated cone with temperature-dependent viscosity and thermal conductivity. Zbl 1006.76556 Hossain, M. A.; Munir, M. S. 2001 Entropy generation in Marangoni convection flow of heated fluid in an open ended cavity. Zbl 1227.80034 Saleem, M.; Hossain, Md. Anwar; Mahmud, Shohel; Pop, Ioan 2011 Effect of thermal radiation on natural convection over cylinders of elliptic cross-section. Zbl 0912.76084 Hossain, M. A.; Alim, M. A.; Rees, D. A. S. 1998 Two-phase natural convection flow of a dusty fluid. Zbl 1356.76330 Siddiqa, Sadia; Hossain, M. Anwar; Saha, Suvash C. 2015 Unsteady hydromagnetic free convection flow past an accelerated infinite vertical porous plate. Zbl 0565.76117 Hossain, M. A.; Mandal, A. C. 1985 Thermal radiation effects on free convection over a rotating axisymmetric body with application to a rotating hemisphere. Zbl 1142.76485 Hossain, M. A.; Anghel, M.; Pop, I. 2002 Natural convection flow over an inclined flat plate with internal heat generation and variable viscosity. 
Zbl 1205.76249 Siddiqa, S.; Asghar, S.; Hossain, M. A. 2010 Natural convection flow with combined buoyancy effects due to thermal and mass diffusion in a thermally stratified media. Zbl 1054.76077 Saha, S. C.; Hossain, M. A. 2004 Unsteady mixed-convection boundary layer flow along a symmetric wedge with variable surface temperature. Zbl 1213.76175 Hossain, Md. Anwar; Bhowmick, Sidhartha; Gorla, Rama Subba Reddy 2006 Effect of Hall current on MHD natural convection flow from vertical permeable flat plate with uniform surface heat flux. Zbl 1237.76213 Saha, L. K.; Siddiqa, S.; Hossain, M. A. 2011 Effects of mass transfer on the unsteady free convection flow past an accelerated vertical porous plate with variable suction. Zbl 0576.76084 Hossain, M. A.; Begum, R. A. 1985 Influence of fluctating surface temperature and concentration on natural convection flow from a vertical flat plate. Zbl 1040.76058 Hossain, M. A.; Hussain, S.; Rees, D. A. S. 2001 Radiation effect on free convection laminar flow along a vertical flat plate with streamwise sinusoidal surface temperature. Zbl 1217.76070 Molla, Md. Mamun; Saha, Suvash C.; Hossain, Md. Anwar 2011 Natural convection flow from a horizontal circular cylinder with uniform heat flux in presence of heat generation. Zbl 1205.76246 Molla, Md. Mamun; Paul, Sreebash C.; Hossain, Md. Anwar 2009 Effect of viscous dissipation on mixed convection flow of water near its density maximum in a rectangular enclosure with isothermal wall. Zbl 1182.76950 Hossain, Md. Anwar; Gorla, Rama Subba Reddy 2006 Effects of chemical reaction, heat and mass diffusion in natural convection flow from an isothermal sphere with temperature dependent viscosity. Zbl 1182.76962 Molla, Md. Mamun; Hossain, Md. Anwar 2006 Magnetohydrodynamic free convection along a vertical wavy surface. Zbl 0890.76076 Hossain, M. A.; Alam, K. C.; Pop, I. 1996 Mixed convection flow of micropolar fluid over an isothermal plate with variable spin gradient viscosity. Zbl 0936.76080 Hossain, M. A.; Chowdhury, M. K. 1998 Effect of radiation on mixed convection boundary layer flow along a vertical cylinder. Zbl 0935.76081 Hossain, M. A.; Alim, M. A.; Takhar, H. S. 1998 Lyra geometry inhomogeneous cosmological models. Zbl 1042.83524 Rahaman, F.; Chakraborty, S.; Das, S.; Mukherjee, R.; Hossain, M.; Begam, N. 2003 The skin friction in the unsteady free-convection flow past an accelerated plate. Zbl 0612.76093 Hossain, M. A.; Shayo, L. K. 1986 Effect of heat transfer on compressible boundary layer flow past a sphere. Zbl 0957.76074 Hossain, M. A.; Pop, I. 1999 Natural convection flow of micropolar fluid in a rectangular cavity heated from below with cold sidewalls. Zbl 1225.76267 Saleem, M.; Asghar, S.; Hossain, M. A. 2011 Natural convection flow of second-grade fluid along a vertical heated surface with variable heat flux. Zbl 1202.80008 Mustafa, Naeem; Asghar, S.; Hossain, M. A. 2010 Thermal radiation effects on hydromagnetic mixed convection flow along a magnetized vertical porous plate. Zbl 1204.76038 Ashraf, Muhammad; Asghar, S.; Hossain, Md. Anwar 2010 Conjugate natural convection from a vertical plate fin in a porous medium saturated with cold water. Zbl 0860.76085 Pop, I.; Hossain, M. A. 1995 Combined heat and mass transfer by free convection past an inclined flat plate. Zbl 1097.76588 Anghel, M.; Hossain, M. A.; Zeb, S.; Pop, I. 2001 Magnetohydrodynamic natural convection flow on a sphere in presence of heat generation. Zbl 1147.76629 Molla, Md. M.; Taher, M. A.; Chowdhury, Md. M. K.; Hossain, Md. A. 
2005 Radiation effects from an isothermal vertical wavy cone with variable fluid properties. Zbl 1410.76425 Siddiqa, Sadia; Begum, Naheed; Hossain, M. Anwar 2016 Unsteady mixed convection dusty fluid flow past a vertical wedge due to small fluctuation in free stream and surface temperature. Zbl 1411.76168 Hossain, Md. Anwar; Roy, Nepal C.; Siddiqa, Sadia 2017 Magnetohydrodynamic natural convection flow on a sphere with uniform heat flux in presence of heat generation. Zbl 1106.76066 Molla, M. M.; Hossain, M. A.; Taher, M. A. 2006 Dynamics of two-phase dusty fluid flow along a wavy surface. Zbl 1401.76153 Siddiqa, Sadia; Abrar, M. N.; Hossain, M. A.; Awais, M. 2016 Natural convection of thermomicropolar fluid from an isothermal surface inclined at a small angle to the horizontal. Zbl 0965.76081 Hossain, M. A.; Chowdhury, M. K.; Gorla, Rama Subba Reddy 1999 Radiation interaction of forced and free convection across a horizontal cylinder. Zbl 0974.76078 Hossain, M. A.; Kutubuddin, M.; Takhar, H. S. 1999 Free convection flow of thermomicropolar fluid along a vertical plate with nonuniform surface temperature and surface heat flux. Zbl 1057.76592 Hossain, M. A.; Chowdhury, M. K.; Gorla, R. S. R. 1999 Finite amplitude standing wave in closed ducts with cross sectional area change. Zbl 1189.76463 Hossain, M. A.; Kawahashi, M.; Fujioka, T. 2005 Conduction-radiation effect on transient natural convection with thermophoresis. Zbl 1266.74031 Mahfooz, S. M.; Hossain, M. A. 2012 The computational study of the effects of magnetic field and free stream velocity oscillation on boundary layer flow past a magnetized vertical plate. Zbl 1308.76086 Ashraf, Muhammad; Asghar, S.; Hossain, M. A. 2014 Free convection in a saturated porous medium beyond the similarity solution. Zbl 0804.76082 Nakayama, A.; Hossain, M. A. 1994 MHD free convection flow near rotating axisymmetric round-nosed bodies. Zbl 0875.76718 Hossain, M. A.; Das, S. K.; Pop, I. 1996 MHD forced and free convection boundary layer flow along a vertical porous plate. Zbl 0896.76086 Hossain, M. A.; Alam, K. C. A.; Rees, D. A. S. 1997 Heat transfer response of MHD free convection flow along a vertical plate to surface temperature oscillations. Zbl 0910.76080 Hossain, M. A.; Das, S. K.; Pop, I. 1998 Effect of heat transfer on compressible boundary layer flow over a circular cylinder. Zbl 0921.76142 Hossain, M. A.; Pop, I.; Na, T.-Y. 1998 Heat transfer response of free convection flow from a vertical heated plate to an oscillating surface heat flux. Zbl 0910.76079 Hossain, M. A.; Das, S. K. 1998 Heat transfer analysis of viscous incompressible fluid by combined natural convection and radiation in an open cavity. Zbl 1407.76030 Saleem, M.; Hossain, M. A.; Saha, Suvash C.; Gu, Y. T. 2014 Compressible dusty gas along a vertical wavy surface. Zbl 1411.76169 Siddiqa, Sadia; Begum, Naheed; Hossain, Md. Anwar 2017 A hybrid computational model for the effects of maspin on cancer cell dynamics. Zbl 1411.92077 Al-Mamun, Mohammed A.; Brown, Louise J.; Hossain, M. A.; Fall, Charles; Wagstaff, L.; Bass, Rosemary 2013
2021-10-25 03:41:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446404933929443, "perplexity": 10256.017241482898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00493.warc.gz"}
https://byjus.com/rd-sharma-solutions/class-8-maths-chapter-5-playing-with-numbers-exercise-5-2/
# RD Sharma Solutions Class 8 Playing With Numbers Exercise 5.2

Q.1: Given that the number $\overline{35a64}$ is divisible by 3, where a is a digit, what are the possible values of a?

Soln: It is given that $\overline{35a64}$ is a multiple of 3. ∴ (3 + 5 + a + 6 + 4) = (a + 18) is a multiple of 3. ∴ (a + 18) = 0, 3, 6, 9, 12, 15, 18, 21, … But a is a digit of the number $\overline{35a64}$, so a can only take the values 0, 1, 2, …, 9. a + 18 = 18 ⟹ a = 0; a + 18 = 21 ⟹ a = 3; a + 18 = 24 ⟹ a = 6; a + 18 = 27 ⟹ a = 9. ∴ a = 0, 3, 6, 9.

Q.2: If x is a digit such that the number $\overline{18x71}$ is divisible by 3, find the possible values of x.

Soln: It is given that $\overline{18x71}$ is a multiple of 3. ∴ (1 + 8 + x + 7 + 1) = (17 + x) is a multiple of 3. ∴ 17 + x = 0, 3, 6, 9, 12, 15, 18, 21, … But x is a digit, so x can only take the values 0, 1, 2, …, 9. 17 + x = 18 ⟹ x = 1; 17 + x = 21 ⟹ x = 4; 17 + x = 24 ⟹ x = 7. ∴ x = 1, 4, 7.

Q.3: If x is a digit of the number $\overline{66784x}$ such that it is divisible by 9, find the possible values of x.

Soln: It is given that $\overline{66784x}$ is a multiple of 9. ∴ (6 + 6 + 7 + 8 + 4 + x) = (31 + x) is a multiple of 9. Possible values of (31 + x) are 0, 9, 18, 27, 36, 45, … But x is a digit, so x can only take the values 0, 1, 2, …, 9. ∴ 31 + x = 36 ⟹ x = 5.

Q.4: Given that the number $\overline{67y19}$ is divisible by 9, where y is a digit, what are the possible values of y?

Soln: It is given that $\overline{67y19}$ is a multiple of 9. ∴ (6 + 7 + y + 1 + 9) = (23 + y) is a multiple of 9. ∴ 23 + y = 0, 9, 18, 27, 36, … But y is a digit, so y can take the values 0, 1, 2, …, 9. ∴ 23 + y = 27 ⟹ y = 4.

Q.5: If $\overline{3x2}$ is a multiple of 11, where x is a digit, what is the value of x?

Soln: Sum of the digits at odd places = 3 + 2 = 5. Sum of the digits at even places = x. ∴ (sum of the digits at even places) − (sum of the digits at odd places) = (x − 5), and (x − 5) must be a multiple of 11. Possible values of (x − 5) are 0, 11, 22, 33, … But x is a digit, so x must be one of 0, 1, 2, …, 9. ∴ x − 5 = 0 ⟹ x = 5.

Q.6: If $\overline{98215x2}$ is a number with x as its tens digit such that it is divisible by 4, find all possible values of x.

Soln: A natural number is divisible by 4 if the number formed by its digits in the units and tens places is divisible by 4. $\overline{98215x2}$ will be divisible by 4 if $\overline{x2}$ is divisible by 4. $\overline{x2}$ = 10x + 2. x is a digit, so the possible values of $\overline{x2}$ are 2, 12, 22, 32, 42, 52, 62, 72, 82, 92. The numbers among these that are divisible by 4 are 12, 32, 52, 72, 92. Therefore, the values of x are 1, 3, 5, 7, 9.

Q.7: If x denotes the digit at the hundreds place of the number $\overline{67x19}$ such that the number is divisible by 11, find all possible values of x.

Soln: A number is divisible by 11 if the difference of the sum of its digits at odd places and the sum of its digits at even places is either 0 or a multiple of 11. (Sum of digits at odd places) − (sum of digits at even places) = (6 + x + 9) − (7 + 1) = (15 + x) − 8 = x + 7. ∴ x + 7 = 11 ⟹ x = 4.

Q.8: Find the remainder when 981547 is divided by 5. Do this without doing actual division.

Soln: When a natural number is divided by 5, the remainder is the same as when its units digit is divided by 5. Here, the units digit of 981547 is 7. When 7 is divided by 5, the remainder is 2. Therefore, the remainder will be 2 when 981547 is divided by 5.
Q.9: Find the remainder when 51439786 is divided by 3. Do this without performing actual division.

Soln: Sum of the digits of the number 51439786 = 5 + 1 + 4 + 3 + 9 + 7 + 8 + 6 = 43. The remainder of 51439786 when divided by 3 is the same as the remainder when the sum of its digits is divided by 3. When 43 is divided by 3, the remainder is 1. Therefore, when 51439786 is divided by 3, the remainder will be 1.

Q.10: Find the remainder, without performing actual division, when 798 is divided by 11.

Soln: 798 = (a multiple of 11) + (sum of its digits at odd places − sum of its digits at even places) = (a multiple of 11) + ((7 + 8) − 9) = (a multiple of 11) + (15 − 9) = (a multiple of 11) + 6. Therefore, the remainder is 6.

Q.11: Without performing actual division, find the remainder when 928174653 is divided by 11.

Soln: 928174653 = (a multiple of 11) + (sum of its digits at odd places − sum of its digits at even places) = (a multiple of 11) + ((9 + 8 + 7 + 6 + 3) − (2 + 1 + 4 + 5)) = (a multiple of 11) + (33 − 12) = (a multiple of 11) + 21 = (a multiple of 11) + (11 × 1 + 10) = (a multiple of 11) + 10. Therefore, the remainder is 10.

Q.12: Give an example of a number which is divisible by: (i) 2 but not by 4; (ii) 3 but not by 6; (iii) 4 but not by 8; (iv) both 4 and 8 but not by 32.

Soln:
(i) 10. Every number of the form (4n + 2) is an example of a number that is divisible by 2 but not by 4.
(ii) 15. Every number of the form (6n + 3) is an example of a number that is divisible by 3 but not by 6.
(iii) 28. Every number of the form (8n + 4) is an example of a number that is divisible by 4 but not by 8.
(iv) 8. Every number of the form (32n + 8), (32n + 16) or (32n + 24) is an example of a number that is divisible by both 4 and 8 but not by 32.

Q.13: Which of the following statements are true?
(i) If a number is divisible by 3, it must be divisible by 9. Ans: False. Every number of the form (9n + 3) or (9n + 6) is divisible by 3 but not by 9.
(ii) If a number is divisible by 9, it must be divisible by 3. Ans: True.
(iii) If a number is divisible by 4, it must be divisible by 8. Ans: False. Every number of the form (8n + 4) is divisible by 4 but not by 8.
(iv) If a number is divisible by 8, it must be divisible by 4. Ans: True.
(v) A number is divisible by 18 if it is divisible by both 3 and 6. Ans: False.
(vi) If a number is divisible by both 9 and 10, it must be divisible by 90. Ans: True.
(vii) If a number exactly divides the sum of two numbers, it must exactly divide the two numbers separately. Ans: False.
(viii) If a number divides three numbers exactly, it must divide their sum exactly. Ans: True.
(ix) If two numbers are co-prime, at least one of them must be a prime number. Ans: False.
(x) The sum of two consecutive odd numbers is always divisible by 4. Ans: True.
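These digit answers are easy to verify by brute force. A short sketch (Python, written for this rewrite rather than taken from the original page):

```python
def digits_for(template, divisor):
    """Digits d (0-9) for which substituting d for 'x' in template
    gives a number divisible by divisor."""
    return [d for d in range(10)
            if int(template.replace("x", str(d))) % divisor == 0]

print(digits_for("35x64", 3))    # Q.1 -> [0, 3, 6, 9]
print(digits_for("18x71", 3))    # Q.2 -> [1, 4, 7]
print(digits_for("66784x", 9))   # Q.3 -> [5]
print(digits_for("67x19", 9))    # Q.4 -> [4]
print(digits_for("3x2", 11))     # Q.5 -> [5]
print(digits_for("98215x2", 4))  # Q.6 -> [1, 3, 5, 7, 9]
print(digits_for("67x19", 11))   # Q.7 -> [4]
```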
2019-10-14 19:07:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42670944333076477, "perplexity": 224.65885711470258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986654086.1/warc/CC-MAIN-20191014173924-20191014201424-00395.warc.gz"}
http://www.komal.hu/verseny/feladat.cgi?a=feladat&f=I359&l=en
Mathematical and Physical Journal for High Schools, issued by the MATFUND Foundation

# Problem I. 359. (November 2014)

I. 359. In this task you are going to analyze the data of a tennis match. The match result and its current state are determined as follows.

1. A match is finished if one player wins at least 3 sets.
2. To win a set, the player needs to win at least 6 games such that there is a margin of at least 2 games over the opponent. For example, a set can be won by 6:1, 6:4, 7:5 or 11:9, but it cannot be won by 6:5 or 7:6 -- there are no "short sets" in the present exercise.
3. A game is won if the player has won at least 4 points, provided that there is a margin of at least 2 points over the opponent.
4. For the purposes of the present exercise, a point is always won by either the server (A) or the receiver (F). The match starts with the first player being the server. Within the same game, the same player is the server. In the next game, the other player becomes the server, irrespective of the current player scores.

The first column of the spreadsheet labdamenetek (= Points) should contain, for each point up to a certain standing of the match, whether it was scored by the server (A) or the receiver (F). There can be at most 1000 points in a match, and a letter A in the $\displaystyle n$th row of the sheet indicates that the $\displaystyle n$th point was scored by the server. After the match is over, no further points are stored in the sheet; in other words, the cells are empty below the last A or F letter.

The állás (= Standing) sheet should contain the actual standing of the match, according to the last row of the sheet labdamenetek: the standing of the won sets, the results of the earlier sets, the standing of the games within the given set, and finally the standing within the current game. The first row of your table should contain a heading according to the description above. To obtain the maximum number of points for this exercise, you should present the standings within a game in the usual format.

The first 4 columns of your sheet teszt (= Test) should contain some values for which the állás sheet gives correct results if a column from the teszt sheet is pasted into the first column of the labdamenetek sheet. Your solution should not contain any macros or user-defined functions. Beginning with the second column of the labdamenetek sheet, you may use any number of auxiliary cells.

In the example, the állás sheet is shown: "Első" is first, "Második" is second, "Játékos" is player, "játszma" is set, "Nyert játszmák" is the sets won, and "Aktuális játék" means the current game.

Your sheet (i359.xls, i359.xlsx, i359.ods, ...) with the content specified above, together with a short documentation (i359.txt, i359.pdf, ...) also describing the name and version number of the spreadsheet application, should be submitted in a compressed file (i359.zip).

(10 points) Deadline expired on December 10, 2014.

### Statistics:

9 students sent a solution.
10 points: Kovács 246 Benedek, Mócsy Miklós.
9 points: Dombai Tamás, Fényes Balázs, Gercsó Márk, Radnai Bálint.
8 points: 1 student.
5 points: 2 students.

Problems in Information Technology of KöMaL, November 2014
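Although the task asks for a formula-only spreadsheet (no macros), the scoring rules above translate naturally into a small state machine. Here is a sketch in Python, written for this rewrite as an illustration of the rules, not as a submittable solution:

```python
SCORE = {0: "0", 1: "15", 2: "30", 3: "40"}

def standing(points):
    """Compute the match standing from a string of 'A'/'F' points.
    Returns (sets won, finished set results, games in current set,
    current game score), with player 1 serving the first game."""
    server, sets_won = 0, [0, 0]
    set_results, games, pts = [], [0, 0], [0, 0]
    for p in points:
        w = server if p == "A" else 1 - server
        pts[w] += 1
        # A game needs at least 4 points and a margin of at least 2.
        if pts[w] >= 4 and pts[w] - pts[1 - w] >= 2:
            games[w] += 1
            pts = [0, 0]
            server = 1 - server          # service alternates every game
            # A set needs at least 6 games and a margin of at least 2
            # (no "short sets" / tie-breaks in this exercise).
            if games[w] >= 6 and games[w] - games[1 - w] >= 2:
                sets_won[w] += 1
                set_results.append(tuple(games))
                games = [0, 0]
    a, b = pts
    if a >= 3 and b >= 3:                # deuce / advantage territory
        game = "Deuce" if a == b else ("Ad P1" if a > b else "Ad P2")
    else:                                # the usual 0/15/30/40 format
        game = f"{SCORE[a]}:{SCORE[b]}"
    return sets_won, set_results, games, game

# The server wins every point: player 1 takes game 1, player 2 takes game 2.
print(standing("AAAAAAAA"))   # ([0, 0], [], [1, 1], '0:0')
```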
2018-03-22 11:34:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3557044267654419, "perplexity": 1584.4548010837693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647883.57/warc/CC-MAIN-20180322112241-20180322132241-00008.warc.gz"}
https://undergroundmathematics.org/glossary/double-angle-formula
# Double angle formula A double angle formula is a trigonometric identity which expresses a trigonometric function of $2\theta$ in terms of trigonometric functions of $\theta$. They are special cases of the compound angle formulae. The main formulae are: \begin{align*} \cos 2\theta &= \cos^2 \theta - \sin^2 \theta \\ &= 2 \cos^2 \theta - 1 \\ &= 1 - 2 \sin^2 \theta \\ \sin 2\theta &= 2 \sin \theta \cos \theta \\ \tan 2\theta &= \frac{2 \tan \theta}{1 - \tan^2 \theta} \end{align*} There are corresponding formulae for the hyperbolic functions, which can be obtained by applying Osborn’s rule to these formulae.
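For completeness, these follow directly from the compound angle formulae by setting both angles equal to $\theta$:

\begin{align*} \sin 2\theta &= \sin(\theta + \theta) = \sin\theta\cos\theta + \cos\theta\sin\theta = 2\sin\theta\cos\theta, \\ \cos 2\theta &= \cos(\theta + \theta) = \cos\theta\cos\theta - \sin\theta\sin\theta = \cos^2\theta - \sin^2\theta. \end{align*}

The two alternative forms of $\cos 2\theta$ then follow by substituting $\sin^2\theta = 1 - \cos^2\theta$ or $\cos^2\theta = 1 - \sin^2\theta$, and dividing the expansion of $\sin 2\theta$ by that of $\cos 2\theta$ (then dividing numerator and denominator by $\cos^2\theta$) gives the $\tan 2\theta$ formula.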
2022-07-02 05:40:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 149.5975833983797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00623.warc.gz"}
https://docs.exabyte.io/benchmarks/high-throughput-screening/
# High-throughput scalability study

Date: 2016/03

## Overview

A team of scientists from a public enterprise company used exabyte.io to study equilibrium geometries and formation energies for a set of promising metallic alloys. The team employed quantum mechanical modeling approaches based on density functional theory and the vast compute power available on the public cloud. During a single run researchers were able to scale to 10,656 CPUs within a few minutes from the start, and obtain accurate results for 296 compounds representing ternary metallic alloys within 38 hours. The purpose of this study was to estimate the extent to which compute resources can be efficiently scaled while sustaining a constant level of performance.

Hardware configuration: Amazon Web Services with the hardware configuration explained here was used for benchmarking.

## Model and Method

The plane-wave pseudopotential density functional theory formalism as implemented in the Vienna Ab-initio Simulation Package (VASP), version 5.3.5, with a corresponding set of atomic pseudopotentials was employed in this run.

## Inputs

INCAR:

```
ALGO = Normal
EDIFF = 0.0001
ENCUT = 520
IBRION = 2
ICHARG = 1
ISIF = 3
ISMEAR = 1
ISPIN = 2
LORBIT = 11
LREAL = Auto
LWAVE = False
MAGMOM = 24*0.6
NELM = 100
NPAR = 1
NSW = 50
PREC = Accurate
SIGMA = 0.2
```

POSCAR:

```
Li8 Al8 Cu8
1.0
 11.687317  3.895772 -3.895772
-11.687317  3.895772 -3.895772
  0.000000  1.947886  1.947886
Al Cu Li
8 8 8
direct
0.666667 0.333333 1.000000 Al
0.958333 0.791667 0.500000 Al
0.500000 0.500000 1.000000 Al
0.208333 0.041667 0.500000 Al
0.583333 0.916667 1.000000 Al
0.333333 0.666667 1.000000 Al
0.291667 0.458333 0.500000 Al
0.125000 0.625000 0.500000 Al
0.916667 0.583333 1.000000 Cu
0.875000 0.375000 0.500000 Cu
0.625000 0.125000 0.500000 Cu
0.750000 0.750000 1.000000 Cu
0.458333 0.291667 0.500000 Cu
0.791667 0.958333 0.500000 Cu
0.083333 0.416667 1.000000 Cu
0.375000 0.875000 0.500000 Cu
0.833333 0.166667 1.000000 Li
0.416667 0.083333 1.000000 Li
0.708333 0.541667 0.500000 Li
0.250000 0.250000 1.000000 Li
1.000000 1.000000 1.000000 Li
0.541667 0.708333 0.500000 Li
0.041667 0.208333 0.500000 Li
0.166667 0.833333 1.000000 Li
```

KPOINTS:

```
0
Gamma
1 1 2
```

## Results

High-performance computing resources were assembled on demand using the infrastructure available at one of the public cloud vendors. For the first run, a total of 296 tasks (one per material) were submitted to the exabyte.io cloud-scale resource-management system. Within 7 minutes after submission, 296 compute nodes with 10,656 cores in total were provisioned, configured, and had compute tasks running on them. All tasks finished within 38 hours from the start, with the shortest ones taking about 2 hours. The size of the compute system was dynamically scaled with the number of active calculations. The total cost of the calculation was within a few thousand dollars (for comparison, the cost of buying 10,000 CPUs can be estimated at several million dollars).

## Conclusion

A "real-world" high-throughput materials discovery run scaling to nearly 300 materials (each with an advanced geometrical configuration involving 24 atoms inside a crystal unit cell) and nearly 11 thousand CPUs was successfully carried out by an enterprise customer. Without large upfront expenditures and while using familiar environments and tools, they were able to quickly obtain the necessary data about the formation energies of metallic alloys. This data is now being used by the customer to guide their experimental search for better alloys.
The scale of this run is, however, far from the limit of the resources available at exabyte.io, and we have internal data showing a significantly higher scale reached by our engineering team in development (contact us if you would like to learn more).
2018-11-14 23:46:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26982903480529785, "perplexity": 4887.7252581056155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742322.51/warc/CC-MAIN-20181114232605-20181115013928-00012.warc.gz"}
https://stacks.math.columbia.edu/tag/01CN
Lemma 17.22.1. Let $(X, \mathcal{O}_ X)$ be a ringed space. Let $\mathcal{F}$, $\mathcal{G}$, $\mathcal{H}$ be $\mathcal{O}_ X$-modules. There is a canonical isomorphism $\mathop{\mathcal{H}\! \mathit{om}}\nolimits _{\mathcal{O}_ X} (\mathcal{F} \otimes _{\mathcal{O}_ X} \mathcal{G}, \mathcal{H}) \longrightarrow \mathop{\mathcal{H}\! \mathit{om}}\nolimits _{\mathcal{O}_ X} (\mathcal{F}, \mathop{\mathcal{H}\! \mathit{om}}\nolimits _{\mathcal{O}_ X}(\mathcal{G}, \mathcal{H}))$ which is functorial in all three entries (sheaf Hom in all three spots). In particular, to give a morphism $\mathcal{F} \otimes _{\mathcal{O}_ X} \mathcal{G} \to \mathcal{H}$ is the same as giving a morphism $\mathcal{F} \to \mathop{\mathcal{H}\! \mathit{om}}\nolimits _{\mathcal{O}_ X}(\mathcal{G}, \mathcal{H})$. Proof. This is the analogue of Algebra, Lemma 10.12.8. The proof is the same, and is omitted. $\square$
2021-10-17 06:34:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9906147122383118, "perplexity": 349.924325911147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00307.warc.gz"}
https://mda.tools/docs/overview--what-mdatools-can-do.html
## What mdatools can do?

The package includes classes and functions for analysis, preprocessing and plotting of data and results. So far the following methods for analysis are implemented:

• Principal Component Analysis (PCA)
• Soft Independent Modelling of Class Analogy (SIMCA), including the data driven approach (DD-SIMCA)
• Partial Least Squares regression (PLS) with calculation of VIP scores and Selectivity ratio
• Partial Least Squares Discriminant Analysis (PLS-DA)
• Randomization test for PLS regression models
• Interval PLS for variable selection
• Multivariate curve resolution using the purity approach
• Multivariate curve resolution using constrained alternating least squares
• Procrustes cross-validation for PCA

Preprocessing methods include:

• Mean centering, standardization and autoscaling
• Savitzky-Golay filter for smoothing and derivatives
• Standard Normal Variate for removing scatter and the global intensity effect from spectral data
• Multiplicative Scatter Correction for the same issue
• Normalization of spectra to unit area, unit length, unit sum, or unit area under a given range
• Baseline correction with asymmetric least squares
• Kubelka-Munk transformation
• Element-wise transformations (log, sqrt, power, etc.)

Besides that, some extensions of the basic R plotting functionality have also been implemented, allowing you to do the following:

• Color grouping of objects with an automatic color legend bar.
• Plots for several groups of objects with automatically calculated axis limits and a plot legend.
• Three built-in color schemes: one is based on Colorbrewer and the other two are jet and grayscale.
• A very easy way to apply any user-defined color scheme.
• The possibility to show horizontal and vertical lines on the plot with automatically adjusted axis limits.
• The possibility to extend plotting functionality by using some attributes for datasets.

See ?mdatools and the next chapters for more details.
2021-10-22 01:14:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.359017550945282, "perplexity": 8358.685657936769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00585.warc.gz"}
https://beautystudioangel.cz/oceania/487tu7gg/crushed.html
# crushed rock tonnage calculator australia

## More details

• Calculator for weight (tonnage) of sand, gravel or topsoil: Product calculator. Calculate the weight of sand, gravel or topsoil required by inputting the area, and optionally get a quote. The calculated amount of the sand, gravel or topsoil you will need is shown below (we supply our landscaping products in tonnes so our customers get exactly what they ordered).

• Calculator Soil Yourself: Planning a landscape project or garden makeover and not sure how much mulch or soil you need? Calculating how much mulch or soil you need is simple and easy… just multiply LENGTH (m) x WIDTH (m) x DEPTH (m), or use the handy mulch and soil calculator below. Calculating depth: depth is measured in metres (m). Below are recommended depths for our products.

• Gravel Calculator, Estimate Landscaping Material in Yards: For example, let's find the amount of gravel needed for a space that is 10 feet long, 10 feet wide, and 1 foot deep. volume = length x width x depth; volume = 10' x 10' x 1' = 100 cu ft; cu yds = 100 cu ft / 27 = 3.7; weight = cu yds x density; weight min = 3.7 x 1.4 = 5.2 tons; weight max = 3.7 x 1.7 = 6.3 tons. (This calculation is scripted in the sketch following this list.)

• Crushed Stone Calculator, Easy to Use Online Tool: A crushed stone calculator takes the hassle out of estimating your needs. All you need to do is plug in a few measurements and the calculator will do the rest. You won't need to remember any complicated formulas or do any conversions. You'll just get fast, simple, accurate results.

• Tonnage Calculator, Hedrick Industries: 100 ft long x 10 ft wide x 3" deep of ABC (140 lbs/ft³): 100' x 10' x 3"/12" x 140 lbs / 2000 lbs = 17.5 tons. We have tandem, tri-axle, and quad-axle dump trucks available for delivery that can haul 15-20 tons of material per load, or less if requested.

• Material Calculator, Stony Point Rock Quarry: Use this calculator to estimate approximately how many tons of material your project will require. Please input your width, length, and depth dimensions and then choose your desired material. Your estimated tons required will appear in the last box below.

• Calculator, Summit Topsoil and Gravel: 1 cubic yard = 1 1/3 tons. Gravel: bank run gravel 1:2; 2" screened gravel 1:2; 1" and 2" gravel 1:2, 1 cubic yard = 2 tons; 3"-6" oversized rock 1:1.5; 8"-18" oversized rock 1:1.5, 1 cubic yard = 1 1/2 tons. Crushed aggregates: #1 crushed limestone 1:1.5, 1 cubic yard = 1 1/2 tons; #2 crusher…

• Gravel Calculator, How much gravel do you need: Calculate the area of the excavation by multiplying the length and width together. In our case, A = 6 x 3 = 18 yd². You can also type the area of the excavation directly into the gravel estimator if you choose to excavate a more sophisticated shape.

• How to Calculate How Much Crushed Stone I Need: Aug 23, 2019. To get to this figure, you must know how the landscape stone calculator or crushed concrete calculator works. It's a matter of doing the math. Multiply 12 by 12 to get 144 square feet.

• STONE CALCULATOR [How Much Stone do I Need]: Otherwise, enter your measurements and values in our online calculator! Calculation example: rectangular area with crushed gravel (105 lb/ft³) and price per unit mass. Let's say I need crushed gravel for part of my driveway which measures 4 ft long, 2 ft wide and 9 in (0.75 ft) deep. Let's also say that the selected gravel costs $50 per ton.

• Calculate Crushed Coquina Small Shells White: Type in inches and feet of your project and calculate the estimated amount of Sand / Screenings in cubic yards, cubic feet and tons that you need for your project. The density of Crushed Coquina Small Shells White: 2,410 lb/yd³ or 1.21 t/yd³ or 0.8 yd³/t.

• Conversion Guide, Centenary Landscaping Supplies (tonnes per volume):

| PRODUCT | 1 m³ | 1/2 m³ | 2/3 m³ | 1/3 m³ | 1/4 m³ | 1/8 m³ |
|---|---|---|---|---|---|---|
| A1 Top Dressing | 1.00 | 0.50 | 0.67 | 0.33 | 0.25 | 0.13 |
| Black Label Soil | 1.10 | 0.55 | 0.73 | 0.37 | 0.28 | 0.14 |

• Gravel Calculator, calculate how much gravel you need: Free gravel calculator online: estimate how much gravel you need for your construction or gardening / landscaping project in tons/tonnes or cubic yards, meters, etc. Calculates gravel required in volume (cubic feet, cubic yards, cubic meters) or weight (pounds, tons, kilograms, tonnes), with information about gravel density, common gravel sizes, and how much a cubic yard of …

• Calculate 3/4" White River Gravel, cubic yards / tons: Type in inches and feet of your project and calculate the estimated amount of River Gravel / Egg R in cubic yards, cubic feet and tons that you need for your project. The density of 3/4" White River Gravel: 2,410 lb/yd³ or 1.21 t/yd³ or 0.8 yd³/t.

• Asphalt Paving Calculator / Tonnage Calculator, Roadtec: HMA tonnage calculator. Simply input the length in feet, width in feet, and thickness in inches, and our calculator will tell you how much hot mix asphalt you need on your next paving job.

• Aggregate Calculator, Mulzer Crushed Stone, Inc.: From driveways to septic system field beds, Mulzer Crushed Stone, Inc. has aggregate products to fit your every need. Use our aggregate calculator to estimate your needs, or contact us at your nearest location. Whether you are a contractor, home owner or do-it-yourselfer, we can assist you in determining the best and most cost-effective products to use in your residential building site.

• AGGREGATE CALCULATOR [How Much Aggregate Do I Need]: I therefore enter the measurements into the calculator, which does the following operations to work out the weight and cost of the crushed stones: $$Weight = Density\,of\,aggregate \times Volume = 105\,lb/ft^3 \times 5\,ft^3 = 525\,lb$$ $$Cost = Price\,per\,unit\,volume \times Volume = 12.5\,\$/yd^3 \times 0.185\,yd^3 = \$2.32$$

• How To Calculate, Aldinga Landscape Supplies: Southern Adelaide's own landscaping, paving, mulch, sand and retaining supply business on the southern Fleurieu Peninsula, servicing all southern areas including Seaford Rise.

• Calculator, All Valley Sand and Gravel Inc.: Note, we sell material by the ton, and one (1) cubic yard is approximately 1.5 tons. The calculators are provided as a reference for the convenience of our customers and site visitors; they provide an ESTIMATE of the gravel or sand required to cover an area and depth specified by you.

• How to Convert Yards to Tons in Gravel, Hunker: May 03, 2018. A general rule of thumb when converting cubic yards of gravel to tons is to multiply the cubic area by 1.4. For your reference, gravel typically weighs 2,800 pounds per cubic yard. In addition, there are 2,000 pounds to a ton. For instance, if your area …

• Convert Cubic Meter to Ton Register: 1 ton reg = 2.8316846592 m³. Example: convert 15 m³ to ton reg: 15 m³ = 15 × 0.3531466672 ton reg = 5.2972000082 ton reg.

• Quantity Calculator, Parklea Sand and Soil: choose a shape (rectangle or square), enter the length in metres, width in metres and thickness in centimetres, and the calculator returns cubic metres for the material type you select (mulch/woodchip, garden mix, …).

• Crushed Stone Calculator, Free Online Tool: A crushed stone calculator is the best way to get accurate results. No matter what your project entails or how frequently you've worked with crushed stone, though, it's extremely important that you have the ability to accurately estimate the quantity of materials that you'll need to buy.

• Gravel Calculator, How many tons do I need: This calculator solves for the cubic yards but returns the number of tons based on 2,800 lbs per yard. Round up to the nearest foot in the length and width boxes, and do the same using inches in the depth box, to instantly get a result that will work well with most common aggregates. Solving for the number of tons of gravel needed on uneven surfaces requires at least some estimating.

• Tonnage Calculator For Gravel: As a shortcut, try our cubic yardage calculator to conveniently calculate your volume: length (ft) x width (ft) x depth (ft), divided by 27. Most gravel weighs 1.4 to 1.7 tons per cubic yard. A square yard of gravel with a depth of 2 in (5 cm) weighs about 157 pounds (74 kg). Let's say I need crushed gravel for part of my driveway which measures 4 ft long, 2 ft wide and 9 in (0.75 ft) deep.

• Cylinder Calculator, All Valley Sand and Gravel Inc.: Our free materials calculator for cylinders will help you estimate the required sand or gravel weight for a job. These calculators will convert a cylinder/circle shape into the total tons or cubic yards that you need. Note, we sell material by the ton, and one (1) cubic yard is approximately 1.5 tons.

• Gravel Driveway Calculator, Estimate Material for a …: Crushed stone is a common material to use for a driveway because it offers a nice clean look, is resistant to weeds, is affordable, is easy to maintain, and is easy to install. Gravel is commonly sold by the cubic yard, so to find the material needed for a driveway, find the volume in yards and calculate how many cubic yards of gravel are needed.

• Coverage Calculator, Southwest Boulder Stone: 1 ton = 2,000 lbs. How many cubic yards are in a ton? 1 cubic yard = 1.25 tons. For most landscape material, with the exception of lava rock or mulch, just multiply the total by 1.25.

• 2021 Gravel Prices, Crushed Stone Cost (Per Ton, Yard, Load): This mixture combines limestone, traprock, granite, crushed rock, sand, and stone dust. It's also known as crusher run, quarry process, #411 gravel, road stone, or dense grade aggregate. Crushed limestone cost: crushed limestone costs $30 to $38 per ton, from $1.59 to $2.00 per square foot, or between $35 and $54 per yard.

• Converting cubic volume to tonnage, OnlineConversion Forums: Most rock has an intrinsic density around 2.5 t/m³, broken rock around 1.5 t/m³. But use your SG data; those are just rough estimates. Multiply SG (a dimensionless …

• Bulk Material Calculator, Contractors Stone Supply: Please enter the measurements below and press "Calculate" to receive the approximate tons of 4" ledge or chopped stone needed for edging the specified area. 6" chopped stone will cover 70-75 linear feet per ton. 8" chopped stone will cover 45-50 linear feet per ton.

• IMI Construction Aggregate Calculator: To use the aggregate calculator: 1. Enter the width in feet, length in feet, and thickness in inches of your job. 2. Click on the Calculate button. The calculator will estimate the number of tons of aggregate that will be required. Please note: the aggregate calculator is an estimate of the amount of tonnage…

• Material Calculator, Canberra Concrete Recyclers: Canberra's leading waste construction materials recycler. Use this helpful tool to get an estimate of how much your required material will cost: enter the space required (width in metres, length in metres, depth in millimetres) and the material.
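The volume-to-weight arithmetic that recurs in the entries above is small enough to script. A minimal sketch of the "Gravel Calculator" example (the 1.4-1.7 tons per cubic yard figures are the rule-of-thumb densities quoted in several of the entries; function and variable names are mine):

```python
def gravel_tons(length_ft: float, width_ft: float, depth_ft: float,
                tons_per_cu_yd: float = 1.4) -> float:
    """Estimate gravel weight in tons for a rectangular area."""
    volume_cu_ft = length_ft * width_ft * depth_ft
    volume_cu_yd = volume_cu_ft / 27  # 27 cubic feet per cubic yard
    return volume_cu_yd * tons_per_cu_yd

# The worked example: 10 ft x 10 ft area, 1 ft deep
print(round(gravel_tons(10, 10, 1), 1))       # 5.2 tons (minimum density)
print(round(gravel_tons(10, 10, 1, 1.7), 1))  # 6.3 tons (maximum density)
```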
2022-05-25 00:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32102882862091064, "perplexity": 13249.251970725974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00585.warc.gz"}
http://dreamcast-scene.com/uipqir1/profit-margin-ratio-calculator-46246e
Unlike margin ratios, these ratios are calculated using elements of the balance sheet of the business as well as its profit and loss account, which is another way to describe the income statement. This type of ratio shows how good the business is at converting investment (which could be assets, equity or debt) into profits. Commonly used profitability ratios are Profit Margin, Return on Assets (ROA) and Return on Equity (ROE). These three profit margin ratios indicate how much profit the company makes for every dollar of sales at each level: production, operations and bottom line. They are also called return on sales.

Net profit margin is the ratio of net profits to revenues for a company or business segment. It is expressed as a percentage: net profit margins show how much of each dollar collected by a company as revenue translates into profit. By measuring net income against revenues, the profit margin ratio demonstrates exactly what percentage of each sales dollar remains as profit after a company's expenses have been paid. When it comes to evaluating a company's overall performance for investment purposes, the net profit margin ratio or net profit percentage is one of the most useful financial ratios. It is often used by investors as an efficiency ratio or percentage metric, as it is a proxy for potential dividend payouts, reinvestment potential and overall solvency.

The formula for the profit margin ratio is expressed as follows:

$$Net\;profit\;margin = \frac{Net\;profit\;(after\;taxes)}{Net\;Sales}\times100$$

Please note that profit margin is usually calculated for common shareholders; thus, net income net of preferred dividends should be used. Net income and preferred dividends (if available) can be found in the income statement and notes to the financial statements. Net income equals total revenues minus total expenses and is usually the last number reported on the income statement. If comparing companies operating in various jurisdictions with different tax policies is needed, earnings before taxes should be used instead of net income to avoid the impact of taxation.

Though there are three different ways to calculate a company's profit margin ratio, here are the steps for calculation in the simplest form. First, calculate the net sales: Net sales = revenue - returns, refunds and discounts (some analysts may use revenue instead of net sales; either will give you a similar answer, the …). Then fill in your cost of goods sold; occasionally, COGS is broken down into smaller categories of costs like materials and labor. Finally, the profit margin ratio formula can be calculated by dividing net income by net sales.

Example: a company's income statement shows a net income of $1 million and operating revenues of $25 million. By applying the formula, $1 million divided by $25 million results in a net profit margin of 4%. Another example: Company ABC: Net Profit Margin = Net Profit/Revenue = $80/$225 = 35.56%. Company XYZ: Net Profit Margin = Net Profit/Revenue = $30/$100 = 30%. Company ABC has the higher net profit margin. Similarly, Company A and Company B have net profit margins of 12% and 15% respectively.

The gross profit margin formula is a simple one, yet it has some nuances which deserve a coherent explanation. Gross profit margin is a ratio that reveals how much profit a business makes for every pound it … It is the ratio of Gross profit / Total sales, which in the above example would be equal to $2500 / $5000 = 50%. It yields a much higher margin percentage than the net profit ratio, since the gross profit margin ratio does not include the negative effects of selling, … The profit ratio is sometimes confused with the gross profit ratio, which is the gross profit divided by sales. For instance: Gross Profit Margin = ($1,259,786,700 / $2,942,425,700) × 100 = 42.81%.

In layman's terms, profit is also known as either markup or margin when we're dealing with raw numbers, not percentages. The difference between gross margin and markup is small but important. Worked example: $12 (resale) - $7 (cost) = $5 gross profit. Divide gross profit by resale (and multiply by 100 to get the percentage): ($5 / $12) × 100 = 41.66%. So your gross profit margin percentage is 41.66%.

The operating margin (a.k.a. operating profit margin, operating income margin, EBIT margin) is a key business performance metric indicating the profitability of a company, product or investment project. Operating income is also called "operating profit", whereas revenue is the total value of sales. Operating Profit Margin = (Operating Profit / Sales) × 100. Calculate the operating profit margin ratio by dividing operating income by net sales; to turn the answer into a percentage, multiply it by 100. A higher operating margin means that the company has less financial risk, as it is able to face fixed cost expenses with greater ease. An example of calculating the operating profit margin ratio: a company has gross sales of $20 million. Its cost of … The profit margin is so key as it communicates the percentage of total revenue converted to operating profits (before tax profits). The result is a measurement of what proportion of a company's revenue is left over, before taxes and other indirect costs (such as bonuses, interest rate payments), after paying for variable production costs such as wages, source materials, contractors, etc.

The profit margin is a ratio of a company's profit (sales minus all expenses) divided by its revenue. If the costs are $100,000 and the revenue is $120,000, the equation becomes: Margin = (120,000 - 100,000) / 120,000 = 20,000 / 120,000 = 1/6, which is the margin ratio telling you that for every 6 dollars in sales the business pockets 1 dollar in profit. To convert to a percentage, multiply by 100: 1/6 × 100 = 16.67% operating profit margin.

Use this gross margin calculator to easily calculate your profit margin (operating margin), your gross profit, or the revenue required to achieve a given margin. Enter the cost and either the total revenue, the gross profit or the gross margin percentage to calculate the remaining two. Both input values are in the relevant currency, while the resulting profit margin is a percentage arrived at after multiplying the result by 100. If what you want to calculate is the profit and/or revenue required to achieve a given margin, then simply input the cost and the margin percentage and the calculator will handle the rest. If you know only the cost and the profit, simply add the two together to get the revenue, then substitute in equation #2 again; simply plug in the numbers in formula #2 above and you will get the result. Note that the calculator does not do any currency conversions, so make sure you input the values in the same currency. If calculating for a past period, you would already know the gross revenue that was made by selling the goods or services; otherwise, knowing the formula above, you should start with estimating the cost of production, which includes all variable costs of producing the goods or services the business sells. If you are looking at achieving a specific profit margin, you can adjust either the buying price or … This calculator helps you to measure the most important margin ratios for your company: gross profit margin, operating margin and net profit margin. On this page, you can calculate selling price, cost price, margin percentage and net profit for your sales transaction, e.g., buying and selling of goods or trading in forex or stock markets.

Generally, a good profit margin should allow the business to cover its variable and fixed expenses and turn a profit with which to compensate the capital owners for their risk (time preference). This is individual to every business or investment project, and what is a "good profit margin" depends very much on the options it is compared with, as well as the estimated risk. A lower profit margin ratio indicates possible flaws with business operations. Investors might also look at your profit margin ratio to see how well your business is able to manage expenses and generate profits over time. The ratio is great for internal comparisons of one period versus another, identifying trends in profitability, as well as comparisons to businesses of similar industries, niches, sizes and age. Net sales and net income used to calculate the profit margin ratio are recorded in a company's income statements; the ratio can also tell us how a company handles its expenses when compared to the sales it makes.
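The relationships the page keeps restating reduce to a couple of one-liners. A minimal sketch (function and variable names are mine, not the calculator being described):

```python
def net_profit_margin(revenue: float, cost: float) -> float:
    """Net profit margin as a fraction: (revenue - cost) / revenue."""
    return (revenue - cost) / revenue

def revenue_for_margin(cost: float, margin: float) -> float:
    """Revenue required to hit a target margin (fraction) at a given cost."""
    return cost / (1 - margin)

# The worked example from the text: $100,000 costs, $120,000 revenue
print(net_profit_margin(120_000, 100_000))        # 0.1667 -> 1 dollar of profit per 6 of sales
# The 4% example: $1M net income on $25M revenue implies $24M of expenses
print(net_profit_margin(25_000_000, 24_000_000))  # 0.04
```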
2021-09-25 23:35:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7432162761688232, "perplexity": 1453.8715309775002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00670.warc.gz"}
http://mathhelpforum.com/calculus/79030-changing-integral-rectangular-spherical.html
# Thread: changing an integral from rectangular to spherical

1. ## changing an integral from rectangular to spherical

I have a triple integral in terms of x, y and z and need to convert it to spherical coordinates. It is the integral from 0 to 2, then from 0 to (4 - x^2)^(1/2), with inner bounds from 0 to (4 - x^2 - y^2)^(1/2), of (x^2 + y^2 + z^2)^(1/2) dz dy dx.

Since the integrand is just $\rho$, spherical coordinates will work well, I think! So we will integrate $\rho^3 \sin \phi \ d\rho \ d\phi \ d\theta$. My lower bounds must all be zeros. Can someone please go through the thinking process to determine my upper bounds? Once I have these I can do the integration, no problem. Without them I am stuck! Frostking

2. $\int_0^2 \int_0^{\sqrt {4-x^2}} \int_0^{\sqrt{4-x^2-y^2}} \sqrt{x^2+y^2+z^2}\ dz\ dy\ dx$

$\phi$ varies from 0 to $\pi$. $\theta$ varies from 0 to $2 \pi$. $\rho$ varies from 0 to 2. Hence the integral is: $\int_0^{2\pi} \int_0^{\pi} \int_0^{2} \rho^3 \sin \phi \ d\rho\ d\phi\ d\theta$

EDIT: I've changed the $\rho$ limits to how they should be.

3. Shouldn't you have $\rho$ ranging from 0 to 2? --Kevin C.

4. Ah yes, it isn't a unit sphere, is it!! *doi*

5. ## limits of integral

Yes, 0 to 2 is what the key has for $\rho$, but it has 0 to $\pi/2$ for both the others, and I still do not understand how to get any of these??? Any explanation would be very much appreciated. Frostking

6. I've only started doing these myself, hence why I made a mistake! The limits of $\rho$, $\theta$ and $\phi$ are found from the definition of spherical polar co-ordinates.

7. You're only dealing with 1/8 of a sphere (first octant only), and in the xy-plane the region is a quarter of a circle, so the limits should be $\int_0^{\pi/2} \int_0^{\pi/2} \int_0^{2} \rho^3 \sin \phi \ d\rho\ d\phi\ d\theta$

8. Sorry, but how is it $\frac{1}{4}$ of a circle?

9. Originally Posted by Showcase_22: Sorry, but how is it $\frac{1}{4}$ of a circle?

$\int_0^2 \int_0^{\sqrt {4-x^2}} \int_0^{\sqrt{4-x^2-y^2}} \sqrt{x^2+y^2+z^2}\ dz\ dy\ dx$

From your limits, $0 \le x \le 2,\; 0 \le y \le \sqrt{4-x^2}$: a circle in the first quadrant, i.e. a 1/4 of a circle.
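For completeness (this step is not in the thread itself), with the limits from post 7 the integral separates and evaluates cleanly:

$$\int_0^{\pi/2}\!\int_0^{\pi/2}\!\int_0^{2} \rho^3 \sin \phi \ d\rho\, d\phi\, d\theta = \left(\int_0^{\pi/2} d\theta\right)\left(\int_0^{\pi/2} \sin \phi \, d\phi\right)\left(\int_0^{2} \rho^3 \, d\rho\right) = \frac{\pi}{2}\cdot 1\cdot\frac{2^4}{4} = 2\pi$$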
2017-02-27 13:02:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9022944569587708, "perplexity": 620.2986074699401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00194-ip-10-171-10-108.ec2.internal.warc.gz"}
https://answers.opencv.org/questions/2871/revisions/
# Revision history [back]

### Does the latest OpenCV handle PNG files *properly*??

For a long time, I have tried to write a very simple OpenCV application, using variations of the code below. I prefer the PNG graphics format, but the code works erratically: sometimes I get a line, sometimes I get a black (empty) rectangle. If I use JPG or BMP, the program works fine.

TIA, -Ramon

ps: My current version is OpenCV 2.1

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    IplImage* image = cvCreateImage(cvSize(int(8.5 * 72), 11 * 72), IPL_DEPTH_16U, 4);
    cvLine(image, cvPoint(22, 44), cvPoint(456, 700), CV_RGB(255, 0, 0));
    cvSaveImage("my_image.png", image);

The page records eight near-identical revisions of this question: the include lines were commented out and later dropped, the image dimensions appeared in garbled forms ("8.572", "1172", "8.5x72", "11x72") before settling on 8.5*72 by 11*72, and the save target changed from "my_image.bmp" to "my_image.png". Revision 7 ("improved style") is by Martin Peris; revision 8 ("retagged") is by sturkmen. The code above is the final revision, with the headers restored so it compiles.
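No answer is recorded in these revisions, but one detail worth checking is the pixel depth: with IPL_DEPTH_16U, full scale is 65535, so a line drawn with CV_RGB(255, 0, 0) is only about 0.4% of full brightness if the 16-bit values survive into the PNG, while an 8-bit BMP/JPG save would treat 255 as full scale. That is a hypothesis of mine, not something stated on the page; a minimal sketch with the modern Python bindings to test it:

```python
import cv2
import numpy as np

# 16-bit, 4-channel canvas, roughly US letter at 72 dpi (612 x 792)
img = np.zeros((11 * 72, int(8.5 * 72), 4), dtype=np.uint16)

# Full scale for 16-bit is 65535; a channel value of 255 here would look black
cv2.line(img, (22, 44), (456, 700), (0, 0, 65535, 65535), thickness=3)

# PNG supports 16-bit samples, so the red line should be clearly visible
cv2.imwrite("my_image.png", img)
```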
2020-03-30 08:05:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2759222090244293, "perplexity": 11665.950196596847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496669.0/warc/CC-MAIN-20200330054217-20200330084217-00145.warc.gz"}
https://paradigms.oregonstate.edu/problem/385/
## Divergence through a Prism

##### Divergence (assignment Homework; Static Fields 2022, 5 years)
Shown above is a two-dimensional vector field (figure not reproduced here). Determine whether the divergence at point A and at point C is positive, negative, or zero.

##### Visualization of Divergence (Small Group Activity, 30 min; Vector Calculus II 2022, 8 years)
Students predict from graphs of simple 2-d vector fields whether the divergence is positive, negative, or zero in various regions of the domain, using the definition of the divergence of a vector field at a point: the divergence of a vector field at a point is the flux per unit volume through an infinitesimal box surrounding that point. Optionally, students can use a Mathematica notebook to verify their predictions.

##### Gravitational Field and Mass (assignment Homework; Static Fields 2022, 4 years)
The gravitational field due to a spherical shell of matter (or equivalently, the electric field due to a spherical shell of charge) is given by: $$\vec g = \begin{cases} 0&\textrm{for } r<a\\ -G \,\frac{M}{b^3-a^3}\, \left( r-\frac{a^3}{r^2}\right)\, \hat r & \textrm{for } a<r<b\\ -G\,\frac{M}{r^2}\, \hat r & \textrm{for } r>b \\ \end{cases}$$ This problem explores the consequences of the divergence theorem for this shell.
1. Using the given description of the gravitational field, find the divergence of the gravitational field everywhere in space. You will need to divide this question up into three parts: $r<a$, $a<r<b$, and $r>b$.
2. Briefly discuss the physical meaning of the divergence in this particular example.
3. For this gravitational field, verify the divergence theorem on a sphere, concentric with the shell, with radius $Q$, where $a<Q<b$. ("Verify" the divergence theorem means calculate the integrals from both sides of the divergence theorem and show that they give the same answer.)
4. Briefly discuss how this example would change if you were discussing the electric field of a uniformly charged spherical shell.

##### Electric Field and Charge (assignment Homework; tags: divergence, charge density, Maxwell's equations, electric field; Static Fields 2022, 3 years)
Consider the electric field $$\vec E(r,\theta,\phi) = \begin{cases} 0&\textrm{for } r<a\\ \frac{1}{4\pi\epsilon_0} \,\frac{Q}{b^3-a^3}\, \left( r-\frac{a^3}{r^2}\right)\, \hat r & \textrm{for } a<r<b\\ 0 & \textrm{for } r>b \\ \end{cases}$$
1. Use step and/or delta functions to write this electric field as a single expression valid everywhere in space.
2. Find a formula for the charge density that creates this electric field.
3. Interpret your formula for the charge density, i.e. explain briefly in words where the charge is.

##### Flux through a Paraboloid (assignment Homework; Static Fields 2022, 5 years)
Find the upward pointing flux of the electric field $\vec E =E_0\, z\, \hat z$ through the part of the surface $z=-3 s^2 +12$ (cylindrical coordinates) that sits above the $(x, y)$-plane.

##### Curl (assignment Homework; Static Fields 2022, 5 years)
Shown above is a two-dimensional cross-section of a vector field (figure not reproduced here). All the parallel cross-sections of this field look exactly the same. Determine the direction of the curl at points A, B, and C.
##### Divergence Practice including Curvilinear Coordinates (assignment Homework)
Calculate the divergence of each of the following vector fields. You may look up the formulas for divergence in curvilinear coordinates.
1. $$\vec{F}=z^2\,\hat{x} + x^2 \,\hat{y} -y^2 \,\hat{z}$$
2. $$\vec{G} = e^{-x} \,\hat{x} + e^{-y} \,\hat{y} +e^{-z} \,\hat{z}$$
3. $$\vec{H} = yz\,\hat{x} + zx\,\hat{y} + xy\,\hat{z}$$
4. $$\vec{I} = x^2\,\hat{x} + z^2\,\hat{y} + y^2\,\hat{z}$$
5. $$\vec{J} = xy\,\hat{x} + xz\,\hat{y} + yz\,\hat{z}$$
6. $$\vec{K} = s^2\,\hat{s}$$
7. $$\vec{L} = r^3\,\hat{\phi}$$

##### Gauss's Law for a Rod inside a Cube (assignment Homework; Static Fields 2022, 3 years)
Consider a thin charged rod of length $L$ standing along the $z$-axis with the bottom end on the $x,y$-plane. The charge density $\lambda_0$ is constant. Find the total flux of the electric field through a closed cubical surface with sides of length $3L$ centered at the origin.

##### Acting Out Flux (Small Group Activity, 5 min; Static Fields 2022, 3 years)
Students hold rulers and meter sticks to represent a vector field. The instructor holds a hula hoop to represent a small area element. Students are asked to describe the flux of the vector field through the area element.

##### Flux through a Plane (assignment Homework; Static Fields 2022, 3 years)
Find the upward pointing flux of the vector field $\boldsymbol{\vec{H}}=2z\,\boldsymbol{\hat{x}} +\frac{1}{x^2+1}\boldsymbol{\hat{y}}+(3+2z)\boldsymbol{\hat{z}}$ through the rectangle $R$ with one edge along the $y$ axis and the other in the $xz$-plane along the line $z=x$, with $0\le y\le2$ and $0\le x\le3$.

##### Divergence through a Prism (assignment Homework; Static Fields 2022, 5 years)
Consider the vector field $\vec F=(x+2)\hat{x} +(z+2)\hat{z}$.
1. Calculate the divergence of $\vec F$.
2. In which direction does the vector field $\vec F$ point on the plane $z=x$? What is the value of $\vec F\cdot \hat n$ on this plane, where $\hat n$ is the unit normal to the plane?
3. Verify the divergence theorem for this vector field where the volume involved is drawn below (figure not reproduced here). ("Verify" means calculate both sides of the divergence theorem, separately, for this example and show that they are the same.)
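As a quick check on the first part of the prism problem, the divergence of this field is constant everywhere:

$$\vec{\nabla} \cdot \vec{F} = \frac{\partial}{\partial x}(x+2) + \frac{\partial}{\partial z}(z+2) = 1 + 1 = 2$$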
2022-09-27 10:45:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 9, "x-ck12": 0, "texerror": 0, "math_score": 0.8446027040481567, "perplexity": 590.0954093849839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00063.warc.gz"}
https://enwiki.academic.ru/dic.nsf/enwiki/606398
# Dot gain

Dot gain (also known as Tonal Value Increase) is a phenomenon in offset lithography and some other forms of printing which causes printed material to look darker than intended. It is caused by halftone dots growing in area between the original printing film and the final printed result. In practice, this means that an image that has not been adjusted to account for dot gain will appear too dark when it is printed.[1] Dot gain calculations are often an important part of a CMYK color model.

## Definition

It is defined as the increase in the diameter of a halftone dot during the prepress and printing processes. Total dot gain is the difference between the dot size on the film negative and the corresponding printed dot size. For example, a dot pattern that covers 30% of the image area on film, but covers 50% when printed, is said to show a total dot gain of 20%. However, with today's computer-to-plate imaging systems, which eliminate film completely, the measure of "film" is the original digital source "dot." Therefore, dot gain is now measured as the original digital dot versus the actual measured ink dot on paper.

Mathematically, dot gain is defined as:

$$\mathrm{DG} = a_{\mathrm{print}} - a_{\mathrm{form}}$$

where $a_{\mathrm{print}}$ is the ink area fraction of the print, and $a_{\mathrm{form}}$ is the pre-press area fraction to be inked. The latter may be the fraction of opaque material on a film positive (or transparent material on a film negative), or the relative command value in a digital prepress system.

## Causes

Dot gain is caused by ink spreading around halftone dots. Several factors can contribute to the increase in halftone dot area. Different paper types have different ink absorption rates; uncoated papers can absorb more ink than coated ones, and thus can show more gain. As printing pressure can squeeze the ink out of its dot shape causing gain, ink viscosity is a contributing factor with coated papers; higher-viscosity inks can resist the pressure better. Halftone dots can also be surrounded by a small circumference of ink, in an effect called "rimming". Each halftone dot has a microscopic relief, and ink will fall off the edge before being eliminated entirely by the fountain solution (in the case of offset printing). Finally, halation of the printing film during exposure can contribute to dot gain.

## Yule-Nielsen effect and "optical dot gain"

The Yule-Nielsen effect, sometimes known as optical dot gain, is a phenomenon caused by absorption and scattering of light by the substrate. Light becomes diffused around dots, darkening the apparent tone. As a result, dots absorb more light than their size would suggest.[2] The Yule-Nielsen effect is not, strictly speaking, a type of dot gain, because the size of the dot does not change, just its relative absorbance.[3] Some densitometers automatically compute the absorption of a halftone relative to the absorption of a solid print using the Murray-Davies formula.

## Controlling dot gain

Not all halftone dots show the same amount of gain. The area of greatest gain is in midtones (40-60%); above this, as the dots contact one another, the perimeter available for dot gain is reduced. Dot gain becomes more noticeable with finer screen rulings, and is one of the factors affecting the choice of screen. Dot gain can be measured using a densitometer and color bars, in absolute percentages. Dot gain is usually measured with 40% and 80% tones as reference values. A common value for dot gain is around 23% in the 40% tone for a 150 lpi screen and coated paper. Thus a dot gain of 19% means that a tint area of 40% will result in a 59% tone in the actual print.[4] Modern prepress software usually includes a utility to achieve the desired dot gain values, using a special compensation curve for each machine.
## Computing the fractional coverage (area) of a halftone pattern

The inked area fraction of the dot may be computed using the Yule-Nielsen model.[2] This requires the optical densities of the substrate, the solid-covered area, and the halftone tint, as well as the value of the Yule-Nielsen parameter, $n$. Pearson[5] has suggested a value of 1.7 be used in the absence of more specific information. However, it will tend to be larger when the halftone pattern is finer and when the substrate has a wider Point Spread Function.[6][7]

## Models for dot gain

Another factor upon which dot gain depends is the dot's area fraction. Dots with relatively large perimeters will tend to have greater dot gain than dots with smaller perimeters. This makes it useful to have a model for the amount of dot gain as a function of prepress dot area fraction.

### An early model

Tollenaar and Ernst tacitly suggested a model in their 1963 IARIGAI paper.[8] It was

$$\mathrm{gain}_{\mathit{TE}}=a_{\mathrm{form}} \cdot (1 - a_{\mathit{vf}})$$

where $a_{\mathit{vf}}$, the shadow critical area fraction, is the area fraction on the form at which the halftone pattern just appears solid on the print. This model, while simple, has dots with relatively small perimeter (in the shadows) exhibiting greater gain than dots with relatively larger perimeter (in the midtones).

### Haller's model

Karl Haller, of FOGRA in Munich, proposed a different model, one in which dots with larger perimeters tended to exhibit greater dot gain than those with smaller perimeters.[9]

### The GRL model

Viggiano suggested an alternate model, based on the radius (or other fundamental dimension) of the dot growing in relative proportion to the perimeter of the dot, with empirical correction for the duplicated areas which result when the corners of adjacent dots join.[10] Mathematically, his model is:

$$\mathrm{gain}_{\mathit{GRL}}=\begin{cases} a_{\mathrm{form}}-a_{\mathit{wf}}, & \text{for } a_{\mathrm{form}}\leq a_{\mathit{wf}}\\ 2\,\Delta_{0,50}\sqrt{a_{\mathrm{form}}(1-a_{\mathrm{form}})}, & \text{for } a_{\mathit{wf}}<a_{\mathrm{form}}<a_{\mathit{vf}}\\ 1-a_{\mathrm{form}}, & \text{for } a_{\mathrm{form}}\geq a_{\mathit{vf}} \end{cases}$$

where $\Delta_{0,50}$ is the dot gain when the input area fraction is one-half. The highlight critical printing area, $a_{\mathit{wf}}$, is computed as:

$$a_{\mathit{wf}}=\begin{cases} \frac{4\Delta_{0,50}^{2}}{1+4\Delta_{0,50}^{2}}, & \text{for } \Delta_{0,50}<0\\ 0, & \text{for } \Delta_{0,50}\geq 0\end{cases}$$

and the shadow critical printing area, $a_{\mathit{vf}}$, is computed according to

$$a_{\mathit{vf}}=\begin{cases} 1, & \text{for } \Delta_{0,50}\leq 0\\ \frac{1}{1+4\Delta_{0,50}^{2}}, & \text{for } \Delta_{0,50}>0\end{cases}$$

Note that, unless $\Delta_{0,50} = 0$, either the highlight critical printing fraction, $a_{\mathit{wf}}$, will be non-zero, or the shadow critical printing fraction, $a_{\mathit{vf}}$, will not be unity, depending on the sign of $\Delta_{0,50}$. In instances in which both critical printing fractions are non-trivial, Viggiano recommended that a cascade of two (or possibly more) applications of the dot gain model be applied.
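The piecewise form above drops straight into code. A minimal sketch (my own transcription of the formulas, not code from the article; the shadow branch, which is truncated in this copy of the text, is filled in by requiring continuity at the shadow critical printing area):

```python
import math

def grl_dot_gain(a_form: float, delta50: float) -> float:
    """GRL dot gain model: gain as a function of prepress area fraction
    a_form (0..1) and the mid-tone gain parameter delta50."""
    # Critical printing fractions, from the model definitions above
    if delta50 < 0:
        a_wf = 4 * delta50**2 / (1 + 4 * delta50**2)
        a_vf = 1.0
    else:
        a_wf = 0.0
        a_vf = 1 / (1 + 4 * delta50**2)

    if a_form <= a_wf:
        return a_form - a_wf
    if a_form < a_vf:
        return 2 * delta50 * math.sqrt(a_form * (1 - a_form))
    return 1 - a_form  # shadow branch, assumed from continuity at a_vf

# Mid-tone check: at a_form = 0.5 the gain equals delta50 by construction
print(grl_dot_gain(0.5, 0.19))  # 0.19, matching the 40% -> 59% scale of gain
```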
### Empirical models

Sometimes the exact form of a dot gain curve is difficult to model on the basis of geometry, and empirical modeling is used instead. To a certain extent, the models described above are empirical, as their parameters cannot be accurately determined from physical aspects of image microstructure and first principles. However, polynomials, cubic splines, and interpolation are completely empirical, and do not involve any image-related parameters. Such models were used by Pearson and Pobboravsky, for example, in their program to compute dot area fractions needed to produce a particular color in lithography.[11]

## References

1. Kay Johansson, Peter Lundberg, Robert Ryberg, "A Guide to Graphic Print Production". Wiley, ISBN 9780471761389.
2. J A C Yule and W J Neilsen [sic], "The penetration of light into paper and its effect on halftone reproduction." 1951 TAGA Proceedings, p 65-76.
3. J. A. S. Viggiano, Models for the Prediction of Color in Graphic Reproduction Technology. ScM thesis, Rochester Institute of Technology, 1987.
4. Kay Johansson, Peter Lundberg, Robert Ryberg, "A Guide to Graphic Print Production". Wiley, ISBN 9780471761389, p. 265-269.
5. Pearson, Milton L., "n-value for general conditions." 1981 TAGA Proceedings, p 415-425.
6. J A C Yule, D J Howe, and J H Altman, TAPPI Journal, vol 50, p 337-344 (1967).
7. F R Ruckdeschel and O G Hauser, "Yule-Nielsen effect in printing: a physical analysis." Applied Optics, vol 17 nr 21, p 3376-3383 (1978).
8. D Tollenaar and P A H Ernst, Halftone printing: Proceedings of the Seventh International Conference of Printing Research Institutes. London: Pentech, 1964.
9. Karl Haller, "Mathematical models for screen dot shapes and for transfer characteristic curves." Advances in Printing Science and Technology: Proceedings of the 15th Conference of Printing Research Institutes, p 85-103. London: Pentech, 1979.
10. J A Stephen Viggiano, "The GRL dot gain model." 1983 TAGA Proceedings, p 423-439.
11. Irving Pobboravsky and Milton Pearson, "Computation of dot areas required to match a colorimetrically specified color using the modified Neugebauer equations." 1972 TAGA Proceedings, p 65-77.
https://ckp.bialystok.pl/quack-and-vfqfxcw/99faf2-reduced-mass-of-hcl-and-dcl
Spectra and Molecular Structure – HCl & DCl
By: Christopher T. Hales (Chem435)

Purpose: To determine the fundamental vibration frequency and bond length for H35Cl, H37Cl, D35Cl, and D37Cl, and to compare the isotope effects to theoretically predicted values.

Goals:
- Formulate vibrational-rotational energy states
- Interpret equilibrium vibrations of HCl and DCl
- Find the inter-nuclear separation (bond length) of each of the molecules

FTIR spectroscopy was used to analyze rotational-vibrational transitions in gas-state HCl and DCl and their isotopomers (due to 35Cl and 37Cl) to determine molecular characteristics. FTIR spectroscopy uses gratings for diffraction and then processes the raw data from an interferogram into the actual spectrum via the mathematical process known as a Fourier transform; simpler spectrometers that use prisms to diffract light do not have the resolving power necessary to separate the rotational effects within the vibrational regime of energy transitions.

## Theory

The ability to extrapolate from spectral data to molecular information depends on mathematical relations that begin with classical mechanics and are brought into the quantum mechanical domain through some manipulation. As the light from the source passes through the sample cell, the sample is energetically excited both rotationally and vibrationally. The change in vibrational energy (v = 0 to v = 1), combined with several different rotational energy changes (ΔJ = ±1), leads to the splitting seen in a typical ro-vibrational spectrum.

Figure 1 – Rotational energy levels, J, superimposed upon the vibrational energy levels, v, and the separation thereof into P & R branches.

The peaks to the left correspond to what is called the "P branch," where ΔJ = −1, and the peaks to the right correspond to the "R branch," where ΔJ = +1.

The rotational energy levels follow from solving the Schrödinger equation for a rotational system. In wavenumbers, the rotational term F(J) and the rotational constant B are

F(J) = B J(J + 1) cm⁻¹, with B = h / (8π²cI) cm⁻¹,

where I = μr² is the moment of inertia, μ is the reduced mass, and r is the distance between the two atoms in the rigid rotor. J is the rotational quantum number and spans the integers from 0 to ∞; the degeneracy of the Jth quantum level is 2J + 1. The reduced mass,

μ = mAmB / (mA + mB),

is substituted for the mass, where mA is the mass of one atom and mB is the mass of the other. Given the mass of the H atom (1.0078 amu) and of the Cl atom (34.9688 amu), with 1 amu = 1.660565×10⁻²⁷ kg, the reduced mass of 1H35Cl is 1.62661×10⁻²⁷ kg. Note that this is almost just the mass of the hydrogen: the chlorine is so massive that it moves very little while the hydrogen bounces back and forth like a ball on a rubber band!

To describe the vibrational motion of a diatomic molecule, the first place to start is a simple harmonic oscillator, in which k is the force constant and x is the distance between the two masses, in this case the internuclear distance. Because the system is not a perfect harmonic oscillator, a correction is required: the potential more closely resembles a Morse potential, and the anharmonic correction changes the vibrational energy levels to G(v) = ωe(v + 1/2) − ωexe(v + 1/2)², where xe is the anharmonicity constant. The Morse-like potential is also where the ability to bend the selection rules comes from: the transition Δv = +2 is allowed, but its likelihood is so small that the signal strength of this first overtone is significantly less than that of the fundamental. Centrifugal distortion occurs because as the molecule rotates, it also stretches; in other words, the magnitude of the rotational constant depends on the vibrational state of the molecule. Once the energy expression is expanded to third order, the frequencies of the spectral lines in a ro-vibrational spectrum can be used to model the system and the physical quantities extracted.

For the isotope effect, assume that the vibrational force constant does not change upon isotopic substitution, kHCl = kDCl. This makes sense, as the chemical nature of the bond itself remains unchanged between DCl and HCl, so there should be no difference. Replacing H by D changes the reduced mass and consequently decreases the vibration frequency of the D–Cl stretch, roughly by a factor of √2:

νDCl / νHCl = √(μHCl / μDCl),

where ν is the vibrational frequency and μ is the reduced mass. For each gas, the force constant for the fundamental vibration follows from the relationship k = 4π²ν²μ. HCl gas is a mixture of H35Cl and H37Cl, and the ratio of reduced masses (1.0029 for DCl) might be observable. Likewise, because the moment of inertia of DCl is greater than that of HCl (the reduced-mass effect), the rotational energy levels of DCl lie lower at the same J than those of HCl, so at the same temperature more levels are populated and the partition function is greater.
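The isotope relation can be checked numerically. The C sketch below uses standard textbook isotopic masses (not values from the report's appendix) together with the handout value ωe(HCl) = 2989.6 cm⁻¹; the function name is chosen here for illustration.

```c
#include <math.h>
#include <stdio.h>

/* Reduced mass mu = mA*mB/(mA + mB), in amu. */
static double reduced_mass(double mA, double mB) {
    return mA * mB / (mA + mB);
}

int main(void) {
    const double AMU = 1.660565e-27;           /* kg per amu, as in the report */
    const double mH = 1.007825, mD = 2.014102; /* textbook isotopic masses, amu */
    const double mCl35 = 34.968853, mCl37 = 36.965903;

    double muHCl = reduced_mass(mH, mCl35);    /* ~0.9796 amu ~ 1.6266e-27 kg */
    double muDCl = reduced_mass(mD, mCl35);    /* ~1.9044 amu */

    /* Isotope relation with k unchanged: nu_DCl/nu_HCl = sqrt(mu_HCl/mu_DCl). */
    double ratio = sqrt(muHCl / muDCl);        /* ~0.717, i.e. roughly 1/sqrt(2) */

    printf("mu(H35Cl) = %.5e kg, mu(D35Cl) = %.5e kg\n", muHCl * AMU, muDCl * AMU);
    printf("mu(H37Cl) = %.5e kg, mu(D37Cl) = %.5e kg\n",
           reduced_mass(mH, mCl37) * AMU, reduced_mass(mD, mCl37) * AMU);
    printf("predicted omega_e(DCl) = %.1f cm^-1 (from omega_e(HCl) = 2989.6 cm^-1)\n",
           2989.6 * ratio);
    return 0;
}
```

The predicted DCl fundamental of roughly 2144 cm⁻¹ is consistent with the lower absorption frequency of DCl discussed below.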
## Experimental

The main part of the experiment was divided into two halves: first the production of HCl and the associated FTIR analysis, and then the production of DCl and its associated FTIR analysis.

Figure 2 – The apparatus used to produce and transfer HCl/DCl gas to an IR gas cell. Numbers 1-5 are stopcocks.

The manifold is relatively straightforward in use. ~1 gram of NaCl was added to the gas generator cell, which was connected to the manifold. After closing all stopcocks, the pump was turned on, and stopcock 1 was opened to pump the system down to 1 torr. When stopcock 1 was closed, the pressure slowly rose, indicating a leak. Once the leak was fixed and the reading stayed steady at 1 torr, stopcocks 3 and 4 were opened to pump out the IR gas cell (a cell with KBr windows). Once this also reached 1 torr, stopcocks 3 and 4 were closed, and the cell was removed and used to create a background spectrum in the FTIR instrument; this background spectrum was collected at both 16 cm⁻¹ and 1 cm⁻¹ resolution. The seal of the system was repeatedly checked throughout the experiment.

Sulfuric acid was then added to the NaCl in the round-bottomed flask (RBF), and this reaction produced the HCl gas that flowed into the IR cell. The pump trap, surrounded by a thermos filled with liquid nitrogen, condensed any gas-phase HCl before it reached the vacuum pump; liquid nitrogen was used because it condenses gas-phase HCl more quickly and efficiently than dry ice. As each fill was being analyzed, stopcock 1 was slowly opened to pump out the HCl gas; following analysis the IR cell was reattached and pumped out, and the RBF was removed and cleaned.

After running the experiment at different sample cell pressures and different resolutions, it was determined that the ideal parameters were 100 torr of pressure with a resolution of 1 cm⁻¹. Once this was determined, the process was repeated, at 100 torr and 1 cm⁻¹ resolution only, with deuterated sulfuric acid, D2SO4, for the production of DCl. Because the D2SO4 was not pure, both HCl and DCl were formed.

## Results

Once a spectrum was obtained for HCl and for DCl, peak values were assigned to the P and R branches, for both 35Cl and 37Cl. From the spectrum of the fundamental, each peak was assigned an N value for the H35Cl P & R branches; this was then plotted as ν (cm⁻¹) vs. N and fitted to a 3rd order polynomial. This was done for both the fundamental (Δv = +1) and the first overtone (Δv = +2), and repeated for the rest of the data in the following order: H37Cl fundamental, H35Cl overtone, H37Cl overtone, D35Cl fundamental, D37Cl fundamental, D35Cl overtone, D37Cl overtone. The general equations for the fundamental and overtone excitations are the key equations for extracting the molecular physical constants after fitting the data to the 3rd degree polynomial; by comparing the polynomials produced by the LINEST regression analysis with equations 13-16, the physical constants can be determined. The properties of the diatomic molecules studied are found directly in the LINEST data tables.

Graph 1 – Plot with Fit Line of H35Cl Fundamental
Graph 2 – Plot with Fit Line of H37Cl Fundamental
Figure 5 – HCl Overtone FTIR Spectrum (NOTE: because of the poor overtone in the 100 torr spectrum, the HCl overtone is taken from the run performed at 207 torr)
Graph 3 – Plot with Fit Line of H35Cl Overtone
Graph 4 – Plot with Fit Line of H37Cl Overtone
Graph 5 – Plot with Fit Line of D35Cl Fundamental
Graph 6 – Plot with Fit Line of D37Cl Fundamental
Figure 7 – DCl Full FTIR Spectrum; peaks between the DCl bands are an HCl impurity
Graph 7 – Plot with Fit Line of D35Cl Overtone
Graph 8 – Plot with Fit Line of D37Cl Overtone

Due to the lack of an overtone in the DCl spectrum, the DCl overtone data were received from the TA so that the proper calculations could be performed.

Table 17 – Physical constants determined via LINEST regression analysis
Table 18 – Natural Frequencies and Anharmonicity Constants with Errors

The internuclear separation at equilibrium (re) was equal to 1.27 ± 1.2×10⁻³ Å for HCl and 1.27 ± 3.7×10⁻³ Å for DCl, and describes the structure of the molecule. The force constant was equal to 479.968 ± 2.8×10⁻⁷ kg/s² for HCl and 490.21 ± 1.6×10⁻⁶ kg/s² for DCl, and is vital in determining each molecule's bond. Computed per isotopomer from k = 4π²ν²μ, the calculated force constants were 527667, 527781, 522901, and 521422 dynes/cm, against a literature value for HCl of 481000 dynes/cm; these values represent ~10% error.

The literature value for the bond length of HCl is 1.27455 Å in the CRC Handbook of Chemistry and Physics, and the calculated values were 1.274 and 1.275 Å for the two isotopomers. For DCl, the literature value is 1.27458 Å, whereas the calculated values were 1.201 and 1.202 Å. The National Institute of Standards and Technology lists the bond length for both HCl and DCl as 0.12746 nm, leading to errors of 2.78% for hydrogen chloride and 4.35% for deuterium chloride.

## Discussion

The lower absorption frequency of DCl occurred due to the change in the reduced mass (Table 6A under the appendix), from 1.62612×10⁻²⁷ for HCl to 1.904413×10⁻²⁷ for DCl. This makes sense, as DCl has a greater reduced mass than HCl and therefore should have a lower vibrational frequency. In general, the greater mass of an isotope results in absorption at a lower frequency, and as the mass increases the shift decreases; the shift is best seen in the decrease in zero-point energies from HCl to DCl.

The values for the anharmonicity constants and natural frequencies were found with high accuracy and precision; it can be seen from the very small errors on these values that they are both accurate and precise. The values for the equilibrium bond length and the force constant were accurate, but the precision is unknown: unfortunately, the calculations for the error on bond length and force constant would not work out, which is very disappointing. With the caveat that a large error could make the value irrelevant, at least the calculation was accurate, despite the precision being unknown. The force constants are all relatively close together, whereas the equilibrium bond lengths are significantly different; when on the order of Angstroms, 0.07 Å is a lot. This is not surprising, as the increased mass from DCl would affect the equilibrium bond length more than the force constant.

The experiment had one significant source of error: the impurity of the sulfuric acid that was supposed to be deuterated. The inaccurate DCl calculations are most likely due to the severe HCl contamination in the DCl sample; the deuterated sulfuric acid was not as pure as the bottle reported, as shown by the spectrum, where the HCl fundamental impurity shows up to the left of the DCl fundamental. With instruments this sensitive, that can be a big problem, and this is believed to be the main source of error in any of the DCl calculations. The other significant problem was that the overtone in the DCl spectrum did not appear. It is believed that with a truly 99% deuterated sulfuric acid sample, the values would be much more accurate.

As for whether the chlorine isotopes could be separated spectroscopically: one would have to shine the laser at a frequency so narrow that only one of the isotopes absorbs whereas the other would not. I do not think it would work, as the isotopes would not be separable; the instrumentation is not there yet. The chlorine isotopes could, however, be collected by a very high efficiency time-of-flight separator.

References: Halpern, Arthur M., and McBane, George Clyde. Cooley, Joya. Ro-vibrational spectrum figure: http://commons.wikimedia.org/wiki/File:Carbon_monoxide_rotational-vibrational_spectrum.png
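As a closing, appendix-style numerical check on the reported bond lengths, the rotational constant can be inverted for re. The value B ≈ 10.59 cm⁻¹ below is a textbook number for H35Cl, assumed here rather than taken from the report's own LINEST tables.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    const double h  = 6.62607e-34;   /* Planck constant, J s */
    const double c  = 2.99792458e10; /* speed of light in cm/s, since B is in cm^-1 */
    const double mu = 1.62661e-27;   /* reduced mass of 1H35Cl, kg (from above) */
    const double B  = 10.59;         /* cm^-1: assumed textbook rotational constant */

    double I = h / (8.0 * PI * PI * c * B); /* invert B = h / (8 pi^2 c I) */
    double r = sqrt(I / mu);                /* invert I = mu r^2 */
    printf("I = %.4e kg m^2, r_e = %.4f Angstrom\n", I, r * 1e10);
    /* Prints r_e of about 1.275 Angstrom, consistent with the ~1.27 A above. */
    return 0;
}
```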
http://www.numdam.org/item/AIHPC_2009__26_6_2283_0/
A Representation Formula for the Voltage Perturbations Caused by Diametrically Small Conductivity Inhomogeneities. Proof of Uniform Validity

Nguyen, Hoai-Minh; Vogelius, Michael S.
Annales de l'I.H.P. Analyse non linéaire, Volume 26 (2009) no. 6, pp. 2283-2315.
doi: 10.1016/j.anihpc.2009.03.005 | Zbl 1178.35357 | MR 2569895

@article{AIHPC_2009__26_6_2283_0,
  author = {Nguyen, Hoai-Minh and Vogelius, Michael S.},
  title = {A Representation Formula for the Voltage Perturbations Caused by Diametrically Small Conductivity Inhomogeneities. Proof of Uniform Validity},
  journal = {Annales de l'I.H.P. Analyse non lin\'eaire},
  publisher = {Elsevier},
  volume = {26},
  number = {6},
  year = {2009},
  pages = {2283-2315},
  doi = {10.1016/j.anihpc.2009.03.005},
  zbl = {1178.35357},
  mrnumber = {2569895},
  language = {en},
  url = {http://www.numdam.org/item/AIHPC_2009__26_6_2283_0}
}
http://www.ridiculousfish.com/blog/posts/labor-of-division-episode-iii.html
Labor of Division (Episode III): Faster Unsigned Division by Constants

ridiculous_fish
corydoras@ridiculousfish.com
October 19th, 2011

This post is available in a less handy format. There's also a PDF. Comments and discussion are on reddit.

This is a technique fish thought up for improving the performance of unsigned integer division by certain "uncooperative" constants. It does not seem to appear in the literature, nor is it implemented in gcc, llvm, or icc, so fish is optimistic that it is original.

As is well known (and seen in a previous post), compilers optimize unsigned division by constants into multiplication by a "magic number." But not all constants are created equal, and approximately 30% of divisors require magic numbers that are one bit too large, which necessitates special handling (read: are slower). Of these 30%, slightly less than half (46%) are even, which can be handled at a minimum of increased expense (see below); the remaining odd divisors (659 million, including well known celebrity divisors like 7) need a relatively expensive "fixup" after the multiplication. Or so we used to think. This post gives a variation on the usual algorithm that improves performance for these expensive divisors.

This post presents the algorithm, proves it is correct, proves that it applies in every case we care about, and demonstrates that it is faster. It also contains a reference implementation of the full "magic number" algorithm, incorporating this and all known techniques. In other words, it's so darn big that it requires a table of contents.

### Background

Unsigned integer division is one of the slowest operations on a modern microprocessor. When the divisor is known at compile time, optimizing compilers do not emit division instructions, but instead either a bit shift (for a power of 2), or a multiplication by a sort of reciprocal (for non-powers of 2). This second case involves the identity:

$\lfloor \frac n d \rfloor = \lfloor \frac n d \times \frac {2^k} {2^k} \rfloor = \lfloor \frac {2^k} d \times \frac n {2^k} \rfloor$

As d is not a power of 2, $\frac {2^k} d$ is always a fraction. It is rounded up to an integer, which is called a "magic number" because multiplying by it performs division, as if by magic. The rounding-up introduces error into the calculation, but we can reduce that error by increasing k. If k is big enough, the error gets wiped out entirely by the floor, and so we always compute the correct result.

The dividend (numerator) is typically an N bit unsigned integer, where N is the size of a hardware register. For most divisors, k can be small enough that a valid magic number can also fit in N bits or fewer. But for many divisors, there is no such magic number. 7, 14, 19, 31, 42...these divisors require an N+1 bit magic number, which introduces inefficiencies, as the magic number cannot fit in a hardware register. Let us call such divisors "uncooperative."

The algorithm presented here improves the performance of dividing by these uncooperative divisors by finding a new magic number which is no more than N bits. The existing algorithm that generates an N+1 bit magic number for uncooperative divisors will be referred to as the "round-up algorithm", because it rounds the true magic number up. The version presented here will be called the "round-down algorithm".
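To make the cost of an uncooperative divisor concrete, here is a sketch of the standard round-up sequence for d = 7 with N = 32. The constants follow the classic algorithm (the same sequence compilers emit), though `divide_by_7` is a name chosen here, not code from this post's reference implementation.

```c
#include <stdint.h>

/* Round-up magic for d = 7, N = 32: the smallest valid magic number is
   ceil(2^35 / 7) = 0x124924925, a 33-bit value. Only the low 32 bits fit
   in a register, so the lost 2^32 term must be repaired after the multiply. */
static inline uint32_t divide_by_7(uint32_t n) {
    uint32_t t = (uint32_t)(((uint64_t)n * 0x24924925u) >> 32); /* high half of n * (low 32 bits of magic) */
    return (((n - t) >> 1) + t) >> 2; /* overflow-safe (n + t) >> 3, i.e. the +2^32 fixup */
}
```

The subtract/shift/add fixup after the high multiply is precisely the overhead the round-down algorithm below sets out to remove.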
We will say that an algorithm "fails" or "succeeds" according to whether it produces a magic number of N bits or fewer; we will show that either the round-up or round-down algorithm (or both) must succeed for all divisors. All quantities used in the proofs and discussion are non-negative integers.

### A Shift In Time Saves Fourteen

For completeness, it is worth mentioning one additional technique for uncooperative divisors that are even. Consider dividing a 32 bit unsigned integer by 14. The smallest valid magic number for 14 is 33 bits, which is inefficient. However, instead of dividing by 14, we can first divide by 2, and then by 7. While 7 is also uncooperative, the divide by 2 ensures the dividend is only a 31 bit number. Therefore the magic number for the subsequent divide-by-7 only needs to be 32 bits, which can be handled efficiently.

This technique effectively optimizes division by even divisors, and is incorporated in the reference code provided later. Now we present a technique applicable for odd divisors.

### Motivation (aka What Goes Up Can Also Go Down)

First, an appeal to intuition. A divisor is uncooperative in the round-up algorithm because the rounding-up produces a poor approximation. That is, $\frac {2^k} d$ is just slightly larger than some integer, so the approximation $\lceil \frac {2^k} d \rceil$ is off by nearly one, which is a lot. It stands to reason, then, that we could get a better approximation by floor instead of ceil: $m = \lfloor \frac {2^k} d \rfloor$.

A naïve attempt to apply this immediately runs into trouble. Let d be any non-power-of-2 divisor, and consider trying to divide d by itself by multiplying with this magic number:

$\lfloor \frac {2^k} d \rfloor < \frac {2^k} d \implies$

$\lfloor \frac {2^k} d \rfloor \times \frac d {2^k} < \frac {2^k} d \times \frac d {2^k} \implies$

$\lfloor \lfloor \frac {2^k} d \rfloor \times \frac d {2^k} \rfloor < 1$

The result is too small. (Could we replace the outer floor by a ceil? The floor is implemented by a right shift, which throws away the bits that are shifted off. We could conjure up a "rounding up" right shift, and that might work, though it would likely be more expensive than the instructions it replaces.)

So rounding down causes us to underestimate the result. What if we tried to counteract that by incrementing the numerator first?

$\lfloor \frac n d \rfloor \ \stackrel{?}{=} \ \lfloor \lfloor \frac {2^k} d \rfloor \times \frac {\color{#FF3030}{n+1}} {2^k} \rfloor$

This is the round-down algorithm.

### Proof of Correctness

First we must show that the round-down algorithm actually works. We proceed much like the proof for the round-up algorithm. We have a known constant d and a runtime variable n, both N bit values.
We want to find some k that ensures:

$\lfloor \frac n d \rfloor = \lfloor m \times \frac {n+1} {2^k} \rfloor$

where:

$m = \lfloor \frac {2^k} d \rfloor , \quad 0 \le n < 2^{N} , \quad 0 < d < 2^{N} , \quad d \text{ not a power of 2}$

Introduce an integer e which represents the error produced by the floor:

$m = \lfloor \frac {2^k} d \rfloor = \frac {2^k - e} d , \quad 0 < e < d$

Apply some algebra:

$\begin{align} \lfloor m \times \frac {n+1} {2^k} \rfloor & = \lfloor \frac {2^k - e} d \times \frac {n + 1} {2^k} \rfloor \\ & = \lfloor \frac {n + 1} d \times \frac {2^k - e} {2^k} \rfloor \\ & = \lfloor \frac {n + 1} d \times ( 1 - \frac e {2^k} ) \rfloor \\ & = \lfloor \frac {n+1} d - \frac {n+1} d \times \frac e {2^k} \rfloor \\ & = \lfloor \frac n d + \frac 1 d - \frac e d \times \frac {n+1} {2^k} \rfloor \end{align}$

We hope that this equals $\lfloor \frac n d \rfloor$. Within the floor, we see the result, plus two terms of opposite signs. We want the combination of those terms to cancel out to something at least zero, but small enough to be wiped out by the floor. Let us compute the fractional contribution of each term, and show that it is at least zero but less than one.

The fractional contribution of the $\frac n d$ term can be as small as zero and as large as $\frac {d-1} d$. Therefore, in order to keep the whole fractional part at least zero but below one, we require:

$0 \le \frac 1 d - \frac e d \times \frac {n+1} {2^k} < \frac 1 d$

The term $\frac e d \times \frac {n+1}{2^k}$ is always positive, so the $< \frac 1 d$ is easily satisfied. It remains to show it is at least zero. Rearranging:

$0 \le \frac 1 d - \frac e d \times \frac {n+1} {2^k} \implies \frac e d \times \frac {n+1} {2^k} \le \frac 1 d$

This is very similar to the condition required in the round-up algorithm! Let's continue to simplify, using the fact that $n < 2^N$:

$\frac e d \times \frac {n+1} {2^k} \le \frac 1 d \implies e \times \frac {n+1} {2^k} \le 1 \implies \frac e {2^{k-N}} \le 1 \implies e \le 2^{k-N}$

This is the condition that guarantees that our magic number m works. In summary, pick some k ≥ N, and compute $e = 2^k \bmod d$. If the resulting $e \le 2^{k-N}$, the algorithm is guaranteed to produce the correct result for all N-bit dividends.
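Transcribed into code, the search implied by this summary might look like the following sketch (N = 32; the function name and interface are chosen here, not taken from the post's reference implementation):

```c
#include <stdint.h>

/* Find a round-down magic number for divisor d (N = 32).
   On success, n / d == (((uint64_t)m * (n + 1)) >> 32) >> p, modulo the
   n+1 overflow handling discussed later. Returns 1 if a magic was found. */
static int round_down_magic32(uint32_t d, uint32_t *m, uint32_t *p_out) {
    for (uint32_t p = 0; ((uint64_t)1 << p) <= d; p++) { /* p = 0 .. floor(log2 d) */
        uint64_t pow = (uint64_t)1 << (32 + p);          /* 2^(N+p), i.e. k = N + p */
        if (pow % d <= ((uint64_t)1 << p)) {             /* e = 2^k mod d <= 2^(k-N) */
            *m = (uint32_t)(pow / d);                    /* m = floor(2^k / d) < 2^32 */
            *p_out = p;
            return 1;
        }
    }
    return 0; /* cannot happen for an uncooperative d, per the next section */
}
```

For the celebrity divisor 7, this finds p = 1, since 2^33 mod 7 = 1 ≤ 2, giving the 32-bit magic m = ⌊2^33/7⌋ = 0x49249249.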
It would have been valid if $e_{up} < 2^{\lfloor log_2 d \rfloor}$; because it was not valid we must have $e_{up} \ge 2^{\lfloor log_2 d \rfloor}$. Substituting in:

\begin{align} e_{up} & \ge 2^{\lfloor log_2 d \rfloor} \implies \\ d - e_{down} & \ge 2^{\lfloor log_2 d \rfloor} \implies \\ e_{down} & \le d - 2^{\lfloor log_2 d \rfloor} \implies \\ e_{down} & \le 2^{\lceil log_2 d \rceil} - 2^{\lfloor log_2 d \rfloor} \implies \\ e_{down} & \le 2 \times 2^{\lfloor log_2 d \rfloor} - 2^{\lfloor log_2 d \rfloor} \implies \\ e_{down} & \le 2^{\lfloor log_2 d \rfloor} \implies \\ e_{down} & \le 2^{k-N} \end{align}

Thus we've satisfied the condition determined in the proof of correctness. This is an important and remarkable result: the round-down algorithm is guaranteed to have an efficient magic number whenever round-up does not. If the implementation of round-down can be shown to be more efficient, the overflow case for the round-up algorithm can be discarded entirely.

### Recap

Here's the practical algorithm. Given a dividend n and a fixed divisor d, where $0 \le n < 2^N$ and $0 < d < 2^N$, and where the usual round-up algorithm failed to find an N-bit magic number:

1. Consider in turn values of p in the range 0 through $\small \lfloor log_2 d \rfloor$, inclusive.
2. If $\small 2^{N + p}\ \bmod{d} \le 2^p$, then we have found a working p. The last value in the range is guaranteed to work.
3. Once we have a working p, precompute the magic number $\small m = \lfloor \frac {2^{N + p}} d \rfloor$, which will be strictly less than $2^N$.
4. Compute $\small q = (m \times (n+1)) \gg N$. This is typically implemented via a "high multiply" instruction.
5. Perform any remaining shift: $\small q = q \gg p$.

### Overflow Handling

This algorithm has a wrinkle. Because n is an N-bit number, it may be as large as $2^N - 1$, in which event the n+1 term will be an N+1 bit number. If the value is simply incremented in an N-bit register, the dividend will wrap to zero, and the quotient will in turn be zero. Here we present two strategies for efficiently handling the possibility of modulo overflow.

#### Distributed Multiply Strategy

An obvious approach is to distribute the multiply through, i.e.:

$\small m \times (n+1) = m \times n + m$

This is a 2N-bit quantity and so cannot overflow. For efficient implementation, this requires that the low half of the m × n product be available "for free," so that the sum can be performed and any carry transmitted to the high half. Many modern architectures produce both halves with one instruction, such as Intel x86 (the MUL instruction) or ARM (UMULL). It is also available if the register width is twice the bit size of the type, e.g. performing a 32 bit divide on a 64 bit processor.

#### Saturating Increment Strategy

However, other processors compute the low and high halves separately, such as PowerPC; in this case computing the lower half of the product would be prohibitively expensive, and so a different strategy is needed. A second, surprising approach is to simply elide the increment if n is already at its maximum value, i.e. replace the increment with a "saturating increment" defined by:

$$\small \text{SaturInc}(x) = \begin{cases} x+1 & \text{ if } x < 2^N-1 \\ x & \text{ if } x = 2^N-1 \end{cases}$$

It is not obvious why this should work: we needed the increment in the first place, so how can we just skip it? We must show that replacing increment with SaturInc will compute the correct result for $2^N - 1$. A proof of that is presented below.
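Before the proof, it may help to see the saturating increment in code. The following is a minimal C sketch of the divide step (recap steps 4 and 5) for N = 32, with the +1 replaced by a saturating increment. It is only an illustration under the stated assumptions (m and p precomputed as in the recap, d uncooperative); the helper names are arbitrary, not taken from the reference code.

```c
#include <stdint.h>

/* Saturating increment: adds 1, but sticks at 2^32 - 1 instead of wrapping. */
static uint32_t satur_inc(uint32_t x) {
    uint32_t y = x + 1;
    return y == 0 ? x : y; /* y wraps to 0 only when x was 2^32 - 1 */
}

/* Round-down divide, recap steps 4 and 5: q = (m * SaturInc(n)) >> 32 >> p.
   m and p are assumed precomputed for the divisor as in the recap. */
static uint32_t divide_round_down(uint32_t n, uint32_t m, unsigned p) {
    uint64_t product = (uint64_t)m * satur_inc(n); /* full 64-bit product */
    return (uint32_t)((product >> 32) >> p);       /* high half, then shift */
}
```

On hardware, the comparison inside satur_inc is what the add 1; sbb 0 idiom described below accomplishes without a branch.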
#### Proof of Correctness when using Saturating Increment

Consider the practical algorithm presented above, with the +1 replaced by saturating increment. If $\small n < 2^N-1$, then saturating increment is the same as +1, so the proof from before holds. Therefore assume that $\small n = 2^N-1$, so that incrementing n would wrap to 0. By inspection, $\small \text{SaturInc}(2^N - 1) = \text{SaturInc}(2^N - 2)$. Because the algorithm has no other dependence on n, replacing the +1 with SaturInc effectively causes the algorithm to compute the quotient $\lfloor \frac {2^N - 2} d \rfloor$ when $n = 2^N-1$.

Now d either is or is not a factor of $2^N-1$. Let's start by assuming it is not a factor. It is easy to prove that, if x and y are positive integers and y is not a factor of x, then $\lfloor \frac x y \rfloor = \lfloor \frac {x-1} y \rfloor$. Therefore it must be true that $\lfloor \frac {2^N - 1} d \rfloor = \lfloor \frac {2^N - 2} d \rfloor$, so the algorithm computes the correct quotient.

Now let us consider the case where d is a factor of $2^N-1$. We will prove that d is cooperative, i.e. the round-up algorithm produced an efficient N-bit result for d, and therefore the round-down algorithm is never employed. Because d is a factor of $2^N-1$, we have $\small 2^N\ \bmod{d} = 1$. Consider once again the case of the "last N-bit magic number," i.e.:

$\small k = N + \lceil log_2 d \rceil - 1 = N + \lfloor log_2 d \rfloor$

Recall that the round-up algorithm computes $\small e_{up} = d - (2^k\ \bmod{d})$. This power is acceptable to the round-up algorithm if $\small e_{up} \leq 2^{k - N} = 2^{\lfloor log_2 d \rfloor}$. Consider:

\begin{align} 2^k\ \bmod{d} & = 2^{N + \lfloor log_2 d \rfloor}\ \bmod{d} \\ & = 2^N \times 2^{\lfloor log_2 d \rfloor}\ \bmod{d} \\ & = 1 \times 2^{\lfloor log_2 d \rfloor}\ \bmod{d} \\ & = 2^{\lfloor log_2 d \rfloor} \end{align}

Substituting in, and using the fact that $d < 2^{\lceil log_2 d \rceil} = 2 \times 2^{\lfloor log_2 d \rfloor}$ because d is not a power of 2:

\begin{align} e_{up} & = d - 2^{\lfloor log_2 d \rfloor} \\ & < 2^{\lceil log_2 d \rceil} - 2^{\lfloor log_2 d \rfloor} \\ & = 2 \times 2^{\lfloor log_2 d \rfloor} - 2^{\lfloor log_2 d \rfloor} \\ & = 2^{\lfloor log_2 d \rfloor} \end{align}

Thus the power k is acceptable to the round-up algorithm, so d is cooperative and the round-down algorithm is never employed. Thus a saturating increment is acceptable for all uncooperative divisors. Q.E.D.

(As an interesting aside, this last proof demonstrates that all factors of $2^N-1$ "just barely" have efficient N-bit magic numbers. For example, the divisor 16,711,935 is a factor of $2^{32}-1$, and its magic number, while N bits, requires a shift of 23, which is large; in fact it is the largest possible shift, as the floor of the base 2 log of that divisor. But increase the divisor by just one (16711936) and only a 16 bit shift is necessary.)

In summary, distributing the multiplication or using a saturating increment are both viable strategies for avoiding wrapping in the n+1 expression, ensuring that the algorithm works over the whole range of dividends. Implementations can use whichever technique is most efficient¹.

### Practical Implementation

The discussion so far is only of theoretical interest; it becomes of practical interest if the round-down algorithm can be shown to outperform round-up on uncooperative divisors. This is what will be demonstrated below for x86 processors. x86 processors admit an efficient saturating increment via the two-instruction sequence add 1; sbb 0; (i.e. "add; subtract 0 with borrow"). They also admit an efficient distributed multiply.
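To make the precomputation concrete before looking at the generated code, here is a minimal C sketch of recap steps 1 through 3 for N = 32. It is only an illustration under the stated assumptions (d odd and not a power of 2; even divisors can first be reduced via the shift trick described earlier); the authoritative version is the reference implementation linked at the end of this article, and the function name here is arbitrary.

```c
#include <stdint.h>

/* Find the round-down magic number m and shift p for a divisor d, N = 32.
   Returns 1 on success; per the universality proof, some p in
   0..floor(log2 d) must succeed whenever d is uncooperative. */
static int round_down_magic_u32(uint32_t d, uint32_t *m, unsigned *p) {
    unsigned floor_log2_d = 31;
    while (((uint32_t)1 << floor_log2_d) > d)
        floor_log2_d--;                               /* floor(log2 d) */
    for (unsigned shift = 0; shift <= floor_log2_d; shift++) {
        uint64_t power = (uint64_t)1 << (32 + shift); /* 2^(N+p); fits, p <= 31 */
        if (power % d <= ((uint64_t)1 << shift)) {    /* 2^(N+p) mod d <= 2^p */
            *m = (uint32_t)(power / d);               /* floor(2^(N+p) / d) */
            *p = shift;
            return 1;
        }
    }
    return 0; /* d was cooperative; the standard round-up magic applies */
}
```

For d = 7, for example, this yields p = 1 and m = 1227133513, which are exactly the constants visible in the assembly below.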
The author implemented this optimization in the LLVM compiler using both strategies in turn, and then compiled the following C code, which simply divides a value by 7, using clang -O3 -S -arch i386 -fomit-frame-pointer (this last flag for brevity):

```c
unsigned int sevens(unsigned int x) {
    return x / 7;
}
```

Here is a comparison of the generated i386 assembly. (x86-64 assembly produced essentially the same insights, and so is omitted.)

Round-Up (Stock LLVM)

```asm
_sevens:
	movl	4(%esp), %ecx
	movl	$613566757, %edx
	movl	%ecx, %eax
	mull	%edx
	subl	%edx, %ecx
	shrl	%ecx
	addl	%edx, %ecx
	shrl	$2, %ecx
	movl	%ecx, %eax
	ret
```

Distributive

```asm
_sevens:
	movl	$1227133513, %eax
	mull	4(%esp)
	addl	$1227133513, %eax
	adcl	$0, %edx
	shrl	%edx
	movl	%edx, %eax
	ret
```

Saturating Increment

```asm
_sevens:
	movl	4(%esp), %eax
	addl	$1, %eax
	sbbl	$0, %eax
	movl	$1227133513, %ecx
	mull	%ecx
	shrl	%edx
	movl	%edx, %eax
	ret
```

The round-down algorithms not only avoid the three-instruction overflow handling, but also avoid needing to store the dividend past the multiply (notice the extra MOVL instruction in the round-up listing, which keeps the dividend available after the multiply). The result is a net saving of two instructions. Also notice that the variants require fewer registers, which suggests there might be even more payoff (i.e. fewer register spills) when the divide is part of a longer code sequence.

(In the distributive variant the compiler has made the dubious choice to emit the same immediate twice instead of placing it in a register. This is especially deleterious in the loop microbenchmark shown below, because loading the immediate into a register could be hoisted out of the loop. To address this, the microbenchmark tests both the assembly as generated by LLVM, and a version tweaked by hand to address this suboptimal codegen.)

As illustrated, both strategies require only two extra instructions on x86, which is important because the overhead of the round-up algorithm is three to four instructions. Many processor architectures admit a two-instruction saturating increment through the carry flag².

### Microbenchmark

To measure the performance, the author compiled a family of functions.
Each function accepts an array of unsigned ints, divides them by a particular uncooperative divisor, and returns the sum; for example:

```c
uint divide_7(const uint *x, size_t count) {
    uint result = 0;
    while (count--) {
        result += *x++ / 7;
    }
    return result;
}
```

Each function in the family had very similar machine code; a representative sample is:

Standard Round-Up

```asm
_divide_7:
	pushl	%ebp
	movl	%esp, %ebp
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	xorl	%ecx, %ecx
	movl	12(%ebp), %edi
	testl	%edi, %edi
	je	LBB1_3
	movl	8(%ebp), %ebx
LBB1_2:
	movl	(%ebx), %esi
	movl	%esi, %eax
	movl	$613566757, %edx
	mull	%edx
	subl	%edx, %esi
	shrl	%esi
	addl	%edx, %esi
	shrl	$2, %esi
	addl	%esi, %ecx
	addl	$4, %ebx
	decl	%edi
	jne	LBB1_2
LBB1_3:
	movl	%ecx, %eax
	popl	%esi
	popl	%edi
	popl	%ebx
	popl	%ebp
	ret
```

Distributive (hand tweaked)

```asm
_divide_7:
	pushl	%ebp
	movl	%esp, %ebp
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	xorl	%ecx, %ecx
	movl	12(%ebp), %esi
	testl	%esi, %esi
	je	LBB0_3
	movl	8(%ebp), %edi
	movl	$1227133513, %ebx
LBB0_2:
	movl	(%edi), %eax
	mull	%ebx
	addl	%ebx, %eax
	adcl	$0, %edx
	shrl	%edx
	addl	%edx, %ecx
	addl	$4, %edi
	decl	%esi
	jne	LBB0_2
LBB0_3:
	movl	%ecx, %eax
	popl	%esi
	popl	%edi
	popl	%ebx
	popl	%ebp
	ret
```

Saturating Increment

```asm
_divide_7:
	pushl	%ebp
	movl	%esp, %ebp
	pushl	%ebx
	pushl	%edi
	pushl	%esi
	xorl	%ecx, %ecx
	movl	12(%ebp), %esi
	testl	%esi, %esi
	je	LBB1_3
	movl	8(%ebp), %edi
	movl	$1227133513, %ebx
LBB1_2:
	movl	(%edi), %eax
	addl	$1, %eax
	sbbl	$0, %eax
	mull	%ebx
	shrl	%edx
	addl	%edx, %ecx
	addl	$4, %edi
	decl	%esi
	jne	LBB1_2
LBB1_3:
	movl	%ecx, %eax
	popl	%esi
	popl	%edi
	popl	%ebx
	popl	%ebp
	ret
```

A simple test harness was constructed and the above functions were benchmarked to estimate the time per divide. The benchmark was compiled with clang on -O3, and run on a 2.93 GHz Core i7 iMac. Test runs were found to differ by less than .1%.

#### Nanoseconds Per Divide

| Config | Divisor | Round Up | Saturating Increment | Distribute (as generated) | Distribute (hand tweaked) |
|---|---|---|---|---|---|
| i386, uint32 | 7 | 1.632 | 1.484 (9.1%) | 1.488 (8.9%) | 1.433 (12.2%) |
| | 37 | 1.631 | 1.483 (9.1%) | 1.486 (8.9%) | 1.433 (12.1%) |
| | 123 | 1.633 | 1.484 (9.1%) | 1.488 (8.9%) | 1.432 (12.3%) |
| | 763 | 1.632 | 1.483 (9.1%) | 1.487 (8.9%) | 1.432 (12.2%) |
| | 1247 | 1.633 | 1.484 (9.1%) | 1.491 (8.7%) | 1.433 (12.2%) |
| | 9305 | 1.631 | 1.484 (9.0%) | 1.491 (8.6%) | 1.439 (11.7%) |
| | 13307 | 1.632 | 1.483 (9.1%) | 1.489 (8.7%) | 1.437 (11.9%) |
| | 52513 | 1.631 | 1.483 (9.1%) | 1.490 (8.7%) | 1.432 (12.2%) |
| | 60978747 | 1.631 | 1.484 (9.0%) | 1.488 (8.8%) | 1.434 (12.1%) |
| | 106956295 | 1.631 | 1.484 (9.0%) | 1.489 (8.7%) | 1.433 (12.1%) |
| x86_64, uint32 | 7 | 1.537 | 1.307 (14.9%) | 1.548 (-0.7%) | 1.362 (11.4%) |
| | 37 | 1.538 | 1.307 (15.0%) | 1.548 (-0.7%) | 1.362 (11.4%) |
| | 123 | 1.537 | 1.319 (14.2%) | 1.547 (-0.6%) | 1.361 (11.5%) |
| | 763 | 1.536 | 1.306 (15.0%) | 1.547 (-0.8%) | 1.356 (11.7%) |
| | 1247 | 1.538 | 1.322 (14.1%) | 1.549 (-0.7%) | 1.358 (11.7%) |
| | 9305 | 1.543 | 1.322 (14.3%) | 1.550 (-0.5%) | 1.361 (11.8%) |
| | 13307 | 1.545 | 1.322 (14.4%) | 1.550 (-0.3%) | 1.357 (12.1%) |
| | 52513 | 1.541 | 1.307 (15.2%) | 1.550 (-0.6%) | 1.361 (11.7%) |
| | 60978747 | 1.538 | 1.322 (14.0%) | 1.549 (-0.7%) | 1.358 (11.7%) |
| | 106956295 | 1.537 | 1.322 (14.0%) | 1.551 (-0.9%) | 1.360 (11.5%) |
| x86_64, uint64 | 7 | 1.823 | 1.588 (12.9%) | 1.505 (17.4%) | n/a |
| | 39 | 1.821 | 1.589 (12.7%) | 1.506 (17.3%) | n/a |
| | 123 | 1.821 | 1.592 (12.6%) | 1.506 (17.3%) | n/a |
| | 763 | 1.822 | 1.592 (12.6%) | 1.505 (17.4%) | n/a |
| | 1249 | 1.822 | 1.589 (12.8%) | 1.506 (17.4%) | n/a |
| | 9311 | 1.822 | 1.587 (12.9%) | 1.507 (17.3%) | n/a |
| | 11315 | 1.822 | 1.588 (12.8%) | 1.506 (17.4%) | n/a |
| | 52513 | 1.823 | 1.591 (12.7%) | 1.506 (17.4%) | n/a |
| | 60978749 | 1.822 | 1.590 (12.7%) | 1.507 (17.3%) | n/a |
| | 106956297 | 1.821 | 1.588 (12.8%) | 1.506 (17.3%) | n/a |

Microbenchmark results for tested division algorithms on a Core i7. The top group is for 32 bit division in a 32 bit binary, while the bottom two groups are 32 bit and 64 bit division (respectively) in a 64 bit binary. Times are in nanoseconds per divide (lower is better).
Percentages are percent improvement over the Round Up algorithm (higher is better). These results indicate that the round-down algorithms are indeed faster by 9%-17% (excluding the crummy codegen, which should be fixed in the compiler). The benchmark source code is available at http://ridiculousfish.com/files/division_benchmarks.tar.gz.

### Extension to Signed Division

A natural question is whether the same optimization could improve signed division; unfortunately it appears that it does not, for two reasons:

• The increment of the dividend must become an increase in the magnitude, i.e. increment if n > 0, decrement if n < 0. This introduces an additional expense.
• The penalty for an uncooperative divisor is only about half as much in signed division, leaving a smaller window for improvements.

Thus it appears that the round-down algorithm could be made to work in signed division, but will underperform the standard round-up algorithm.

### Reference Code

The reference implementation for computing the magic number due to Henry Warren ("Hacker's Delight") is rather dense, and it may not be obvious how to incorporate the improvements presented here. To ease adoption, we present a reference implementation written in C that incorporates all known optimizations, including the round-down algorithm. This new reference implementation is available at https://raw.github.com/ridiculousfish/libdivide/master/divide_by_constants_codegen_reference.c

### Conclusion

The following algorithm is an alternative way to do division by "uncooperative" constants, which may outperform the standard algorithm that produces an N+1 bit magic number. Given a dividend n and a fixed divisor d, where $0 \le n < 2^N$ and $0 < d < 2^N$, and where the standard algorithm failed to find an N-bit magic number:

1. Consider in turn values of p in the range 0 through $\small \lfloor log_2 d \rfloor$, inclusive.
2. If $\small 2^{N + p}\ \bmod{d} \le 2^p$, then we have found a working p. The last value in the range is guaranteed to work (assuming the standard algorithm fails).
3. Once we have a working p, precompute the magic number $\small m = \lfloor \frac {2^{N + p}} d \rfloor$, which will be strictly less than $2^N$.
4. To divide n by d, compute the value q through one of the following techniques:
   • Compute $\small q = (m \times n + m) \gg N$, OR
   • Compute $\small q = (m \times (n+1)) \gg N$. If n+1 may wrap to zero, it is acceptable to use a saturating increment instead.
5. Perform any remaining shift: $\small q = q \gg p$.

On a Core i7 x86 processor, a microbenchmark showed that this variant "round down" algorithm outperformed the standard algorithm in both 32 bit and 64 bit modes by 9% to 17%, and in addition generated shorter code that used fewer registers. Furthermore, the variant algorithm is no more difficult to implement than the standard algorithm. The author has provided a reference implementation and begun some preliminary work towards integrating this algorithm into LLVM, and hopes other compilers will adopt it.

#### Footnotes

1. Of course, if n can statically be shown to not equal $2^N-1$, then the increment can be performed without concern for modulo overflow. This likely occurs frequently due to the special nature of the value $2^N-1$.
2. Many processor architectures admit a straightforward saturating increment by use of the carry flag. PowerPC at first blush appears to be an exception: it has somewhat unusual carry flag semantics, and the obvious approach requires three instructions: li r2, 0; subfic r2, r3, -2
https://indico.uu.se/event/317/timetable/?view=standard
# 20th European Young Statisticians Meeting

Ångströmslaboratoriet, Uppsala University (Europe/Stockholm)

Description: The European Young Statisticians Meetings are held every two years under the auspices of the European Regional Committee of the Bernoulli Society. More information is available in the guidelines and more recent remarks for their organisation. The idea of the meeting is to provide young researchers (less than thirty years of age, or with two to eight years of research experience) with an introduction to the international scene within the broad subject area, from pure probability theory to applied statistics. Every participant is expected to submit an abstract and a short paper for the conference proceedings and to give a twenty minute talk introducing his/her research field to a wide audience. There are no parallel sessions. The 20th EYSM will be held in Uppsala, Sweden on August 14-18 2017.

Group photos from August 16: Anatomical theatre 1 - Anatomical theatre 2 - In front of the cathedral

Schedule and abstracts: The schedule is found here, and the abstracts here.

Main speakers:

Speaker information: Each speaker should prepare a 20 minute talk, using either a pdf/Powerpoint presentation or the blackboard. 30 minutes will be devoted to each talk, to allow for questions and discussion.

Venue: The main venue for the conference is the Ångström laboratory. To get to the Ångström laboratory from Selmas Hytt, you can either take a 15 minute walk along the river, or take a bus from the nearby Uppsala Akademiska sjukhuset bus stop (bus 1 towards Ultuna). Please note that the city buses do not accept cash payment, but it is possible to pay for your ride using a credit card. On Wednesday the 16th the conference will take place in Uppsala University's Museum Gustavianum building in the city centre. A map showing the Ångström laboratory, the Museum Gustavianum, the Selmas Hytt hostel and the Uppsala train station is available here.

Hostel: For those of you who have booked a room at the Selmas Hytt hostel through us, your room is available Sunday-Friday. You can check in at the hostel on the Sunday (or later). Sheets, towels, free wifi and breakfast are included.

Getting here: The Uppsala city centre is easily accessible from the Stockholm-Arlanda airport through buses (46 minute trip) and trains (18 minute trip). Tickets can be bought at the airport. Please note that there are three different companies operating trains between Arlanda and Uppsala (SJ, UL and SL) and that tickets bought from one of these are only valid on trains operated by that particular company. If you prefer to take a taxi from the airport, expect the cost to be in the region of €55 for 1-4 passengers.

Local organising committee: Måns Thulin and Tilo Wiklund, the Department of Statistics and the Department of Mathematics, Uppsala University.

Participants: Agnieszka Prochenka • Ali Charkhi • Andrius Buteikis • Bastien Marquis • Birgit Sollie • Bojana Milošević • Carmen Minuesa Abril • Christos Merkatas • Dmytro Zatula • Ivan Papić • Johanna Ärje • Joni Virta • Joonas Sova • José Luis Torrecilla • Kateřina Konečná • Maria Pitsillou • Marie Turcicova • Maud Thomas • Michael Hoffmann • Niels Olsen • Nikolay Nikolov • Nina Munkholt Jakobsen
• O. Ozan Evkaya • Oksana Chernova • Plamen Trayanov • Samuel Rosa • Tobias Fissler • Yoav Zemel • Zuzana Rošťáková

### Monday, 14 August

All sessions take place in the Ångströmslaboratoriet, Uppsala University.

• 09:00–09:30 Registration
• 09:30–10:30 Introduction
• 10:30–11:00 Coffee
• 11:00–12:00 Invited speaker: Sequential Monte Carlo: basic principles and algorithmic inference

Sequential Monte Carlo methods form a class of genetic-type algorithms sampling, on-the-fly and in a very general context, sequences of probability measures. Today these methods constitute a standard device in the statistician's tool box and are successfully applied within a wide range of scientific and engineering disciplines. This talk is split into two parts, where the first provides an introduction to the SMC methodology and the second discusses some novel results concerning the stochastic stability and variance estimation in SMC.

Speaker: Jimmy Olsson (Royal Institute of Technology)

• 12:00–13:00 Lunch
• 13:00–14:00 Invited speaker: Sequential Monte Carlo: basic principles and algorithmic inference (second part; abstract as above)

Speaker: Jimmy Olsson (Royal Institute of Technology)

• 14:00–14:30 Can humans be replaced by computers in taxa recognition?

Biomonitoring of waterbodies is vital as the number of anthropogenic stressors on aquatic ecosystems keeps growing. However, the continuous decrease in funding makes it impossible to meet monitoring goals or sustain traditional manual sample processing. We review what kind of statistical tools can be used to enhance the cost efficiency of biomonitoring: we explore automated identification of freshwater macroinvertebrates, which are used as one indicator group in biomonitoring of aquatic ecosystems. We present the first classification results of a new imaging system producing multiple images per specimen. Moreover, these results are compared with the results of human experts. On a data set of 29 taxonomical groups, automated classification produces a higher average accuracy than human experts.

Speaker: Dr Johanna Ärje (University of Jyväskylä, Department of Mathematics and Statistics)

• 14:30–15:00 Nonparametric estimation of gradual change points in the jump behaviour of an Ito semimartingale

In applications the properties of a stochastic feature often change gradually rather than abruptly, that is: after a constant phase for some time they slowly start to vary.
The goal of this talk is to introduce an estimator for the location of a gradual change point in the jump characteristic of a discretely observed Ito semimartingale. To this end we propose a measure of time variation for the jump behaviour of the process, and consistency of the desired estimator is a consequence of weak convergence of a suitable empirical process in some function space. Finally, we discuss simulation results which verify that the new estimator has advantages compared to the classical argmax-estimator.

Speaker: Mr Michael Hoffmann (Ruhr-Universität Bochum)

• 15:00–15:30 Coffee
• 15:30–16:00 Matrix Independent Component Analysis

Independent component analysis (ICA) is a popular means of dimension reduction for vector-valued random variables. In this short note we review its extension to arbitrary tensor-valued random variables by considering the special case of two dimensions, where the tensors are simply matrices.

Speaker: Mr Joni Virta (University of Turku)

• 16:00–16:30 AIC post-selection inference in linear regression

Post-selection inference has been considered a crucial topic in data analysis. In this article, we develop a new method to obtain correct inference after model selection by Akaike's information criterion (Akaike, 1973) in linear regression models. Confidence intervals can be calculated by incorporating the randomness of the model selection in the distribution of the parameter estimators, which act as pivotal quantities. Simulation results show the accuracy of the proposed method.

Speaker: Mr Ali Charkhi (KU Leuven)

• 16:30–17:00 Multilevel Functional Principal Component Analysis for Unbalanced Data

Functional principal component analysis (FPCA) is the key technique for dimensionality reduction and detection of the main directions of variability present in functional data. However, it is not the most suitable tool for the situation when the analyzed dataset contains repeated or multiple observations, because information about the repeatability of measurements is not taken into account. Multilevel functional principal component analysis (MFPCA) is a modified version of FPCA developed for data observed at multiple visits. The original MFPCA method was designed for balanced data only, where the same number of measurements is available for each subject. In this article we propose a modified MFPCA algorithm which can be applied to unbalanced functional data; that is, in the situation where a different number of observations can be present for every subject. The modified algorithm is validated and tested on real-world sleep data.

Speaker: Zuzana Rošťáková (Institute of Measurement Science, Slovak Academy of Sciences)

### Tuesday, 15 August

All sessions take place in the Ångströmslaboratoriet, Uppsala University.

• 09:00–10:00 Invited speaker: Non-limiting spatial extremes

Many questions concerning environmental risk can be phrased as spatial extreme value problems. Classical extreme value theory provides limiting models for maxima or threshold exceedances of a wide class of underlying spatial processes.
These models can then be fitted to suitably defined extremes of spatial datasets and used, for example, to estimate the probability of events more extreme than we have observed to date. However, a major practical problem is that frequently the data do not appear to follow these limiting models at observable levels, and assuming otherwise leads to bias in the estimation of rare event probabilities. To deal with this we require models that allow flexibility both in what the limit should be and in the mode of convergence towards it. I will present a construction for such a model and discuss its application to some wave height data from the North Sea.

• 10:00–10:30 Coffee
• 10:30–11:00 Delete or Merge Regressors algorithm

This paper addresses a problem of linear and logistic model selection in the presence of both continuous and categorical predictors. In the literature two types of algorithms dealing with this problem can be found. The first one, the well known group lasso (\cite{group}), selects a subset of continuous and a subset of categorical predictors; hence, it either deletes an entire factor or not. The second one is CAS-ANOVA (\cite{cas}), which selects a subset of continuous predictors and partitions of factors; therefore, it merges levels within factors. Both these algorithms are based on the lasso regularization. In the article a new algorithm called DMR (Delete or Merge Regressors) is described. Like CAS-ANOVA it selects a subset of continuous predictors and partitions of factors. However, instead of using regularization, it is based on a stepwise procedure, where in each step either one continuous variable is deleted or two levels of a factor are merged. The order of accepting consecutive hypotheses is based on sorting t-statistics for linear regression and likelihood ratio test statistics for logistic regression. The final model is chosen according to an information criterion. Some of the preliminary results for DMR are described in \cite{pro}. The DMR algorithm works only for data sets where $p < n$ (the number of columns in the model matrix is smaller than the number of observations). In the paper a modification of DMR called DMRnet is introduced that works also for data sets where $p \gg n$. DMRnet uses regularization in the screening step and DMR after decreasing the model matrix to $p < n$. The theoretical results are proofs that DMR for linear and logistic regression is a consistent model selection method even when $p$ tends to infinity with $n$. Furthermore, upper bounds on the error of selection are given. The practical results are based on an analysis of real data sets and simulation setups. It is shown that DMRnet chooses smaller models with no higher prediction error than the competitive methods. Furthermore, in simulations it most often gives the highest rate of true model selection.

Speaker: Dr Agnieszka Prochenka (Warsaw University)

• 11:00–11:30 The Elicitation Problem

Competing point forecasts for functionals such as the mean, a quantile, or a certain risk measure are commonly compared in terms of loss functions. These should be incentive compatible, i.e., the expected score should be minimized by the correctly specified functional of interest. A functional is called *elicitable* if it possesses such an incentive compatible loss function.
With the squared loss and the absolute loss, the mean and the median possess such incentive compatible loss functions, which means they are elicitable. In contrast, the variance and Expected Shortfall are not elicitable. Besides investigating the elicitability of a functional, it is important to determine the whole class of incentive compatible loss functions, as well as to give recommendations on which loss function to use in practice, taking into account secondary quality criteria of loss functions such as order-sensitivity, convexity, or homogeneity.

Speaker: Dr Tobias Fissler (University of Bern)

• 11:30–12:00 Testing independence for multivariate time series by the auto-distance correlation matrix

We introduce the notions of multivariate auto-distance covariance and correlation functions for time series analysis. These concepts have recently been discussed in the context of both independent and dependent data, but we extend them in a different direction by putting forward their matrix version. Their matrix version allows us to identify possible interrelationships among the components of a multivariate time series. Interpretation and consistent estimators of these new concepts are discussed. Additionally, we develop a test of the i.i.d. hypothesis for multivariate time series data. The resulting test statistic performs better than the standard multivariate Ljung-Box test statistic. All the above methodology is included in the R package dCovTS, which is briefly introduced in this talk.

Speaker: Dr Maria Pitsillou (Department of Mathematics & Statistics, Cyprus)

• 12:00–13:00 Lunch
• 13:00–13:30 Some recent characterization based goodness of fit tests

In this paper some recent advances in goodness of fit testing are presented. Special attention is given to goodness of fit tests based on equidistribution and independence characterizations. New concepts are described through some modern exponentiality tests. Their natural generalizations are also proposed. All tests are compared in the Bahadur sense.

Speaker: Dr Bojana Milošević (Faculty of Mathematics)

• 13:30–14:00 Methods for bandwidth detection in kernel conditional density estimations

This contribution is focused on kernel conditional density estimation (KCDE). The estimation depends on the smoothing parameters, which influence the final density estimate significantly. This is the reason why a data-driven method is needed for bandwidth estimation. In this contribution, the cross-validation method, the iterative method and the maximum likelihood approach are employed for the bandwidth selection of the estimator. An application to a real data set is included and the proposed methods are compared.

Speaker: Ms Katerina Konecna (Masaryk University)

• 14:00–14:30 Controlled branching processes in Biology: a model for cell proliferation

Branching processes are relevant models in the development of theoretical approaches to problems in applied fields such as, for instance, growth and extinction of populations, biology, epidemiology, cell proliferation kinetics, genetics, and algorithms and data structures.
The most basic model, the so-called Bienaymé-Galton-Watson process, consists of individuals that reproduce independently of the others following the same probability distribution, known as the offspring distribution. A natural generalization is to incorporate a random control function which determines the number of progenitors in each generation. The resulting process is called a controlled branching process. In this talk, we deal with a problem arising in cell biology. More specifically, we focus our attention on experimental data generated by time-lapse video recording of cultured in vitro oligodendrocyte cells. In A.Y. Yakovlev et al. (2008) (Branching Processes as Models of Progenitor Cell Populations and Estimation of the Offspring Distributions, *Journal of the American Statistical Association*, 103(484):1357-1366), a two-type age dependent branching process with emigration is considered to describe the kinetics of cell populations. The two types of cells considered are referred to as type $T_1$ (immediate precursors of oligodendrocytes) and type $T_2$ (terminally differentiated oligodendrocytes). The reproduction process of these cells is as follows: when stimulated to divide under in vitro conditions, the progenitor cells are capable of producing either their direct progeny (two daughter cells of the same type) or a single, terminally differentiated nondividing oligodendrocyte. Moreover, censoring effects as a consequence of the migration of progenitor cells out of the microscopic field of observation are modelled as a process of emigration of the type $T_1$ cells. In this work, we propose a two-type controlled branching process to describe the embedded discrete branching structure of the age-dependent branching process aforementioned. We address the estimation of the offspring distribution of the cell population from a Bayesian outlook by making use of disparities. The importance of this problem lies in the fact that the behaviour of these populations is strongly related to the main parameters of the offspring distribution, and in practice these values are unknown and their estimation is necessary. The proposed methodology, introduced in M. González et al. (2017) (Robust estimation in controlled branching processes: Bayesian estimators via disparities, *work in progress*), is illustrated with an application to the real data set given in A.Y. Yakovlev et al. (2008).

Speaker: Ms Carmen Minuesa Abril (University of Extremadura)

• 14:30–15:00 Parameter Estimation for Discretely Observed Infinite-Server Queues with Markov-Modulated Input

The Markov-modulated infinite-server queue is a queueing system with infinitely many servers, where the arrivals follow a Markov-modulated Poisson process (MMPP), i.e. a Poisson process with a rate modulating between several values. The modulation is driven by an underlying and unobserved continuous time Markov chain $\{X_t\}_{t\geq 0}$. The inhomogeneous rate of the Poisson process, $\lambda(t)$, stochastically alternates between $d$ different rates, $\lambda_1,\dots,\lambda_d$, in such a way that $\lambda(t) = \lambda_i$ if $X_t = i$, $i=1,\dots,d$. We are interested in estimating the parameters of the arrival process for this queueing system based on observations of the queue length at discrete times only. We assume exponentially distributed service times with rate $\mu$, where $\mu$ is time-independent and known.
Estimation of the parameters of the arrival process has not yet been studied for this particular queueing system. Two types of missing data are intrinsic to the model, which complicates the estimation problem. First, the underlying continuous time Markov chain in the Markov-modulated arrival process is not observed. Second, the queue length is only observed at a finite number of discrete time points. As a result, it is not possible to distinguish the number of arrivals and the number of departures between two consecutive observations. In this talk we show how to derive an explicit algorithm to find maximum likelihood estimates of the parameters of the arrival process, making use of the EM algorithm. Our approach extends the one used in Okamura et al. (2009), where the parameters of an MMPP are estimated based on observations of the process at discrete times. However, in contrast to our setting, Okamura et al. (2009) do not consider departures and therefore do not deal with the second type of missing data. We illustrate the accuracy of the proposed estimation algorithm with a simulation study. Reference: Okamura H., Dohi T., Trivedi K.S. (2009). Markovian Arrival Process Parameter Estimation With Group Data. IEEE/ACM Transactions on Networking, Vol. 17, No. 4, pp. 1326-1339.

Speaker: Ms Birgit Sollie (Vrije Universiteit Amsterdam)

• 15:00–15:30 Coffee
• 15:30–16:00 Fréchet means and Procrustes analysis in Wasserstein space

We consider three interlinked problems in stochastic geometry: (1) constructing optimal multicouplings of random vectors; (2) determining the Fréchet mean of probability measures in Wasserstein space; and (3) registering collections of randomly deformed spatial point processes. We demonstrate how these problems are canonically interpreted through the prism of the theory of optimal transportation of measure on $\mathbb{R}^d$. We provide explicit solutions in the one dimensional case, consistently solve the registration problem and establish convergence rates and a (tangent space) central limit theorem for Cox processes. When $d>1$, the solutions are no longer explicit and we propose a steepest descent algorithm for deducing the Fréchet mean in problem (2). Supplemented by uniform convergence results for the optimal maps, this furnishes a solution to the multicoupling problem (1). The latter is then utilised, as in the case $d=1$, in order to construct consistent estimators for the registration problem (3). While the consistency results parallel their one-dimensional counterparts, their derivation requires more sophisticated techniques from convex analysis. This is joint work with Victor M. Panaretos.

Speaker: Dr Yoav Zemel (Ecole polytechnique fédérale de Lausanne)

• 16:00–16:30 Modeling of vertical and horizontal variation in multivariate functional data

We present a model for multivariate functional data that simultaneously models vertical and horizontal variation. Horizontal variation is modeled using warping functions represented by latent Gaussian variables. Vertical variation is modeled using Gaussian processes with a generally applicable low-parametric covariance structure. We devise a method for maximum likelihood estimation using a Laplace approximation and apply it to three different data sets.
Speaker: Mr Niels Olsen (Københavns Universitet)

• 16:30–17:00 Best Unbiased Estimators for Doubly Multivariate Data

The article addresses the best unbiased estimators of the block compound symmetric covariance structure for m-variate observations with an equal mean vector over each level of factor or each time point (a model with structured mean vector). Under multivariate normality, the free-coordinate approach is used to obtain unbiased linear and quadratic estimates for the model parameters. Optimality of these estimators follows from sufficiency and completeness of their distributions. Additionally, strong consistency is proven. The properties of the estimators in the proposed model are compared with the ones in the model with unstructured mean vector (where the mean vector changes over levels of factor or time points).

Speaker: Mr Arkadiusz Kozioł (Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, Szafrana 4a, 65-516 Zielona Góra, Poland)

### Wednesday, 16 August

Sessions take place in the Museum Gustavianum, Uppsala University, except where noted.

• 09:00–09:30 Inference on covariance matrices and operators using concentration inequalities

In the modern era of high and infinite dimensional data, classical statistical methodology is often rendered inefficient and ineffective when confronted with such big data problems as arise in genomics, medical imaging, speech analysis, and many other areas of research. Many problems manifest when the practitioner is required to take into account the covariance structure of the data during his or her analysis, which takes on the form of either a high dimensional low rank matrix or a finite dimensional representation of an infinite dimensional operator acting on some underlying function space. Thus, we propose using tools from the concentration of measure literature to construct rigorous descriptive and inferential statistical methodology for covariance matrices and operators. A variety of concentration inequalities are considered, which allow for the construction of nonasymptotic dimension-free confidence sets for the unknown matrices and operators. Given such confidence sets, a wide range of estimation and inferential procedures can be and are subsequently developed.

Speaker: Adam Kashlak (Cambridge Centre for Analysis, University of Cambridge)

• 09:30–10:00 Predict extreme influenza epidemics

Influenza viruses are responsible for annual epidemics, causing more than 500,000 deaths per year worldwide. A crucial question for resource planning in public health is to predict the morbidity burden of extreme epidemics. We say that an epidemic is extreme whenever the influenza incidence rate exceeds a high threshold for at least one week. Our objective is to predict whether an extreme epidemic will occur in the near future, say the next couple of weeks. The weekly influenza-like illness (ILI) incidence rates in France are available from the Sentinel network for the period 1991-2017. ILI incidence rates exhibit two different regimes, an epidemic regime during winter and a non-epidemic regime during the rest of the year. To identify epidemic periods, we use a two-state autoregressive hidden Markov model. A main goal of Extreme Value Theory is to assess, from a series of observations, the probability of events that are more extreme than those previously recorded.
Because of the autoregressive structure of the data, we choose to fit one of the multivariate generalized Pareto distribution models proposed in Rootzén et al. (2016a) [Multivariate peaks over threshold models. arXiv:1603.06619v2]; see also Rootzén et al. (2016b) [Peaks over thresholds modeling with multivariate generalized Pareto distributions. arXiv:1612.01773v1]. For these models, explicit densities are given, and formulas for conditional probabilities can then be deduced, from which we can predict whether an epidemic will be extreme, given the first weeks of observation.

Speaker: Maud Thomas (Université Pierre et Marie Curie)

• 10:00–10:30 Guided tour of the museum
• 10:30–11:00 Coffee
• 11:00–12:00 Invited speaker: Formal languages for stochastic modelling

Speaker: Jane Hillston (University of Edinburgh)

• 12:00–13:00 Lunch
• 13:00–14:00 Invited speaker: Embedding machine learning in stochastic process algebra

Speaker: Jane Hillston (University of Edinburgh)

• 14:00–14:30 Efficient estimation for diffusions

This talk concerns estimation of the diffusion parameter of a diffusion process observed over a fixed time interval. We present conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. Here, the limit distributions of the estimators are non-standard in the sense that they are generally normal variance-mixture distributions. In particular, the mixing distribution depends on the full sample path of the diffusion process over the observation time interval. Making use of stable convergence in distribution, we also present the more easily applicable result that estimators normalized by a suitable data-dependent transformation converge in distribution to a standard normal distribution. The theory is illustrated by a simulation study. The work presented in this talk is published in: Jakobsen, N. M. and Sørensen, M. (2017). *Efficient estimation for diffusions sampled at high frequency over a fixed time interval.* Bernoulli, 23(3):1874-1910.

Speaker: Nina Munkholt Jakobsen (University of Copenhagen)

• 14:30–15:00 Estimates for distributions of Hölder semi-norms of random processes from spaces $\mathbb{F}_\psi(\Omega)$ (Ångströmslaboratoriet)

In the following we deal with estimates for distributions of Hölder semi-norms of sample functions of random processes from spaces $\mathbb{F}_\psi(\Omega)$, defined on a compact metric space and on an infinite interval $[0,\infty)$, i.e. probabilities $$\mathsf{P}\left\{\sup\limits_{\substack{0<\rho(t,s)\le\varepsilon \\ t,s\in\mathbb{T}}} \frac{|X(t)-X(s)|}{f(\rho(t,s))}>x\right\}.$$ Such estimates, and assumptions under which semi-norms of sample functions of random processes from spaces $\mathbb{F}_\psi(\Omega)$, defined on a compact space, satisfy the Hölder condition, were obtained by Kozachenko and Zatula (2015). Similar results were provided for Gaussian processes, defined on a compact space, by Dudley (1973). Kozachenko (1985) generalized Dudley's results to random processes belonging to Orlicz spaces; see also Buldygin and Kozachenko (2000).
Marcus and Rosen (2008) obtained $L^p$ moduli of continuity for a wide class of continuous Gaussian processes. Kozachenko et al. (2011) studied the Lipschitz continuity of generalized sub-Gaussian processes and provided estimates for the distribution of Lipschitz norms of such processes. But all these problems had not yet been considered for processes defined on an infinite interval.

Speaker: Mr Dmytro Zatula (Taras Shevchenko National University of Kyiv)

### Thursday, 17 August

All sessions take place in the Ångströmslaboratoriet, Uppsala University.

• 09:00–09:30 Finite Mixture of C-vines for Complex Dependence

Recently, there has been increasing interest in the combination of copulas with a finite mixture model. Such a framework is useful for revealing the hidden dependence patterns observed for random variables flexibly in terms of statistical modeling. The combination of vine copulas incorporated into a finite mixture model is also beneficial for capturing hidden structures in a multivariate data set. In this respect, the main goal of this study is to extend the study of Kim et al. (2013) with different scenarios. For this reason, a finite mixture of C-vines is proposed for multivariate data with different dependence structures. The performance of the proposed model has been tested on different simulated data sets including various tail dependence properties.

Speaker: O. Ozan Evkaya (Atılım University)

• 09:30–10:00 Joint Bayesian nonparametric reconstruction of dynamical equations

We propose a Bayesian nonparametric mixture model for the joint full reconstruction of $m$ dynamical equations, given $m$ observed dynamically-noise-corrupted chaotic time series. The method of reconstruction is based on the Pairwise Dependent Geometric Stick Breaking Processes mixture priors (PDGSBP) first proposed by Hatjispyros et al. (2017). We assume that each set of dynamical equations has a deterministic part with a known functional form, i.e. $$x_{ji} = g_{j}(\vartheta_j, x_{j,i-1},\ldots,x_{j,i-l_j}) + \epsilon_{x_{ji}},\quad 1\leq j \leq m,\ 1\leq i \leq n_{j},$$ under the assumption that the noise processes $(\epsilon_{x_{ji}})$ are independent and identically distributed for all $j$ and $i$ from some unknown zero mean process $f_j(\cdot)$. Additionally, we assume that a priori we have the knowledge that the processes $(\epsilon_{x_{ji}})$ for $j=1,\ldots,m$ have common characteristics, e.g. they may have common variances or even similar tail behavior. For a full reconstruction, we would like to jointly estimate the quantities $$(\vartheta_{j})\in\Theta\subseteq\mathcal{R}^{k_j},\quad (x_{j,0},\ldots, x_{j,l_j-1})\in\mathcal{X}_j\subseteq\mathcal{R}^{l_j},$$ and perform density estimation for the $m$ noise components $(f_j)$. Our contention is that whenever there is at least one sufficiently large data set, using carefully selected informative borrowing-of-strength prior specifications we are able to reconstruct those dynamical processes that are responsible for the generation of time series with small sample sizes; namely, sample sizes that are inadequate for an independent reconstruction. We illustrate the joint estimation process for the case $m=2$, when the two time series come from a quadratic and a cubic stochastic process of lag one and the noise processes are zero mean normal mixtures with common components.
Speaker: Mr Christos Merkatas (Department of Mathematics, University of the Aegean, Greece)

• 10:00–10:30 Viterbi process for pairwise Markov models

My talk is based on ongoing joint work with my supervisor Jüri Lember. We consider a Markov chain $Z = \{Z_k\}_{k \geq 1}$ with product state space $\mathcal{X}\times \mathcal{Y}$, where $\mathcal{Y}$ is a finite set (state space) and $\mathcal{X}$ is an arbitrary separable metric space (observation space). Thus, the process $Z$ decomposes as $Z=(X,Y)$, where $X=\{X_k\}_{k\geq 1}$ and $Y=\{Y_k\}_{k\geq 1}$ are random processes taking values in $\mathcal{X}$ and $\mathcal{Y}$, respectively. Following [pairwise, pairwise2, pairwise3], we call the process $Z$ a *pairwise Markov model*. The process $X$ is identified as an observation process and the process $Y$, sometimes called the *regime*, models the observations-driving hidden state sequence. Therefore our general model contains many well-known stochastic models as special cases: hidden Markov models, Markov switching models, hidden Markov models with dependent noise and many more. The *segmentation* or *path estimation* problem consists of estimating the realization of $(Y_1,\ldots,Y_n)$ given a realization $x_{1:n}$ of $(X_1,\ldots,X_n)$. A standard estimate is any path $v_{1:n}\in \mathcal{Y}^n$ having maximum posterior probability: $$v_{1:n}=\mathop{\mathrm{argmax}}_{y_{1:n}}P(Y_{1:n}=y_{1:n}|X_{1:n}=x_{1:n}).$$ Any such path is called a *Viterbi path* and we are interested in the behaviour of $v_{1:n}$ as $n$ grows. The study of the asymptotics of the Viterbi path is complicated by the fact that adding one more observation, $x_{n+1}$, can change the whole path, and so it is not clear whether there exists a limiting infinite Viterbi path. We show that under some conditions the infinite Viterbi path indeed exists for almost every realization $x_{1:\infty}$ of $X$, thereby defining an infinite Viterbi decoding of $X$, called the *Viterbi process*. This is done through the construction of *barriers*. A barrier is a fixed-sized block in the observations $x_{1:n}$ that fixes the Viterbi path up to itself: for every continuation of $x_{1:n}$, the Viterbi path up to the barrier remains unchanged. Therefore, if almost every realization of the $X$-process contains infinitely many barriers, then the Viterbi process exists. Having infinitely many barriers is not necessary for the existence of an infinite Viterbi path, but the barrier construction has several advantages. One of them is that it allows the infinite path to be constructed *piecewise*, meaning that to determine the first $k$ elements $v_{1:k}$ of the infinite path it suffices to observe $x_{1:n}$ for $n$ big enough. The barrier construction has another great advantage: namely, the process $(Z,V)=\{(Z_k,V_k)\}_{k \geq 1}$, where $V= \{V_k\}_{k \geq 1}$ denotes the Viterbi process, is under certain conditions regenerative. This can be proven by, roughly speaking, applying the Markov splitting method to construct regeneration times for $Z$ which coincide with the occurrences of barriers. Regenerativity of $(Z,V)$ allows one to easily prove limit theorems to understand the asymptotic behaviour of inferences based on Viterbi paths. In fact, in the special case of hidden Markov models this regenerative property has already been known to hold and has found several applications [AV, AVacta, Vsmoothing, Vrisk, iowa].
Speaker: Mr Joonas Sova (University of Tartu)

• 10:30–11:00 Coffee
• 11:00–12:00 Invited speaker: Independent component analysis using third and fourth cumulants

In independent component analysis it is assumed that the observed random variables are linear combinations of latent, mutually independent random variables called the independent components. It is then often thought that only the non-Gaussian independent components are of interest and the Gaussian components simply present noise. The idea is then to make inference on the unknown number of non-Gaussian components and to estimate the transformations back to the non-Gaussian components. In this talk we show how the classical skewness and kurtosis measures, namely third and fourth cumulants, can be used in the estimation. First, univariate cumulants are used as projection indices in the search for independent components (projection pursuit, fastICA). Second, multivariate fourth cumulant matrices are jointly used to solve the problem (FOBI, JADE). The properties of the estimates are considered through corresponding optimization problems, estimating equations, algorithms and asymptotic statistical properties. The theory is illustrated with several examples.

Speaker: Hannu Oja (University of Turku)

• 12:00–13:00 Lunch
• 13:00–14:00 Invited speaker: Independent component analysis using third and fourth cumulants (second part; abstract as above)

Speaker: Hannu Oja (University of Turku)

• 14:00–14:30 E-optimal approximate block designs for treatment-control comparisons

We study $E$-optimal block designs for comparing a set of test treatments with a control treatment. We provide the complete class of all $E$-optimal approximate block designs, and we show that these designs are characterized by simple linear constraints. Employing the provided characterization, we obtain a class of $E$-optimal exact block designs with unequal block sizes for comparing test treatments with a control.
• 14:30-15:00 Information criteria for structured sparse variable selection (30m), Ångströmslaboratoriet, Uppsala University

In contrast to the low dimensional case, variable selection under the assumption of sparsity in high dimensional models is strongly influenced by the effects of false positives. The effects of false positives are tempered by combining the variable selection with a shrinkage estimator, such as in the lasso, where the selection is realized by minimizing the sum of squared residuals regularized by an $\ell_1$ norm of the selected variables. Optimal variable selection is then equivalent to finding the best balance between closeness of fit and regularity, i.e., to optimization of the regularization parameter with respect to an information criterion such as Mallows's Cp or AIC. For use in this optimization procedure, the lasso regularization is found to be too tolerant towards false positives, leading to a considerable overestimation of the model size. Using an $\ell_0$ regularization instead requires careful consideration of the false positives, as they have a major impact on the optimal regularization parameter. As the framework of the classical linear model has been analysed in previous work, the current paper concentrates on structured models and, more specifically, on grouped variables. Although the imposed structure in the selected models can be understood to somehow reduce the effect of false positives, we observe a qualitatively similar behavior as in the unstructured linear model.

Speaker: Mr Bastien Marquis (Université Libre de Bruxelles)

• 15:00-15:30 Coffee (30m), Ångströmslaboratoriet, Uppsala University

• 15:30-16:00 Mallows' Model Based on Lee Distance (30m), Ångströmslaboratoriet, Uppsala University

In this paper the Mallows' model based on Lee distance is considered and compared to models induced by other metrics on the permutation group. As an illustration, the complete rankings from the American Psychological Association election data are analyzed.

Speaker: Mr Nikolay Nikolov (Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bontchev str., block 8, 1113 Sofia, Bulgaria)

• 16:00-16:30 Confidence regions in Cox proportional hazards model with measurement errors (30m), Ångströmslaboratoriet

The Cox proportional hazards model with measurement errors in covariates is considered; it is a ubiquitous technique in biomedical data analysis. In Kukush et al. (2011) [Journal of Statistical Research **45**, 77-94] and Chimisov and Kukush (2014) [Modern Stochastics: Theory and Applications **1**, 13-32] asymptotic properties of a simultaneous estimator $(\lambda_n;\beta_n)$ for the baseline hazard rate $\lambda(\cdot)$ and the regression parameter $\beta$ were studied, where the parameter set $\Theta=\Theta_{\lambda}\times \Theta_{\beta}$ was assumed bounded. In Kukush and Chernova (2017) [Theory of Probability and Mathematical Statistics **96**, 100-109] we dealt with the simultaneous estimator $(\lambda_n;\beta_n)$ in the case where $\Theta_{\lambda}$ is unbounded from above and not separated away from $0$. The estimator was constructed in two steps: first we derived a strongly consistent estimator and then modified it to provide its asymptotic normality. In this talk, we construct the confidence interval for an integral functional of $\lambda(\cdot)$ and the confidence region for $\beta$. We reach our goal in each of three cases: (a) the measurement error is bounded, (b) it is normally distributed, or (c) it is a shifted Poisson random variable. The censoring variable is assumed to have a continuous pdf. In future research we intend to elaborate a method for heavy-tailed error distributions.

Speaker: Ms Oksana Chernova (Taras Shevchenko National University of Kyiv)
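Circling back to Marquis's talk on information criteria above: the unstructured baseline it starts from can be prototyped in a few lines with scikit-learn's `LassoLarsIC`, which picks the regularization parameter along the LARS path by AIC or BIC (for Gaussian errors, AIC is equivalent to Mallows's Cp up to constants). This is an editorial sketch with simulated data; the structured/grouped selection that is the talk's actual subject is not part of it:

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                          # sparse ground truth
y = X @ beta + rng.standard_normal(n)

model = LassoLarsIC(criterion="bic").fit(X, y)
print((model.coef_ != 0).sum(), "variables selected")   # ideally 5
```

As the abstract warns, criteria paired with the lasso's shrinkage tend to be tolerant of false positives, so the selected count often overshoots the true support.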
• 16:30-17:00 Stability of the Spectral EnKF under nested covariance estimators (30m), Ångströmslaboratoriet, Uppsala University

In the case of the traditional Ensemble Kalman Filter (EnKF), it is known that the filter error does not grow faster than exponentially for a fixed ensemble size. The question posed in this contribution is whether the upper bound for the filter error can be improved by using an improved covariance estimator that comes from the right parameter subspace and has smaller asymptotic variance. Its effect on the Spectral EnKF is explored by simulation.

Speaker: Marie Turčičová (Charles University, Prague)

• Friday, 18 August

• 09:00-10:00 Invited Speaker - Random Networks (1h), Ångströmslaboratoriet, Uppsala University

Speaker: Svante Janson (Uppsala University)

• 10:00-10:30 Coffee (30m), Ångströmslaboratoriet, Uppsala University

• 10:30-11:00 Theoretical and simulation results on heavy-tailed fractional Pearson diffusions (30m), Ångströmslaboratoriet, Uppsala University

We define heavy-tailed fractional reciprocal gamma and Fisher-Snedecor diffusions by a non-Markovian time change in the corresponding Pearson diffusions. We illustrate known theoretical results regarding these fractional diffusions via simulations.

Speaker: Mr Ivan Papić (Department of Mathematics, J.J. Strossmayer University of Osijek)

• 11:00-11:30 Copula based BINAR models with applications (30m), Ångströmslaboratoriet, Uppsala University

In this paper we study the problem of modelling integer-valued vector observations. We consider BINAR(1) models defined via copula-joint innovations. We review different parameter estimation methods and analyse estimation methods for the copula dependence parameter. We also examine the case where seasonality is present in integer-valued data and suggest a method of deseasonalizing it. Finally, an empirical application is carried out.

Speaker: Andrius Buteikis (Faculty of Mathematics and Informatics, Vilnius University)

• 11:30-12:00 Simulating and Forecasting Human Population with General Branching Process (30m), Ångströmslaboratoriet, Uppsala University

Branching process theory is widely used to describe population dynamics in which particles live and produce other particles through their life, according to given stochastic birth and death laws. The theory of General Branching Processes (GBP) presents a continuous time model in which every woman has a random life length and gives birth to children at random intervals of time. The flexibility of the GBP makes it very useful for modelling and forecasting human population. This paper is a continuation of previous developments in the theory, necessary to model the specifics of human population, and presents their application in forecasting the population age structure of Bulgaria. It also introduces confidence intervals for the forecasts, calculated by GBP simulations, which reflect both the stochastic nature of the birth and death laws and the branching process itself. The simulations are also used to determine the main sources of risk to the forecast.

Speaker: Dr Plamen Trayanov (Sofia University "St. Kliment Ohridski")
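To give a feel for the simulation-based confidence bands mentioned in this last abstract, here is a toy Monte Carlo for a plain Galton-Watson process, a much simpler relative of the general branching process used in the paper; all parameter values below are invented for illustration:

```python
import numpy as np

def simulate_gw(n_gen, z0=1000, mean_offspring=0.95, n_rep=200, seed=0):
    """Replicated Galton-Watson paths with Poisson offspring counts.
    Returns the per-generation mean and a 90% simulation band."""
    rng = np.random.default_rng(seed)
    paths = np.empty((n_rep, n_gen + 1), dtype=np.int64)
    paths[:, 0] = z0
    for r in range(n_rep):
        for g in range(n_gen):
            z = paths[r, g]
            paths[r, g + 1] = rng.poisson(mean_offspring, z).sum() if z else 0
    return paths.mean(axis=0), np.percentile(paths, [5, 95], axis=0)

mean_path, band = simulate_gw(n_gen=50)   # subcritical: drifts downward
```

This band reflects only the branching randomness; the paper's intervals are designed to capture both the stochastic birth and death laws and the branching itself.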
• 12:00-13:00 Lunch (1h), Ångströmslaboratoriet
http://openstudy.com/updates/559b5608e4b0564dd2d2ea2a
anonymous one year ago 5. A ball of mass m is suspended above the center of a rotating platform on an elastic spring with stiffness k and initial (non-stretched) length l0, as shown. The opposite end of the spring is attached to a vertical pole fixed on the axis of the platform. The platform starts to rotate with angular velocity w. What is the angle a' that the spring makes with the vertical? Consider all values of the angular velocity, from w=0 to w=inf, and find the conditions on w for which the angle a' > 0 and for which a' = 0.

1. maheshmeghwal9: can you show me the figure?

2. anonymous: I have attached the diagram here. It is the one labelled diagram five.

3. IrishBoy123: i've had a play with this and i get $$\alpha$$ for a given $$\omega$$ to be $$\tan \alpha = \frac{\omega^2 }{g}\left( l_{0} + \frac{mg}{k} + \epsilon \right)$$ where $$\epsilon$$ is the extra extension in the spring for that given $$\omega$$. this makes sense as it implies that the thing rises but never gets to $$\alpha = \pi/2$$ and at the same time $$\epsilon$$ just gets bigger and bigger. not that the question makes that much sense to me really.
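For concreteness, here is a small Python sketch of the equilibrium angle, based on my own reading of the force balance (spring tension $ke$ along the spring, conical-pendulum geometry), not on a worked solution from this thread. Under those assumptions the spring hangs vertical (a' = 0) below a critical angular velocity given by w^2 = gk/(k l0 + mg), tilts for faster rotation, and the tilt runs away toward pi/2 as w^2 approaches k/m, where a linear spring can no longer supply the centripetal force:

```python
import numpy as np

def spring_angle(m, k, l0, w, g=9.81):
    """Equilibrium tilt of the spring (radians), assuming the conical balance
    k*e*cos(a) = m*g (vertical) and k*e = m*w^2*(l0 + e) (horizontal),
    which gives cos(a) = g*(k - m*w^2) / (k * w^2 * l0)."""
    if m * w**2 >= k:
        return np.pi / 2            # extension diverges: limiting tilt
    cos_a = g * (k - m * w**2) / (k * w**2 * l0) if w > 0 else 2.0
    if cos_a >= 1.0:
        return 0.0                  # below w^2 = g*k/(k*l0 + m*g): hangs vertical
    return float(np.arccos(cos_a))

# tilt switches on past the threshold and climbs toward pi/2
for w in (1.0, 2.0, 5.0, 10.0):
    print(w, spring_angle(m=0.5, k=50.0, l0=0.3, w=w))
```

This matches the qualitative picture in reply 3: the angle is zero up to a finite threshold and only approaches $\pi/2$ in the fast-rotation limit.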
http://sachinashanbhag.blogspot.com/2009/11/
## Friday, November 27, 2009 1. Life (via this and this place) 2. Fox News needs a math lesson? (via FlowingData) 3. A fascinating blog on the "meandering of rivers". One very interesting tidbit towards the end of the article is "self-similarity". Apparently some universality underlies the meandering of rivers and streams. It seems that the wavelength "lambda" is approximately 11*w and the "radius" is approximately 2.3*w, where "w" is the width of the stream ## Wednesday, November 25, 2009 ### I love craigslist I once owned eBay stock, and they were scared of craigslist. Similar to Microsoft's fear of the open-source movement. This inspiring article from Wired magazine, tells you why it would be great to work for this company, and why its competition just can't figure it out. ... what you see at the most popular job-search site: another wasteland of hypertext links, one line after another, without recommendations or networking features or even protection against duplicate postings. Subject to a highly unpredictable filtering system that produces daily outrage among people whose help-wanted ads have been removed without explanation, this site not only beats its competitors—Monster, CareerBuilder, Yahoo's HotJobs—but garners more traffic than all of them combined. Are our standards really so low? But if you really want to see a mess, go visit the nation's greatest apartment-hunting site, the first likely choice of anybody searching for a rental or a roommate. On this site, contrary to every principle of usability and common sense, you can't easily browse pictures of the apartments for rent. Customer support? Visit the help desk if you enjoy being insulted. How much market share does this housing site have? In many cities, a huge percentage. It isn't worth trying to compare its traffic to competitors', because at this scale there are no competitors. Each of these sites, of course, is merely one of the many sections of craigslist, which dominates the market in facilitating face-to-face transactions, whether people are connecting to buy and sell, give something away, rent an apartment, or have some sex. With more than 47 million unique users every month in the US alone—nearly a fifth of the nation's adult population—it is the most important community site going and yet the most underdeveloped. Think of any Web feature that has become popular in the past 10 years: Chances are craigslist has considered it and rejected it. If you try to build a third-party application designed to make craigslist work better, the management will almost certainly throw up technical roadblocks to shut you down. This is a great article, and reads really well. ### Why can't Apple figure the mouse out? In terms of usability, Apple products like the iPhone and iPod are design icons. Elegance, utility and simplicity rolled into beautiful compact devices. Many years earlier, in 1984, Apple did something similar to the computing world, and brought the GUI and mouse to life (thanks to Xerox). Since those paleolithic days, it has always stuck with a "single-button" mouse, the defense being more buttons are "confusing to novice users", or some crap like that. That unrelenting stubbornness continues to this day. In 2006, I bought a MacBook Pro at work, with a single button mouse as shown above. For the most part, I like my Mac, although I prefer a similarly priced Linux machine running Ubuntu. I never really liked the one button mouse, but out of the compromise that is this life, I forged a working relationship with it. 
In other parts of the world, the number of buttons on "mice" has increased, as shown by the five button mouse here (from wikipedia). Last month, my wife got a new Mac from her workplace, after her old Mac gave her repeated battery problems. To my surprise and utter disgust, in these spanking new machines, they got rid of the only button left! And replaced that with a trackpad, instead. So her computer looks like the picture below. No buttons. See! The whole trackpad is the button. WTF? To add to the insult, it now has all the cool features such as zooming using two fingers, borrowed from their iPhone. Personally, I find it extremely annoying, since every time I try to select some text using two fingers (and there is no button to anchor one of the fingers, remember), it thinks I want to zoom. To think that they wanted to stick with a single button because novice users would be confused. Heck, I've been using computers for 20+ years, and I am confused with this crap!

## Sunday, November 22, 2009

### The Forer Effect

I came across "the Forer effect" on this blog (via Abi). As Abi points out on his blog, you should send this "to all your friends/family members/associates who believe in astrology". The story runs thus: A psychologist named B. R. Forer apparently gave a bunch of his students a personality test (like a Myers-Briggs test, or one of those stupid Cosmo' surveys), and asked each person taking the test to rate the accuracy of the customized "individual profile" between 0 and 5, ranging from the least to the most accurate. Unknown to the participants, there was really only one common profile (independent of the choices on the personality test), which read:

You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them. Your sexual adjustment has presented problems for you. Disciplined and self-controlled outside, you tend to be worrisome and insecure inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You pride yourself as an independent thinker and do not accept others' statements without satisfactory proof. You have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved. Some of your aspirations tend to be pretty unrealistic. Security is one of your major goals in life.

The average score on this and similar studies (repeated a gazillion times) was around 4.2 (pdf original research article, on Scribd). Our gullibility is shocking, huh? Here's a YouTube video, for those who don't like to read, or, for those who also like to watch. It's an entertaining video, which shows how little gender, culture, and other elements matter. If this study was included as a foreword in Linda Goodman's books, I wonder if they would have sold nearly as well. This has direct implications, not only for "psychic" disciplines, but also for personality test batteries, such as Myers-Briggs.

## Friday, November 20, 2009

### Buzzwords and Data Visualization

I stumbled upon two interesting links via this blog that I follow: 1. A PhD comics take on buzzwords in scientific literature. 2.
Top 10 worst data visualizations in scientific literature. Some of it may be nit-picking, and I am sure like all top-10 lists (or US News college rankings, for that matter) this list is flawed in terms of the "top"-10. But it makes for an interesting read nevertheless. The "discussion" on each graph is enlightening, particularly since I must have committed some of the same mistakes myself. One interesting thing I learned was how bad pie charts are. Apparently, we are much better at comparing lengths than areas. I never knew that. There are some good links on how to present data wisely at the bottom of this link.

## Wednesday, November 18, 2009

### The Tipping Point

I know I am late to the party, since this popular book by Malcolm Gladwell was published almost a decade ago. The book attempts to present a synthesis of many disparate ideas to ponder over the question: "Why do some ideas catch fire?", or as he would probably like it phrased, "What makes an idea tip?" The book itself is enjoyable, as it talks about Paul Revere's midnight ride, the Mavens, the Connectors, and the Salesmen, the fascinating rule of 150, Bernie Goetz and how cleaning up graffiti on subway walls reduced crime in New York, Hush Puppies, the stickiness of Sesame Street and Blue's Clues, Peter Jennings' demeanor during Ronald Reagan's candidacy, six degrees of separation etc. He cites a number of interesting social and psychological research studies, and being the master storyteller that he is, beautifully integrates them into his narrative. I think he makes a great journalist. However, I think he would make a bad scientist. This is pure extrapolation from the one data point I am familiar with. The six degrees of separation reference to Stanley Milgram is bad science. He repeats some of the same stuff in his famous article "Six Degrees of Lois Weisberg". The myth suggests that Milgram gave 160 people in Omaha, Nebraska a package that had to be delivered to a stockbroker who worked in Boston, through the smallest number of intermediaries. He found that "chains varied from two to 10 intermediate acquaintances, with the median at five" in his 1967 paper - which apparently is the basis for the "six degrees" supposition. The big problem for me as a scientist was that only 24 of the original 160 chains were completed - and hence the conclusion probably suffers from a heavy survivorship bias. Milgram carried out an earlier study where starters were from Wichita, Kansas and were supposed to reach a divinity student on the east coast, and the completion statistics there were even more miserable. The measurement error must have been quite large to support such a strong conclusion. Sure, we might indeed be separated by six degrees. But Milgram's study does not definitively prove it. In fact there are other glaring problems with Milgram's study, as this very interesting and more academically rigorous article points out. PS: I swear I wrote this blog a long time ago, and thought that I would publish it later. In the meantime, I bumped into this article by Steve Pinker (via nanopolitan). It is amazing that he comes to the same conclusion towards the end of his book-review: Readers have much to learn from Gladwell the journalist and essayist. But when it comes to Gladwell the social scientist, they should watch out for those igon values.

## Monday, November 16, 2009

### Society of Rheology meeting - Madison, WI.

I was in Madison, WI for my annual pilgrimage to the Society of Rheology annual meeting.
Of the several meetings I go (or, have gone) to, this is easily my favorite. It is very focused - I learn a lot, and always come back with ideas to try, things to check, and papers to read. It is small, fun, very good value for money, and has plenty of good food and drink. Here are some pictures I took in Madison, when my colleague and I went for a run from our hotel near the state capitol to the University. It is always great to catch fall in the northern states, especially having been away from Michigan for a while.

## Friday, November 13, 2009

Three interesting links for the weekend. 1. A fascinating article in Vanity Fair on the state of Harvard's endowment (Rich Harvard, Poor Harvard). It's hard for me to really feel sorry, although I know it hurts a lot of innocent bystanders. From the article: Only a year ago, Harvard had a $36.9 billion endowment, the largest in academia. Now that endowment has imploded, and the university faces the worst financial crisis in its 373-year history. Could the same lethal mix of uncurbed expansion, colossal debt, arrogance, and mismanagement that ravaged Wall Street bring down America's most famous university? 2. This NYT article (free sign up required) recounts how the governor of India's Reserve Bank, Y. V. Reddy, played it tough during the bubble years, and saved the country from a financial crisis. He seems like the anti-thesis of former Fed-chairman Alan Greenspan, both in action and in popularity. From the article: Unlike Alan Greenspan, who didn't believe it was his job to even point out bubbles, much less try to deflate them, Mr. Reddy saw his job as making sure Indian banks did not get too caught up in the bubble mentality. About two years ago, he started sensing that real estate, in particular, had entered bubble territory. One of the first moves he made was to ban the use of bank loans for the purchase of raw land, which was skyrocketing. Only when the developer was about to commence building could the bank get involved - and then only to make construction loans. (Guess who wound up financing the land purchases? United States private equity and hedge funds, of course!) Seeing inflation on the horizon, Mr. Reddy pushed interest rates up to more than 20 percent, which of course dampened the housing frenzy. He increased risk weightings on commercial buildings and shopping mall construction, doubling the amount of capital banks were required to hold in reserve in case things went awry. He made banks put aside extra capital for every loan they made. In effect, Mr. Reddy was creating liquidity even before there was a global liquidity crisis. 3. An interesting email conversation (pdf) between Buffett and Raikes, regarding Microsoft and Berkshire (via Reflections on Value Investing).

### Dicey puzzle: Solution

The full puzzle statement may be found here. In short: "What are the odds that n=2 v/s n=3 dice are rolled, given that the sum is 7?"

Solution: For n = 2, there are 6 ways of rolling a 7 (1+6, 2+5, 3+4, 4+3, 5+2, and 6+1), out of a total of 6^2 = 36 outcomes. Therefore p(sum = 7 | n = 2) = 6/36 = 36/216. For n = 3, there are 15 ways of rolling a 7 (1+1+5, 1+2+4, 1+3+3, 1+4+2, 1+5+1, 2+1+4, 2+2+3, 2+3+2, 2+4+1, 3+1+3, 3+2+2, 3+3+1, 4+1+2, 4+2+1, 5+1+1), out of a total of 6^3 = 216 outcomes. Therefore p(sum = 7 | n = 3) = 15/216. Thus, the odds of n = 2 v/s n = 3 are 36/15; that is, it is about 2.4 times more likely that n = 2. What happened? Weren't there more ways of getting a 7 with n = 3? There are (15 versus 6), but they are spread over a sample space that is six times larger (216 versus 36 outcomes), so each three-dice outcome is individually much less likely.
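A quick brute-force enumeration (my own check, not part of the original post) confirms the counts and the odds, assuming equal prior probabilities for n = 2 and n = 3:

```python
from fractions import Fraction
from itertools import product

def p_sum7(n):
    """Exact probability that n fair dice sum to 7."""
    outcomes = list(product(range(1, 7), repeat=n))
    hits = sum(1 for o in outcomes if sum(o) == 7)
    return Fraction(hits, len(outcomes))

print(p_sum7(2), p_sum7(3))       # 1/6 and 5/72
print(p_sum7(2) / p_sum7(3))      # 12/5, i.e. odds of 2.4 : 1 in favor of n = 2
```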
## Wednesday, November 11, 2009

### LaTeX equations in Google Documents

I found out from here that I can now type LaTeX equations into Google Documents, and they look pretty nifty too. I find the look of equations using Microsoft Equation Editor to be truly hideous. Of course you can buy MathType, but why? The native equation editor in OpenOffice is efficient to use, but I still don't like how they look in the document. Despite its age, LaTeX typesets equations beautifully - and there is no reason to discard something good, just because it is old. Previously, I wrote about how I currently use a plugin called OOOLaTeX, which lets me combine the beauty of LaTeX with the unbeatable price and portability of OpenOffice. Back to the topic of the post. It really is easy to use. Just go to Insert->Equation and you can enter LaTeX code directly, or choose symbols from the dialog boxes above. Here are a couple of screenshots. I can easily visualize myself using this for presentations.

## Sunday, November 8, 2009

Some of my favorite excerpts: Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode's office and said, "How can anybody my age know as much as John Tukey does?" He leaned back in his chair, put his hands behind his head, grinned slightly, and said, "You would be surprised Hamming, how much you would know if you worked as hard as he did that many years." I simply slunk out of the office!

My own PhD advisor, Ron Larson, is this (Tukey) type of person. Towards the end of my PhD, I realized that I had to do something he wasn't interested in. Barring luck, it is hard to directly compete with him. He is smarter, and works longer hours. Another quote from the speech: What Bode was saying was this: "Knowledge and productivity are like compound interest." Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it.
## Friday, November 6, 2009

### Eric Drexler and Science Education in India

Eric Drexler (a nanotechnologist) wrote an interesting article on his blog about how the subset of visitors to his site from India chose to visit the more technically meaty topics. A comment on his post, which I sympathize with, provoked a second article, which sought to understand the previous post in a more nuanced manner. During the course of reading these articles, I also stumbled upon this interesting YouTube presentation. I have my own thoughts on this matter, having been a student and educator in both the US and in India, but I will save those for a separate post, later.

## Thursday, November 5, 2009

### Install LAMMPS with FFTW on your Desktop

Earlier I wrote about how to install LAMMPS and AtomEye on a Desktop without FFTW. The following document now shows how to download, compile and build the freely available FFTW library with LAMMPS to consider electrostatic effects. Building LAMMPS with FFTW

### How to embed a pdf document in blogger?

The idea is like embedding a YouTube video, as I remarked earlier. By trial and error, I have come to the personal conclusion that this is going to be my method of choice, from here on. The steps are simple: 1. Get a Scribd account. 2. Upload your document there, where it is converted into an iPaper format. 3. Look for "Embed Code" - and copy the html code snippet under it. 4. In your Blogger entry, select the "Edit HTML" tab, and go to the place where you want to embed the stuff. 5. Paste the html code snippet here. You should be good to go. Here is an example.

## Wednesday, November 4, 2009

### Dicey puzzle

Your friend rolls either one, two, or three dice (n=1, n=2 or n=3). Each die is a normal cube with six sides, displaying a number between 1 and 6. She doesn't tell you what n is, but tells you that the sum of the numbers on the dice is 7. For example, she could have rolled 4 and 3 with n=2; or perhaps 5, 1, and 1 with n=3 etc. Obviously, n cannot be equal to one. What are the odds of n=2 v/s n=3 given that the sum is 7? Answer coming up in a week, but this is an example of simple Bayesian analysis. Credits: picture from http://www.pwcphoto.com/studio/studio-07.htm.

## Monday, November 2, 2009

### Knuth, calculus and O-notation

I found an interesting entry on a blog I follow, about a "new" method of introducing calculus, by Donald Knuth. Fundamentally, it involves introducing the big-oh notation to define a "strong derivative", and recovering most results including the Fundamental Theorem of Calculus (see the comments section). The resulting math feels light - like you were doing a back of the envelope calculation. It seems interesting, although I am not sure whether it is the magic potion that will enable all my students to master applying calculus to physical problems. Personally, I never had a problem with the traditional approach starting with the definition of a limit.
https://peeterjoot.wordpress.com/tag/lagrangian-density/
# Peeter Joot's (OLD) Blog.

# Posts Tagged 'lagrangian density'

## PHY450H1S. Relativistic Electrodynamics Lecture 13 (Taught by Prof. Erich Poppitz). Variational principle for the field.

Posted by peeterjoot on February 22, 2011

Covering chapter 4 material from the text [1]. Covering lecture notes pp. 103-113: variational principle for the electromagnetic field and the relevant boundary conditions (103-105); the second set of Maxwell's equations from the variational principle (106-108); Maxwell's equations in vacuum and the wave equation in the non-relativistic Coulomb gauge (109-111).

# Review. Our action.

\begin{aligned}S&= S_{\text{particles}} + S_{\text{interaction}} + S_{\text{EM field}} \\ &= \sum_A \int_{x_A^i(\tau)} ds ( -m_A c )- \sum_A\frac{e_A}{c}\int dx_A^i A_i(x_A)- \frac{1}{{16 \pi c}} \int d^4 x F^{ij } F_{ij}.\end{aligned}

Our dynamical variables are the particle worldlines and the potentials

\begin{aligned}\left\{\begin{array}{l}x_A^i(\tau), \quad A = 1, \cdots, N, \\ A^i(x).\end{array}\right.\end{aligned} \hspace{\stretch{1}}(2.1)

We saw that the interaction term could also be written in terms of a delta function current, with

\begin{aligned}S_{\text{interaction}}= -\frac{1}{{c^2}} \int d^4x j^i(x) A_i(x),\end{aligned} \hspace{\stretch{1}}(2.2)

and

\begin{aligned}j^i(x) = \sum_A c e_A \int dx_A^i \delta^4( x - x_A(\tau)).\end{aligned} \hspace{\stretch{1}}(2.3)

Variation with respect to $x_A^i(\tau)$ gave us

\begin{aligned}m c \frac{d{{u^i_A}}}{ds} = \frac{e}{c} u_A^j F_{ij}.\end{aligned} \hspace{\stretch{1}}(2.4)

Note that it's easy to get the sign mixed up here. With our $(+,-,-,-)$ metric tensor, if the second index is the summation index, we have a positive sign. Only $S_{\text{particles}}$ and $S_{\text{interaction}}$ depend on $x_A^i(\tau)$.

# The field action variation.

\paragraph{Today:} We'll find the EOM for $A^i(x)$. The dynamical degrees of freedom are $A^i(\mathbf{x},t)$

\begin{aligned}S[A^i(\mathbf{x}, t)] = -\frac{1}{{16 \pi c}} \int d^4x F_{ij}F^{ij} - \frac{1}{{c^2}} \int d^4 x A^i j_i.\end{aligned} \hspace{\stretch{1}}(3.5)

Here the $j^i$ are treated as "sources". We demand that

\begin{aligned}\delta S = S[ A^i(\mathbf{x}, t) + \delta A^i(\mathbf{x}, t)] - S[ A^i(\mathbf{x}, t) ] = 0 + O(\delta A)^2.\end{aligned} \hspace{\stretch{1}}(3.6)

We need to impose two conditions.

- At spatial $\infty$, i.e. at ${\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty, \forall t$, we'll impose the condition \begin{aligned}{\left.{{A^i(\mathbf{x}, t)}}\right\vert}_{{{\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty}} \rightarrow 0.\end{aligned} \hspace{\stretch{1}}(3.7) This is sensible, because fields are created by charges, and charges are assumed to be localized in a bounded region. The field outside the charges will $\rightarrow 0$ as ${\left\lvert{\mathbf{x}}\right\rvert} \rightarrow \infty$. Later we will treat the integration range as finite and bounded, then let the boundary go to infinity.
- At $t = -T$ and $t = T$ we'll imagine that the values of $A^i(\mathbf{x}, \pm T)$ are fixed. This is analogous to $x(t_i) = x_1$ and $x(t_f) = x_2$ in particle mechanics.
Since $A^i(\mathbf{x}, \pm T)$ is given, and equivalent to the initial and final field configurations, the variation at the boundary is zero

\begin{aligned}\delta A^i(\mathbf{x}, \pm T) = 0.\end{aligned} \hspace{\stretch{1}}(3.8)

PICTURE: a cylinder in spacetime, with an attempt to depict the boundary.

# Computing the variation.

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]= -\frac{1}{{16 \pi c}} \int d^4 x \delta (F_{ij}F^{ij}) - \frac{1}{{c^2}} \int d^4 x \delta(A^i) j_i.\end{aligned} \hspace{\stretch{1}}(4.9)

Looking first at the variation of just the $F^2$ bit we have

\begin{aligned}\delta (F_{ij}F^{ij})&=\delta(F_{ij}) F^{ij} + F_{ij} \delta(F^{ij}) \\ &=2 \delta(F^{ij}) F_{ij} \\ &=2 \delta(\partial^i A^j - \partial^j A^i) F_{ij} \\ &=2 \delta(\partial^i A^j) F_{ij} - 2 \delta(\partial^j A^i) F_{ij} \\ &=2 \delta(\partial^i A^j) F_{ij} - 2 \delta(\partial^i A^j) F_{ji} \\ &=4 \delta(\partial^i A^j) F_{ij} \\ &=4 F_{ij} \partial^i \delta(A^j).\end{aligned}

Our variation is now reduced to

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]&= -\frac{1}{{4 \pi c}} \int d^4 x F_{ij} \partial^i \delta(A^j) - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i) \\ &= -\frac{1}{{4 \pi c}} \int d^4 x F^{ij} \frac{\partial {}}{\partial {x^i}} \delta(A_j) - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i).\end{aligned}

We can integrate this first term by parts

\begin{aligned}\int d^4 x F^{ij} \frac{\partial {}}{\partial {x^i}} \delta(A_j)&=\int d^4 x \frac{\partial {}}{\partial {x^i}} \left( F^{ij} \delta(A_j) \right)-\int d^4 x \left( \frac{\partial {}}{\partial {x^i}} F^{ij} \right) \delta(A_j) \end{aligned}

The first term is a four dimensional divergence, with the contraction of the four gradient $\partial_i$ with a four vector $B^i = F^{ij} \delta(A_j)$. Prof. Poppitz chose the $dx^0 d^3 \mathbf{x}$ split of $d^4 x$ to illustrate that this can be viewed as regular old spatial three vector divergences. It is probably more rigorous to mandate that the four volume element is oriented $d^4 x = (1/4!)\epsilon_{ijkl} dx^i dx^j dx^k dx^l$, and then utilize the 4D version of the divergence theorem (or its Stokes Theorem equivalent). The completely antisymmetric tensor should do most of the work required to express the oriented boundary volume. Because $\delta A^i$ vanishes on the time boundary and the fields die off at spatial infinity, these boundary terms are killed off. We are left with

\begin{aligned}\delta S[A^i(\mathbf{x}, t)]&= \frac{1}{{4 \pi c}} \int d^4 x \delta (A_j) \partial_i F^{ij} - \frac{1}{{c^2}} \int d^4 x j^i \delta(A_i) \\ &=\int d^4 x \delta A_j(x)\left(\frac{1}{{4 \pi c}} \partial_i F^{ij}(x) - \frac{1}{{c^2}} j^j\right) \\ &= 0.\end{aligned}

This gives us

\begin{aligned}\boxed{\partial_i F^{ij} = \frac{4 \pi}{c} j^j}\end{aligned} \hspace{\stretch{1}}(4.10)

# Unpacking these.

Recall that the Bianchi identity

\begin{aligned}\epsilon^{ijkl} \partial_j F_{kl} = 0,\end{aligned} \hspace{\stretch{1}}(5.11)

gave us

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} &= -\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(5.12)

How about the EOM that we have found by varying the action? One of those equations is

\begin{aligned}\partial_\alpha F^{\alpha 0} = \frac{4 \pi}{c} j^0 = 4 \pi \rho,\end{aligned} \hspace{\stretch{1}}(5.14)

since $j^0 = c \rho$.
Because \begin{aligned}F^{\alpha 0} = (\mathbf{E})^\alpha,\end{aligned} \hspace{\stretch{1}}(5.15) we have \begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 4 \pi \rho.\end{aligned} \hspace{\stretch{1}}(5.16) The messier one to deal with is \begin{aligned}\partial_i F^{i\alpha} = \frac{4 \pi}{c} j^\alpha.\end{aligned} \hspace{\stretch{1}}(5.17) Splitting out the spatial and time indexes for the four gradient we have \begin{aligned}\partial_i F^{i\alpha}&= \partial_\beta F^{\beta \alpha} + \partial_0 F^{0 \alpha} \\ &= \partial_\beta F^{\beta \alpha} - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}} \\ \end{aligned} The spatial index tensor element is \begin{aligned}F^{\beta \alpha} &= \partial^\beta A^\alpha - \partial^\alpha A^\beta \\ &= - \frac{\partial {A^\alpha}}{\partial {x^\beta}} + \frac{\partial {A^\beta}}{\partial {x^\alpha}} \\ &= \epsilon^{\alpha\beta\gamma} B^\gamma,\end{aligned} so the sum becomes \begin{aligned}\partial_i F^{i\alpha}&= \partial_\beta ( \epsilon^{\alpha\beta\gamma} B^\gamma) - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}} \\ &= \epsilon^{\beta\gamma\alpha} \partial_\beta B^\gamma - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}} \\ &= (\boldsymbol{\nabla} \times \mathbf{B})^\alpha - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}}.\end{aligned} This gives us \begin{aligned}\frac{4 \pi}{c} j^\alpha= (\boldsymbol{\nabla} \times \mathbf{B})^\alpha - \frac{1}{{c}} \frac{\partial {(\mathbf{E})^\alpha}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(5.18) or in vector form \begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} - \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} = \frac{4 \pi}{c} \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(5.19) Summarizing what we know so far, we have \begin{aligned}\boxed{\begin{aligned}\partial_i F^{ij} &= \frac{4 \pi}{c} j^j \\ \epsilon^{ijkl} \partial_j F_{kl} &= 0\end{aligned}}\end{aligned} \hspace{\stretch{1}}(5.20) or in vector form \begin{aligned}\boxed{\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 4 \pi \rho \\ \boldsymbol{\nabla} \times \mathbf{B} -\frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} &= \frac{4 \pi}{c} \mathbf{j} \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} +\frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} &= 0\end{aligned}}\end{aligned} \hspace{\stretch{1}}(5.21) # Speed of light \paragraph{Claim}: “$c$” is the speed of EM waves in vacuum. Study equations in vacuum (no sources, so $j^i = 0$) for $A^i = (\phi, \mathbf{A})$. 
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{B} &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(6.22)

where

\begin{aligned}\mathbf{E} &= - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \\ \mathbf{B} &= \boldsymbol{\nabla} \times \mathbf{A}\end{aligned} \hspace{\stretch{1}}(6.24)

In terms of potentials

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) &= \boldsymbol{\nabla} \times \mathbf{B} \\ &= \frac{1}{{c}} \frac{\partial {\mathbf{E}}}{\partial {t}} \\ &= \frac{1}{{c}} \frac{\partial {}}{\partial {t}} \left( - \boldsymbol{\nabla} \phi - \frac{1}{{c}} \frac{\partial {\mathbf{A}}}{\partial {t}} \right) \\ &= -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2} \end{aligned}

Since we also have

\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A},\end{aligned} \hspace{\stretch{1}}(6.26)

some rearrangement gives

\begin{aligned}\boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) = \boldsymbol{\nabla}^2 \mathbf{A} -\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \boldsymbol{\nabla} \phi - \frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}}{\partial t^2}.\end{aligned} \hspace{\stretch{1}}(6.27)

The remaining equation $\boldsymbol{\nabla} \cdot \mathbf{E} = 0$, in terms of potentials is

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = - \boldsymbol{\nabla}^2 \phi - \frac{1}{{c}} \frac{\partial {\boldsymbol{\nabla} \cdot \mathbf{A}}}{\partial {t}} \end{aligned} \hspace{\stretch{1}}(6.28)

We can make a gauge transformation that completely eliminates 6.28, and reduces 6.27 to a wave equation.

\begin{aligned}(\phi, \mathbf{A}) \rightarrow (\phi', \mathbf{A}')\end{aligned} \hspace{\stretch{1}}(6.29)

with

\begin{aligned}\phi &= \phi' - \frac{1}{{c}} \frac{\partial {\chi}}{\partial {t}} \\ \mathbf{A} &= \mathbf{A}' + \boldsymbol{\nabla} \chi\end{aligned} \hspace{\stretch{1}}(6.30)

Can choose $\chi(\mathbf{x}, t)$ to make $\phi' = 0$ ($\forall \phi \exists \chi, \phi' = 0$)

\begin{aligned}\frac{1}{{c}} \frac{\partial {}}{\partial {t}} \chi(\mathbf{x}, t) = \phi(\mathbf{x}, t)\end{aligned} \hspace{\stretch{1}}(6.32)

\begin{aligned}\chi(\mathbf{x}, t) = c \int_{-\infty}^t dt' \phi(\mathbf{x}, t')\end{aligned} \hspace{\stretch{1}}(6.33)

Can also find a transformation that additionally allows $\boldsymbol{\nabla} \cdot \mathbf{A} = 0$

\paragraph{Q:} What would that second transformation be explicitly?

\paragraph{A:} To be revisited next lecture, when this is covered in full detail.

This is the Coulomb gauge

\begin{aligned}\phi &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(6.34)

From 6.27, we then have

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 \mathbf{A}'}{\partial t^2} -\boldsymbol{\nabla}^2 \mathbf{A}' = 0\end{aligned} \hspace{\stretch{1}}(6.36)

which is the wave equation for the propagation of the vector potential $\mathbf{A}'(\mathbf{x}, t)$ through space at velocity $c$, confirming that $c$ is the speed of electromagnetic propagation (the speed of light).

# References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.
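(Editorial aside: it is easy to verify 6.36 mechanically. Restricting to one spatial dimension for brevity, any profile that translates rigidly at speed $c$ solves the Coulomb gauge wave equation, which the following hedged sympy sketch confirms symbolically.)

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f = sp.Function('f')                 # arbitrary one-dimensional profile
A = f(x - c*t)                       # rigidly translating at speed c

# (1/c^2) d^2 A/dt^2 - d^2 A/dx^2 should vanish identically
wave = sp.diff(A, t, 2) / c**2 - sp.diff(A, x, 2)
print(sp.simplify(wave))             # -> 0
```

## Relating the canonical energy momentum tensor to the Lagrangian gradient.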
Posted by peeterjoot on September 12, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

In [4] many tensor quantities are not written in index form, but instead using a vector notation. In particular, the symmetric energy momentum tensor is expressed as

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} F a F \end{aligned} \quad\quad\quad(25)

where the usual tensor form follows by taking dot products with $\gamma^\mu$ and substituting $a = \gamma^\nu$. The conservation equation for the canonical energy momentum tensor of (23) can be put into a similar vector form

\begin{aligned}T(a) &= \gamma_\alpha \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (a \cdot \nabla) A^\beta - a \mathcal{L} \\ 0 &= \nabla \cdot T(a) \end{aligned} \quad\quad\quad(26)

The adjoint $\bar{T}$ of the tensor can be calculated from the definition

\begin{aligned}\nabla \cdot T(a) = a \cdot \bar{T}(\nabla) \end{aligned} \quad\quad\quad(28)

Somewhat unintuitively, this is a function of the gradient. Playing around with factoring the displacement vector $a$ out of (26) shows that the energy momentum adjoint essentially provides an expansion of the gradient of the Lagrangian. To prepare, let's introduce some helper notation

\begin{aligned}\Pi_\beta \equiv \gamma_\alpha \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \end{aligned} \quad\quad\quad(29)

With this our Noether current equation becomes

\begin{aligned}\nabla \cdot T(a) &= \left\langle{{ \nabla T(a) }}\right\rangle \\ &= \left\langle{{ \nabla (\Pi_\beta (a \cdot \nabla) A^\beta - a \mathcal{L} ) }}\right\rangle \\ &= \left\langle{{ \nabla \left(\frac{1}{{2}} \Pi_\beta (a (\nabla A^\beta) + (\nabla A^\beta) a) - a \mathcal{L} \right) }}\right\rangle \\ \end{aligned}

Cyclic permutation of the vector products $\left\langle{{a b c}}\right\rangle = \left\langle{{ c a b}}\right\rangle$ can be used in the scalar selection. This is a little more tractable with some helper notation for the $A^\beta$ gradients, say $v^\beta = \nabla A^\beta$. Because of the operator nature of the gradient, once the vector order is permuted we have to allow for the gradient to act left or right or both, so arrows are used to disambiguate this where appropriate.

\begin{aligned}\nabla \cdot T(a) &= \left\langle{{ \nabla \frac{1}{{2}} \left( \Pi_\beta a v^\beta + \Pi_\beta v^\beta a \right) - (\nabla \mathcal{L}) a }}\right\rangle \\ &= \left\langle{{ \left( \frac{1}{{2}} v^\beta \stackrel{ \leftrightarrow }\nabla \Pi_\beta + \frac{1}{{2}} \nabla (\Pi_\beta v^\beta)- \nabla \mathcal{L} \right) a }}\right\rangle \\ &=a \cdot \left( \frac{1}{{2}} {\left\langle{{ v^\beta \stackrel{ \leftrightarrow }\nabla \Pi_\beta + \nabla (\Pi_\beta v^\beta) }}\right\rangle}_{1} - \nabla \mathcal{L} \right) \end{aligned}

The quantity being dotted with $a$ is the adjoint of the canonical energy momentum tensor

\begin{aligned}\bar{T}(\nabla) &=\frac{1}{{2}} {\left\langle{{ v^\beta \stackrel{ \leftrightarrow }\nabla \Pi_\beta + \nabla (\Pi_\beta v^\beta) }}\right\rangle}_{1} - \nabla \mathcal{L} \end{aligned} \quad\quad\quad(30)

This can, however, be expanded further. First tackling the bidirectional gradient vector term, we can utilize the property that the reverse of a vector leaves the vector unchanged.
This gives us

\begin{aligned}{\left\langle{{ v^\beta \stackrel{ \leftrightarrow }\nabla \Pi_\beta }}\right\rangle}_{1}&={\left\langle{{ v^\beta (\stackrel{ \rightarrow }\nabla \Pi_\beta) }}\right\rangle}_{1}+{\left\langle{{ (v^\beta \stackrel{ \leftarrow }\nabla) \Pi_\beta }}\right\rangle}_{1} \\ &={\left\langle{{ v^\beta (\stackrel{ \rightarrow }\nabla \Pi_\beta) }}\right\rangle}_{1}+{\left\langle{{ \Pi_\beta (\stackrel{ \rightarrow }\nabla v^\beta) }}\right\rangle}_{1} \\ \end{aligned}

In the remaining term, using the Hestenes overdot notation to clarify the scope of the operator, we have

\begin{aligned}\bar{T}(\nabla) &=\frac{1}{{2}} \left({\left\langle{{ v^\beta (\nabla \Pi_\beta) }}\right\rangle}_{1}+{\left\langle{{ \Pi_\beta (\nabla v^\beta) }}\right\rangle}_{1} +{\left\langle{{ (\nabla \Pi_\beta) v^\beta }}\right\rangle}_{1} + {\left\langle{{ \nabla' \Pi_\beta {v^\beta}'}}\right\rangle}_{1} \right)- \nabla \mathcal{L} \\ \end{aligned}

The grouping of the first and third terms above simplifies nicely

\begin{aligned}\frac{1}{{2}}{\left\langle{{ v^\beta (\nabla \Pi_\beta) }}\right\rangle}_{1} +\frac{1}{{2}} {\left\langle{{ (\nabla \Pi_\beta) v^\beta }}\right\rangle}_{1} &=v^\beta (\nabla \cdot \Pi_\beta) +\frac{1}{{2}} {\left\langle{{ v^\beta (\nabla \wedge \Pi_\beta) + (\nabla \wedge \Pi_\beta) v^\beta }}\right\rangle}_{1} \\ \end{aligned}

Since $a (b \wedge c) + (b \wedge c) a = 2 a \wedge b \wedge c$, which is purely a trivector, the vector grade selection above is zero. This leaves the adjoint reduced to

\begin{aligned}\bar{T}(\nabla) &=v^\beta (\nabla \cdot \Pi_\beta) +\frac{1}{{2}} \left({\left\langle{{ \Pi_\beta (\nabla v^\beta) }}\right\rangle}_{1} + {\left\langle{{ \nabla' \Pi_\beta {v^\beta}'}}\right\rangle}_{1} \right)- \nabla \mathcal{L} \\ \end{aligned}

For the remaining vector grade selection operators we have something of the following form

\begin{aligned}\frac{1}{{2}} {\left\langle{{ a b c + b a c }}\right\rangle}_{1} = (a \cdot b ) c \end{aligned}

And we are finally able to put the adjoint into a form that has no remaining grade selection operators

\begin{aligned}\bar{T}(\nabla)&= (\nabla A^\beta) (\nabla \cdot \Pi_\beta) +(\Pi_\beta \cdot \nabla) (\nabla A^\beta) -\nabla \mathcal{L} \\ &= (\nabla A^\beta) (\stackrel{ \rightarrow }\nabla \cdot \Pi_\beta) +(\nabla A^\beta) (\stackrel{ \leftarrow }\nabla \cdot \Pi_\beta) -\nabla \mathcal{L} \\ &= (\nabla A^\beta) (\stackrel{ \leftrightarrow }\nabla \cdot \Pi_\beta) -\nabla \mathcal{L} \end{aligned}

Recapping, we have for the tensor and its adjoint

\begin{aligned}0 &= \nabla \cdot T(a) = a \cdot \bar{T}(\nabla) \\ \Pi_\beta &\equiv \gamma_\alpha \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \\ T(a) &= \Pi_\beta (a \cdot \nabla) A^\beta - a \mathcal{L} \\ \bar{T}(\nabla) &= (\nabla A^\beta) (\stackrel{ \leftrightarrow }\nabla \cdot \Pi_\beta) - \nabla \mathcal{L} \end{aligned} \quad\quad\quad(31)

For the adjoint, since $a \cdot \bar{T}(\nabla) = 0$ for all $a$, we must also have $\bar{T}(\nabla) = 0$, which means the adjoint of the canonical energy momentum tensor really provides not much more than a recipe for computing the Lagrangian gradient

\begin{aligned}\nabla \mathcal{L} &= (\nabla A^\beta) (\stackrel{ \leftrightarrow }\nabla \cdot \Pi_\beta) \end{aligned} \quad\quad\quad(35)

Having seen the adjoint notation, it was natural to see what this looks like for a Lagrangian with multiple scalar field variables, even if that is not intrinsically useful.
Observe that the identity (35), obtained so laboriously, is not more than syntactic sugar for the chain rule expansion of the Lagrangian partials (plus application of the Euler-Lagrange field equations). We could obtain this directly if desired much more easily than by factoring out $a$ from $\nabla \cdot T(a) = 0$. \begin{aligned}\partial_\mu \mathcal{L}&=\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \partial_\mu A^\beta+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu \partial_\alpha A^\beta \\ &=\left( \partial_\alpha \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \partial_\mu A^\beta+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\alpha \partial_\mu A^\beta \\ &=\partial_\alpha\left(\left( \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \partial_\mu A^\beta\right) \\ \end{aligned} Summing over $\mu$ for the gradient, this reproduces (35), with much less work \begin{aligned}\nabla \mathcal{L} &= \gamma^\mu \partial_\mu \mathcal{L} \\ &=\partial_\alpha\left(\left( \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) (\nabla A^\beta)\right) \\ &=(\Pi_\beta \cdot \stackrel{ \leftrightarrow }\nabla) (\nabla A^\beta) \end{aligned} Observe that the Euler-Lagrange field equations are implied in this relationship, so perhaps it has some utility. Also note that while it is simpler to directly compute this, without having started with the canonical energy momentum tensor, we would not know how the two of these were related. # References [4] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003. ## Existence of a symmetry for translational variation. Posted by peeterjoot on September 12, 2009 [Click here for a PDF of this sequence of posts with nicer formatting] Considering an example Lagrangian we found that there was a symmetry provided we could commute the variational derivative with the gradient \begin{aligned}\frac{\delta }{\delta \phi} \mathbf{a} \cdot \boldsymbol{\nabla} \mathcal{L}&=\mathbf{a} \cdot \boldsymbol{\nabla} \frac{\delta \mathcal{L}}{\delta \phi} \end{aligned} What this really means is not clear in general and a better answer to the existence question for incremental translation can be had by considering the transformation of the action directly around the stationary fields. Without really any loss of generality we can consider an action with a four dimensional spacetime volume element, and apply the incremental translation operator to this \begin{aligned}\int &d^4 x a \cdot \nabla \mathcal{L}( A^\beta + \bar{A}^\beta, \partial_\alpha A^\beta + \partial_\alpha \bar{A}^\beta) \\ &=\int d^4 x a \cdot \nabla \mathcal{L}( \bar{A}^\beta, \partial_\alpha \bar{A}^\beta)+\int d^4 x a \cdot \nabla \left(\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \bar{A^\beta}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\alpha \bar{A^\beta}\right)+ \cdots \end{aligned} For the first term we have $a \cdot \nabla \int d^4 x \mathcal{L}( \bar{A}^\beta, \partial_\alpha \bar{A}^\beta)$, but this integral is our stationary action. 
The remainder, to first order in the field variables, can then be expanded and integrated by parts \begin{aligned}\int &d^4 x a^\mu \partial_\mu \left(\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \bar{A^\beta}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\alpha \bar{A^\beta}\right) \\ &=\int d^4 x a^\mu \left(\left( \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \right) \bar{A^\beta}+\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \left( \partial_\mu \bar{A^\beta} \right)+\left( \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \partial_\alpha \bar{A^\beta}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \left( \partial_\mu \partial_\alpha \bar{A^\beta} \right)\right) \\ &=\int d^4 x \left(\left( a^\mu \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \right) \bar{A^\beta}-\left( \partial_\mu a^\mu \frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \right)\bar{A^\beta} +\left( \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \partial_\alpha \bar{A^\beta}-\left( \partial_\mu a^\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \partial_\alpha \bar{A^\beta} \right) \\ \end{aligned} Since $a^\mu$ are constants, this is zero, so there can be no contribution to the field equations by the addition of the translation increment to the Lagrangian. ## On the existence of the symmetry for rotationally altered field Lagrangian Posted by peeterjoot on September 9, 2009 [Click here for a PDF of this sequence of posts with nicer formatting] ## General existence of the rotational symmetry. The previous example hints at a general method to demonstrate that the incremental Lorentz transform produces a symmetry (which was assumed). It will be sufficient to consider the variation around the stationary field variables for the change due to the action from the incremental rotation operator. 
That is

\begin{aligned}\delta S = \int d^4 x (i \cdot x) \cdot \nabla \mathcal{L}( A^\beta + \bar{A}^\beta, \partial_\alpha A^\beta + \partial_\alpha \bar{A}^\beta) \end{aligned} \quad\quad\quad(42)

Performing a first order Taylor expansion of the Lagrangian around the stationary field variables, and dropping the zeroth order term (which, as in the translation case, is just the gradient of the stationary action), we have

\begin{aligned}\delta S &= \int d^4 x (i \cdot x) \cdot \gamma^\mu \partial_\mu \mathcal{L}( A^\beta + \bar{A}^\beta, \partial_\alpha A^\beta + \partial_\alpha \bar{A}^\beta) \\ &= \int d^4 x (i \cdot x) \cdot \gamma^\mu \partial_\mu \left(\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \bar{A}^\beta+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (\partial_\alpha \bar{A}^\beta)\right) \\ &= \int d^4 x (i \cdot x) \cdot \gamma^\mu \left(\left(\partial_\mu \frac{\partial {\mathcal{L}}}{\partial {A^\beta}}\right) \bar{A}^\beta+\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \partial_\mu \bar{A}^\beta+\left(\partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}}\right) (\partial_\alpha \bar{A}^\beta)+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu (\partial_\alpha \bar{A}^\beta)\right) \\ \end{aligned}

Doing the integration by parts we have

\begin{aligned}\delta S &= \int d^4 x \bar{A}^\beta \gamma^\mu \cdot \left((i \cdot x) \left(\partial_\mu \frac{\partial {\mathcal{L}}}{\partial {A^\beta}}\right) -\partial_\mu \left(\frac{\partial {\mathcal{L}}}{\partial {A^\beta}} (i \cdot x)\right)\right) \\ &+\int d^4 x (\partial_\alpha \bar{A}^\beta) \gamma^\mu \cdot \left((i \cdot x) \left(\partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}}\right) -\partial_\mu \left( \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (i \cdot x) \right)\right) \\ &=\int d^4 x \bar{A}^\beta \left((i \cdot x) \cdot \nabla\frac{\partial {\mathcal{L}}}{\partial {A^\beta}}- \nabla \cdot (i \cdot x) \frac{\partial {\mathcal{L}}}{\partial {A^\beta}} \right) +(\partial_\alpha \bar{A}^\beta)\left((i \cdot x) \cdot \nabla\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}}- \nabla \cdot (i \cdot x) \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \right) \end{aligned}

Since $(i \cdot x) \cdot \nabla f = \nabla \cdot (i \cdot x) f$ for any $f$, there is no change to the resulting field equations due to this incremental rotation, so we have a symmetry for any Lagrangian that is first order in its derivatives.

## (CORRECTED) Noether current for incremental Lorentz transformation.

Posted by peeterjoot on September 8, 2009

There were logic errors in the previous post on this topic. They are corrected here (replacing the PDF version, but retaining the previous mistaken notes).

# Guts

[Click here for a PDF of this sequence of posts with nicer formatting]

Let’s assume that we can use the exponential generator of rotations

\begin{aligned}e^{(i \cdot x) \cdot \nabla} = 1 + (i \cdot x) \cdot \nabla + \cdots \end{aligned} \quad\quad\quad(25)

to alter a Lagrangian density. In particular, that we can use the first order approximation of this Taylor series, applying the incremental rotation operator $(i \cdot x) \cdot \nabla = i \cdot (x \wedge \nabla)$ to transform the Lagrangian.

\begin{aligned}\mathcal{L} \rightarrow \mathcal{L} + (i \cdot x) \cdot \nabla \mathcal{L} \end{aligned} \quad\quad\quad(26)

Suppose that we parametrize the rotation bivector $i$ using two perpendicular unit vectors $u$ and $v$. Here perpendicular is in the sense $u v = -v u$, so that $i = u \wedge v = u v$.
For the bivector expressed this way our incremental rotation operator takes the form

\begin{aligned}(i \cdot x) \cdot \nabla &=((u \wedge v) \cdot x) \cdot \nabla \\ &=(u (v \cdot x) - v (u \cdot x)) \cdot \nabla \\ &=(v \cdot x) u \cdot \nabla - (u \cdot x) v \cdot \nabla \\ \end{aligned}

The operator is reduced to a pair of torque-like scaled directional derivatives, and we’ve already examined the Noether currents for the translations induced by the directional derivatives. It’s not unreasonable to take exactly the same approach to rotation symmetries as we did for translation. We found for incremental translations

\begin{aligned}a \cdot \nabla \mathcal{L}&=\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (a \cdot \nabla) {A^\beta}\right) \end{aligned} \quad\quad\quad(27)

So for incremental rotations the change to the Lagrangian is

\begin{aligned}(i \cdot x) \cdot \nabla \mathcal{L}&=(v \cdot x)\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (u \cdot \nabla) {A^\beta}\right) -(u \cdot x)\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (v \cdot \nabla) {A^\beta}\right) \end{aligned} \quad\quad\quad(28)

Since the choice to make $u$ and $v$ both unit vectors and perpendicular has already been made, there is really no loss of generality in aligning them with a pair of the basis vectors, say $u = \gamma_\mu$ and $v = \gamma_\nu$. The incremental rotation operator is reduced to

\begin{aligned}(i \cdot x) \cdot \nabla &=(\gamma_\nu \cdot x) \gamma_\mu \cdot \nabla - (\gamma_\mu \cdot x) \gamma_\nu \cdot \nabla \\ &=x_\nu \partial_\mu - x_\mu \partial_\nu \\ \end{aligned}

Similarly the change to the Lagrangian is

\begin{aligned}(i \cdot x) \cdot \nabla \mathcal{L}&=x_\nu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu {A^\beta}\right) -x_\mu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu {A^\beta}\right) \end{aligned} \quad\quad\quad(29)

Subtracting the two, essentially forming $(i \cdot x) \cdot \nabla \mathcal{L} - (i \cdot x) \cdot \nabla \mathcal{L} = 0$, we have

\begin{aligned}0 =x_\nu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu {A^\beta}- {\delta^\alpha}_\mu \mathcal{L}\right) -x_\mu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu {A^\beta}- {\delta^\alpha}_\nu \mathcal{L}\right) \end{aligned} \quad\quad\quad(30)

We previously wrote

\begin{aligned}{T^\alpha}_\nu &= \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu A^\beta - {\delta^\alpha}_\nu \mathcal{L} \\ \end{aligned}

for the Noether current of spacetime translation, and with that our conservation equation becomes

\begin{aligned}0 = x_\nu \partial_\alpha {T^\alpha}_\mu - x_\mu \partial_\alpha {T^\alpha}_\nu \end{aligned} \quad\quad\quad(31)

As is, this doesn’t really appear to say much, since we previously also found $\partial_\alpha {T^\alpha}_\nu = 0$. We appear to need a way to pull the $x$ coordinates into the derivatives to come up with a more interesting statement.
A test expansion of $\nabla \cdot (i \cdot x) \mathcal{L}$ to see what is left over compared to $(i \cdot x) \cdot \nabla \mathcal{L}$ shows that there is in fact no difference, and we actually have the identity

\begin{aligned}i \cdot (x \wedge \nabla) \mathcal{L} = (i \cdot x) \cdot \nabla \mathcal{L} = \nabla \cdot (i \cdot x) \mathcal{L} \end{aligned} \quad\quad\quad(32)

This suggests that we can pull the $x$ coordinates into the derivatives of (31) as in

\begin{aligned}0 = \partial_\alpha \left( {T^\alpha}_\mu x_\nu - {T^\alpha}_\nu x_\mu \right) \end{aligned} \quad\quad\quad(33)

However, expanding this derivative shows that this is in fact not the case. Instead we have

\begin{aligned}\partial_\alpha \left( {T^\alpha}_\mu x_\nu - {T^\alpha}_\nu x_\mu \right) &={T^\alpha}_\mu \partial_\alpha x_\nu - {T^\alpha}_\nu \partial_\alpha x_\mu \\ &={T^\alpha}_\mu \eta_{\alpha\nu}- {T^\alpha}_\nu \eta_{\alpha\mu} \\ &=T_{\nu\mu} - T_{\mu\nu} \end{aligned}

So instead of a Noether current, following the procedure used to calculate the spacetime translation current, we have only a mediocre compromise

\begin{aligned}{M^{\alpha}}_{\mu\nu} &\equiv {T^\alpha}_\mu x_\nu - {T^\alpha}_\nu x_\mu \\ \partial_\alpha {M^{\alpha}}_{\mu\nu} &= T_{\nu\mu} - T_{\mu\nu} \end{aligned} \quad\quad\quad(34)

Jackson ([4]) ends up with a similar upper-index expression

\begin{aligned}M^{\alpha\beta\gamma} &\equiv T^{\alpha\beta} x^\gamma - T^{\alpha\gamma} x^\beta \\ \end{aligned} \quad\quad\quad(36)

and then uses a requirement for vanishing 4-divergence of this quantity

\begin{aligned}0 &= \partial_\alpha M^{\alpha\beta\gamma} \end{aligned} \quad\quad\quad(38)

to symmetrize this tensor by subtracting off all the antisymmetric portions. The differences compared to Jackson with upper versus lower indexes are minor, for we can follow the same arguments and arrive at the same sort of $0 - 0 = 0$ result as we had in (31)

\begin{aligned}0 = x^\nu \partial_\alpha T^{\alpha\mu} - x^\mu \partial_\alpha T^{\alpha\nu} \end{aligned} \quad\quad\quad(39)

The only difference is that our not-really-a-conservation equation becomes

\begin{aligned}\partial_\alpha M^{\alpha\mu\nu} = T^{\nu\mu} - T^{\mu\nu} \end{aligned} \quad\quad\quad(40)

## An example of the symmetry.

While not a proof that application of the incremental rotation operator is a symmetry, an example at least provides some comfort that this is a reasonable thing to attempt. Again, let’s consider the Coulomb Lagrangian

\begin{aligned}\mathcal{L} = \frac{1}{{2}} (\boldsymbol{\nabla} \phi)^2 - \frac{1}{{\epsilon_0}}\rho \phi \end{aligned}

For this we have

\begin{aligned}\mathcal{L}' &= \mathcal{L} + (i \cdot \mathbf{x}) \cdot \boldsymbol{\nabla} \mathcal{L} \\ &= \mathcal{L} - (i \cdot \mathbf{x}) \cdot \frac{1}{{\epsilon_0}} \left( \rho \boldsymbol{\nabla} \phi + \phi \boldsymbol{\nabla} \rho \right) \end{aligned}

If the variational derivative of the incremental rotation contribution is zero, then we have a symmetry.
\begin{aligned}\frac{\delta }{\delta \phi} (i \cdot \mathbf{x}) \cdot \boldsymbol{\nabla} \mathcal{L} \\ &=(i \cdot \mathbf{x}) \cdot \frac{1}{{\epsilon_0}} \boldsymbol{\nabla} \rho - \sum_m \partial_m \left( (i \cdot \mathbf{x}) \cdot \frac{1}{{\epsilon_0}} \rho \mathbf{e}_m \right) \\ &=(i \cdot \mathbf{x}) \cdot \frac{1}{{\epsilon_0}} \boldsymbol{\nabla} \rho - \boldsymbol{\nabla} \cdot \left( (i \cdot \mathbf{x}) \frac{1}{{\epsilon_0}} \rho \right) \\ \end{aligned}

As found in (32), we have $(i \cdot \mathbf{x}) \cdot \boldsymbol{\nabla} = \boldsymbol{\nabla} \cdot (i \cdot \mathbf{x})$, so we have

\begin{aligned}\frac{\delta }{\delta \phi} (i \cdot \mathbf{x}) \cdot \boldsymbol{\nabla} \mathcal{L} = 0 \end{aligned} \quad\quad\quad(41)

for this specific Lagrangian, as expected.

Note that the test expansion I used to state (32) was done using only the bivector $i = \gamma_\mu \wedge \gamma_\nu$. An expansion with $i = u^\alpha v^\beta \gamma_\alpha \wedge \gamma_\beta$ shows that this is true more generally. Specifically, this expansion gives

\begin{aligned}\nabla \cdot (i \cdot x) \mathcal{L} &= (i \cdot x) \cdot \nabla \mathcal{L} + (\eta_{\alpha\beta} - \eta_{\beta\alpha}) u^\alpha v^\beta \mathcal{L} \\ &= (i \cdot x) \cdot \nabla \mathcal{L} \end{aligned}

(since the metric tensor is symmetric).

Loosely speaking, the geometric reason for this is that $\nabla \cdot f(x)$ takes its maximum (or minimum) when $f(x)$ is colinear with $x$, and is zero when $f(x)$ is perpendicular to $x$. The vector $i \cdot x$ is a combined projection and 90 degree rotation in the plane of the bivector, and the divergence is left with no colinear components to operate on.

While this commutation of $i \cdot \mathbf{x}$ with the divergence operator didn’t help with finding the Noether current, it does at least show that we have a symmetry. Demonstrating the invariance for the general Lagrangian (at least in the single field variable case) likely follows the same procedure as in this specific example above.

# References

[4] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

## Noether currents for incremental Lorentz transformation.

Posted by peeterjoot on September 8, 2009

[Click here for a (CORRECTED) PDF of this sequence of posts with nicer formatting]

Let’s assume that we can use the exponential generator of rotations

\begin{aligned}e^{(i \cdot x) \cdot \nabla} = 1 + (i \cdot x) \cdot \nabla + \cdots \end{aligned} \quad\quad\quad(25)

to alter a Lagrangian density. In particular, that we can use the first order approximation of this Taylor series, applying the incremental rotation operator $(i \cdot x) \cdot \nabla = i \cdot (x \wedge \nabla)$ to transform the Lagrangian.

\begin{aligned}\mathcal{L} \rightarrow \mathcal{L} + (i \cdot x) \cdot \nabla \mathcal{L} \end{aligned} \quad\quad\quad(26)

Suppose that we parametrize the rotation bivector $i$ using two perpendicular unit vectors $u$ and $v$. Here perpendicular is in the sense $u v = -v u$, so that $i = u \wedge v = u v$. For the bivector expressed this way our incremental rotation operator takes the form

\begin{aligned}(i \cdot x) \cdot \nabla &=((u \wedge v) \cdot x) \cdot \nabla \\ &=(u (v \cdot x) - v (u \cdot x)) \cdot \nabla \\ &=(v \cdot x) u \cdot \nabla - (u \cdot x) v \cdot \nabla \\ \end{aligned}

The operator is reduced to a pair of torque-like scaled directional derivatives, and we’ve already examined the Noether currents for the translations induced by the directional derivatives.
It’s not unreasonable to take exactly the same approach to rotation symmetries as we did for translation. We found for incremental translations

\begin{aligned}a \cdot \nabla \mathcal{L}&=\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (a \cdot \nabla) {A^\beta}\right) \end{aligned} \quad\quad\quad(27)

So for incremental rotations the change to the Lagrangian is

\begin{aligned}(i \cdot x) \cdot \nabla \mathcal{L}&=(v \cdot x)\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (u \cdot \nabla) {A^\beta}\right) -(u \cdot x)\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} (v \cdot \nabla) {A^\beta}\right) \end{aligned} \quad\quad\quad(28)

Since the choice to make $u$ and $v$ both unit vectors and perpendicular has already been made, there is really no loss of generality in aligning them with a pair of the basis vectors, say $u = \gamma_\mu$ and $v = \gamma_\nu$. The incremental rotation operator is reduced to

\begin{aligned}(i \cdot x) \cdot \nabla &=(\gamma_\nu \cdot x) \gamma_\mu \cdot \nabla - (\gamma_\mu \cdot x) \gamma_\nu \cdot \nabla \\ &=x_\nu \partial_\mu - x_\mu \partial_\nu \\ \end{aligned}

Similarly the change to the Lagrangian is

\begin{aligned}(i \cdot x) \cdot \nabla \mathcal{L}&=x_\nu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu {A^\beta}\right) -x_\mu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu {A^\beta}\right) \end{aligned} \quad\quad\quad(29)

Subtracting the two, essentially forming $(i \cdot x) \cdot \nabla \mathcal{L} - (i \cdot x) \cdot \nabla \mathcal{L} = 0$, we have

\begin{aligned}0 =x_\nu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\mu {A^\beta}- {\delta^\alpha}_\mu \mathcal{L}\right) -x_\mu\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu {A^\beta}- {\delta^\alpha}_\nu \mathcal{L}\right) \end{aligned} \quad\quad\quad(30)

We previously wrote

\begin{aligned}{T^\alpha}_\nu &= \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu A^\beta - {\delta^\alpha}_\nu \mathcal{L} \\ \end{aligned}

for the Noether current of spacetime translation, and with that our conservation equation becomes

\begin{aligned}0 = x_\nu \partial_\alpha {T^\alpha}_\mu - x_\mu \partial_\alpha {T^\alpha}_\nu \end{aligned} \quad\quad\quad(31)

As is, this doesn’t really appear to say much, since we previously also found $\partial_\alpha {T^\alpha}_\nu = 0$. We appear to need a way to pull the $x$ coordinates into the derivatives to come up with a more interesting statement. A test expansion of $\nabla \cdot (i \cdot x) \mathcal{L}$ to see what is left over compared to $(i \cdot x) \cdot \nabla \mathcal{L}$ shows that there is in fact no difference, and we actually have the identity

\begin{aligned}i \cdot (x \wedge \nabla) \mathcal{L} = (i \cdot x) \cdot \nabla \mathcal{L} = \nabla \cdot (i \cdot x) \mathcal{L} \end{aligned} \quad\quad\quad(32)

The geometric reason for this is that $\nabla \cdot f(x)$ takes its maximum (or minimum) when $f(x)$ is colinear with $x$, and is zero when $f(x)$ is perpendicular to $x$. The vector $i \cdot x$ is a combined projection and 90 degree rotation in the plane of the bivector, and the divergence is left with no colinear components to operate on.

FIXME: bother showing this explicitly?
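Answering the FIXME explicitly (a check added editorially, using only reductions already made above): for $i = \gamma_\mu \wedge \gamma_\nu$ we have $i \cdot x = x_\nu \gamma_\mu - x_\mu \gamma_\nu$, so

\begin{aligned}\nabla \cdot \left( (i \cdot x) \mathcal{L} \right) &= \partial_\mu \left( x_\nu \mathcal{L} \right) - \partial_\nu \left( x_\mu \mathcal{L} \right) \\ &= x_\nu \partial_\mu \mathcal{L} - x_\mu \partial_\nu \mathcal{L} + \left( \partial_\mu x_\nu - \partial_\nu x_\mu \right) \mathcal{L} \\ &= (i \cdot x) \cdot \nabla \mathcal{L} \end{aligned}

since $\partial_\mu x_\nu = \eta_{\mu\nu}$ and the metric is symmetric. This is the same cancellation spelled out for a general bivector in the corrected version of this post above.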
The end result is that we should be able to bring the $x$ coordinates into the derivatives of (31) provided both are brought in. That gives us a more interesting conservation statement, something that has the look of field angular momentum

\begin{aligned}0 = \partial_\alpha \left( x_\nu {T^\alpha}_\mu - x_\mu {T^\alpha}_\nu \right) \end{aligned} \quad\quad\quad(33)

The conservation identity can be summarized using

\begin{aligned}{M^{\alpha}}_{\mu\nu} &\equiv x_\nu {T^\alpha}_\mu - x_\mu {T^\alpha}_\nu \\ 0 &= \partial_\alpha {M^{\alpha}}_{\mu\nu} \end{aligned} \quad\quad\quad(34)

FIXME: Jackson ([4]) states a similar upper-index expression

\begin{aligned}M^{\alpha\mu\nu} &\equiv x_\nu T^{\alpha\mu} - x_\mu T^{\alpha\nu} \\ 0 &= \partial_\alpha M^{\alpha\mu\nu} \end{aligned} \quad\quad\quad(36)

I should try to show that these are identical, or understand the difference.

# References

[4] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

## Noether currents for translation transformations of field densities.

Posted by peeterjoot on September 7, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

# Motivation

I previously summarized the derivations of the Euler-Lagrange relations for a field Lagrangian density and of a single parameter Noether current. Here I move on to incremental translation, with an eye on examining incremental Lorentz transformation, and eventually both translation and rotation.

# Spacetime translation symmetries and Noether currents.

Considering the effect of spacetime translation on the Lagrangian, we examine the application of the first order linear Taylor series expansion shifting the vector parameters by an increment $a$. The Lagrangian alteration is

\begin{aligned}\mathcal{L} \rightarrow e^{a \cdot \nabla }\mathcal{L} \approx \mathcal{L} + a \cdot \nabla \mathcal{L} \end{aligned} \quad\quad\quad(15)

Similar to the addition of derivatives to the Lagrangians of dynamics, we can add some types of total derivatives $\partial_\mu F^\mu$ to the Lagrangian without changing the resulting field equations (i.e. there is an associated “symmetry” for this Lagrangian alteration). The directional derivative $a \cdot \nabla \mathcal{L} = a^\mu \partial_\mu \mathcal{L}$ appears to be an example of a total derivative alteration that leaves the field equations unchanged.

## On the symmetry.

The fact that this translation necessarily results in the same field equations is not necessarily obvious. Using one of the simplest field Lagrangians, that of the Coulomb electrostatic law, we can illustrate that this is true in at least one case, and also see what is required in the general case

\begin{aligned}\mathcal{L} = \frac{1}{{2}} (\boldsymbol{\nabla} \phi)^2 - \frac{1}{{\epsilon_0}}\rho \phi = \frac{1}{{2}} \sum_m(\partial_m \phi)^2 - \frac{1}{{\epsilon_0}}\rho \phi \end{aligned} \quad\quad\quad(16)

With partials written $\partial_m f = f_m$, we summarize the field Euler-Lagrange equations using the variational derivative

\begin{aligned}\frac{\delta }{\delta \phi} &=\frac{\partial }{\partial \phi} - \sum_m \partial_m \frac{\partial }{\partial \phi_m} \end{aligned} \quad\quad\quad(17)

where the extremum condition ${\delta \mathcal{L}}/{\delta \phi} = 0$ produces the field equations.
For the Coulomb Lagrangian without (spatial) translation, we have

\begin{aligned}\frac{\delta \mathcal{L}}{\delta \phi} &=- \frac{1}{{\epsilon_0}}\rho - \sum_m \partial_{mm} \phi \end{aligned} \quad\quad\quad(18)

So the extremum condition ${\delta \mathcal{L}}/{\delta \phi} = 0$ gives

\begin{aligned}\boldsymbol{\nabla}^2 \phi = - \frac{1}{{\epsilon_0}}\rho \end{aligned} \quad\quad\quad(19)

Equivalently, and probably more familiarly, we write $\mathbf{E} = -\boldsymbol{\nabla} \phi$, and get the differential form of Coulomb’s law in terms of the electric field

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = \frac{1}{{\epsilon_0}}\rho \end{aligned} \quad\quad\quad(20)

To consider the translation case we have to first evaluate the first order translation produced by the directional derivative. This is

\begin{aligned}\mathbf{a} \cdot \boldsymbol{\nabla} \mathcal{L} &= \sum_m a_m \partial_m \mathcal{L} \\ &= -\frac{\mathbf{a}}{\epsilon_0} \cdot (\rho \boldsymbol{\nabla} \phi + \phi \boldsymbol{\nabla} \rho) \end{aligned}

For the translation to be a symmetry, the evaluation of the variational derivative must be zero. In this case we have

\begin{aligned}\frac{\delta }{\delta \phi} \mathbf{a} \cdot \boldsymbol{\nabla} \mathcal{L}&= -\frac{\mathbf{a}}{\epsilon_0} \cdot \frac{\delta }{\delta \phi} (\rho \boldsymbol{\nabla} \phi + \phi \boldsymbol{\nabla} \rho) \\ &= -\sum_m \frac{a_m}{\epsilon_0} \frac{\delta }{\delta \phi} (\rho \partial_m \phi + \phi \partial_m \rho) \\ &= -\sum_m \frac{a_m}{\epsilon_0} \left( \frac{\partial }{\partial \phi} - \sum_k \partial_k \frac{\partial }{\partial \phi_k}\right) (\rho \phi_m + \phi \rho_m) \\ \end{aligned}

We see that the $\phi$ partials select only the $\rho$ derivatives, whereas the $\phi_k$ partials select only the $\rho$ term. All told we have zero

\begin{aligned}\left( \frac{\partial }{\partial \phi} - \sum_k \partial_k \frac{\partial }{\partial \phi_k}\right) (\rho \phi_m + \phi \rho_m) &=\rho_m - \sum_k \partial_k \rho \delta_{km} \\ &=\rho_m - \partial_m \rho \\ &= 0 \end{aligned}

This example illustrates that we have a symmetry provided we can “commute” the variational derivative with the gradient

\begin{aligned}\frac{\delta }{\delta \phi} \mathbf{a} \cdot \boldsymbol{\nabla} \mathcal{L}&=\mathbf{a} \cdot \boldsymbol{\nabla} \frac{\delta \mathcal{L}}{\delta \phi} \end{aligned} \quad\quad\quad(21)

Since ${\delta \mathcal{L}}/{\delta \phi} = 0$ by construction, the resulting field equations are unaltered by such a modification. Are there conditions where this commutation is not possible? Some additional exploration of the symmetries associated with the addition of derivatives to field Lagrangians was made previously in ([3]). After all was said and done, the conclusion motivated by this simple example was also reached there. Namely, we require the commutation condition (21) between the variational derivative and the gradient of the Lagrangian.

## Noether current derivation.

With the assumption that the Lagrangian translation induces a symmetry, we can proceed with the calculation of the Noether current. This procedure for deriving the Noether current for an incremental spacetime translation follows along similar lines as the scalar alteration considered previously. We start with the calculation of the first order alteration, expanding the derivatives.
Let’s work with a multiple field Lagrangian $\mathcal{L} = \mathcal{L}(A^\beta, \partial_\alpha A^\beta)$ right from the start

\begin{aligned}a \cdot \nabla \mathcal{L}&=a^\mu \partial_\mu \mathcal{L} \\ &=a^\mu \left(\frac{\partial {\mathcal{L}}}{\partial {A^\sigma}} \frac{\partial {A^\sigma}}{\partial {x^\mu}}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \frac{\partial {(\partial_\alpha A^\beta)}}{\partial {x^\mu}}\right) \\ \end{aligned}

Using the Euler-Lagrange field equations in the first term, and switching the order of differentiation in the second, this can be written as a single derivative

\begin{aligned}a \cdot \nabla \mathcal{L}&=a^\mu \left(\partial_\alpha \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \frac{\partial {A^\beta}}{\partial {x^\mu}}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\alpha \frac{\partial {A^\beta}}{\partial {x^\mu}}\right) \\ &=a^\mu \partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \frac{\partial {A^\beta}}{\partial {x^\mu}}\right) \\ \end{aligned}

In the scalar Noether current case we were able to form a similar expression, but one that was a first order derivative that could be set to zero, fixing the conservation relationship. Here there’s no such freedom, but we can sneakily subtract $a \cdot \nabla \mathcal{L}$ from itself to calculate such a zero

\begin{aligned}0 =\partial_\alpha \left(\frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} a^\mu \frac{\partial {A^\beta}}{\partial {x^\mu}} - a^\alpha \mathcal{L}\right) \end{aligned} \quad\quad\quad(22)

Since this must hold for any vector $a$, we have the freedom to choose the simplest such vector, a unit vector $a = \gamma_\nu$, for which $a^\mu = {\delta^\mu}_\nu$. Our current and its zero divergence relationship then become

\begin{aligned}{T^\alpha}_\nu &= \frac{\partial {\mathcal{L}}}{\partial {(\partial_\alpha A^\beta)}} \partial_\nu A^\beta - {\delta^\alpha}_\nu \mathcal{L} \\ 0 &= \partial_\alpha {T^\alpha}_\nu \end{aligned} \quad\quad\quad(23)

This isn’t the symmetric energy momentum tensor that we want in the electrodynamics context, although that can be obtained from it by adding just the right zero.

TO BE CONTINUED: Noether current for incremental rotation.

# References

[3] Peeter Joot. Canonical energy momentum tensor and Lagrangian translation [online]. http://sites.google.com/site/peeterjoot/math2009/stress_energy_noethers.pdf.

## Noether currents for rotational changes.

Posted by peeterjoot on September 5, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]

EDITED to remove the Maxwell Lagrangian stuff … I think it was wrong.

# Motivation

The article ([1]) details the calculation of a conserved current associated with an incremental Poincare transformation. This is used to directly determine the symmetric energy momentum tensor for Maxwell’s equations, in contrast to the canonical energy momentum tensor (arising from spacetime translation), which is not symmetric but can be symmetrized with other arguments. I believe that I am slowly accumulating the tools required to understand this paper. One such tool is likely the exponential rotational generator examined in [2], utilizing the angular momentum operator. Here I review some of the Noether conservation calculations and the associated Noether currents for a few example Lagrangian densities.
Then I hope to see how to apply similar techniques to these using an angular momentum operator alteration of the Lagrangian density.

# Field Euler-Lagrange equations.

The extremization of the action integral

\begin{aligned}S &= \int \mathcal{L} d^4 x \end{aligned} \quad\quad\quad(1)

can be dealt with (following Feynman) as a first order Taylor expansion and integration by parts exercise. A single field variable example serves to illustrate. A first order Lagrangian of a single field variable has the form

\begin{aligned}\mathcal{L} = \mathcal{L}(\phi, \partial_\mu \phi) \end{aligned} \quad\quad\quad(2)

Let us vary the field $\phi \rightarrow \phi + \bar{\phi}$, inducing a corresponding variation in the action

\begin{aligned}S + \delta S&= \int \mathcal{L}(\phi + \bar{\phi}, \partial_\mu (\phi + \bar{\phi})) d^4 x \\ &= \int d^4 x \left(\mathcal{L}(\phi, \partial_\mu \phi)+\bar{\phi} \frac{\partial {\mathcal{L}}}{\partial {\phi}}+\partial_\mu \bar{\phi} \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi)}}+ \cdots \right) \end{aligned}

Neglecting any second or higher order terms, the change in the action from the assumed solution is

\begin{aligned}\delta S&=\int d^4 x \left( \bar{\phi} \frac{\partial {\mathcal{L}}}{\partial {\phi}} +\partial_\mu \bar{\phi} \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi)}} \right) \end{aligned} \quad\quad\quad(3)

This is now integrable by parts, yielding

\begin{aligned}\delta S&=\int d^3 x \left( {\left. \bar{\phi} \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi)}} \right\vert}_{\partial x^\mu} \right)+\int d^4 x \bar{\phi} \left( \frac{\partial {\mathcal{L}}}{\partial {\phi}} - \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi)}} \right) \end{aligned} \quad\quad\quad(4)

Here $d^3 x$ is taken to mean that part of the integration not including $dx_\mu$. The field $\bar{\phi}$ is always required to vanish on the boundary, as in the dynamic Lagrangian arguments, so the first integral is zero. If the remainder is zero for all fields $\bar{\phi}$, then the inner term must be zero, and we have the field Euler-Lagrange equations as a result

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {\phi}} - \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi)}} = 0 \end{aligned} \quad\quad\quad(5)

When we have multiple field variables, say $A_\nu$, the chain rule expansion leading to (3) has to be modified to sum over all the field variables, and we end up instead with

\begin{aligned}\delta S&=\int d^4 x \sum_{\nu} \bar{A_\nu} \left( \frac{\partial {\mathcal{L}}}{\partial {A_\nu}} - \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu A_\nu)}} \right) \end{aligned} \quad\quad\quad(6)

So for $\delta S = 0$ for all $\bar{A}_\nu$ we have a set of equations, one for each $\nu$

\begin{aligned}\frac{\partial {\mathcal{L}}}{\partial {A_\nu}} - \partial_\mu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu A_\nu)}} = 0 \end{aligned} \quad\quad\quad(7)

# Field Noether currents.

The single parameter Noether conservation equation is again mainly an application of the chain rule.
Illustrating with the one field variable case, with an altered field variable $\phi \rightarrow \phi'(\theta)$, and

\begin{aligned}\mathcal{L}' = \mathcal{L}(\phi', \partial_\mu \phi') \end{aligned} \quad\quad\quad(8)

Examining the change of $\mathcal{L}'$ with $\theta$ we have

\begin{aligned}\frac{d \mathcal{L}'}{d \theta}&=\frac{\partial {\mathcal{L}}}{\partial {\phi'}} \frac{\partial {\phi'}}{\partial {\theta}}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi')}}\frac{\partial {(\partial_\mu \phi')}}{\partial {\theta}} \end{aligned}

For the last term we can switch up the order of differentiation

\begin{aligned}\frac{\partial {(\partial_\mu \phi')}}{\partial {\theta}}&=\frac{\partial {}}{\partial {\theta}}\frac{\partial {\phi'}}{\partial {x^\mu}} \\ &= \frac{\partial {}}{\partial {x^\mu}} \frac{\partial {\phi'}}{\partial {\theta}} \end{aligned}

Additionally, with substitution of the Euler-Lagrange equations in the first term we have

\begin{aligned}\frac{d \mathcal{L}'}{d \theta}&=\left( \frac{\partial {}}{\partial {x^\mu}} \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi')}} \right) \frac{\partial {\phi'}}{\partial {\theta}}+\frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi')}} \frac{\partial {}}{\partial {x^\mu}} \frac{\partial {\phi'}}{\partial {\theta}} \\ \end{aligned}

But this can be directly anti-differentiated, yielding the Noether conservation equation

\begin{aligned}\frac{d \mathcal{L}'}{d \theta}=\frac{\partial {}}{\partial {x^\mu}} \left( \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu \phi')}} \frac{\partial {\phi'}}{\partial {\theta}} \right) \end{aligned} \quad\quad\quad(9)

With multiple field variables we’ll have a term in the chain rule expansion for each field variable. The end result is pretty much the same, but we have to sum over all the fields

\begin{aligned}\frac{d \mathcal{L}'}{d \theta}=\sum_\nu \frac{\partial {}}{\partial {x^\mu}} \left( \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu {A'}_\nu)}} \frac{\partial {{A'}_\nu}}{\partial {\theta}} \right) \end{aligned} \quad\quad\quad(10)

Unlike the field Euler-Lagrange equations, we have just one conservation equation here, not one for each field variable. In this multivariable case, expression in vector form can eliminate the sum over field variables. With $A' = {A'}_\nu \gamma^\nu$, we have

\begin{aligned}\frac{d \mathcal{L}'}{d \theta}=\frac{\partial {}}{\partial {x^\mu}} \left( \gamma_\nu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu {A'}_\nu)}} \cdot \frac{\partial {A'}}{\partial {\theta}} \right) \end{aligned} \quad\quad\quad(11)

With an evaluation at $\theta = 0$, we have finally

\begin{aligned}{\left. \frac{d \mathcal{L}'}{d \theta} \right\vert}_{\theta=0}=\frac{\partial {}}{\partial {x^\mu}} \left( \gamma_\nu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu {A}_\nu)}} \cdot {\left. \frac{\partial {A'}}{\partial {\theta}} \right\vert}_{\theta=0}\right) \end{aligned} \quad\quad\quad(12)

When the Lagrangian alteration is independent of $\theta$ (i.e. is invariant), it is said that there is a symmetry. By (12) we have a conserved quantity associated with this symmetry, some quantity, say $J$, that has a zero divergence. That is

\begin{aligned}J^\mu &= \gamma_\nu \frac{\partial {\mathcal{L}}}{\partial {(\partial_\mu {A}_\nu)}} \cdot {\left.
\frac{\partial {A'}}{\partial {\theta}} \right\vert}_{\theta=0} \\ 0 &= \partial_\mu J^\mu \end{aligned} \quad\quad\quad(13)

A concrete illustration of this current is sketched below, following the references.

TO BE CONTINUED: Review the Noether derivation associated with spacetime translation, and the associated conservation currents. Eventually, try to find the Noether current for a linearized alteration of the Lagrangian using the angular momentum operator or the full exponential operator.

# References

[1] M. Montesinos and E. Flores. Symmetric energy-momentum tensor in Maxwell, Yang-Mills, and Proca theories obtained using only Noether’s theorem. arXiv preprint hep-th/0602190, 2006.

[2] Peeter Joot. Generator of rotations in arbitrary dimensions [online]. http://sites.google.com/site/peeterjoot/math2009/rotationGenerator.pdf.
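As a concrete illustration of (12) and (13) (a standard textbook example added here, not part of the original post), consider two real scalar fields with

\begin{aligned}\mathcal{L} = \frac{1}{{2}} \partial_\mu \phi_1 \partial^\mu \phi_1 + \frac{1}{{2}} \partial_\mu \phi_2 \partial^\mu \phi_2 - \frac{m^2}{2} \left( \phi_1^2 + \phi_2^2 \right) \end{aligned}

altered by the internal rotation $\phi_1' = \phi_1 \cos\theta - \phi_2 \sin\theta$, $\phi_2' = \phi_1 \sin\theta + \phi_2 \cos\theta$. This alteration leaves $\mathcal{L}'$ independent of $\theta$, so there is a symmetry. With

\begin{aligned}{\left. \frac{\partial {\phi_1'}}{\partial {\theta}} \right\vert}_{\theta=0} = -\phi_2, \qquad {\left. \frac{\partial {\phi_2'}}{\partial {\theta}} \right\vert}_{\theta=0} = \phi_1 \end{aligned}

the current of (13) is

\begin{aligned}J^\mu &= (\partial^\mu \phi_1)(-\phi_2) + (\partial^\mu \phi_2) \phi_1 = \phi_1 \partial^\mu \phi_2 - \phi_2 \partial^\mu \phi_1 \end{aligned}

and its conservation can be verified directly from the field equations $\partial_\mu \partial^\mu \phi_a = -m^2 \phi_a$, since $\partial_\mu J^\mu = \phi_1 \partial_\mu \partial^\mu \phi_2 - \phi_2 \partial_\mu \partial^\mu \phi_1 = 0$.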
https://docs.panda3d.org/1.10/python/reference/panda3d.core.AnalogNode
# AnalogNode

from panda3d.core import AnalogNode

class AnalogNode

Bases: DataNode

This is the primary interface to analog controls like sliders and joysticks associated with a ClientBase. This creates a node that connects to the named analog device, if it exists, and provides hooks to the user to read the state of any of the sequentially numbered controls associated with that device.

Each control can return a value ranging from -1 to 1, reflecting the current position of the control within its total range of motion.

The user may choose up to two analog controls to place on the data graph as the two channels of an xy datagram, similarly to the way a mouse places its position data. In this way, an AnalogNode may be used in place of a mouse.

__init__(param0: AnalogNode)
__init__(client: ClientBase, device_name: str)
__init__(device: InputDevice)

clearOutput(channel: int) → None
Removes the output to the data graph associated with the indicated channel. See setOutput().

static getClassType() → TypeHandle

getControlState(index: int) → float
Returns the current position of the analog control identified by its index number, or 0.0 if the control is unknown. The normal range of a single control is -1.0 to 1.0.

getNumControls() → int
Returns the number of analog controls known to the AnalogNode. This number may change as more controls are discovered.

getOutput(channel: int) → int
Returns the analog control index that is output to the data graph on the indicated channel, or -1 if no control is output on that channel. See setOutput().

isControlKnown(index: int) → bool
Returns true if the state of the indicated analog control is known, or false if we have never heard anything about this particular control.

isOutputFlipped(channel: int) → bool
Returns true if the analog control index that is output to the data graph on the indicated channel is flipped. See setOutput().

isValid() → bool
Returns true if the AnalogNode is valid and connected to a server, false otherwise.

setOutput(channel: int, index: int, flip: bool) → None
Causes a particular analog control to be placed in the data graph for the indicated channel. Normally, a mouse uses channels 0 and 1 for the X and Y information, respectively; channels 0, 1, and 2 are available. If flip is true, the analog control value will be reversed before outputting it.
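A minimal usage sketch (an editorial illustration based only on the methods documented above, not an official example; the device name "joystick0", the control indices, and the data-graph attachment point are assumptions that depend on the actual ClientBase and hardware in use):

```python
# Sketch of wiring an AnalogNode into the data graph in place of a mouse,
# using only the methods documented above.
from panda3d.core import AnalogNode

def attach_analog_device(data_root, client, device_name="joystick0"):
    analog = AnalogNode(client, device_name)
    if not analog.isValid():
        return None  # device unknown or not connected to a server

    # Mouse-style convention: channel 0 carries X and channel 1 carries Y.
    # Control indices 0 and 1 are assumptions about the device layout.
    analog.setOutput(0, 0, False)  # channel 0 <- control 0, unflipped
    analog.setOutput(1, 1, True)   # channel 1 <- control 1, flipped

    # Attach under the data graph root (e.g. base.dataRoot in ShowBase)
    # so downstream data nodes receive the xy datagram.
    node_path = data_root.attachNewNode(analog)

    # Poll the raw control values; each lies in [-1.0, 1.0].
    for i in range(analog.getNumControls()):
        if analog.isControlKnown(i):
            print("control %d: %f" % (i, analog.getControlState(i)))

    return node_path
```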
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-10-quadratic-equations-10-3-quadratic-formula-concept-quiz-10-3-page-452/3
## Elementary Algebra

Published by Cengage Learning

# Chapter 10 - Quadratic Equations - 10.3 - Quadratic Formula - Concept Quiz 10.3 - Page 452: 3

False

#### Work Step by Step

The standard form of a quadratic equation is $ax^{2}+bx+c=0$, and the quadratic formula is: $x=\frac{-b \pm \sqrt {b^{2}-4ac}}{2a}$

A quadratic equation must be brought into the standard form $ax^{2}+bx+c=0$ before it can be solved by the quadratic formula. Since $3x^{2}+2x+5=0$ already matches the standard form $ax^{2}+bx+c=0$, it can be solved directly through the quadratic formula, without first reducing its leading coefficient $a$ to one. Therefore, the statement in the question is false.
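For concreteness (a worked step added here, not part of the original answer), applying the formula directly with $a=3$, $b=2$, $c=5$ gives $x=\frac{-2 \pm \sqrt {2^{2}-4(3)(5)}}{2(3)}=\frac{-2 \pm \sqrt {-56}}{6}=\frac{-1 \pm i\sqrt {14}}{3}$. The negative discriminant happens to yield complex roots, but the point stands: the formula applies as-is, with no need to first divide the equation through by $3$.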
https://mpl-interactions.readthedocs.io/en/0.9.1/API.html
API

pyplot

Control the output of standard plotting functions such as plot and hist using sliders and other widgets. When using the ipympl backend these functions will leverage ipywidgets for the controls; otherwise they will use the built-in Matplotlib widgets.

interactive_plot - Control a plot using widgets.
interactive_hist - Control the contents of a histogram using widgets.
interactive_scatter - Control a scatter plot using widgets.
interactive_imshow - Control an image using widgets.
interactive_axhline - Control a horizontal line using widgets.
interactive_axvline - Control a vertical line using widgets.

generic

Functions that will be useful irrespective of backend.

heatmap_slicer - Compare horizontal and/or vertical slices across multiple arrays.
zoom_factory - Add the ability to zoom with the scroll wheel.
panhandler - Enable panning a plot with any mouse button.
image_segmenter - Manually segment an image with the lasso selector.
hyperslicer - View slices from a hyperstack of images selected by sliders.

utilities

Functions that make some features in Matplotlib a bit more convenient.

ioff - A context manager for turning interactive mode off.
figure - Matplotlib figure, but a scalar figsize will multiply the rcParams figsize.
nearest_idx - Return the index of the array entry that is closest to a given value.
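A minimal sketch of the pyplot-style entry point (an editorial example reconstructed from the project's 0.x documentation examples; the exact signature, the tuple-to-slider shorthand, and the return value of interactive_plot are assumptions that may differ between versions):

```python
# Minimal sketch: a sine curve whose frequency (tau) and amplitude (beta)
# are driven by sliders. Tuples passed as keyword arguments are assumed to
# become slider ranges, following the mpl-interactions 0.x examples.
import numpy as np
import matplotlib.pyplot as plt
from mpl_interactions import interactive_plot

x = np.linspace(0, np.pi, 100)

def f(x, tau, beta):
    # tau and beta arrive from the widget controls.
    return beta * np.sin(x * tau)

controls = interactive_plot(f, x=x, tau=(0.5, 10), beta=(1, 3))
plt.show()
```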
http://clay6.com/qa/17717/if-overrightarrow-a-3-overrightarrow-b-large-frac-then-overrightarrow-a-tim
# If $|\overrightarrow a|=3,\:|\overrightarrow b|=\large\frac{\sqrt 2}{3}$, then $\overrightarrow a\times\overrightarrow b$ is a unit vector if the angle between $\overrightarrow a$ and $\overrightarrow b$ is ?

$\begin{array}{1 1} \large\frac{\pi}{6} \\ \large\frac{\pi}{4} \\ \large\frac{\pi}{3} \\ \large\frac{\pi}{2} \end{array}$

$|\overrightarrow a\times\overrightarrow b|=|\overrightarrow a|\,|\overrightarrow b|\,\sin\theta$

$1=3\cdot\frac{\sqrt 2}{3}\,\sin\theta=\sqrt 2\,\sin\theta$

$\Rightarrow\:\sin\theta=\frac{1}{\sqrt 2}$

$\Rightarrow\:\theta=\frac{\pi}{4}$
https://indico.cern.ch/event/740038/timetable/?view=standard_numbered_inline_minutes
# Scale invariance in particle physics and cosmology

Europe/Zurich, 4/3-006 - TH Conference Room (CERN)

Description

Theories with (classical or quantum) scale-invariance provide a dynamical origin of all mass scales and present a number of interesting aspects: they are an appealing framework in which to address the hierarchy problem, and lead to naturally flat inflationary potentials and dark matter candidates. The aim of the meeting is to discuss scale invariance in particle physics and cosmology. This theory institute is supported by the CERN-Korea collaboration program and the ERC grants NEO-NAT and NuBSM.

Organizers: A. Eichhorn, H. M. Lee, S. C. Park, J. Rubio, A. Salvio, S. Sibiryakov, M. Shaposhnikov, A. Strumia, C. Wetterich

Partial list of speakers

• Steven Abel
• Damiano Anselmi
• Fedor Bezrukov
• John Donoghue
• Astrid Eichhorn
• Dumitru Ghilencea
• Jinn-Ouk Gong
• Christopher T. Hill
• Bob Holdom
• Deog-Ki Hong
• Gerard 't Hooft
• Sang Hui Im
• D. R. Timothy Jones
• Elias Kiritsis
• Archil Kobakhidze
• Manfred Lindner
• Anupam Mazumdar
• Philip Mannheim
• Hermann Nicolai
• Kin-ya Oda
• Roberto Percacci
• Eliezer Rabinovici
• Graham Ross
• Javier Rubio
• Alberto Salvio
• Francesco Sannino
• Kellog Stelle
• Christof Wetterich

Practical information: There is no registration fee. A limited number of rooms have already been pre-booked at CERN hotels; please contact THworkshops.secretariat@cern.ch (after registration) if you want to stay there during the meeting. The deadline for registration was 1 December 2018 and, therefore, registration is now closed.

Extended description

The synergy of the Standard Model of particle physics and General Relativity has led to a consistent framework that has been confirmed by numerous experiments and observations. In spite of their undeniable success, these theories cannot be considered complete theories of Nature. On the one hand, they fail to explain basic observational facts such as the existence of neutrino masses, the presence of a sizable dark matter component, or the matter-antimatter asymmetry of the Universe. On the other hand, there is no satisfactory account of tiny dimensionless ratios, such as the Fermi scale over the Planck scale (the hierarchy problem), or the dark energy density or cosmological constant in units of the Planck mass.

The discovery of a relatively light Higgs boson at the LHC, together with the absence of new physics beyond the Standard Model, has rejuvenated scale symmetry as an appealing scenario in which to address the hierarchy problem. This symmetry consists of a common multiplicative scaling of all fields according to their dimension. No dimensional parameters are allowed to appear in the action. In particular, dilatation symmetry ensures the absence of a Higgs mass term. In order to describe the appearance of physical scales, a viable scale-invariant theory should exhibit dilatation symmetry breaking in one way or another. This symmetry breaking could be explicit as a consequence of dimensional transmutation, as happens for instance in QCD. In this case, the dilatation symmetry is anomalous and appears only in the vicinity of non-trivial fixed points. In addition, spontaneous scale symmetry breaking driven, for instance, by the vacuum expectation value of a scalar field could provide particle masses even in the absence of explicit mass terms.
In this type of scenario, scale invariance can be preserved at the quantum level by means of a scale-invariant regularization prescription or as a consequence of a fixed point of running couplings.

The inclusion of gravity in the aforementioned scale-invariant framework might have far-reaching consequences. On the one hand, the breaking of the continuous dilatation symmetry translates into the appearance of a pseudo-Goldstone boson or dilaton which, due to its small mass, could potentially contribute to the early and late time acceleration of the Universe, or to the number of relativistic degrees of freedom at big bang nucleosynthesis and recombination. On the other hand, the small value of the Higgs mass at the Planck scale and the associated emergence of scale invariance could be a natural consequence of asymptotic safety, as already suggested by several functional renormalization group studies.

The goal of this theory institute is to discuss the role of classical and quantum scale invariance in high energy physics and cosmology.

Participants

• Alberto Salvio
• Alessandro Strumia
• Alexander Helmboldt
• Alexander Vikman
• Alexandros Karam
• Andi Hektor
• Andrey Shkerin
• Anna Tokareva
• Anupam Mazumdar
• Archil Kobakhidze
• Ariel Edery
• Astrid Eichhorn
• Bob Holdom
• Bogumila Swiezewska
• Chris Ripken
• Christabel Powell
• Christof Wetterich
• Christopher Hill
• Damiano Anselmi
• Daniel Elander
• Daniele Teresi
• Deog Ki Hong
• Dumitru Ghilencea
• Eduardo Guendelman
• Elias Kiritsis
• Eliezer Rabinovici
• Erwin Tanin
• Fedor Bezrukov
• Filipe de Oliveira Salles
• Francesco Sannino
• Gabriel Menezes
• Georgios Karananas
• Gerardus 't Hooft
• Graham Ross
• Guido Martinelli
• Hang Bae Kim
• Hardi Veermäe
• Hermann Nicolai
• Hisashi Okui
• Hye-Sung Lee
• Hyun Min Lee
• Jan Kwapisz
• Jason L Evans
• Javier Rubio
• Jinn-Ouk Gong
• John Donoghue
• Kellog Stelle
• Kin-ya Oda
• Leonardo Modesto
• Manfred Lindner
• Mario Herrero Valea
• Massimiliano Rinaldi
• Matti Heikinheimo
• Michele Frigerio
• Mikhail Shaposhnikov
• Minho SON
• Petr Jizba
• Philip Mannheim
• pyungwon ko
• Roberto Franceschini
• Roberto Percacci
• Ruiwen Ouyang
• Sang Hui Im
• Satoshi Iso
• Seong Chan Park
• Seung J. Lee
• Silvia De Bianchi
• Silvia Vicentini
• Sin Kyu Kang
• SRIJA CHAKRABORTY
• Steven Abel
• Tommi Markkanen
• Tommi Tenkanen
• Vedran Brdar
• Wanil Park

• Monday, January 28

• 1 Quantum Scale Symmetry

Quantum scale symmetry is the realization of scale invariance in a quantum field theory. No parameters with dimension of length or mass are present in the quantum effective action. Quantum scale symmetry is generated by quantum fluctuations via the presence of fixed points for running couplings. As for any global symmetry, the ground state or cosmological state may be scale invariant or not. Spontaneous breaking of scale symmetry leads to massive particles and predicts a massless Goldstone boson. A massless particle spectrum follows from scale symmetry of the effective action only if the ground state is scale symmetric. Approximate scale symmetry close to a fixed point leads to important consequences for observations in various areas of fundamental physics. We review consequences of scale symmetry for particle physics, quantum gravity and cosmology.
For particle physics, scale symmetry is closely linked to the tiny ratio between the Fermi scale of weak interactions and the Planck scale of gravity. For quantum gravity, scale symmetry is associated with the ultraviolet fixed point which allows for a non-perturbatively renormalizable quantum field theory for all known interactions. The interplay between gravity and particle physics at this fixed point permits one to predict couplings of the Standard Model or other “effective low energy models” for momenta below the Planck mass. In particular, quantum gravity determines the ratio of the Higgs boson mass and the top quark mass. In cosmology, approximate scale symmetry explains the almost scale-invariant primordial fluctuation spectrum which is at the origin of all structures in the universe. The pseudo-Goldstone boson of spontaneously broken approximate scale symmetry may be responsible for dynamical dark energy and a solution of the cosmological constant problem.

Speaker: Christof Wetterich

• 2 Scale Invariance and its Breaking in Cosmology

Holographic ideas are used to set up cosmology along the Wilsonian paradigm. Cosmological solutions are investigated, and the connection between de Sitter and Anti de Sitter regimes in the supergravity landscape is probed.

Speaker: Elias Kiritsis

• 10:20 AM Coffee break

• 10:50 AM Discussion

• 11:50 AM Lunch break

• 3 K(E10) and Standard Model Fermions

In this talk I will describe an attempt to understand the fermion spectrum of the Standard Model (with three generations of quarks and leptons, and no “extra baggage”) from a more fundamental theory. One interesting possible consequence of this scheme is the emergence of new super-heavy dark matter candidates.

Speaker: Hermann Nicolai

• 4 Three Scale Invariant Tales

Speaker: Eliezer Rabinovici

• 2:50 PM Coffee break

• 5 Scale Invariance and Symmetries in Inflation

Speaker: Jinn-Ouk Gong

• 6 Building a viable asymptotically safe SM

I report on recent work to embed the Standard Model within an asymptotically safe framework. This approach, which is based on gauge-Yukawa theories with interacting UV fixed points, focusses on providing a field theoretical UV completion to the SM along with radiative symmetry breaking. The framework yields several generic predictions.

Speaker: Steven Abel

• 7 Towards a Scale Invariant Theory of Gravity

I will discuss some of the recent results obtained in ghost-free infinite derivative theories of gravity, which point towards scale-invariant, conformally flat, non-singular static and rotating metric solutions at both the linear and non-linear level.
From a quantum perspective, such a class of theories provides a new scale in the infrared, which points towards a transmutation of scales from the ultraviolet to the infrared. Based on these results I will discuss further conjectures that astrophysical black holes can be mimicked by nonlocal stars puffed up to slightly larger than the horizon scale, such that the information loss paradox can be ameliorated.

Speaker: Anupam Mazumdar

• 5:00 PM Discussion

• Tuesday, January 29

• 8 Dimensional Transmutation in Particle Physics and Cosmology

Models with classical scale invariance (CSI) provide us with a dynamical origin for all masses (via dimensional transmutation) and can account for all evidence of physics beyond the Standard Model. Furthermore, a general theory with CSI is renormalizable (even in the gravity sector) and can solve the hierarchy problem. The price to pay is a classical ghost. The theory, however, admits quantizations that preserve unitarity and a Hamiltonian bounded from below. The solution of the hierarchy problem implies that the theory can be tested through inflationary data (indeed it predicts a (gravitational) isocurvature mode that could be observed in the near future). I will give an overview of CSI and introduce the subsequent talks on this subject.

Speaker: Alberto Salvio

• 9 Gauge Assisted Quadratic Gravity

In work with Gabriel Menezes, we are exploring the use of quantum field theory for quantum gravity at all scales. Starting from a scale invariant action, our variant uses an extra Yang-Mills gauge interaction to induce the Einstein action in such a way that gravity is kept weakly coupled at all scales. We have explored the unusual field theoretic aspects of this theory, and so far have promising results.

Speaker: John Donoghue

• 9:50 AM Coffee break

• 10 Spontaneous Breaking of Restricted Weyl Symmetry in Pure R^2 Gravity

Recent work has shown that pure R^2 gravity (i.e. R^2 gravity with no extra R term) has a symmetry that is larger than scale symmetry and smaller than full Weyl symmetry. This has been dubbed restricted Weyl symmetry, as it involves a Weyl transformation where the conformal factor has a constraint. Most importantly, this symmetry is spontaneously broken when the vacuum (background spacetime) has a non-zero Ricci scalar. In this case, the theory can be shown to be equivalent to Einstein gravity with non-zero cosmological constant and a massless scalar field. The massless scalar field is identified as the Goldstone boson of the broken sector. In spontaneously broken theories, the original symmetry of the Lagrangian is realized as a shift symmetry of the Goldstone bosons. We show that this is the case also here. The unbroken R=0 sector is completely different and has no connection to Einstein gravity.
Speaker: Ariel Edery

• 11 Fakeons, quantum gravity and the classical limit
A new quantization prescription is able to endow quantum field theory with a new type of "particle", the fakeon (fake particle), which mediates interactions but cannot be observed. A massive fakeon of spin 2 (together with a scalar field) allows us to build a theory of quantum gravity that is both renormalizable and unitary, and to some extent unique. After presenting the general properties of this theory, I discuss its classical limit, which carries important remnants of the fakeon quantization prescription. Speaker: Damiano Anselmi

• 11:20 AM Discussion
• 11:50 AM Lunch break

• 12 Conformal extensions of the Standard Model
Speaker: Manfred Lindner (Max Planck Institut fuer Kernphysik, Heidelberg, Germany)

• 13 A ghost and a naked singularity: facing our demons
We encounter these demons on the path towards a UV-complete QFT of gravity and a horizonless replacement for black holes. The fate of the ghost and related issues are discussed in the strong-coupling version of classically scale-invariant quadratic gravity. We compare this story to QCD. The 2-2-hole solutions appearing in a classical approximation of the gravity theory are then discussed, along with some new results. The timelike singularity is shrouded by a fireball, and the thorny issues of black hole horizons are avoided. Observable consequences might even lurk in present LIGO data. Speaker: Bob Holdom

• 2:50 PM Coffee break

• 14 Scale invariance: super-cooling and Dark Matter
If the electroweak sector of the Standard Model is described by classically conformal dynamics, the early-Universe evolution can be substantially altered. In particular, one generically has a significant period of super-cooling, often ended when quark condensates form at the QCD phase transition. This scenario is potentially rich in cosmological consequences, such as renewed possibilities for electroweak baryogenesis and gravitational-wave production. In the second part of the talk we will focus on Super-cool Dark Matter, a new mechanism for the generation of the cosmological Dark-Matter relic density: super-cooling can easily suppress the amount of Dark Matter down to the desired level. This mechanism generically takes place in old and new scale-invariant models. Speakers: Dr Daniele Teresi (Università di Pisa), Satoshi Iso

• 15 On the preheating in a scale invariant UV extension of Higgs inflation
Higgs inflation with the non-minimal coupling to gravity $\xi H^2 R$ is the unique model that realizes inflation driven by the Standard Model Higgs field in a classically scale-invariant way. However, the reheating in that model is not yet well understood. In particular, in the so-called "non-critical" regime, it turned out that there are violent instabilities in the longitudinal mode at very high energy scales.
Since they lie beyond the cutoff scale of the theory, it is not clear whether they are really physical, and how they affect the process of reheating, if at all. In this talk, I will point out that by extending the model with the classically scale-invariant R^2 term, the model is UV-extended so that it becomes possible to analyze the instabilities within the validity of the theory. For a stronger R^2 term, I show that the instabilities gradually disappear. I will clarify whether there are parameter regions where the instabilities still remain below the cutoff scale and are really physical. I also discuss how the reheating will proceed.

• 16 Black Holes in Higher Derivative Gravity
Including quadratic curvature terms in the gravitational action yields a renormalizable theory at the apparent cost of instability in the radiation field. One also needs to consider the effects upon classical solutions such as black holes. All vacuum solutions to Einstein's theory remain good solutions to the higher-derivative theory, so the Schwarzschild family carries over to the generalized theory. There are, in addition, non-Schwarzschild solutions, crossing the Schwarzschild family at a point governed by the Gross-Perry-Yaffe Lichnerowicz eigenvalue. This crossing point also appears to be a changeover point for classical stability between the Schwarzschild and non-Schwarzschild black hole families. Speaker: Kellogg Stelle

• 5:10 PM Discussion

• Wednesday, January 30

• 17 Scale symmetry, the Higgs and the cosmos
I will review a scale-invariant extension of the Standard Model and gravity able to support inflation and dark energy while containing just one additional degree of freedom on top of the Standard Model content. This scenario has some interesting features, such as i) the existence of a conserved current that effectively forbids the generation of isocurvature perturbations, ii) an alpha-attractor-like solution for the spectral tilt and the tensor-to-scalar ratio, iii) the absence of fifth-force effects, and iv) a set of consistency relations between the inflationary and dark-energy observables that can be tested with future cosmological observations. Speaker: Javier Rubio

• 18 What do we know about quantum corrections to Higgs Inflation?
Speaker: Fedor Bezrukov

• 9:50 AM Coffee break

• 19 Quantum Scale Invariance and Weyl Conformal Gravity
Recent results in (quantum) scale invariance and its spontaneous breaking are presented. In flat spacetime, for a scale-invariant theory the scalar potential is presented at three loops while keeping manifest scale symmetry. This is possible in a scale-invariant regularization (in $d=4-2\epsilon$) in which the Goldstone boson (dilaton) of this symmetry spontaneously generates the subtraction scale ($\mu$). Although non-polynomial (effective) operators are generated at the quantum level, suppressed by the (large) dilaton vev, a classical hierarchy of vevs (Higgs vs dilaton vev) is quantum stable.
In curved spacetime, conformal symmetry and consistency (no ghosts) demand that one introduce the Weyl gauge field $\omega_\mu$ (and Weyl conformal geometry). In the absence of matter, Weyl's (conformal) quadratic gravity has spontaneous breaking (Stueckelberg mechanism) to the Einstein action, which is a "low-energy" effective theory below the mass of $\omega_\mu$ (where the geometry becomes Riemannian). In the presence of matter (Higgs) with non-minimal coupling to Weyl gravity, the breaking of Weyl conformal symmetry triggers EW symmetry breaking. (arXiv:1812.08613, 1809.09174, 1712.06024) Speaker: Dumitru Ghilencea

• 20 Gravity, Scale Invariance and the Hierarchy Problem
Combining quantum scale invariance with the absence of new degrees of freedom above the electroweak scale leads to stability of the latter against perturbative quantum corrections. Nevertheless, the hierarchy between the weak and the Planck scales remains unexplained. We argue that this hierarchy can be generated by a non-perturbative effect relating the low-energy and the Planck-scale physics. The effect is manifested in the existence of an instanton configuration contributing to the vacuum expectation value of the Higgs field. We analyze such configurations in several toy models and in a phenomenologically viable theory encompassing the Standard Model and General Relativity in a scale-invariant way. Dynamical gravity and its non-minimal coupling to the Higgs field play a crucial role in the mechanism. Speaker: Andrey Shkerin (EPFL)

• 11:30 AM Discussion
• 12:00 PM Lunch break

• 21 Conformal symmetry as an exact symmetry with Higgs mechanism (TH colloquium)
In theories such as asymptotically safe gravity, conformal symmetry is treated as a symmetry of the ultraviolet limit of quantum gravity. However, gravity can be formulated as a theory where conformal symmetry is exact, but broken in the same way as in the Brout-Englert-Higgs-Kibble formalism, where local gauge symmetry is still exactly valid but realised in an apparently asymmetric manner. This formally turns gravity into a renormalizable theory, except for the fact that a physically dubious particle emerges: a heavy excitation of the graviton with spin 2 but negative metric. It is not understood what exactly the role of such a particle would be, but it can be pointed out that leaving this mystery as it is does produce a scheme that is worth further study. It generates a system without any tuneable parameters, so it may be worthwhile to investigate what the coupling parameters of such a theory would be, and to check whether anything physically realistic can be produced. We must have matter added to the system, and the algebra will have to meet rigorous constraints. Speaker: Gerard 't Hooft

• 3:00 PM Coffee break

• 22 Inertial Weyl Symmetry Breaking, Dilaton, and Weyl Photon
Globally Weyl invariant theories have a conserved current that is generally the derivative of a scalar.
In a general expansion, this scalar flows to a constant, $K$, that serves as the order parameter of symmetry breaking; e.g., the decay constant of the dilaton is proportional to $\sqrt{K}$. If we introduce the Weyl photon, the dilaton is eaten and the photon acquires a mass proportional to $K$. The divergence of the Weyl current is the trace anomaly, and is connected to the renormalization group (RG). If the RG is interpreted as a flow of coupling constants in Weyl-invariant ratios, such as $\langle\phi_i\rangle^2/K$, then the Weyl symmetry is maintained at the quantum level. Speaker: Christopher T. Hill

• 23 Quantum Scale Invariance, Hierarchy generation and Inflation
Global and local Weyl invariant theories of scalars and gravity can generate all mass scales spontaneously, initiated by a dynamical process of "inertial spontaneous symmetry breaking" that does not involve a potential. We discuss how inflation readily occurs, how a hierarchy of mass scales may be generated, and consider its stability against quantum corrections. Speaker: Graham Ross

• 4:50 PM Discussion
• 6:30 PM Reception, Glassbox (in the Restaurant 1 area)

• Thursday, January 31

• 24 Status and Perspectives of Asymptotic Safety
I will review the key idea underlying the asymptotic-safety programme both in particle physics and in quantum gravity. I will discuss mechanisms that can generate asymptotic safety and will provide an overview of models that could exhibit these mechanisms. After reviewing recent developments I will give a short outlook on future perspectives. Speaker: Astrid Eichhorn

• 25 Fundamental Interactions
Speaker: Francesco Sannino

• 10:20 AM Coffee break

• 26 Phenomenological and cosmological implications of hidden scale invariance
I discuss a class of effective low-energy theories that incorporate nonlinearly realised scale invariance through the dilaton field. Radiatively stable mass hierarchies are realised in a natural way (without fine-tuning) in this class of models, with a generic prediction of a light dilaton field. The cosmological electroweak phase transition in this scenario is triggered by the QCD phase transition. This has significant implications for potential gravitational-wave signals, solar-mass black holes and the generation of matter-antimatter asymmetry at the QCD scale. Speaker: Archil Kobakhidze (The University of Sydney)

• 11:30 AM Discussion
• 12:00 PM Lunch break

• 27 Scale Invariant Theories of Gravity and the Meaning of the Planck Mass
I will review metric-affine theories of gravity and the occurrence of a Higgs mechanism that gives mass to the gravitational connection. I will then discuss the possibility of achieving quantum scale invariance at high energy in such theories.
Speaker: Roberto Percacci

• 2:50 PM Coffee break

• 28 Asymptotic Safety and Conformal Standard Model
There are many proposals to extend the Standard Model designed to deal with its fundamental inconsistencies. Since no new particles have been detected experimentally so far, models which add only one more scalar particle, and possibly right-chiral neutrinos, are favored. One of them is the Conformal Standard Model, which proposes a coherent solution to the Standard Model's drawbacks, including the hierarchy problem, and provides a dark matter candidate. On the other hand, there are signs that gravity is asymptotically safe. If there are no intermediate scales between the electroweak and Planck scales, then the Conformal Standard Model, supplemented with asymptotically safe gravity, can be valid up to arbitrarily high energies and give a complete description of particle physics phenomena. Moreover, the asymptotic safety hypothesis restricts the mass of the second scalar particle to $300 \pm 28$ GeV for $a_{\lambda_3} < 0$. The masses of the heavy neutrinos can also be estimated, as $683 \pm 83$ GeV, so these predictions can be explicitly tested in the near future. Speaker: Jan Kwapisz

• 29 Scale invariance and strong dynamics as the origin of inflation and the Planck mass
Classical scale invariance represents a promising framework for model building beyond the Standard Model. However, once coupled to gravity, any scale-invariant microscopic model requires an explanation for the origin of the Planck scale. In this talk, I will present a minimal example of such a mechanism and show how the Planck mass can be dynamically generated in a strongly coupled gauge sector. I will consider the case of hidden SU(N) gauge interactions that link the Planck scale to the condensation of a scalar bilinear operator that is non-minimally coupled to curvature. The effective theory at energies below the Planck mass contains two scalar fields: the pseudo-Nambu-Goldstone boson of spontaneously broken scale invariance (the dilaton) and a gravitational scalar degree of freedom that originates from the R^2 term in the effective action (the scalaron). I will discuss the effective potential for the coupled dilaton-scalaron system at one-loop order and demonstrate that it can be used to successfully realize a stage of slow-roll inflation in the early Universe. Remarkably enough, our predictions for the primordial scalar and tensor power spectra interpolate between those of standard R^2 inflation and linear chaotic inflation. For comparatively small gravitational couplings, one thus obtains a spectral index $n_s \simeq 0.97$ and a tensor-to-scalar ratio as large as $r \simeq 0.08$.

• 30 Conformal Realization of the Neutrino Option and its Gravitational Wave Signature
It was recently proposed that the electroweak hierarchy problem is absent if the generation of the Higgs potential stems exclusively from quantum effects of heavy right-handed neutrinos, which can also generate active neutrino masses via the type-I seesaw mechanism. Hence, in this framework, dubbed the "neutrino option", the tree-level scalar potential is assumed to vanish at high energies.
Such a scenario therefore lends itself particularly well to being embedded in a classically scale-invariant theory. In this talk we demonstrate that the minimal scale-invariant framework compatible with the "neutrino option" requires the Standard Model to be extended by two real scalar singlet fields in addition to right-handed neutrinos. We present the parameter space of the model for which a phenomenologically viable Higgs potential and neutrino masses are generated, and for which all coupling constants remain in the perturbative regime up to the Planck scale. In addition, we show that the phase transition connected with radiative scale symmetry breaking is of strong first order, with a substantial amount of supercooling. This yields a sizable gravitational-wave signal, so that the model can be fully tested by future gravitational-wave observatories. In particular, most of the parameter space can already be probed by the upcoming LIGO science run starting in early 2019. Speakers: Dr Alexander Helmboldt (MPIK Heidelberg), Vedran Brdar (JGU Mainz)

• 5:10 PM Discussion

• Friday, February 1

• 31 A generalized multiple-point (criticality) principle and inflation
Speaker: Kin-ya Oda

• 32 Scale invariant extension of the SM with strongly interacting hidden sector and dark pion DM (WIMP vs. SIMP)
A scale-invariant extension of the SM with a QCD-like strongly interacting hidden (dark) sector is interesting, since dimensional transmutation and chiral symmetry breaking in the hidden sector could be the origin of electroweak symmetry breaking (EWSB) and of all the masses of the SM particles, as well as of dark pions and dark baryons, which could be good cold dark matter candidates. In this talk I discuss dark pion DM as WIMP vs. SIMP. Ignoring the Wess-Zumino-Witten (WZW) interaction, I first discuss the dark pion as a WIMP using two different approaches, chiral perturbation theory (ChPT) and AdS/QCD. Then I include the WZW interaction and discuss the dark pion within the SIMP scenario. However, the analysis based on ChPT indicates that the viable parameter space for SIMP seems to be outside the validity region of ChPT. I show that this problem can be resolved if we include dark vector mesons, and the SIMP idea can be realized in the dark pion sector. Speaker: Pyungwon Ko

• 10:20 AM Coffee break

• 33 Making sense of the Nambu-Jona-Lasinio model via scale invariance
The status of the chiral-invariant Nambu-Jona-Lasinio (NJL) four-fermi model is quite equivocal. It serves as the paradigm for dynamical symmetry breaking, and yet it is not renormalizable. NJL only studied one-loop graphs with point vertices, and needed to use an ultraviolet cutoff. We propose to dress the point vertices with scale-invariant vertices with anomalous dimensions. We show that if the dimension of the $\bar{\psi}\psi$ fermion mass operator is reduced from the canonical three to a dynamical two, the four-fermion interaction becomes renormalizable to all orders in the four-fermion coupling constant.
Additionally, we find that dynamical symmetry breaking then occurs, with the fermion becoming massive, and we obtain a dynamical massless pseudoscalar Goldstone boson and a dynamical scalar Higgs boson. The Higgs boson mass is automatically of order the dynamical fermion mass, so there is no hierarchy problem. The Higgs boson automatically has a width, and the width could serve as a diagnostic to distinguish a dynamical Higgs from an elementary one. We extend the scale invariance to local conformal invariance, coupled to a gravity theory, conformal gravity, that is conformal too. With Bender and Mannheim having shown that conformal gravity is a ghost-free, unitary theory, it can serve as a consistent theory of quantum gravity. We show that all of the achievements of supersymmetry can be achieved by conformal symmetry and conformal gravity instead, with there then being no need for any new particles at the LHC. Speaker: Philip Mannheim

• 11:20 AM Discussion
• 11:50 AM Lunch break

• 34 Very light dilaton and naturally light Higgs boson
We study a very light dilaton, arising from a scale-invariant ultraviolet theory of the Higgs sector in the Standard Model of particle physics. Imposing the scale symmetry below the ultraviolet scale of the Higgs sector, we alleviate the fine-tuning problem associated with the Higgs mass. When the electroweak symmetry is spontaneously broken radiatively à la Coleman-Weinberg, the dilaton develops a vacuum expectation value away from the origin, giving an extra contribution to the Higgs potential so that the Higgs mass becomes naturally around the electroweak scale. The ultraviolet scale of the Higgs sector can therefore be much higher than the electroweak scale, as the dilaton drives the Higgs mass to the electroweak scale. We also show that the light dilaton in this scenario is a good candidate for dark matter, of mass $m_D \sim 1\ \text{eV}$ to $10\ \text{keV}$, if the ultraviolet scale is about 10-100 TeV. Finally we propose a dilaton-assisted composite Higgs model to realize our scenario. In addition to the light dilaton, the model predicts a heavy U(1) axial vector boson and two massive, oppositely charged, pseudo Nambu-Goldstone bosons, which might be accessible at the LHC. Speaker: Deog-Ki Hong

• 35 Continuum clockwork and classical scale invariance
The clockwork mechanism provides a useful picture for understanding extra dimensions via a continuum limit of the scheme (continuum clockwork). The continuum clockwork finds a concrete realization in the general linear dilaton model (GLD). GLD can be defined by classical scale invariance in the presence of supersymmetry, which implies a non-trivial selection rule for radiative corrections to the dilaton potential. Known examples of GLD are heterotic M-theory, type II little string theory and non-critical string theories. Previously unexplored Kaluza-Klein spectra and couplings can be captured in GLD, and will be shown to actually be present in heterotic M-theory.
Speaker: Sang Hui Im

• 2:50 PM Coffee break
• 3:20 PM Discussion
http://math.tutorcircle.com/analytical-geometry/parabola.html
# Parabola

When a solid is cut by a plane, the curve common to the solid and the plane, i.e., the curve which lies on the surface of the solid and on the plane, is called a "section of the solid by a plane." Similarly, when a right circular cone is cut by planes in different positions, the sections obtained are the curves circle, parabola, ellipse and hyperbola. In the figure, $O$ is the vertex of the solid cone and $AOA'$ is its axis; $\alpha$ is the semi-vertical angle. The section of the cone by a plane perpendicular to its axis is a circle. When the cone is cut by a plane which makes an angle $\beta = \alpha$ with the axis, the section formed is called a parabola.

## What is a Parabola?

A parabola is the locus of a point which moves such that its distance from the focus is equal to its distance from the directrix. It appears in the form of an arch; when inverted into an arch structure, it results in a form which allows equal vertical loading along its length. A parabola is the graph of a quadratic equation.

## Properties of a Parabola

The description of a parabola mainly involves a point called the focus and a line called the directrix; the directrix is a fixed straight line which, together with the focus, defines the curve. The line perpendicular to the directrix which passes through the focus is called the axis of symmetry. The point where the parabola meets its axis of symmetry is called the vertex. The chord of the parabola which is parallel to the directrix and passes through the focus is called the latus rectum.

## Equation of a Parabola

A parabola is the set of all points $(x, y)$ in a plane that are equidistant from a fixed line (the directrix) and a fixed point (the focus) not on the line. In the figure below, $S$ is the focus and $l$ is the directrix of the parabola. Draw $SZ$ perpendicular to the line $l$ and let $O$ be the middle point of $SZ$. Take $O$ as the origin, $OS$ produced as the x-axis and $OY$ perpendicular to $OS$. Let $P(x, y)$ be any point on the curve. Join $PS$ and draw $PM$ perpendicular to the directrix $l$ and $PN$ perpendicular to the x-axis. Take $OS = OZ = a$. Then the coordinates of $S$ are $(a, 0)$ and those of $Z$ are $(-a, 0)$.

Here $S = (a, 0)$ and $P = (x, y)$, so by the distance formula

$PS = \sqrt{(x - a)^2 + (y - 0)^2}$

$PM = NZ = NO + OZ = x + a$ (because $NO$ is the x-coordinate of the point $P(x, y)$).

By definition, $\frac{PS}{PM} = 1$, i.e., $PS = PM$:

$\sqrt{(x-a)^2+y^2} = x + a$

Squaring both sides,

$(x - a)^2 + y^2 = (x + a)^2$, i.e., $x^2 + a^2 - 2ax + y^2 = x^2 + a^2 + 2ax$, i.e., $y^2 = 4ax$.

This is the equation of the parabola with vertex at the origin and the x-axis as its axis of symmetry. The standard form of the equation of a parabola with vertex at the origin and the y-axis as its axis of symmetry is derived in the same manner.

## Latus Rectum of a Parabola

Through the focus $S$, as in the figure below, draw a line perpendicular to the axis of the parabola. Let the line cut the parabola at $L$ and $L'$; then $LSL'$ is called the latus rectum of the parabola. It can be regarded as the double ordinate passing through the focus. Draw $LM$ perpendicular to the directrix. Then by definition $LS = LM = SZ = 2a$, so the length of the semi-latus rectum is $LS = 2a$, and the length of the latus rectum is $LSL' = 2LS = 2(2a) = 4a$.

## Vertex of a Parabola

The vertex of the parabola is the point where the parabola crosses its axis.
It is the highest or lowest point of the curve, i.e., its maximum or minimum. When the coefficient of the $x^2$ term is positive, the vertex is the lowest point on the graph, lying at the bottom of the 'U' shape; when the coefficient of the $x^2$ term is negative, the vertex is the highest point, at the top of the inverted 'U' shape.

## Focus and Directrix of a Parabola

A parabola is the locus of a point which moves in a plane so that its distance from a fixed point in the plane is equal to its distance from a fixed straight line in that plane. The fixed point is called the focus and the fixed straight line is called the directrix of the parabola. The distance of any point on the parabola from the focus is called the focal distance of the point.

## Example of a Parabola

Question: Show that $(y - 3)^2 = 12(x - 1)$ is a parabola. Find the equation of its axis and the coordinates of the vertex and focus.

Solution: Given $(y - 3)^2 = 12(x - 1)$, shift the origin to the point $(1, 3)$, that is, $h = 1$, $k = 3$, so that $x = X + 1$, $y = Y + 3$. Then

$(Y + 3 - 3)^2 = 12(X + 1 - 1)$, i.e., $Y^2 = 12X$,

which is the equation of a parabola. The axis of the parabola is $y = k$, i.e., $y = 3$, a line parallel to the x-axis.

$\therefore$ Vertex $= (1, 3)$. Comparing $Y^2 = 12X$ with $Y^2 = 4aX$ gives $a = 3$, so the focus is at $(X, Y) = (3, 0)$, i.e., at $(x, y) = (4, 3)$.
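A quick numerical sanity check of the focus-directrix property derived above; this is a minimal illustrative sketch, with $a = 3$ chosen to match the worked example's $Y^2 = 12X$:

```python
import math

# Focus-directrix check for the parabola y^2 = 4ax with a = 3.
a = 3.0
focus = (a, 0.0)      # focus S = (a, 0)
directrix_x = -a      # directrix is the line x = -a

for y in (-6.0, -2.0, 0.0, 1.5, 6.0):
    x = y * y / (4 * a)                          # point P on the parabola
    ps = math.hypot(x - focus[0], y - focus[1])  # distance PS to the focus
    pm = x - directrix_x                         # perpendicular distance PM to the directrix
    assert abs(ps - pm) < 1e-9                   # PS = PM, as the definition requires
    print(f"P = ({x:.4f}, {y:+.1f})  PS = {ps:.4f}  PM = {pm:.4f}")
```

Every point tested satisfies $PS = PM$, and indeed $PS = x + a$, exactly as used in the derivation.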
https://mathoverflow.net/questions/15973/quantum-analogue-of-wiener-process
# Quantum analogue of Wiener process

The Wiener process (say, on $\mathbb{R}$) can be thought of as a scaling limit of a classical, discrete random walk. On the other hand, one can define and study quantum random walks, where the underlying stochastic process is governed by a unitary transform + measurement (for an excellent introduction, see http://arxiv.org/abs/quant-ph/0303081). My question is: do quantum random walks have a reasonable continuous limit, something which would give a quantum analogue of the Wiener process?
https://testbook.com/question-answer/an-undrained-triaxial-compression-test-is-carried--5ed1185af60d5d601c1b5e7d
# An undrained triaxial compression test is carried out on a saturated clay sample under a cell pressure of 100 kN/m². The sample failed at a deviator stress of 200 kN/m². The cohesion of the given sample of clay is

## Options:

1. 100 kN/m²
2. 150 kN/m²
3. 200 kN/m²
4. 50 kN/m²

### Correct Answer: Option 1

This question was previously asked in TN TRB Civil 2012 Paper.

## Solution:

Concept: In an undrained (quick) triaxial test on saturated clay, the undrained friction angle is $\Phi_u = 0$, so the Mohr circles at failure have the same diameter for every cell pressure and the failure envelope is horizontal. The unconfined compression test is the special case with zero cell pressure ($\sigma_3 = 0$); it is often used to determine the in-situ strength of soft saturated fine-grained soil. With $\Phi_u = 0$, the cohesion is half the deviator stress at failure:

$$c = \frac{\text{deviator stress}}{2} = \frac{\sigma_d}{2}$$

Calculation:

$$c = \frac{200}{2} = 100\ \textbf{kN/m}^2$$
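The arithmetic of the solution, as a small self-checking sketch (values taken from the problem statement):

```python
# UU triaxial test on saturated clay: with phi_u = 0 the undrained
# cohesion is half the deviator stress at failure; the cell pressure
# does not enter the result.
deviator_stress = 200.0  # kN/m^2, at failure
cell_pressure = 100.0    # kN/m^2, irrelevant when phi_u = 0

cohesion = deviator_stress / 2
print(f"c = {cohesion:.0f} kN/m^2")  # -> 100 kN/m^2, i.e. Option 1
```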
http://mathhelpforum.com/algebra/204264-please-help-me-solve.html
solve the equation 1n x + 1n(x+2) = 1... thank u

Hey sharmala. Is "1n x" meant to be ln(x), where ln is the natural logarithm?

yes....

Well, here is the main hint: ln(x) + ln(y) = ln(x*y).

i tried but the answer is incorrect

Show us what you tried. ln(x) + ln(x+2) = ln(x*(x+2))
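The thread stops before a full solution, so here is a completion of the hint: ln(x) + ln(x+2) = ln(x(x+2)) = 1 gives x(x+2) = e, i.e. x^2 + 2x - e = 0, whose positive root is x = -1 + sqrt(1+e); the negative root is rejected because ln(x) requires x > 0. A quick check:

```python
import math

# ln(x) + ln(x+2) = 1  =>  x*(x+2) = e  =>  x^2 + 2x - e = 0
x = -1 + math.sqrt(1 + math.e)        # positive root; the other root is negative
print(x)                              # ~0.92827
print(math.log(x) + math.log(x + 2))  # ~1.0, as required
```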
http://math.stackexchange.com/questions/102659/on-the-cayley-conformal-transform
# On the Cayley (conformal) transform

Prove that the function
\begin{align} \phi (z) = i \dfrac{1 - z}{1 + z} \end{align}
maps the set $D = \{z \in \mathbb{C}: |z| < 1 \}$ one-to-one onto the set $U = \{ z \in \mathbb{C} : \mathrm{Im}(z) > 0 \}$. (This is exercise 1.9 in "Function Theory..." by Greene & Krantz, and is also a claim on the Wikipedia page, though they use a function $W:U \to D$...)

For injectivity, I suppose that
\begin{align} \phi (z) &= \phi (w) \\ \\ i \dfrac{1 - z}{1 + z} &= i \dfrac{1 - w}{1 + w} \end{align}
... and after a few manipulations end up with $z = w$.

For surjectivity, I am stuck.

Q: How do I prove "onto"? Do I work with $z, w$ in $(x + i \cdot y)$ form? Also, how/where do I use that $|z| < 1$ (in $D$) or that $\mathrm{Im}(\phi (z)) > 0$?

Comment: Notice this is a Möbius transformation, so with the matrix representation it's easy to obtain an inverse function. Prove that this inverse works when composed on either side of $\phi$. – Jose27 Jan 26 '12 at 18:05

Answer: Recall that $\mathrm{Im}(w)=(w-\bar{w})/(2i)$. So, with $r=|z|$, we have
$$\mathrm{Im}\;\phi(z)= \frac{1}{2}\left(\frac{1-z}{1+z}+\frac{1-\bar{z}}{1+\bar{z}}\right)=\frac{1-r^2}{|1+z|^2}>0,$$
which holds precisely when $r<1$. Additionally, with $w=u+iv$, solving the inequality
$$|\phi^{-1}(w)|^2=\left|\frac{i-w}{i+w}\right|^2=\frac{u^2+(1-v)^2}{u^2+(1+v)^2}<1$$
gives simply $v>0$. Bijectivity follows from the bidirectionality of
$$\mathrm{Im}\;\phi(z)>0 \iff |\phi^{-1}(w)|<1.$$

That should do it, thanks! – The Chaz 2.0 Jan 27 '12 at 20:39
Just a quick follow-up: I get the calculation of $\mathrm{Im}\;\phi(z)$, but how does this "give $r < 1$"? – The Chaz 2.0 Feb 1 '12 at 13:50
@TheChaz: You want $\mathrm{Im}\;\phi>0$. The denominator is $(1+\mathrm{Re}(z))^2+\mathrm{Im}(z)^2$, hence nonnegative. Now solve for the numerator being greater than zero. – anon Feb 1 '12 at 15:44
Thanks. I wasn't seeing that the denominator is positive. – The Chaz 2.0 Feb 1 '12 at 16:10
@TheChaz Does this help? – anon Feb 1 '12 at 16:12

Second answer: Okay, since the comment is not really that helpful, here's an approach: take $w\in U$ and assume $w=\phi(z)$ for some $z\in \mathbb{C}$; then
$$i\frac{1-z}{1+z}=w$$
which implies
$$z=\frac{i-w}{i+w}.$$
Now notice that $|z|<1$ if and only if $|i-w|^2<|i+w|^2$; expanding these in terms of the real and imaginary parts of $w$ gives $(1-\mathrm{Im}(w))^2<(1+\mathrm{Im}(w))^2$, which holds if and only if $\mathrm{Im}(w)>0$. This shows that $\phi$ is onto.
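A quick numerical spot check of both directions of the argument; an illustrative sketch only:

```python
import cmath
import math
import random

phi = lambda z: 1j * (1 - z) / (1 + z)   # maps D into U
phi_inv = lambda w: (1j - w) / (1j + w)  # maps U into D

random.seed(0)
for _ in range(1000):
    # random z in the open unit disk
    z = cmath.rect(random.random() * 0.99, random.uniform(0, 2 * math.pi))
    w = phi(z)
    assert w.imag > 0                    # Im(phi(z)) > 0 when |z| < 1
    assert abs(phi_inv(w) - z) < 1e-9    # phi_inv really inverts phi
    # random w in the upper half-plane
    w = complex(random.uniform(-5, 5), random.uniform(0.01, 5))
    assert abs(phi_inv(w)) < 1           # |phi_inv(w)| < 1 when Im(w) > 0
print("all checks passed")
```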
http://cbsephysics.in/cbse-physics/page/3/
## How was your AISSCE CBSE Class 12 Physics Exam 2013?

How was the CBSE Class 12 Physics Exam today? Was it easy or difficult? Many students reported that the questions were unexpected and extraordinarily tough. What was your experience? Were there questions beyond your comprehension? Were there questions out of syllabus?

Anju R writes:

Hi, the Physics CBSE exam was tough. It was not beyond the syllabus, but it was confusing and asked in a twisted manner. Almost all the students who prepared for the exams did not leave out any derivations, laws, etc. But the saddest part was that they asked the big derivation for just one mark, and only very few were asked. And most questions were unexpected ones. It was painful when I saw that after the exam almost all the students were crying. Sitting up a whole night studying the derivations, laws, etc. gave us nothing but tears. We were happy after the English exam, but the Physics exam gave us a terror of Board Exams. I think 90% of the students could not complete it, saying that it was a lengthy paper. During the exam I could see many sitting in great confusion, a few others in a hurry to complete it, and the rest sitting sadly. By the grace of God I completed it 5 minutes before the allotted time and didn't leave any question. I hope CBSE will consider us.

Yours sincerely,
Anju R

## Question Bank in Physics Class XII

The collection will also be helpful for students of other syllabuses. The file consists of the syllabus, key points, a collection of very short answer (1 mark) questions, solved short answer type questions (2 marks), short answers (3 marks), long answers (5 marks), solved numericals and 3 sample papers. The contents are arranged chapter-wise. Any student will find this a boon for easy preparation and for scoring better marks in Physics.

## Solutions to NCERT Physics Class 12 (Ray Optics)

1. A small candle, 2.5 cm in size, is placed 27 cm in front of a concave mirror of radius of curvature 36 cm. At what distance from the mirror should a screen be placed in order to obtain a sharp image? Describe the nature and size of the image. If the candle is moved closer to the mirror, how would the screen have to be moved?

Answer: Size of the candle, h = 2.5 cm; image size = h'; object distance, u = −27 cm; radius of curvature of the concave mirror, R = −36 cm; focal length of the concave mirror, f = R/2 = −18 cm; image distance = v.

The image distance can be obtained using the mirror formula:

$\frac{1}{u}+\frac{1}{v}=\frac{1}{f}$

$\frac{1}{v}=\frac{1}{f}-\frac{1}{u}=\frac{1}{-18}-\frac{1}{-27}=\frac{-3+2}{54}=-\frac{1}{54}$

Therefore, v = −54 cm, so the screen should be placed 54 cm away from the mirror to obtain a sharp image.

The magnification of the image is given by

$m=\frac{h'}{h}=-\frac{v}{u}$

Therefore, $h' = -\frac{v}{u}\times h = -\frac{-54}{-27}\times 2.5 = -5\ \text{cm}$

The height of the candle's image is 5 cm. The negative sign indicates that the image is inverted and real. If the candle is moved closer to the mirror, the screen will have to be moved away from the mirror in order to obtain the image.

2. A 4.5 cm needle is placed 12 cm away from a convex mirror of focal length 15 cm. Give the location of the image and the magnification. Describe what happens as the needle is moved farther from the mirror.

3. A tank is filled with water to a height of 12.5 cm. The apparent depth of a needle lying at the bottom of the tank is measured by a microscope to be 9.4 cm.
What is the refractive index of water? If water is replaced by a liquid of refractive index 1.63 up to the same height, by what distance would the microscope have to be moved to focus on the needle again?

4. Figures 9.34(a) and (b) show refraction of a ray in air incident at 60° with the normal to a glass-air and a water-air interface, respectively. Predict the angle of refraction in glass when the angle of incidence in water is 45° with the normal to a water-glass interface [Fig. 9.34(c)].

5. A small bulb is placed at the bottom of a tank containing water to a depth of 80 cm. What is the area of the surface of water through which light from the bulb can emerge out? Refractive index of water is 1.33. (Consider the bulb to be a point source.)

6. A prism is made of glass of unknown refractive index. A parallel beam of light is incident on a face of the prism. The angle of minimum deviation is measured to be 40°. What is the refractive index of the material of the prism? The refracting angle of the prism is 60°. If the prism is placed in water (refractive index 1.33), predict the new angle of minimum deviation of a parallel beam of light.

7. Double-convex lenses are to be manufactured from a glass of refractive index 1.55, with both faces of the same radius of curvature. What is the radius of curvature required if the focal length is to be 20 cm?

8. A beam of light converges at a point P. Now a lens is placed in the path of the convergent beam 12 cm from P. At what point does the beam converge if the lens is (a) a convex lens of focal length 20 cm, and (b) a concave lens of focal length 16 cm?

9. An object of size 3.0 cm is placed 14 cm in front of a concave lens of focal length 21 cm. Describe the image produced by the lens. What happens if the object is moved further away from the lens?

10. What is the focal length of a convex lens of focal length 30 cm in contact with a concave lens of focal length 20 cm? Is the system a converging or a diverging lens? Ignore the thickness of the lenses.

11. A compound microscope consists of an objective lens of focal length 2.0 cm and an eyepiece of focal length 6.25 cm separated by a distance of 15 cm. How far from the objective should an object be placed in order to obtain the final image at (a) the least distance of distinct vision (25 cm), and (b) at infinity? What is the magnifying power of the microscope in each case?

12. A person with a normal near point (25 cm) using a compound microscope with an objective of focal length 8.0 mm and an eyepiece of focal length 2.5 cm can bring an object placed at 9.0 mm from the objective into sharp focus. What is the separation between the two lenses? Calculate the magnifying power of the microscope.

13. A small telescope has an objective lens of focal length 144 cm and an eyepiece of focal length 6.0 cm. What is the magnifying power of the telescope? What is the separation between the objective and the eyepiece?

14. (a) A giant refracting telescope at an observatory has an objective lens of focal length 15 m. If an eyepiece of focal length 1.0 cm is used, what is the angular magnification of the telescope? (b) If this telescope is used to view the moon, what is the diameter of the image of the moon formed by the objective lens? The diameter of the moon is $3.48 \times 10^6$ m, and the radius of the lunar orbit is $3.8 \times 10^8$ m.
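A small helper for the mirror formula used in Question 1 above; a sketch that follows the same sign convention as the worked solution (distances in cm, objects to the left negative):

```python
def image_distance(u, f):
    """Solve the mirror formula 1/u + 1/v = 1/f for v."""
    return 1 / (1 / f - 1 / u)

# Question 1: candle at u = -27 cm, concave mirror f = R/2 = -18 cm
u, f, h = -27.0, -18.0, 2.5
v = image_distance(u, f)
m = -v / u  # magnification
print(f"v = {v:.0f} cm, image height h' = {m * h:.1f} cm")
# -> v = -54 cm, h' = -5.0 cm: a real, inverted image, 5 cm tall
```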
## CBSE Physics Solved Board Question Papers 2008, 2009, 2010 & 2011 – All versions

Download CBSE Physics Solved Board Question Papers 2008, 2009, 2010 & 2011, all versions (Delhi, Outside Delhi & Foreign). The question papers are in PDF format, all question papers with solutions in a single file, so it may take some time for the download to finish. CBSE Physics Class 12 Board Question Papers 2008 to 2011, all sets (Sets I, II & III) and versions (Delhi, Foreign, Outside Delhi).

## Quick Revision for Class X Physics SA1

### Chapter 12: Electricity — Gist of the Lesson

1. Positive and negative charges: The charge acquired by a glass rod when rubbed with silk is called positive charge, and the charge acquired by an ebonite rod when rubbed with wool is called negative charge.
2. Coulomb: It is the S.I. unit of charge. One coulomb is defined as that amount of charge which repels an equal and similar charge with a force of $9 \times 10^9$ N when placed in vacuum at a distance of 1 metre from it. Charge on an electron $= -1.6 \times 10^{-19}$ coulomb.
3. Static and current electricity: Static electricity deals with electric charges at rest, while current electricity deals with electric charges in motion.
4. Conductor: A substance which allows passage of electric charges through it easily is called a conductor. A conductor offers very low resistance to the flow of current; for example, copper, silver, aluminium, etc.
5. Insulator: A substance that has infinitely high resistance and does not allow electric current to flow through it is called an insulator; for example, rubber, glass, plastic, ebonite, etc.
6. Electric current: The flow of electric charges across a cross-section of a conductor constitutes an electric current. It is defined as the rate of flow of electric charge through any section of a conductor: electric current = charge/time, or I = Q/t. Electric current is a scalar quantity.
7. Ampere: It is the S.I. unit of current. If one coulomb of charge flows through any section of a conductor in one second, then the current through it is said to be one ampere: 1 A = 1 C / 1 s = 1 C s$^{-1}$; 1 milliampere = 1 mA = $10^{-3}$ A; 1 microampere = 1 µA = $10^{-6}$ A.
8. Electric circuit: The closed path along which electric current flows is called an electric circuit.
9. Conventional current: Conventionally, the direction of motion of positive charges is taken as the direction of current. The direction of conventional current is opposite to that of the negatively charged electrons.
10. Electric field: It is the region around a charged body within which its influence can be experienced.

## Sure Shot questions in Physics for CBSE Class 9 Summative Assessment (SA1)

All CBSE schools are conducting the SA1 (First Summative Assessment) in the month of September (Kendriya Vidyalayas have already started). At this point, we found it would be useful to students to have a set of sure shot questions.
Practising these will essentially help you score better marks in the forthcoming exams.

## Portions for SA1

MOTION, FORCE AND WORK (Motion, Force and Newton's Laws of Motion, Gravitation)

1. Derive the equations of uniformly accelerated motion using the graphical representation of motion.
2. Derive F = ma.
3. State the law of conservation of linear momentum. Illustrate with an example.
4. State and explain Archimedes' principle.
5. Describe an experiment to verify Archimedes' principle.
6. Distinguish density and relative density.
7. What are the effects produced by force?
8. What is friction? How is it caused? How can it be reduced?
9. Define impulse of a force.
10. Why does a cricket fielder pull his hands backwards while taking a catch?
11. Define inertia and explain its types with suitable examples.
12. Describe two instances each where pressure is increased by decreasing the area and where pressure is decreased by increasing the area.
13. State Newton's universal law of gravitation.
14. Define G.
15. Distinguish g and G.
16. Why are motion and rest said to be relative terms?
17. Define acceleration due to gravity at a place and discuss its variation with height, depth and latitude.
18. Write the differences between mass and weight.
19. Can an object be accelerated if it is moving with constant speed? Justify your answer with an example.
20. Why is a person hurt more when he falls on a concrete floor than when he falls on a heap of sand from the same height?
21. The weight of an object on the surface of the moon is 1.67 N and its mass on its surface is 1 kg. Calculate its weight and mass on the surface of the earth (g on earth = 10 m/s$^2$).
22. When a horse suddenly starts running, a careless rider falls backwards. Explain why.
23. State the action and reaction in the swimming action of a swimmer.
24. A stone is thrown vertically upwards with a velocity of 40 m/s and is caught back. Taking g = 10 m/s$^2$, calculate the maximum height reached by the stone. What are the net displacement and the total distance covered by the stone? (A worked sketch for this one appears at the end of this section.)

## Summative Assessment 1 (SA1) Sample Papers for class 9 Physics

The Summative Assessment 1 is about to begin. As per requests from students from various parts of India and abroad, we are publishing some solved sample papers in Physics which, we think, will help you score better marks in the forthcoming SA1 exams. (Based on Motion)

Class 9 Physics Sample Paper 1 Solved
Class 9 Physics Sample Paper 2 Solved

## Transistor as a switch – Question received via Voicemail

We received the question "Please explain the working of a transistor as a switch" via email.

http://www.electronics-tutorials.ws/transistor/tran_4.html
http://c8051.leongkj.net/learning_object/general_transistor_as_switch.swf

## Five marks questions from Electronic Devices (Long Answer Type)

1. Explain the formation of energy bands in solids. Distinguish between metals, insulators and semiconductors on the basis of band theory.
2. Distinguish between intrinsic and extrinsic semiconductors and the conduction in P-type and N-type semiconductors.
3. Explain the formation of the depletion region and barrier potential in a p-n junction.
4. Draw the circuit diagram used to study the forward and reverse bias characteristics and draw the graphs for forward bias and reverse bias.
5. Describe the working of a half wave rectifier with the help of a neat labelled diagram and draw the input and output wave forms.
6. Describe the working of a full wave rectifier with the help of a neat labelled diagram and draw the input and output wave forms.
7.
Draw the symbols of npn and pnp transistors. Show the biasing of a transistor and explain transistor action.
8. Describe the working of an npn transistor in CE configuration as an amplifier.
9. Explain the working of a transistor in CE configuration as an oscillator.
10. Explain the action of a transistor as a switch.

(Have some more ideas? Post them as comments.)
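As flagged in question 24 of the SA1 list above, here is the promised worked sketch; standard kinematics, with g taken as 10 m/s² per the question:

```python
u, g = 40.0, 10.0        # initial speed (m/s) and g (m/s^2)

h_max = u**2 / (2 * g)   # from v^2 = u^2 - 2*g*h with v = 0 at the top
t_up = u / g             # time to reach the highest point
print(f"maximum height = {h_max:.0f} m (reached after {t_up:.0f} s)")

# The stone is caught back at the launch point:
print(f"net displacement = 0 m, total distance = {2 * h_max:.0f} m")
```

This gives a maximum height of 80 m, zero net displacement, and a total distance of 160 m.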
https://www.reddit.com/r/math/comments/eoujqh/software_to_host_math_competition/
Have your contest be a Google Form (or one form for each question, if you want to update results live as people submit answers). In the "results" tab, open up the spreadsheet corresponding to the answers, and create a "Score" tab in the same sheet which uses some spreadsheet magic to convert survey results into scores, using a bit of VLOOKUP() (or INDEX + MATCH) and adding together every correct answer.
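The same scoring idea can also be done outside the spreadsheet. A hedged sketch: assume the Form responses have been exported as responses.csv with one column per question; the file name, column headers and answer key below are all illustrative, not anything Google Forms produces by default:

```python
import csv

# Hypothetical answer key: question column -> set of accepted answers (lowercase)
answer_key = {"Q1": {"42"}, "Q2": {"7", "seven"}, "Q3": {"x=3"}}

with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        score = sum(
            row.get(question, "").strip().lower() in accepted
            for question, accepted in answer_key.items()
        )
        print(row.get("Email Address", "anonymous"), score)
```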
https://www.homebuiltairplanes.com/forums/threads/tillotson-212cc-and-225cc-on-efficient-ultralights.35864/page-2
# Tillotson 212cc and 225cc on efficient ultralights?

#### patrickrio ##### Well-Known Member

Bloop is cool but won't even be close to the mission reqs for a test pig (test piglet?). Of course, I did an HBA search and found a bunch of threads on the American Eaglet, even some already talking about putting a propeller around the tail boom. You guys are already on this stuff.....

Edit: and also talking about it needing more wing area... and talking about how its handling sucks... no original ideas in my head it seems....

#### Daleandee ##### Well-Known Member

If you've got 15 HP you can build a Whing Ding 2 that was designed by Bob Hovey:

#### WonderousMountain ##### Well-Known Member

On the subject of tails, I recommend the inverted Y. It does not need mixers & has positive stability under a down-lift condition. Equal proportions usually suffice. If you have a low climb rate, might ~CK LuPii

#### patrickrio ##### Well-Known Member

The Whing Ding 2 above looks like a great plane if you have a back yard big enough for a little air strip... just go up and fly a few circles around the patch... It doesn't meet the speed requirements of what I think would be cool. The one I am thinking about could either have a folding prop for a self-starting and sustaining Part 103 glider that could go a long distance in a day. OR it could be fitted with a large-diameter fixed or variable pitch prop for going long distances powered. Either one, being basically glider tech, could fly higher and faster while keeping under the 103 speed limit. Might ALMOST be ok for even longer distances.

I have been looking around, and the American Eaglet really seems to have one of the simplest design styles that meets the criteria. Too bad it is bad-tempered in flight. Another aircraft with a layout I think is good is the SunSeeker solar powered glider. Just the layout, which is essentially a faster version of the MuscleAir 2 human powered airplane. The propeller tech on the SunSeeker is interesting also: an 8 ft diameter variable pitch prop designed to rotate at about 400 rpm, with a take-off power of only 7 HP (must be a really anemic climb rate though....). I think the prop would be better rotating about the boom about 2 feet behind where the pod ends though... total TLAR thinking at the moment.

#### reo12 ##### Member

9-10 hp is simply foolish. It might work to get off the ground but there is no reserve climb capacity. Power loss from density altitude and an inability to climb as fast as or faster than sinking air will easily put the plane into a lack of climb - or, just as easily, a loss of altitude. Should this occur at the wrong point in takeoff or in a bad location, one could easily end up in the trees or power lines. I watched a friend have this happen with a Lazair equipped with a pair of Pioneer engines. He was unable to outclimb terrain on a damp, foggy morning.

I used to make prop hubs for military standard engines. I talked with a gent once who put a 16 cu/inch 2-cylinder on a trike. This engine made right around 8.5 - 9 hp in the configuration it was in. The plane could not be flown on high density altitude days. I think he said he was never able to climb above 2000 ft. He talked of a trip with a friend where he could not get above 450 ft while his friend was at 2000 ft. He had a number of times where sinking air would bring him down to the tops of the crops in the fields or the woods.
Once, it did so where he could not climb above the trees in the fence rows that boxed the field. He was forced to fly circles, waiting for the sink to subside for a moment when he was in the right part of the field to hop over the trees. He decided to stop flying the engine. He laughed about the flights being so marginal. He knew he'd tickled the tail of fate's dragon and been lucky to fly another day.

#### Taylor.S ##### Well-Known Member

Using two or three engines seems to be the best option. The less hp you try to push through a small engine, the longer it will last and the cooler it will run.

#### nickec ##### Well-Known Member

You need to go faster. Like the first iteration of the Colomban Cri Cri. Engine horsepowers quoted in the table are for one engine, so double them, since the Cri Cri has two engines in every version.

#### Daleandee ##### Well-Known Member

9-10 hp is simply foolish. It might work to get off the ground but there is no reserve climb capacity. Power loss from density altitude and an inability to climb as fast as or faster than sinking air will easily put the plane into a lack of climb - or, just as easily, a loss of altitude. Should this occur at the wrong point in takeoff or in a bad location, one could easily end up in the trees or power lines.

Many moons ago I was considering a Challenger 1 with the 28 HP Rotax engine on it. My instructor at the time warned against it for the very reason you state. He said such low power would leave me in the trees when I couldn't outclimb some serious sink on a hot Carolina summer day. Since those days experience has proven that he was correct.

I owned a VW powered Sonex for a number of years and a couple hundred hours. I flew it from SC to TN a couple of times, and while lightly loaded with just me and my overnight bag, flying in the cool morning air in October, it would do OK. But put two people and enough fuel to go somewhere (never mind the luggage) in it on the same warm & muggy Carolina day, and you had to pick your feet up when you went over the trees at the end of the runway. Personally, I like to fly planes that climb better than they glide ...

#### patrickrio ##### Well-Known Member

OK. So flying on a single 9+ HP engine is an idea of a goal, maybe one that is not achievable. The IDEA would be to start with a known aircraft design and probably build a model of it with a small set of modifications to move toward an aircraft good for a test bed. So, maybe you could take a Moyes Tempest, change the tail boom to carbon fiber, change the tail to an inverted V (maybe), build a propeller around the tail boom with a reduction to the engine (some of this has already been done with a motor, see pic), and power it at 22 HP. That might be a first design goal. For this first iteration, leave the wing and pod on the Tempest the same as original, then test this configuration at 1/3-scale model size before building full size.

The next stage of modifications may be to do an updated pilot pod with lower weight and higher safety. The next stage might be to do a new cantilever wing with carbon fiber pultrusions and a carbon fiber D-tube, with the wing now sized appropriately for the new weight. The last stage might be to re-engine appropriately for the new aircraft. If the 9+ HP engine won't work, you don't put it on. I think people assume I plan to start with that engine and put it on something that everyone already knows for a fact won't fly with it. That wouldn't be a good idea...

Also, before anyone starts tearing apart the above plan for the Tempest, realize that I used it as an example.
I would prefer to find a starting airframe that already weighs less than the Tempest..... and maybe is already closer to a good starting point as a test bed. We already know that it is possible to fly an airplane over the Rocky Mountains multiple times with 7 HP. The Sunseeker II did that with a gross weight of approximately 430 lbs. What I would like to see is if a MUCH LESS EXPENSIVE and less finicky airplane can be built that can fly comfortably on a bit more HP. We also know that a really inexpensive airplane can fly quite well and speedily on 22 hp already. See the SkyPup. So, using newer tech and newly cheaper carbon fiber materials, can a cheap airplane be built that operates well on less HP than the SkyPup and maybe even approaches the efficiency of the Sunseeker? That is what I am thinking about. The Tillotson 9+ HP is in this range and is an interesting engine because of its price. So it's an interesting goal. The Tempest motor is below.....

#### patrickrio ##### Well-Known Member

You need to go faster. Like the first iteration of the Colomban Cri Cri.

Or the Colomban Luciole, or the Rutan Quickie, or the Spacek SD-1....... Yes, it looks like if you get rid of the low stall speed and low max speed requirements of Part 103, some of the design gets simpler. I just decided I wanted to stay ultralight for now..... basically because I have a preference for an airplane that is more motorglider-like. The tech and cost improvements for carbon fiber can likely improve on the above class of aircraft as well.... but the engineering on those airplanes is pretty daunting to try to improve. That is another reason I think that the ultralight is a good place to work. I think that Part 103 motorized aircraft have mostly not improved in efficiency for about 30 years, and I think there is some low-hanging fruit there if new tech is applied. I think stealing some ideas from existing MicroLift gliders and maybe some of the newer rigid wing hang gliders, and combining them with newly cheaper carbon fiber, will likely be enough to do it.

#### ElectricFlyer ##### Well-Known Member HBA Supporter

And maybe build the propeller to rotate around the tail tube so the prop diameter can go up for efficiency

There is someone doing that now -- couldn't find it to post.

#### patrickrio ##### Well-Known Member

There is someone doing that now -- couldn't find it to post.

Here is an HBA thread that shows a few versions of a propeller around the tail boom on a Moyes Tempest: Moyes Tempest Tailboom Motors

In addition, here is a pic of the GFW-4 motorglider tailboom motor, and also the tailboom motor on the Sirius-C motorglider.

All of these implementations are small-diameter folding propellers that are designed for self-launch of a glider instead of sustained operation at efficiency. Sustained operation at efficiency would move toward a large diameter, variable pitch at low RPM, like that shown on the back of the Sunseeker II in the photo from my previous post. I know there are more of them; I would love to see them. I would like to see a gas engine version too.

#### Vigilant1 ##### Well-Known Member

It is worth considering that it is very likely this 212-225 cc 4-stroke engine will have a continuous HP capability well short of 9 HP. And, if we build a draggy plane that is right on the edge of the engine's capability, then continuous HP is what we'll care about. The problem will probably be heat.
When fitted to airplanes and without the stock fan, these engines will only make the stock HP if cooling air of sufficient pressure is available to push through their fins, and if there is enough volume. I suspect, at about 50 mph cruise speed, that pressure will be the sticking point. Lycomings, Continentals, VW Type 1s: they all need 30 cc per HP to have enough surface area to shed the heat at their rated power. Tipi is working with an engine that has 810 cc and he believes he can produce over 30 HP continuous (so 27 cc/HP). He has reason to believe this will work because he is doing testing and because it is apparently working with some other planes, but they cruise at 80 mph+, so they have considerably more air pressure available than would be available at Part 103 speeds. The Valley Engineering Big Twin and Big Bad Twin were 990+ cc engines that produced about 40 and 50 HP, respectively, for takeoff and climb. But both engines were limited to 32 HP continuous due to heat-shedding limitations. That's 31 cc/hp, again mostly in draggy, slow airframes.

#### patrickrio ##### Well-Known Member

I have most of the tools, or access to them, for VW engines. I have helped rebuild several VW engines, and their more expensive German 6-cylinder boxer cousins. I am considering VW powered aircraft too (but a full 4 cyl). But that would be a pretty standard direction if I take it, and I would mostly be posting with questions while building instead of beforehand. It would be cool if there was a CF motorglider design that ran on VW power I could get plans for... but there really isn't much of that, because the current low prices for CF materials are a very recent development. Maybe soon?

I am also considering just doing a standard SkyPup for fun... although my ultralight flight experience is 3-axis ultralights, so 2-axis flight is less exciting. If I went the SkyPup direction I would probably just bite the bullet, buy one of the new 4-cycle paraglider motors coming out, and enclose the cockpit.

This idea is more of a mental exercise and is very interesting to me. It might be the kind of thing that you work on and iron out some ideas, but never actually fly. I do think that this engine is a bit too far to accomplish, but if you build something that improves on existing designs, that would be interesting. I have experience with composites and have wanted to do some vacuum infusion work too. CF materials are also now pretty inexpensive, about the same as fiberglass when you take strength into account. If there was an existing homebuilt CF motorglider ultralight design I could get plans for, I would probably start doing such a plane soon. That is very interesting to me.

#### Armilite ##### Well-Known Member

This guy is using (2) clone GX200s (68mm x 54mm, 196.1cc) built for 15 hp each and has no overheating problem: 196.1cc/15hp = 13cc to make 1 hp. So if you use a Small Block for plane use, 13cc x the hp you want will give you an idea of what cc engine you need: 13cc x 20hp = 260cc (GX270). You have Small Block Honda/Clones that go up to the GX270, and Big Block Honda/Clones based on the Honda GX340/GX390 engines. The Harbor Freight Predator 301 is the smallest Big Block single that I have seen. Big Blocks have the HD 1.0" PTO. For Big Blocks, I would use 12.4cc: a 460 single (458cc) dynoed 37.37hp@5000rpm, and 458cc/37hp = 12.4cc.

My thoughts on the Honda GX200's suitability to re-power the Lazair

The Lazair is a good platform to test different small engines.
It flew marginally on (2) 5.5 hp engines = 11 hp, it flew better with (2) Rotax 185ULs rated at 9.4 hp each = 18.8 hp, and it flew even better with (2) Solo 210s rated 15 hp each = 30 hp! With an MTOW of 450 lbs = 204.1 kg, at roughly 10 kg per kW you need about 20.4 kW = 27.4 hp; 27.4/2 = 13.7 hp each. MTOW 450 lbs - 210 lbs empty weight = 240 lbs; - 30 lbs gas = 210 lbs for the pilot! The average pilot today falls between 180 lbs and 235 lbs. For a plane to make the Part 103 24-knot stall speed, it has to be at max 254 lbs + 30 lbs gas, with a 170 lb pilot. The T-Bird I with a 277UL made Part 103. 28hp x 75% = 21hp. It makes 20.3hp@5250rpm. I would bet the T-Bird I stall speed is around 10.1hp@4000rpm to 12.3hp@4500rpm! It makes 15.1hp@4750rpm.

460 Single Dyno Test.
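Pulling together the sizing rules of thumb quoted in this thread (these are the posters' own figures, consolidated as-is, not validated engineering data):

\[
\text{displacement needed} \approx 13\ \tfrac{\text{cc}}{\text{hp}} \times \text{hp target} \quad (\text{e.g. } 13 \times 20\ \text{hp} = 260\ \text{cc, a GX270-class block; } \approx 12.4\ \tfrac{\text{cc}}{\text{hp}} \text{ for big blocks})
\]

\[
\text{power needed} \approx \frac{\text{MTOW}}{10\ \text{kg/kW}} \quad (\text{e.g. } 450\ \text{lb} \approx 204.1\ \text{kg} \;\Rightarrow\; 20.4\ \text{kW} \approx 27.4\ \text{hp})
\]

Note Vigilant1's counterpoint above: at Part 103 speeds the cooling limit is closer to 30 cc per continuous hp, so the 13 cc/hp figure is best read as a peak-power rule, not a continuous-power one.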
2021-06-21 12:46:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2609354555606842, "perplexity": 2486.649550021213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488273983.63/warc/CC-MAIN-20210621120456-20210621150456-00097.warc.gz"}
https://math.paperswithcode.com/latest
# Genus $0$ Modular curves of prime power level with a point defined over number fields other than $\mathbb{Q}$

Associated to an open subgroup $G$ of $\mathrm{GL}_2(\widehat{\mathbb{Z}})$ satisfying the conditions $-I \in G$ and $\det(G) \subsetneq \widehat{\mathbb{Z}}^{\times}$, there is a modular curve $X_G$ which is a smooth compact curve defined over an extension of $\mathbb{Q}$. In this article, we give a complete list of all such prime power level genus $0$ modular curves with a point.

Number Theory · 0 · 04 Aug 2022

# Streaming Tensor Train Approximation

4 Aug 2022 · STTA accesses $\mathcal{T}$ exclusively via two-sided random sketches of the original data, making it streamable and easy to implement in parallel -- unlike existing deterministic and randomized tensor train approximations.

Numerical Analysis · 0 · 04 Aug 2022

# Block Discrete Empirical Interpolation Methods

We present two block variants of the discrete empirical interpolation method (DEIM); as a particular application, we will consider a CUR factorization.

Numerical Analysis · 0 · 03 Aug 2022

# An adaptive consensus based method for multi-objective optimization with uniform Pareto front approximation

2 Aug 2022 · In this work we are interested in stochastic particle methods for multi-objective optimization.

Optimization and Control · 35Q70, 35Q84, 35Q93, 90C29, 90C56 · 0 · 02 Aug 2022

# Metric Dimension of a Diagonal Family of Generalized Hamming Graphs

2 Aug 2022 · Classical Hamming graphs are Cartesian products of complete graphs, and two vertices are adjacent if they differ in exactly one coordinate.

Combinatorics · 05C69 (Primary), 05C12, 05B30, 05C15 (Secondary) · 0 · 02 Aug 2022

# Implicit bulk-surface filtering method for node-based shape optimization and comparison of explicit and implicit filtering techniques

1 Aug 2022 · This work studies shape filtering techniques, namely the convolution-based (explicit) and the PDE-based (implicit), and introduces an implicit bulk-surface filtering method to control the boundary smoothness and preserve the internal mesh quality simultaneously in the course of bulk (solid) shape optimization.

Numerical Analysis · Optimization and Control · 709 · 01 Aug 2022

# Data-driven solutions of ill-posed inverse problems arising from doping reconstruction in semiconductors

1 Aug 2022 · We model a general class of such photovoltaic technologies by ill-posed global and local inverse problems based on a drift-diffusion system which describes charge transport in a self-consistent electrical field.

Numerical Analysis · 68T07, 65N21, 35Q81 · 0 · 01 Aug 2022

# Confidence regions for the location of peaks of a smooth random field

30 Jul 2022 · Local maxima of random processes are useful for finding important regions and are routinely used for summarising features of interest (e.g. in neuroimaging).

Statistics Theory · 1 · 30 Jul 2022

# Rational Noncrossing Coxeter-Catalan Combinatorics

We solve two open problems in Coxeter-Catalan combinatorics.

Combinatorics · Representation Theory · Primary: 05A15; Secondary: 05E05, 05E10, 20C08 · 0 · 30 Jul 2022

# On the Maximum Gonality of a Curve over a Finite Field

28 Jul 2022 · In general, the gonality of a curve of genus $g \ge 2$ is at most $2g - 2$.

Algebraic Geometry · Number Theory · 0 · 28 Jul 2022
2022-08-08 00:30:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3083992600440979, "perplexity": 2256.149017250627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00787.warc.gz"}
http://wiki.zcubes.com/index.php?title=Manuals/calci/COMPLEXNUM&mobileaction=toggle_view_mobile
# Manuals/calci/COMPLEXNUM

COMPLEXNUM(Real,Imaginary,Suffix)

• Real is the real part of the complex number.
• Imaginary is the imaginary part of the complex number.
• Suffix is the imaginary unit of the complex number.

## Description

• The COMPLEXNUM function converts real and imaginary coefficients into a complex number.
• A complex number is a combination of a real and an imaginary number.
• A number which is positive or negative, rational or irrational, or a decimal is called a real number.
• An imaginary number is a number that gives a negative result when it is squared, e.g. $i^2 = -1$. No real number squares to a negative result, because a negative times a negative is positive.
• A complex number is a number of the form $a + bi$, where $a$ and $b$ are real numbers and $i$ is the imaginary unit, with $i^2 = -1$.
• In $a + bi$, $a$ is the real part of the complex number, $b$ is the imaginary part of the complex number, and $i$ is the imaginary unit of the complex number, written as "i" or "j".
• To write "i" and "j", we must use lower case only.
• In a complex number $z = a + bi$, the real part is denoted by $\mathrm{Re}(z)$ and the imaginary part is denoted by $\mathrm{Im}(z)$.
• COMPLEXNUM returns an error value when Real and Imaginary are non-numeric.
• Suffix should be either "i" or "j"; otherwise it shows an error value.
• A complex number whose real part is zero is said to be purely imaginary.
• A complex number whose imaginary part is zero is a real number. In those cases we have to assign '0' for that part.

1. =COMPLEXNUM(5,2) gives 5+i2
2. =COMPLEXNUM(5,2,["j"]) gives 5+j2

## ZOS

• The syntax to calculate COMPLEXNUM in ZOS is COMPLEXNUM(Real, Imaginary, Suffix).
• Real is the real part.
• Imaginary is the imaginary part.
• Suffix is the imaginary unit, which is either "i" or "j".
• E.g.: COMPLEXNUM(-1..1,10..11,"j")

Complex Number

## Examples

| COMPLEXNUM(real,imaginary,suffix) | rn | in | sf | RESULT |
|---|---|---|---|---|
| COMPLEXNUM(5,6) | 5 | 6 | | 5+i6 |
| COMPLEXNUM(7,3,"j") | 7 | 3 | j | 7+j3 |
| COMPLEXNUM(4,0,"i") | 4 | 0 | i | 4+i0 |
| COMPLEXNUM(0,-4,"i") | 0 | (-4) | i | 0-i4 |
| COMPLEXNUM(5,"j") | 5 | j | | Error |

Complex Numbers
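To make the behavior concrete, here is a minimal Python sketch that mimics the COMPLEXNUM formatting rules described above (an illustration only, not the actual calci/ZOS implementation, and it ignores range arguments like -1..1):

```python
# Minimal sketch of COMPLEXNUM's formatting rules (illustrative, not the real calci code).

def complexnum(real, imaginary, suffix="i"):
    """Format '<real>+<suffix><imaginary>'; return 'Error' on bad input."""
    if suffix not in ("i", "j"):             # suffix must be lower-case i or j
        return "Error"
    try:
        real, imaginary = float(real), float(imaginary)
    except (TypeError, ValueError):          # non-numeric parts are an error
        return "Error"
    sign = "-" if imaginary < 0 else "+"
    return f"{real:g}{sign}{suffix}{abs(imaginary):g}"

print(complexnum(5, 6))        # 5+i6
print(complexnum(7, 3, "j"))   # 7+j3
print(complexnum(0, -4))       # 0-i4
print(complexnum(5, "j"))      # Error: the imaginary part is non-numeric
```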
2022-05-18 22:51:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753718137741089, "perplexity": 2707.975211614955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00342.warc.gz"}
http://mathhelpforum.com/calculus/140255-double-integral-problem.html
# Thread: Double integral problem

1. ## Double integral problem

I have the question $\displaystyle \int\int_D f(x,y) \, dx \, dy$, where $\displaystyle f(x,y)=xy^2$ and D is the region in the first quadrant bounded by the curves $\displaystyle y = x^2$ and $\displaystyle x = y^2$. And I think I'm right in saying that this is the same as $\displaystyle \int^{\sqrt{x}}_{-\sqrt{x}} \int^{\sqrt{y}}_{-\sqrt{y}} xy^2 \, dx \, dy$, but when I do the first integral it equals 0, so I don't know what to do next.

2. Your limits are wrong. The integral is: $\displaystyle \int_{0}^{1} \int_{y^2}^{\sqrt{y}} xy^2 \, dx \, dy$. For a double integral over a general region, the limits of the outer integral should be constants.

3. Oh right, OK, but how do you know what the integral limits are supposed to be?

4. What is the first step to solve a double integral over a general region?

5. Oh, I get it now. I watched YouTube - Calculating Double Integrals over General Regions and I was just thinking about the limits wrong. Thanks.
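For completeness, carrying out the corrected integral from post 2 (a worked sketch):

$\displaystyle \int_{0}^{1} \int_{y^2}^{\sqrt{y}} xy^2 \, dx \, dy = \int_{0}^{1} y^2 \left[ \frac{x^2}{2} \right]_{y^2}^{\sqrt{y}} dy = \int_{0}^{1} \frac{y^3 - y^6}{2} \, dy = \frac{1}{2}\left( \frac{1}{4} - \frac{1}{7} \right) = \frac{3}{56}$

The limits come from sketching the region: the curves $\displaystyle y = x^2$ and $\displaystyle x = y^2$ intersect at (0,0) and (1,1), and for each fixed y in [0,1], x runs from $\displaystyle y^2$ (the right-opening parabola) out to $\displaystyle \sqrt{y}$ (solving $\displaystyle y = x^2$ for x in the first quadrant).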
2018-03-21 11:24:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641242027282715, "perplexity": 349.81245711912175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647612.53/warc/CC-MAIN-20180321102234-20180321122234-00663.warc.gz"}