url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://solvedlib.com/n/type-the-equation-of-the-given-graph-in-the-form-y-a-sin,21303095 | # Type the equation of the given graph in the form y = A sin @X or y=Acos @X)
###### Question:
Type the equation of the given graph in the form y = A sin @X or y=Acos @X)
#### Similar Solved Questions
##### A window is in the form of a rectangle surmounted by a semicircle. The rectangle is of clear glass, whereas the semicircle is of tinted glass that transmits only half as much light per unit area as clear glass does. The total perimeter is fixed. Find the proportions of the window that will admit the most light. Neglect the thickness of the frame.
A window is in the form of a rectangle surmounted by a semicircle. The rectangle is of clear glass, whereas the semicircle is of tinted glass that transmits only half as much light per unit area as clear glass does. The total perimeter is fixed. Find the proportions of the window that will admit the...
##### I need help in explaining the pharmacological action of Propanolol in an easy way that the...
I need help in explaining the pharmacological action of Propanolol in an easy way that the patient will understand my teaching and what part of the body does this actions take? The template says it is a "Nonselective beta blocker with negative inotropic, chronotropic dromotropic properties."...
##### Solve the inequality algebraically 218x+21)List the intervals and sign in each interval' Complete the following table, (Type your answers in interval notation Use ascending order ) Interval Sign
Solve the inequality algebraically 218x+21) List the intervals and sign in each interval' Complete the following table, (Type your answers in interval notation Use ascending order ) Interval Sign...
##### Two blocks of metal come into contact with one another. Given the following data: Block one...
Two blocks of metal come into contact with one another. Given the following data: Block one Specific heat 0.091 kcal/(kg°c) Mass 0.19 kg Initial temperature-22 °C Block two Specific heat - 0.21 kcal/(kg°c) Mass = 0.161 kg Initial temperature 90 °C What is the final temperature (in &d...
##### Draw all stereoisomers formed in each reaction.BrzNBSDMSO,HzoClz Hzo
Draw all stereoisomers formed in each reaction. Brz NBS DMSO,Hzo Clz Hzo...
##### Q5_ If there is gas that behaves almost like an ideal gas but follows the equation of state below:PT0.9 =nRTWe heat up this gas from 300 K to 500 K reversibly at constant pressure at atm: The heat capacity of the gas is known to be C p= (25.7 + Tx0.0133 T2x3.764*10-6_T3x7.310x10-11) Jmol/K What is the change in the internal energy AE for one mole of the gas after the heating?
Q5_ If there is gas that behaves almost like an ideal gas but follows the equation of state below: PT0.9 =nRT We heat up this gas from 300 K to 500 K reversibly at constant pressure at atm: The heat capacity of the gas is known to be C p= (25.7 + Tx0.0133 T2x3.764*10-6_T3x7.310x10-11) Jmol/K What is...
##### How much does a 0.36 m long copper rod with a 7.7 mm diameter stretch when...
How much does a 0.36 m long copper rod with a 7.7 mm diameter stretch when one end is fixed and the other is pulled with a 1.5 kN force...
##### To practice Problem-Solving Strategy 8.1 Static equilibrium problems. and Felbow. Part 1 When you lift an...
To practice Problem-Solving Strategy 8.1 Static equilibrium problems. and Felbow. Part 1 When you lift an object by moving only your forearm, the main lifting muscle in your arm is the biceps. Suppose the mass of a forearm is 1.20 kg. If the biceps is connected to the forearm a distance dbiceps = 2....
##### 9.4.49 V 1 MH1 conbilu
9.4.49 V 1 MH 1 conbilu...
##### Question 53 Not yet answered Points out of 2.50 P Flag question A diploid organism is...
Question 53 Not yet answered Points out of 2.50 P Flag question A diploid organism is able to make 32 different chromosomal rearrangements during metaphase 2. How many chromosomes do you expect to see in one of its somatic cells during G1? Answer: Which of the following statements is FALSE regarding...
##### Water exists in different forms depending on the pressure and temperature. A portion the phase diagram...
Water exists in different forms depending on the pressure and temperature. A portion the phase diagram for ice I, ice III and liquid water is shown below. What statement about the densities of three phases is correct. a. The density of liquid water is greater than the densities of either ice or ice ...
##### A broad-crested weir 1.6 m in height is built across the floor of a 3.25-m-wide rectangular...
A broad-crested weir 1.6 m in height is built across the floor of a 3.25-m-wide rectangular channel with the depth of water upstream of the weir being 2.45 m. Neglect friction losses and the upstream velocity head. a) What is the discharge across the weir? b) What is the water depth on top of the we...
##### Is this correct Write a formula for the general term or nth term for the sequence....
is this correct Write a formula for the general term or nth term for the sequence. Then find the indicated term. 6, 8, 10, 12, 14, ...; 010 an-2n + 4 (Simplify your answer. Type an expression using n as the variable.) 210 = 24 (Simplify your answer.)...
##### How does the US wage stagnation since the 70s relate to public health?
How does the US wage stagnation since the 70s relate to public health?...
##### 5731 Wurts [47DI End Ponit Tetl (Fiizt - J(Q Urotet ATT KEFtounenoCunder Eicenionno Di MttMrn conseny-eomesncaerstton Mou 0 Conubn izod Du Gorcondsrous noles nna_Dtvo FuAFr Gelao Wnich Inest beuts (tote} EDEAIC 370777 PLNiBal T and2413ond Ll [na Data F4+a @ &Linl uned 447;9 - ) B7Cn RAAaJntMAicctdn
5731 Wurts [47DI End Ponit Tetl (Fiizt - J(Q Urotet ATT KE Ftouneno Cunder Eicenionno Di MttMrn conseny-eomesncaerstton Mou 0 Conubn izod Du Gorcondsrous noles nna_Dtvo FuAFr Gelao Wnich Inest beuts (tote} EDEAIC 370777 PLNi Bal T and2413 ond Ll [na Data F4+a @ &Linl uned 447; 9 - ) B7 Cn RAAaJ ...
##### Whispering Winds Corp.’s balance sheet at December 31, 2018, is presented below. Whispering Winds Corp. Balance...
Whispering Winds Corp.’s balance sheet at December 31, 2018, is presented below. Whispering Winds Corp. Balance Sheet December 31, 2018 Cash $32,000 Accounts payable$13,050 Inventory 30,500 Interest payable 2,525 Prepaid insurance 7,320 Bonds payable 50,500 Equipment 38...
##### And tanseokmatiors qrpn_using parent_functions f(xh 2xt flx)a 2*-1_ 3 fLxl34-X Axk - 3*-2
and tanseokmatiors qrpn_using parent_functions f(xh 2xt flx)a 2*-1_ 3 fLxl34-X Axk - 3*-2...
##### Are human cells oxidase positive or oxidase negative? What isthe difference between an oxidase positive organisms and an oxidasenegative organism?
Are human cells oxidase positive or oxidase negative? What is the difference between an oxidase positive organisms and an oxidase negative organism?...
##### 2) Some solid MgF2 is placed into a beaker of water and some(but not all) of it dissolves, resulting in a saturatedsolution.a) Draw a picture of the contents of the beaker at the smallscale, clearly showing any ions that may exist in solution.b) Given that for MgF2the Ksp = 3.7 × 10−3, calculate themolarity of the fluoride ion in solution. Show all yourwork/calculations.c) If some NaF is added to the beaker and it completelydissolves, what would happen to the magnesium ion concentration insol
2) Some solid MgF2 is placed into a beaker of water and some (but not all) of it dissolves, resulting in a saturated solution. a) Draw a picture of the contents of the beaker at the small scale, clearly showing any ions that may exist in solution. b) Given that for MgF2the Ksp = 3.7 × 10−3...
##### The Operational Planning Pyramid was presented in a class handout and discussed on the slides. Which...
The Operational Planning Pyramid was presented in a class handout and discussed on the slides. Which of the following statements regarding that handoutdiscussion is are not true? a The operations pyramid is one side of a multi-sided pyramid. Other functional areas (marketing, finance, etc.) would ha...
##### If 20.8 grams of an aqueous solution of nickel(II) sulfate; NiSO4, contains 3.13 grams of nickel(ID) sulfate, what is the weight weight percent of nickel(I) sulfate in the solution?Weighthweight percent NiSO40
If 20.8 grams of an aqueous solution of nickel(II) sulfate; NiSO4, contains 3.13 grams of nickel(ID) sulfate, what is the weight weight percent of nickel(I) sulfate in the solution? Weighthweight percent NiSO4 0...
##### 1.Find fxy(x,y) if f(x,y)=(x^5+y^4)^6. 2. Find Cxy(x,y) if C(x,y)=6x^2-3xy-7y^2+2x-4y-3 Find (,,(Xy) if f(x,y)= (x + y)...
1.Find fxy(x,y) if f(x,y)=(x^5+y^4)^6. 2. Find Cxy(x,y) if C(x,y)=6x^2-3xy-7y^2+2x-4y-3 Find (,,(Xy) if f(x,y)= (x + y) fxy(x,y) = Find Cxy(x,y) if C(x,y) = 6x² + 3xy – 7y2 + 2x - 4y - 3. Cxy(x,y)=0...
##### Find y' if y Show your work and simplify. You may type your answer or handwrite and upload a pictureParagraphB4N < @ % h
Find y' if y Show your work and simplify. You may type your answer or handwrite and upload a picture Paragraph B 4 N < @ % h...
##### 5. (20 points)For any n € N, let be the relation on Z given by € ~ y 4 x = y mod n and let Z/ ~n be the set of equivalence classes: Prove that the function f : Zl ~67 Zl ~: given by f(lz]) = [2x + 1] is well-defined and surjective.
5. (20 points)For any n € N, let be the relation on Z given by € ~ y 4 x = y mod n and let Z/ ~n be the set of equivalence classes: Prove that the function f : Zl ~67 Zl ~: given by f(lz]) = [2x + 1] is well-defined and surjective....
##### Factory workers are constantly encouraged to practice zero tolerance when it comes to accidents in factories....
Factory workers are constantly encouraged to practice zero tolerance when it comes to accidents in factories. Accidents can occur because the working environment or conditions themselves are unsafe. On the other hand, accidents can occur due to carelessness of so-called human error. In addition, the...
##### Required: 4. Using the high-low method, calculate Camp Rainbow's total fixed operating costs and variable operating cost per child. 5. Using the high-low method results, calculate the camp's expected operating cost if 148 children attend a session.Complete this question by entering your answers in the tabs below.RequiredRequired 5Using the high-low method, calculate Camp Rainbow's total fixed operating costs and variable operating cost per child_ (Round your intermediate calculati
Required: 4. Using the high-low method, calculate Camp Rainbow's total fixed operating costs and variable operating cost per child. 5. Using the high-low method results, calculate the camp's expected operating cost if 148 children attend a session. Complete this question by entering your a...
##### The following information iS given for n-pentane at atm:Tb = 36.209C AH,p (36.20PC) = 357.6 J g T -129.70"€ AHea(-129.709C) = 116.7 Jlg Specific heat gas 1.650 Jlg % Specific heat liquid 2.280 Jlg % A 38.40 g sample of liquid n peutane initially at -39.90*€C . If the sample is heated at constant pressure (P = atm) KJ of energy are needed to raise the temperature of the sample to 58.90*€
The following information iS given for n-pentane at atm: Tb = 36.209C AH,p (36.20PC) = 357.6 J g T -129.70"€ AHea(-129.709C) = 116.7 Jlg Specific heat gas 1.650 Jlg % Specific heat liquid 2.280 Jlg % A 38.40 g sample of liquid n peutane initially at -39.90*€C . If the sample is heat...
##### A.20,00-mL sample of HSO4 is titrated with 0.0521 M NaOH. At the endpoint; it is found that 37.57 mL of titrant was used. What was the concentration of the HzSo2 HzSO4 (aq) 2 NaOH (aq) ~2 HzO () + Na,SO4(aq) What mass of HZO was produced by the neutralization?Insert or edit imageDo not enter anything hereQuestion 26Not yet answeredNot gradedFlag ' questionIn a constant pressure calorimeter (coffee cup calorimeter) general chemistry student finds that when 20.28 g of CszSO,(s) ( MM 361.9 g/m
A.20,00-mL sample of HSO4 is titrated with 0.0521 M NaOH. At the endpoint; it is found that 37.57 mL of titrant was used. What was the concentration of the HzSo2 HzSO4 (aq) 2 NaOH (aq) ~2 HzO () + Na,SO4(aq) What mass of HZO was produced by the neutralization? Insert or edit image Do not enter anyth... | 2022-07-06 03:47:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5012884140014648, "perplexity": 5423.562162029962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104660626.98/warc/CC-MAIN-20220706030209-20220706060209-00336.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-9-basic-algebra-9-4-order-of-operations-9-4-exercises-page-653/38 | Chapter 9 - Basic Algebra - 9.4 Order of Operations - 9.4 Exercises - Page 653: 38
53
Work Step by Step
$5(4^{2})-6(1+4)-(-3)$ 1. Solve inside the parenthesis first, so: $5(16) -6(5)-(-3)$ 2. When 2 negatives multiply, they become positive: $5(16) -6(5)+3$ 3. Expand the parenthesis: $80 -30+3 = 53$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2019-08-23 13:04:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5754559636116028, "perplexity": 2049.31328750785}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318421.65/warc/CC-MAIN-20190823130046-20190823152046-00388.warc.gz"} |
https://wangcc.me/post/2016-11-30/ | # 救贖
Trevor McDonald on Redemption
There are many views about the concept of redemption. It’s generally defined as the action of saving or being saved from sin, error , or evil. We take this to mean that someone has committed an act which is sinfull sinful or evil. My guest today did nothing of the thoughtsort. In fact, she was subjected to an active act of terrible violence and brutality, and it happened when she was only thirteen13. Madeleine Black was raped, yet today she can talk of having been redeemed through the active act of forgiveness.
Madeleine, how does one do that? How does one seek or gained gain that redemption?
It wasn’t something that I ever really set out to do. It There was a combination of about three things. I thought that I had worked it , really were when well and I was healed. But, but there was always something lurking underneath . and when my oldest daughter became nearly thirteen 13, and it was while whilst I was having therapy for over three years . that my therapist suggested to me that maybe they weren’t born rapists. And, and you know, I just. ..
, I just, there was no way I want wanted to forgive them, I just want wanted somebody to kidnap them, type tie them up, rape and torture them for hours on end like they had done to me. , but he planted a seed in my mind . and that seed started to grow . and I, I just really wanted to understand what did it take for them to take that path. , because they weren’t much older than me, . They were maybe 17,18. I wanted to know, how could they be so violent toward towards another human being, what had they seen or heard or experienced themselves that could have made make them behave in that way. ?
• Words worth to be remembered:
• redemption: n. (尤指基督教的)拯救,赎罪,救赎例句: They visited the Shrine of Our Lady to pray for redemption. 他们参观了圣母玛利亚的神龛,祈祷以期救赎。
• be subjected to sth: 有;遭受,承受 例句: Cars are subject to a high domestic tax. 买汽车要交很高的国内税。
• brutality /bruːˈtæl.ə.ti/: n. 残酷性,残忍行径 例句: the brutalities of war 战争的残酷
• sort: n. 种类;方式;品质 vt. 将…分类;将…排序;挑选出某物 vi. 分类;协调;交往
• redeem: vt. 赎回;挽回;兑换;履行;补偿;恢复
• whilst: conj. 同时;时时,有时;当…的时候
• set out: (怀着特定目的)开始,着手 例句: She set out with the aim of becoming the youngest ever winner of the championship. 她努力的目标就是成为史上最年轻的冠军。
• lurk /lɜːk/: v. 潜伏,潜藏 例句: It seems that old prejudices are still lurking beneath the surface. 表象背后似乎依然潜藏着旧有的偏见。
## 譯文
“救赎(redemption)”一词的概念有多种解读。它通常指挽救过失、罪恶,或从过失、罪恶中得到解放的过程。我们用这一词表明某人曾犯下罪行。而今天我们的嘉宾未曾犯过罪行。事实上,她在13岁时遭到严重的暴力虐待。玛德琳·布莱克曾被强奸。而她如今认为,自己通过“原谅”得到了救赎。
“寻求救赎”并非我本来的打算。这是三样东西的结合。我曾以为我已经从过去的伤痛中走了出来,那些创伤已经被治愈了。但总有东西潜藏着,让我无法真正释怀。在我长女快到13岁时,我已经接受了3年的心理治疗。我的治疗师暗示我那些人可能并非生来就是强奸犯。但我一点也不想去原谅他们。我只希望他们也被绑架、被绑起来、被强奸、被折磨数小时,就像他们曾对我做的那样。然而,心理医师的话却像颗种子被植入我的脑海,并开始生长。我开始想要了解他们为什么会走上“强奸”这条路。他们并没有比我大多少,大概17、18岁。我想知道,他们怎么会对别人那么残酷、暴力,他们曾看到、听到或经历过什么事情,导致他们会做出这样的行为。
##### Chaochen Wang 王 超辰
###### Assistant Professor
All models are wrong, but some are useful. | 2020-02-19 03:48:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38126495480537415, "perplexity": 4871.98276098882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00508.warc.gz"} |
https://space.stackexchange.com/questions/34143/whats-the-role-of-the-chainmail-and-scale-armor-on-insights-wts/37332 | What's the role of the chainmail and scale armor on InSight's WTS?
It's like taken directly from a medieval armor...
The role of wind and thermal shield (WTS) deployed over the seismometer is in its name - protecting it from noise and thermal changes. This is fairly clear - and achievable by classic means; a rigid protective shell on the outside, possibly a padding on the inside, the "bellow" skirt to align it with the uneven ground... but I just don't see where the chain mail and scale mail skirts come into the picture. Other than being rather heavy (pulling the kapton skirt to the ground) - which could be achieved by other (admittedly less cool looking) means, I don't see their role. Especially that neither was famed for being particularly good against heat, cold or wind.
How do they help in thermal and wind protection?
• What, no chainmail tag? I'll advocate for that if you advocate for a Gilligan's Island tag. – uhoh Feb 11 '19 at 11:47
• Perhaps to protect against pointy bits on the Martian surface? – Chris B. Behrens Feb 11 '19 at 15:50
• Allied with @ChrisB.Behrens comment. It might be a simple way to provide a flexible seal with the ground to minimize gaps between the ground & the protective skirt. – Fred Feb 11 '19 at 16:23
• Chainmail alone would suffice for that. Or even a spongy bottom seal. – SF. Feb 11 '19 at 16:29
• Mars is named after the roman god of war, after all. Better come prepared. – Ingolifs Feb 11 '19 at 22:12 | 2020-09-19 02:59:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3860558867454529, "perplexity": 3584.275131462816}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00533.warc.gz"} |
http://migf.com/acupuncture-in-pnmg/physical-activity-coefficient-68fef3 | This means to simply subtract the calories you’ve expended throughout the day from the ones you took in. Get access risk-free for 30 days, Physical activity level (PAL) takes into account total daily energy expenditure (TDEE) and basal metabolic rate (BMR). credit by exam that is accepted by over 1,500 colleges and universities. Place TDEE and BMR into the equation for physical activity level (PAL). For nonideal solution, the activity coefficient is calculated using the activity coefficient models. To learn more, visit our Earning Credit Page. What Is the Rest Cure in The Yellow Wallpaper? The following table and more data are also available here. The biggest piece of the puzzle is All of the following apply to the ionic strength range 0 - 0.1 M: (a). 's' : ''}}. 2019. Yes, activity coefficient can be greater than unity when there is a positive deviation from ideal behavior. Sciences, Culinary Arts and Personal © copyright 2003-2020 Study.com. All other trademarks and copyrights are the property of their respective owners. The ionic strength of the solution changes the behavior of the ions in the solution. Study.com has thousands of articles about every flashcard set{{course.flashcardSetCoun > 1 ? The apparent and real concentrations are not the same. Physical activity refers to any bodily movement produced by skeletal muscles that increase energy expenditure above a basal level. imaginable degree, area of Without this consideration, CaSO4 has a Ksp 7.1 x10-5. Ideal solutions assume there is no interaction between ions once the ions are in solution. courses that prepare you to earn There are two options to choose: Acid/Base dissociation (solves for the degree of dissociation) and Solubility (solves for solubility). For the phase separating mixture, the activity coefficient is greater than unity. Services. Objectives: Measuring the intraclass correlation coefficient (ICC) and design effect (DE) may help to modify the public health interventions for body mass index (BMI), physical activity and diet according to geographic targeting of interventions in different countries. {{courseNav.course.topics.length}} chapters | It can be divided into two main categories. This pH change occurs even with an inert salt such as NaNO3 or MgCl2, which we will examine in Example 2. House tasks like gardening, cooking and vacuuming all count as physical activity – get your house in order while moving more! The evidence shows that improving older patients physical ability, lifestyle behaviours and quality of life could promote building resilience and healthy ageing. Addition of a salt increases the ionic strength of the aqueous phase, which in turn separates the components of an emulsion into polar and nonpolar layers. The activity coefficientof an electrolyte solution is used to factor in the concentration-dependent interactions between ions in a solution. Ionic solutions are not ideal solutions. and career path that can help you find the school that's right for you. For the following two sample calculations, a shortened table of activities is presented here in Table 1. Physical Activity Guidelines for Americans external icon This report summarizes the scientific evidence on physical activity and health, and will be used by the government to develop the second edition of the Physical Activity Guidelines for Americans. Physical Activity Level should not be confused with the physical activity coefficients (PA values) used in the equations to estimate energy requirement. The amount of physical exertion in one day determines how many calories must be consumed in the same period to maintain activity and lose or gain weight as desired. Low levels of physical activity are a major risk factor for chronic conditions. You can test out of the The calculation of pH and molar solubility are two examples where the activity coefficient has a measurable effect on a common calculation. So, cations will encounter cations and anions. Sources: PAC, 1990, 62, 2167. Select a subject to preview related courses: You have probably been told that if you add a non-reactive ionic compound to a solution of pure water the pH of the solution will remain unaffected. The Physical Activity Level categories were defined as sedentary (PAL 1.0-1.39), low active (PAL 1.4-1.59), active (PAL 1.6-1.89), and very active (PAL 1.9-2.5). What this means is that the fugacity of a component in mixture is higher than that of pure component. Quiz & Worksheet - What are Circuits, Circuit Boards & Semiconductors? The second edition of the Physical Activity Guidelines for Americans Create your account, Already registered? We rank countries by activity inequality where lower values correspond to a more equal distribution of physical activity (using the Gini coefficient ranging from 0 to 1). The amount of physical exertion in one day determines how many calories must be consumed in the same period to maintain activity and lose or gain weight as desired. Being physically active improves mental and musculoskeletal health and reduces other risk factors such as overweight and obesity, high blood pressure and high blood cholesterol. Log in or sign up to add this lesson to a Custom Course. Fortunately for most of us, activity coefficients are provided on tables in physical chemistry or analytical chemistry texts. Chiral Molecules & Ions: Definition, Identification & Examples, Quiz & Worksheet - Activity Coefficient Equation, Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, Enthalpy: Energy Transfer in Physical and Chemical Processes, Using Hess's Law to Calculate the Change in Enthalpy of a Reaction, Calorimetry: Measuring Heat Transfer and Heat Capacity, Predicting the Entropy of Physical and Chemical Changes, Free Energy: Predicting the Spontaneity of a Reaction, The Relationship Between Enthalpy (H), Free Energy (G) and Entropy (S), Electrochemistry: Free Energy and Cell Potential Energy, Energy Transformation: Definition, Types & Examples, Bond Enthalpy: Definition, Calculations & Values, Biological and Biomedical The physical activity coefficients are used in the EER equations to estimate energy requirements and are based on ranges of physical activity levels. Ions carry an associated ionic atmosphere which must be factored into equilibrium calculations which use concentration. The National Institutes of Health, Health Canada, and the National Academy of Sciences Institute of Medicine (IOM) Food and Nutrition Board convened a multidisciplinary expert panel under a contract from the US Department of Health and Human Services to “review the scientific literature regarding macronutrients and energy and develop estimates of daily intake that are compatible with good nutrition throughout the life span and that may decrease the risk of chronic disease.” In this article, we recount s… Equilibrium expressions for solutions are calculated assuming the expression applies to an ideal solution. Earn Transferable Credit & Get your Degree. Ideal solutions assume there is no interaction between ions once the ions are in solution. Copyright © 2020 Leaf Group Ltd., all rights reserved. This stands to reason because the more pronounced the ionic atmosphere, the less likely the will be able to form a precipitate. The meaning of an activity coefficient is explained by the behavior of ions in solution. The activity coefficient can also be found from a table of known activity coefficients. Which is to say, it's a number given to what is done in a day. In this context, PA indexes may also be used to normalize self-reported dietary energy intake (4). His prose, poetry and essays have been published in numerous journals and literary magazines. The activity coefficient is used to show how much the solution deviates from the ideal. Physical activity (PA) is a complex behavior that is difficult to assess accurately, partly because of its several dimensions. Regular physical activity is one of the most important things people can do to improve their health. ACE Fit offers a number of free tools and calculators to help you determine everything from your body mass index (BMI) to your target heart rate zone, your blood … The awareness regarding improving the physical activity of older adults has been growing among policymakers and healthcare professionals during the past few years. Online Vitmain Guide: How to Calculate Daily Energy Expenditure, Top End Sports: Calculating Energy Expenditure. | {{course.flashcardSetCount}} For ideal solutions, the activity coefficients of the species are equal to unity; hence, the activities in Equation (6.8) can be replaced by molar concentrations. credit-by-exam regardless of age or education level. Calculation gives a molar solubility of 8.34 x 10-3. Generally, the activity coefficient of an ion will not be calculated. To unlock this lesson you must be a Study.com Member. The solvent here is water and the temperature is 298 K. The program employs the Newton-Raphson method for nonlinear equations in solving for the cation concentration (CC) and activity coefficient (AC). Considering the number of factors which are involved in proper BMR calculation, including body surface area, body temperature and health, external temperature and gland function, completely precise BMR results can't be done in a simple equation, however, Harris-Benedict remains the best practical estimation available without proper testing. Ions of opposite charges shield the effective charge of an ion as shown in Figure 1. The Physical Activity Level (PAL) is the ratio of total energy expenditure to basal energy expenditure (TEE/BEE). Equilibrium expressions for solutions are calculated assuming the expression applies to an ideal solution. In solutions, the activity coefficient is a measure of how much a solution differs from an ideal solution — i.e., one in which the effectiveness of each molecule is equal to its theoretical effectiveness. The activity coefficient of an electrolyte solution is used to factor in the concentration-dependent interactions between ions in a solution. Visit the College Chemistry: Help and Review page to learn more. The National Guidelines on Physical Activity for Ireland recommend that children and young people should be active for at least 60 minutes a day every day (Department of Health, 2009). The general procedure described in this section for evaluating $$\g_{\pm}$$ requires knowledge of the osmotic coefficient $$\phi_m$$ as a function of molality. The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. After the Ksp equilibrium expression is corrected for ionic activity coefficients, there is a 51.4% difference in the molar solubility. first two years of college and save thousands off your degree. to calculate pH for solutions of ionic compounds in water. As ionic strength increases, activity coefficients decrease. Advantages of Self-Paced Distance Learning, Advantages of Distance Learning Compared to Face-to-Face Learning, Top 50 K-12 School Districts for Teachers in Georgia, Those Winter Sundays: Theme, Tone & Imagery. The effect is to make anions and cations less attracted to one another. Smaller ions have a larger effective radius. Kroll holds a Master of Fine Arts in writing from the University of San Francisco. (Glossary of atmospheric chemistry terms (Recommendations 1990)) on page 2171 [] [] PAC, 1994, 66, 533. You can see this visually in Figure 2. Property Ownership & Conveyance Issues in Washington, Zeroes, Roots & X-Intercepts: Definitions & Properties, Manufactured Housing Rules in New Hampshire, Quiz & Worksheet - Analyzing The Furnished Room, Quiz & Worksheet - Difference Between Gangrene & Necrosis, Quiz & Worksheet - A Rose for Emily Chronological Order, Quiz & Worksheet - Nurse Ratched Character Analysis & Symbolism, Flashcards - Real Estate Marketing Basics, Flashcards - Promotional Marketing in Real Estate, Elementary School Math Worksheets & Printables, Middle School Physical Science: Homework Help Resource, AP Chemistry Syllabus Resource & Lesson Plans, Political Science 101: Intro to Political Science, AP US History Syllabus Resource & Lesson Plans, Quiz & Worksheet - Understanding Decimals, Quiz & Worksheet - The Circulatory System, Quiz & Worksheet - Bar Graphs and Pie Charts Review, Quiz & Worksheet - Deming, Juran, Crosby's Contributions to TQM, Quiz & Worksheet - Movement Through the Small Intestine, Graying of America: Birth Rate, Death Rate & Life Expectancy, HRCI Online Recertification & Continuing Education Credit, Accessibility and Disability Accommodations at Study.com, SBEC Technology Application Standards for Teachers, Tech and Engineering - Questions & Answers, Health and Medicine - Questions & Answers, Sort the following statements as true of false. Create an account to start this course today. Rough equations for TDEE (as a measurement of calories used) can be attained through the following equations: For sedentary people (office workers who rarely or never exercise): TDEE = weight (in pounds) x 14, For moderately active people (construction workers, those who exercise or play a sport 3 to 5 times a week): TDEE = weight (lbs) x 17, For active people (agricultural workers, those who exercise or play a sport daily): TDEE = weight (lbs) x 20. Activity coefficient definition is - the ratio of chemical activity to actual concentration : an arbitrary quantity that in the case of solutions is a measure of the deviation of a more or less concentrated solution from an ideal solution. In the lab, sometimes emulsions form and there … In 2015, 2.5% of the total disease burden was due to physical inactivity (AIHW 2019). Example 3 shows the calculation of CaSO4 when the activity coefficients are taken into account. The equation can be written as: An accurate calculation of BMR can only be done through specifically designed tests conducted when the digestive system is inactive (after 8 hours of sleep, 12 hours without food). All rights reserved. 42 preschool children (17 boys and 25 girls) were included. A critical dimension of P… Example 1 shows how a salt's ionic strength is different than its molar concentration. The solubility of ions is also affected. In the lab, sometimes emulsions form and there is no separation between an aqueous phase and organic phase. Moving more and sitting less have tremendous benefits for everyone, regardless of age, sex, race, ethnicity, or current fitness level. The equations use the variable of weight (w) in kilograms, height (h) in centimeters and age (a). Physical inactivity contributed 10–… But, ions solution do interact with other ions and solvents. General activity levels are expressed as: Jess Kroll has been writing since 2005. Instead, it will be gotten from a table of activity coefficients such as shown below, which lists γ as a function of ion charge, ion size and ionic strength. Quiz & Worksheet - Properties of Metallic Bonds, Prentice Hall Earth Science Chapter 1: Introduction to Earth Science, Prentice Hall Earth Science Chapter 2: Minerals, Prentice Hall Earth Science Chapter 3: Rocks, Prentice Hall Earth Science Chapter 4: Earth's Resources, Prentice Hall Earth Science Chapter 5: Weathering, Soil, and Mass Movements, CPA Subtest IV - Regulation (REG): Study Guide & Practice, CPA Subtest III - Financial Accounting & Reporting (FAR): Study Guide & Practice, ANCC Family Nurse Practitioner: Study Guide & Practice, Mergers, Acquisitions & Corporate Changes. The purpose of this study was to assess physical activities and activity coefficient of preschool children and was to give some concrete information to activate outdoor play and to probe the suggestions to activate outdoor play. Not sure what college you want to attend yet? The coefficient of physical activity: the concept, formula, calculation rules with examples and increase in CFA - Wellness - 2020. The more pronounced the ionic strength of the first two years of and. Calculate Weight Lost in Competition to unlock this lesson to a Custom Course solubility are examples... Very dilute solutions than that of pure component since 2005 - 0.1 M: ( a.... As well as numerous websites table ', type 2 diabetes and.. Factor for chronic conditions interactions between ions once the ions are in solution calculating energy expenditure ( TEE/BEE ) total! Phase and organic phase 8.34 x 10-3 solution do interact with other ions solvents... Seemingly correct results Rest Cure in the concentration-dependent interactions between ions in a solution unbiased! The ratio of total energy expenditure ( TDEE ) and basal metabolic rate ( BMR ) of an ion shown! Is positive or negative ) attracts more oppositely charged ions took in example 1 shows how salt! A numeric method of expressing one 's daily energy expenditure to basal energy above... Separating mixture, the activity coefficient is a complex behavior that is to. In kilograms, height ( h ) in kilograms, height ( )! Literary magazines 2020 Leaf Group Ltd., all rights reserved age, body and... Will be able to form a precipitate few years solution is used to show how much the solution from..., there is no interaction between ions in a mixture of chemical substances nonideal solution, the likely. Into account total daily energy expenditure to basal energy expenditure calculated assuming the expression applies an. That the fugacity of a given ion as shown in Figure 1 Review Page to learn,... The same & Worksheet - what are Circuits, Circuit Boards & Semiconductors health, 2013 ), 62 2167... Of physical activity of older adults has been growing among policymakers and healthcare professionals during the past few.. An associated ionic atmosphere around an ion becomes stronger example 2 apply to the ionic strength of and. And unaltered charge and concentration of a component in mixture is higher that! Adults has been writing since 2005 radius is the difference between Blended Learning & Learning! Lab, sometimes emulsions form and there is a little more to it just... To estimate energy requirement up to add a saturated solution of NaCl in this context, PA may... Activity – get your house in order while moving more TDEE and into! Ionic radius is the Rest Cure in the concentration-dependent interactions between ions in the seemingly results. Becomes stronger ionic radius physical activity coefficient the Fairest Way to Calculate daily energy expenditure to basal energy (! A salt 's ionic strength and effective ionic radius is the ratio of energy... Published in numerous journals and literary magazines are taken into account total daily expenditure... You need to find the right school Weight the Key to Weight Control: a simple equation of calories in... Deviates from the ones you took in nonideal solution, the activity coefficient models an... Here in table 1 for most of us, activity coefficients ( PA values ) used in to. Difference between Blended Learning & Distance Learning ( AIHW 2019 ) them is to make anions and cations less to. Do interact with other ions and solvents equilibrium calculations which use concentration school! Solves for solubility ) Ltd., all rights reserved rate ( BMR ) and during physical activities real are... All of the following two sample calculations, a shortened table of activity. A Course lets you earn progress by passing quizzes and exams and repetitive bodily movements: Acid/Base dissociation solves! Of dissociation ) and during physical activities factor for chronic conditions BMR into the equation for physical activity coefficient. To reasonably determine TDEE no separation between an aqueous phase and organic phase numerous websites pH... Of its several dimensions bodily movements is presented here in table 1 on a common practice to separate them to... Pac, 1990, 62, 2167 is to add a saturated solution of NaCl literary.! Of known activity coefficients are taken into account the number of calories used in the Wallpaper. Low levels of physical activity level ( PAL ) ( TEE/BEE ) of expressing one 's daily energy expenditure a. Little more to it than just that of age or education level equation of calories in minus out! The Key to Weight Control: a simple equation of calories used in day at! Do interact with other ions and solvents taken in a mixture of chemical substances to! Controlling Weight the Key to Weight Control: a simple equation of calories in minus calories out quiz Worksheet! Were included component in mixture is higher than that of pure component to Control. Also be used to determine the strength of association and to compare activity levels BMI... Be able to form a precipitate that four out of five children are sufficiently... Prose, poetry and essays have been published in numerous journals and literary magazines seemingly correct.! Shows that improving older patients physical ability, lifestyle behaviours and quality of life could building... Not do sufficient physical activity and Controlling Weight the Key to Weight Control: simple! Pure component cooking and vacuuming all count as physical activity level ( PAL ) takes into account this consideration CaSO4. An aqueous phase and organic phase to compare activity levels by BMI status college chemistry: Help and Page... Adults has been writing since 2005 been published in numerous journals and literary magazines do interact with ions! Subtract the calories you ’ ve expended throughout the day from the.... Common practice to separate them is to say, it 's a number given what. The molar solubility levels of physical activity refers to any bodily movement by. Several dimensions skeletal muscles that increase energy expenditure ( TDEE ) and metabolic. You need to find the right school info you need to find the school., the ionic strength of the first two years of college and save thousands off degree! Use the variable of Weight ( w ) in centimeters and age ( a magnitude it... Solves for the following apply to the ionic atmosphere around an ion will not be calculated as in... Master of Fine Arts in writing from the ideal that improving older patients ability. % difference in the concentration-dependent interactions between ions in solution property of their respective owners form there... One 's daily energy expenditure a number given to what is the charge the same a. News Drops, physical activity coefficient Honolulu Weekly '' and News Drops, ... Blended Learning & Distance Learning the use of Beer ’ s law which uses also concentration than... Separating mixture physical activity coefficient the activity coefficient of an electrolyte solution is used to show how much the deviates... House tasks like gardening, cooking and vacuuming all count as physical activity refers to any movement... Solution is used to factor in the equations use the variable of Weight ( )... Are not sufficiently active for health benefits ( Department of health, 2013 ) to what is the between! Basal metabolic rate ( BMR ) common calculation a saturated solution of NaCl as... Charge and concentration of a given ion as seen in equation 1 to any bodily movement produced by skeletal that. To assess accurately, partly because of its several dimensions BMI status among policymakers healthcare... Following apply to the ionic strength and effective ionic radius is the charge ( )... A Custom Course lets you earn progress by passing quizzes and exams just create an account in journals! As NaNO3 or MgCl2, which we will examine in example 2 expressing one 's daily energy.. Pa values ) used in thermodynamics to account for deviations from ideal behaviour in a mixture chemical... Drops, '' as well as numerous websites a pedometer ( physical activity coefficient counts the number of steps in! Not sufficiently active for health benefits ( Department of health, 2013.... And Controlling Weight the Key to Weight Control: a simple equation of calories in minus calories.... A simple equation of calories in minus calories out a component in mixture is higher than that of component. Vitmain Guide: how to Calculate daily energy expenditure ( TDEE ) and basal metabolic (... The meaning of an ion will not be calculated of water is larger which also... Count as physical activity level is a complex behavior that is difficult to assess accurately, because... Education level property of their respective owners evidence is that four out of the ions in the,... Copyright © 2020 Leaf Group Ltd., all rights reserved of NaCl, lifestyle behaviours and quality life. The Key to Weight Control: a simple equation of calories used in the concentration-dependent interactions ions... Equation for physical activity level ( PAL ) takes into account, 2013 ) for physical activity level ( )... Takes into account ( 17 boys and 25 girls ) were included the calories you ’ ve expended the!, '' as well as numerous websites regardless of age or education level than just that assuming... ) takes into account the number of steps taken in a mixture of chemical.... From the University of San Francisco basal energy expenditure is difficult to assess accurately partly... Concentrations are not the same energy requirement saturated solution of NaCl behaviours and quality of life promote... Are Circuits, Circuit Boards & Semiconductors to Calculate pH for solutions are calculated the. With an inert salt such as NaNO3 or MgCl2, which we will examine in example 2 to. Charges shield the effective charge of an ion becomes stronger online Vitmain Guide how! | 2021-09-25 00:32:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2726552188396454, "perplexity": 2823.3589742997606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00457.warc.gz"} |
https://openstax.org/books/university-physics-volume-2/pages/3-challenge-problems | University Physics Volume 2
# Challenge Problems
University Physics Volume 2Challenge Problems
### Challenge Problems
94 .
One mole of an ideal monatomic gas occupies a volume of $1.0×10−2m31.0×10−2m3$ at a pressure of $2.0×105N/m2.2.0×105N/m2.$ (a) What is the temperature of the gas? (b) The gas undergoes a quasi-static adiabatic compression until its volume is decreased to $5.0×10−3m3.5.0×10−3m3.$ What is the new gas temperature? (c) How much work is done on the gas during the compression? (d) What is the change in the internal energy of the gas?
95 .
One mole of an ideal gas is initially in a chamber of volume $1.0×10−2m31.0×10−2m3$ and at a temperature of $27°C27°C$. (a) How much heat is absorbed by the gas when it slowly expands isothermally to twice its initial volume? (b) Suppose the gas is slowly transformed to the same final state by first decreasing the pressure at constant volume and then expanding it isobarically. What is the heat transferred for this case? (c) Calculate the heat transferred when the gas is transformed quasi-statically to the same final state by expanding it isobarically, then decreasing its pressure at constant volume.
96 .
A bullet of mass 10 g is traveling horizontally at 200 m/s when it strikes and embeds in a pendulum bob of mass 2.0 kg. (a) How much mechanical energy is dissipated in the collision? (b) Assuming that $CvCv$ for the bob plus bullet is 3R, calculate the temperature increase of the system due to the collision. Take the molecular mass of the system to be 200 g/mol.
97 .
The insulated cylinder shown below is closed at both ends and contains an insulating piston that is free to move on frictionless bearings. The piston divides the chamber into two compartments containing gases A and B. Originally, each compartment has a volume of $5.0×10−2m35.0×10−2m3$ and contains a monatomic ideal gas at a temperature of $0°C0°C$ and a pressure of 1.0 atm. (a) How many moles of gas are in each compartment? (b) Heat Q is slowly added to A so that it expands and B is compressed until the pressure of both gases is 3.0 atm. Use the fact that the compression of B is adiabatic to determine the final volume of both gases. (c) What are their final temperatures? (d) What is the value of Q?
98 .
In a diesel engine, the fuel is ignited without a spark plug. Instead, air in a cylinder is compressed adiabatically to a temperature above the ignition temperature of the fuel; at the point of maximum compression, the fuel is injected into the cylinder. Suppose that air at $20°C20°C$ is taken into the cylinder at a volume $V1V1$ and then compressed adiabatically and quasi-statically to a temperature of $600°C600°C$ and a volume $V2.V2.$ If $γ=1.4,γ=1.4,$ what is the ratio $V1/V2?V1/V2?$ (Note: In an operating diesel engine, the compression is not quasi-static.)
Order a print copy
As an Amazon Associate we earn from qualifying purchases. | 2021-08-04 07:17:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6887893676757812, "perplexity": 441.8693413814512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00170.warc.gz"} |
https://www.math.sinica.edu.tw/www/people/websty5_20.jsp?owner=ichuang | • ichuang$\color{red}{@}$math.sinica.edu.tw
• +886 2 2368-5999 ext. 740
• +886 2 2368-9771
• 交換代數
• 代數幾何
• Ph.D. Purdue University, Indiana (1992)
• B.S. 台灣大學 (1983)
• Research Fellow Academia Sinica 2001/12 - Present
• Associate Research Fellow Academia Sinica 1996/6 - 2001/12
• Assistant Research Fellow Academia Sinica 1992/8 - 1996/6
Dr. Huang's research interests are mainly in two directions:
• injective complexes in the category of modules (resp. sheaves of modules) over a commutative ring (resp. scheme),
• algebraic aspects of combinatorial phenomenon and identities of classical numbers.
• (with Mee-Kyoung Kim) "Numerical Semigroup Algebras" , Comm. Algebra , 48 (3), 1079-1088, 2020.
• (with Raheleh Jafari) "Factorizations in Numerical Semigroup Algebras" , J. Pure Appl. Algebra , 223 (5), 2258-2272, 2019.
• (with Cheon, Gi-Sang and Kim, Sooyeong) "Multivariate Riordan groups and their representations" , Linear Algebra Appl. , 514, 198-207, 2017.
• "Extensions of Tangent Cones of Monomial Curves" , J. Pure Appl. Algebra , 220, 3437-3449, 2016.
• "Residual Complex on the Tangent Cone of a Numerical Semigroup Ring" , Acta Math. Vietnam. , 40 (1), 149-160, 2015.
• "Convolution Identities and their Structure" , Int. J. Number Theory , 10 (2), 471-482, 2014.
• "Finitely Continuous Differentials on Generalized Power Series" , Math. Slovaca , 64 (2), 267-280, 2014.
• (with Shou-Te Chang) "Continuous Homomorphisms and Rings of Injective Dimension One" , Math. Scand. , 110, 181-197, 2012.
• "Algebraic Structures of Euler Numbers" , Proc. Amer. Math. Soc. , 140 (9), 2945-2952, 2012.
• "Two approaches to Mobius Inversion" , Bull. Austral. Math. Soc. , 85, 68-78, 2012.
• "Changes of Parameters for Generalized Power Series" , Comm. Algebra , 38, 2480-2498, 2010.
• "Method of Generating Differentials" , In Advances in Combinatorial Mathematics, Springer-Verlag , 127-154, 2009.
• "Algebraic Structures of Bernoulli Numbers and Polynomials" , arXiv:1005.0177 , 2012.
• (with C.-Y. Jean Chan) "Module Structure of an Injective Resolution" , Comm. Algebra , 35 (11), 3717-3750, 2007.
• "Cohomology of Vector Bundles from a Double Cover of the Projective Plane" , Osaka J. Math , 43, 557-579, 2006.
• (with Jan-Li Lin) "Residues for Akizuki's One-dimensional Local Domain" , Proc. Amer. Math. Soc. , 131 (7), 2015-2020, 2003.
• "Inverse Relations and Schauder Bases" , J. Combin. Theory Ser. A , 97, 203-224, 2002.
• "Residue Methods in Combinatorial Analysis. in Local Cohomology and its Applications, Lecture Notes in Pure and Appl. Math., Vol. 226, 255-342, Marcel Dekker." , 2002.
• "The Residue Theorem via an Explicit Construction of Traces" , J. Algebra , 245, 310-354, 2001.
• "Cohomology of Projective Space Seen by Residual Complexes" , Trans. Amer. Math. Soc. , 353 (8), 3097-3114, 2001.
• "An Explicit Construction of Residual Complexes" , J. Algebra , 225, 698-739, 2000.
• (with Su-Yun Huang) "Bernoulli Numbers and Polynomials via Residues" , J. Number Theory , 76, 178-193, 1999.
• "Reversion of Power Series by Residues" , Comm. Algebra , 26 (3), 803-812, 1998.
• "Theory of Residues on the Projective Plane" , Manuscripta Math. , 92 (2), 259-272, 1997.
• "Applications of Residues to Combinatorial Identities" , Proc. Amer. Math. Soc. , 125 (4), 1011-1017, 1997.
• "Constructions of Artinian Modules" , Comm. Algebra , 23 (13), 5025-5030, 1995.
• "A Residue Map and Its Applications to Some One Dimensional Rings" , Proc. Amer. Math. Soc. , 123 (8), 2369-2372, 1995.
• "Pseudofunctors on Modules with Zero Dimensional Support. Mem. Amer. Math. Soc., 114(548), 1995." , 1995. | 2020-06-03 17:19:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967045307159424, "perplexity": 6697.050167006437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435238.60/warc/CC-MAIN-20200603144014-20200603174014-00115.warc.gz"} |
https://www.phys.virginia.edu/Announcements/talk-list.asp?GID=speakers_committee&ORDER=desc&OMITNULL=1&MAXTALK=50 | # Colloquia
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, April 29, 2022
3:30 PM
Ridley Hall, Room G008
## "Re-inventing fixed-target experiments to probe light dark matter"
Cristina Mantilla , Fermilab
[Host: Dustin Keller]
ABSTRACT:
The search for dark matter is ever-evolving and a number of experiments are underway to search for the constituents of dark matter across a vast range of masses. A largely unexplored regime is that where light dark matter particles, with masses between the electron and proton masses, may interact feebly with ordinary matter. I will discuss how small accelerator experiments can produce these dark matter candidates by scattering energetic particles on a fixed target. I will focus on the DarkQuest experiment at Fermilab that uses a proton beam and builds off of an existing detector and accelerator infrastructure. I will also describe how an electron beam can be complimentary used in the Light Dark Matter experiment (LDMX) at SLAC. I will explain the design and challenges of these experiments and their prospects for characterizing a dark matter signal in the near future.
Hoxton Lecture
Monday, April 25, 2022
7:00 PM
Newcomb Hall, Room Newcomb Hall Theater
Note special date.
Note special time.
Note special room.
## "Probing the Universe with Gravitational Waves"
Barry C. Barish , Professor of Physics, Emeritus, at Caltech, and Distinguished Professor of Physics, at UC Riverside, Nobel Laureate 2017
ABSTRACT:
The discovery of gravitational waves, predicted by Einstein in 1916, is enabling both important tests of the theory of general relativity, and represents the birth of a new astronomy. Modern astronomy, using all types of electromagnetic radiation, has giving us an amazing understanding of the complexities of the universe, and how it has evolved. Now, gravitational waves and neutrinos are beginning to provide the opportunity to pursue some of the same astrophysical phenomena in very different ways, as well as to observe phenomena that cannot be studied with electromagnetic radiation. The detection of gravitational waves and the emergence and prospects for this exciting new science will be explored.
VIDEO:
Joint Physics-Astronomy colloquium
##### Attend virtually via Zoom: https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, April 22, 2022
3:30 PM
Ridley Hall, Room G008
## "The expansion of space, a scaling symmetry, and a mirror-world dark sector"
Professor Lloyd Knox , UC Davis
[Host: Prof. Genya Kolomeisky]
ABSTRACT:
I will introduce, for those unfamiliar with general relativity, the notion of the expansion of space, before going on to discuss a 5 sigma discrepancy between two inferences of the rate of that expansion today. One of those inferences is highly indirect and model dependent, relying on measurements of maps of the cosmic microwave background (CMB), a thermal relic of the hot big bang. I will explain what the CMB is, and show how well our standard cosmological model describes its statistical properties, and how we can use that model to infer the expansion rate today. I will then describe a symmetry under a scaling transformation of all relevant time scales in the problem that can potentially be exploited to reconcile the two inferences. Significant constraints on such a solution come from measurements of the CMB energy density and the abundances of light elements produced in the big bang. The former constraint can be circumvented by use of a ‘mirror world’ dark sector — a copy of the standard model of particle physics with little to no interactions with standard model particles other than via gravity.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, April 8, 2022
3:30 PM
Ridley Hall, Room G008
## "Electron spin resonance spectroscopy of quantum spin liquids"
Oleg Starykh , The University of Utah, Salt Lake City
[Host: Prof. Dima Pesin]
ABSTRACT:
Much of the current research in quantum magnetism is motivated by the search for an elusive quantum spin liquid (QSL) state of the magnetic matter. A salient feature of this entangled quantum state is the presence of fractionalized elementary excitations such as spin-1/2 spinons, interactions between which are mediated by the emergent gauge field.
In my talk, I provide a historical perspective on quantum spin liquids and shed some light on the origins of this weird” idea. Following a brief summary of the current state of affairs in QSL research, I describe how QSLs respond to the magnetic field and demonstrate surprising similarity of the described response to that of neutral Fermi liquids.
I illustrate these theoretical ideas with a recent electron spin resonance measurements on quasi-one-dimensional antiferromagnet K2CuSO4Br2. The experiment leads the first-ever spectroscopic determination of the (backscattering) interaction between spinon excitations of the quantum spin chain.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, April 1, 2022
3:30 PM
Ridley Hall, Room G008
## "How to Become a More Effective Mentor/Mentee"
Kirsten Tollefson , Professor and Associate Dean, Graduate School, at Michigan State University
[Host: Prof. Craig Group]
ABSTRACT:
I will discuss what recent research says about the science of effective mentorship and how we can learn to do it better. We will focus on 2 competencies - aligning expectations and maintaining effective communication - through some guided activities. I will give you examples of strategies and resources that can help you become a more effective mentor and mentee.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 25, 2022
3:30 PM
Online, Room via Zoom
Note special room.
## "A Career Outside the Academia: My Experiences as VP at a small Hi Tech Company"
Dr. Zuyu Zhao , Janis ULT Inc.
[Host: Prof. Bellave Shivaram]
ABSTRACT:
Most of the physics students will work in industry instead of academia after they got the degree (BSc., MS, or Ph.D.)
The speaker will share his personal 30 year experience of working in a small tech company after his post-doc period, including changes and challenges of culture, daily life, responsibilities, etc.
Advice will be discussed with the audience how meet the challenges.
The speaker’s main tasks were to develop ultra-low temperature facilities from 300mK to 10mK. Along with the speaker’s career growth, examples of his achievements are presented.
A brief business summary of the speaker’s company, before and after COVID, is presented at the end of the presentation.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 18, 2022
3:30 PM
Ridley Hall, Room G008
## "Failure to Social Distance: Breaking Gathering Limits in Titan's Lakes"
Alex Rosenthal , University of Virginia - Department of Physics
[Host: Stefan Baessler]
ABSTRACT:
Saturn's moon Titan is a geologically and meteorologically active world seen as a potential location for the development of life. While we cannot immediately answer "Is there life on TItan?" or even "Are there life-forming reactions occurring?" we can take a step back and investigate a more foundational question: "What chemistries are occurring?" The answers to this question are stepping stones to understanding broader processes such as weather cycles, geology, and potentially organic reactions. We attack this question using molecular dynamics simulations. By identifying molecules that cluster or exhibit other interesting behaviors, we hope to identify possible sites for interesting chemical reactions that could produce large or prebiotic molecules, as well as characterize those reactions.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 18, 2022
3:50 PM
Ridley Hall, Room G008
Note special time.
## "Coordinate Space Representation and Average Radius of Quark and Gluon Generalized Parton Distribution Functions"
Zaki Panjsheeri , University of Virginia - Department of Physics
[Host: Stefan Baessler]
ABSTRACT:
The task of the field of nuclear physics called femtography is to image the internal structure of strongly interacting particles, from single protons and neutrons to atomic nuclei. Protons and neutrons are composed of quarks and gluons, but the precise spatial arrangement of the two valence up quarks and one valence down quark, along with the sea quarks and gluons that contribute half of the momentum, remains unknown. A compelling method for deriving dynamical information about the internal structure of the proton is through the use of generalized parton distributions (GPDs). Two-dimensional Fourier transforms of GPDs provide insight into matter, charge, and radial distributions of the quarks and gluons inside the nucleon. We present an explicit calculation of such transforms in a spectator model framework using parametric analytic forms of GPDs, originally constrained using deeply virtual Compton scattering and lattice QCD data. We compare the valence quarks to the gluon distribution through, i.a., average radii, a notion of distance inside the nucleon, and we present a novel result for the radius of the gluon density.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 18, 2022
4:10 PM
Ridley Hall, Room G008
Note special time.
## "Property Tuning of Layered Materials by Electrochemical Intercalation"
Dawn Ford , University of Virginia - Department of Physics
[Host: Stefan Baessler]
ABSTRACT:
Recent developments in two-dimensional (2D) magnetism have intensified the research on novel van-der Waals magnetic materials to explore new magnetic phenomena in the 2D limit. Among 2D magnetic materials, one model system is metal thiophosphates MPX3 (M = transition metal ions, X = chalcogen ions) in which the antiferromagnetic (AFM) properties are highly dependent on the choice of transition metal M. The van der Waals-type crystal structure allows the mechanical exfoliation of bulk crystals to obtain atomically thin layers. In MPX3, the AFM ordering is found to persist down to the atomically thin limit, making them a promising candidate for future device applications. Furthermore, the layered structure also permits the inter-layer intercalation, which is an effective way to tune the properties. With this motivation, we performed Li and Fe intercalation in NiPS3 by using electrochemical technique. In this method, the electrical potential causes electrons to flow from anode to cathode through the circuit within the battery leading to the intercalation of intercalant ions between the layers of the host sample as shown in figure below. By tuning the amount of charges intercalated during electrochemical intercalation, the number of intercalated ions in the host single crystal can be controlled. NiPS3 exhibits AFM ordering below TN = 155 K and the spin-flop transition above μ0H ≈ 6 T. The goal of this project is to intercalate different Li and Fe content in NiPS3 single crystals and characterize their magnetic properties. Mainly, we will focus on the tuning of the ordering temperature and the spin-flop field of pristine NiPS3. In addition, the transition from AFM state to other states such as ferromagnetism is also one important direction of this project. Li intercalation was found to increase the magnetization value of NiPS3. Future work will consist of characterizing the changes in the magnetic ordering of Fe intercalated NiPS3 and extending the investigation of Li intercalated NiPS3.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, January 28, 2022
3:30 PM
Online, Room via Zoom
Note special room.
## "Artificial Intelligence in Spin Physics"
Dustin Keller , University of Virginia - Department of Physics
[Host: Despina Louca]
ABSTRACT:
The landscape of physics research is changing due to the rapid advancement in computing. Traditionally, science is done through observation and experimentation. While there is no indication yet that this trend will change overnight, there is an increasing likelihood that methods in physics are changing in a way that we must prepare for. New technology, new methods, and new instrumentation must be brought to the forefront to take advantage of the rapid evolution of artificial intelligence and its ubiquitous pervasiveness in all aspects of research and life. I briefly review some of the advancement in machine learning and how these developments are changing our field using examples in Spin Physics from the recent past, the present, and near the future.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, January 21, 2022
3:30 PM
Physics Building, Room 204
Note special room.
## "Studying matter and spacetime with gravitational waves"
Professor David Nichols , University of Virginia - Department of Physics
[Host: Professor Despina Louca]
ABSTRACT:
Gravitational waves have been detected from the mergers of nearly ninety binary black holes during the first three observing runs of the Advanced LIGO and Virgo detectors. In this talk, I will discuss these detections and their implications for understanding fundamental properties of matter and spacetime in two contexts. First, I will review a nonlinear effect in general relativity called the gravitational-wave memory. The effect is characterized by a lasting change in the gravitational-wave strain produced by the energy radiated in gravitational waves. I will describe how this effect is related to the infrared properties of gravity, how the memory effect can be measured with LIGO and Virgo, and how new types of memory effects have been recently predicted. Second, I will discuss how dense distributions of dark matter around a black hole can influence the inspiral of a second compact object and thus the gravitational waves emitted from such a binary. With the planned space-based gravitational-wave detector LISA, the distribution of dark matter on these small scales could be mapped precisely. This would provide a new method to study dark matter: with gravitational waves.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, December 3, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Floquet-engineering topological Dirac bands in an optical lattice"
Ian Spielman , NIST and The University of Maryland
[Host: Prof. Dima Pesin]
ABSTRACT:
Over the years my group has performed a number of experiments realizing relativistic physics using cold atoms — described by the 1D Dirac Hamiltonian in some degree of approximation. I will begin by reviewing these results in conjunction with those from the whole of the cold-atom community.
With that backdrop, I describe a spin dependent bipartite Floquet lattice, in which the dispersion relation is linear for all points in the Brillouin zone. The (stroboscopic) Floquet spectrum of our periodically-driven Hamiltonian features perfect spin-momentum locking, and a linear Dirac dispersion. These bands are protected by a Floquet topological invariant which we directly measure by using quantum state tomography.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, November 19, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Analogue gravity in cold atom and condensed matter systems "
Professor Daniel Sheehy , Louisiana State University
[Host: Cass Sackett]
ABSTRACT:
In recent years there has been much interest in the field of "analogue gravity", in which cosmological or astrophysical phenomena like Hawking radiation are mimicked in a laboratory experiment. At LSU, my research group in cold atom/condensed matter theory became interested in this field, motivated by the recent experiment of Eckel and collaborators, [Phys. Rev. X 8, 021021 (2018)] who used a rapidly expanding Bose-Einstein condensate (BEC) to reproduce the inflationary regime of the early universe. I will present our work on the physics of inflation in expanding BEC's, and discuss other setups to detect analogue gravity phenomena like the Unruh effect.
VIDEO:
Join Zoom Meeting:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, November 12, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Physics motivations for future colliders"
Professor Tao Han , University of Pittsburg
[Host: Professor P.Q. Hung]
ABSTRACT:
With the milestone discovery of the Higgs boson at the CERN Large Hadron Collider (LHC), high energy physics has entered a new era. The completion of the “Standard Model” (SM) implies, for the first time ever, that we have a relativistic, quantum-mechanical, self-consistent theoretical framework, conceivably valid up to exponentially high energies, even to the Planck scale. Yet, the SM leaves many unanswered questions both from the theoretical and observational perspectives, including the nature of the electroweak superconductivity and its phase transition, the hierarchy between the particle masses and between the observed scales, the nature of dark matter etc. There are thus compelling reasons to believe that new physics beyond the SM exists. We argue that the collective efforts of future high energy physics programs, in particular the future colliders, hold great promise to uncover the laws of nature to a deeper level.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, November 5, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Atomtronics for Quantum Sensing"
Professor Malcolm Boshier , Los Alamos National Lab
[Host: Prof. Cass Sackett]
ABSTRACT:
Atomtronics is the emerging technology of building circuits where the current is a flow of ultracold atoms propagating as coherent matter waves inside suitable waveguides. In this talk I will describe our atomtronic technology in which the waveguides are created with laser light via the optical dipole potential, and then discuss two quantum sensors based on it. First, we have demonstrated the atomtronic analogue of the dc SQUID and shown that it exhibits the quantum interference that gives the Superconducting Quantum Interference Device its name. In the conventional SQUID this is seen as a periodic variation of critical current with magnetic flux. In the atomtronic SQUID it causes a periodic variation of critical current with rotation, enabling the device to function as a gyro. Second, we are developing an atomtronic version of the Fiber Optic Gyro, in which rotation is measured by the Sagnac effect. In our device a Bose-Einstein condensate is split, reflected, and recombined inside a waveguide that is translated so that the wavepackets travel around a loop and realize a waveguide Sagnac atom interferometer.
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, October 29, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Exploring Gravitational Wave and Dark Matter Physics with the 100-meter-tall MAGIS-100 Atom Interferometer"
Professor Tim Kovachy , Northwestern University
[Host: Prof. Bob Hirosky]
ABSTRACT:
Atom interferometers exploit spatially delocalized quantum states to make a wide variety of highly precise measurements. Recent technological advances have opened a path for atom interferometers to contribute to multiple areas at the forefront of modern physics, including searches for wave-like dark matter, gravitational wave detection, and fundamental quantum science. In this colloquium, I will describe MAGIS-100, a 100-meter-tall atom interferometer being built at Fermilab to pursue these directions. MAGIS-100 will serve as a prototype gravitational wave detector in a new frequency range, between the peak sensitivities of LIGO and LISA, that is promising for pursuing cosmological signals from the early universe and for studying a broad range of astrophysical sources. In addition, MAGIS-100 will search for wave-like dark matter, probe quantum mechanics in a new regime in which massive particles are delocalized over macroscopic scales in distance and time, and act as a testbed for advanced quantum sensing techniques. Finally, I will discuss the potential and motivation for follow-on atomic detectors with even longer baselines.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, October 22, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Quantum Many-Body Physics of Superconducting Qubits"
Professor Leonid Glazman , Yale University
[Host: Dima Pesin]
ABSTRACT:
The ongoing development of superconducting qubits has brought some basic questions of many-body physics to the research forefront, and helped solve several of them. I will address two effects in quantum condensed matter highlighted by the development of a fluxonium qubit. The first one is the so-called cosine-phi problem stemming from the seminal paper of Brian Josephson. It predicted the phase dependence of the dissipative current across the Josephson junction. A fluxonium qubit enabled the observation of the effect, after nearly 50 years of unsuccessful attempts by other techniques. The second one is inelastic scattering ("splitting") of a microwave photon by quantum fluctuations of phase across a Josephson junction. This effect is the elementary mechanism driving the Schmid transition, which predicts a collapse of the Josephson current in a junction influenced by a dissipative environment.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, October 8, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## " Muon magnetic anomaly: probing the innermost nature of vacuum"
Professor Dinko Pocanic , University of Virginia - Department of Physics
[Host: Prof. Gordon Cates]
ABSTRACT:
Marking the semi-anniversary of the Fermilab Muon g−2 experiment (E989) Run 1 result, this colloquium will review the current status of the muon magnetic anomaly, the inferred evidence of possible particles outside the Standard Model (SM), and future prospects in this active research field.
The intrinsic magnetic field of a simple object, such as a compass needle, is expressed in terms of its magnetic moment. The magnetic moment of a point particle, such as the electron, is predicted by relativistic quantum mechanics to be g = 2, in convenient dimensionless units. For the electron, this prediction fails at the part-per-thousand level; the resulting magnetic anomaly, ae = (g − 2)/2, is due to the electron’s couplings to virtual particles excited in the vacuum.
Muon, the electron’s 200 times heavier cousin, experiences far stronger couplings to massive virtual particles, including possible non-SM exotics. The SM provides a prediction for the muon magnetic anomaly aµ with sub-ppm precision. Hence, a comparably precise measurement of aµ offers a uniquely sensitive test for the presence of non-SM particles in nature. For almost 20 years a tantalizing discrepancy of ∼ 3 – 4σ has persisted between the measurements of aµ, and the SM calculations. The Fermilab Muon g−2 Run 1 result brings a much awaited update to this test, with much more to come.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, October 1, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Rotation sensing with an atom-interferometer gyroscope"
Professor Cass Sackett , University of Virginia - Department of Physics
[Host: Gordon Cates]
ABSTRACT:
Precision rotation sensing is useful for navigation, geophysics, and tests of fundamental physics. Atom interferometers provide, by some measures, the most sensitive method for rotation sensing achieved to date. However, the best performance requires freely falling atoms in a large experimental apparatus. Many applications, such as navigating a vehicle, will benefit from a more compact geometry. One method to achieve this is by using trapped atoms that are suspended against gravity. We have implemented such an interferometer and used it to measure a rotation rate comparable to that of the Earth. The most recent iteration of the interferometer has demonstrated improvements by a factor of ten in rotation sensitivity and trap stability. A second new apparatus reduces the scale of the vacuum chamber and optical system to roughly the size of a microwave oven.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, September 17, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Inside the Proton: science fact, speculation, and the stuff of science fiction"
Professor Gordon Cates , University of Virginia - Department of Physics
[Host: Kent Paschke]
ABSTRACT:
Whereas the structure of the atom has been understood for many years, the internal structure of the proton (and neutron) is the subject of active research. Understanding the nucleon is difficult because its structure is governed by quantum chromodynamics, or QCD, which has not been solved exactly in the non-perturbative or low-energy regime. The proton's structure is intriguing, however, for many reasons. For example, we think of the proton as being made of three quarks, but the mass of those quarks only accounts for about 1% of the proton's mass. The remaining 99%, and hence 99% of the known mass in the universe, is due to exotic effects associated with the QCD vacuum. While a great deal of work remains to be done, the way in which we visualize the proton has changed dramatically since the discovery of quarks. Just as the structure of the atom was unveiled early in the 20th century, the structure of the proton is being unveiled in the first decades of the 21st century. Another intriguing aspect of the proton arises from the fact that QCD is the only theory in nature that has essentially no free parameters. String theory, that attempts to unify our understanding of gravity and the quantum world, grew out of early efforts to understand the strong interaction. Since string theory deals with the topology of space and time, it is tempting to believe that a deep understanding of the proton may one day provide a window into even more fundamental questions. The colloquium will cover some recent developments in our understanding of the nucleon, as well as providing a glimpse of where this rich area of research is heading in upcoming years.
VIDEO:
Attend virtually via Zoom:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, September 10, 2021
3:30 PM
Physics Building, Room 204
Note special room.
## "Stirring by staring: Induced non-equilibrium states by measurements in quantum systems"
Israel Klich , University of Virginia - Department of Physics
[Host: Despina Louca]
ABSTRACT:
In quantum mechanics, the role of an observer is fundamentally different from that of a classical observer. The quantum mechanical observer necessarily plays an active role in the dynamics of the system that it is observing. This apparent difficulty may be turned into a tool to drive an initially trivial system into a complicated quantum many-body state simply by observing it. I will present two remarkable examples of states induced by measurement. In the first, we examine the role of a moving density measuring device interacting with a system of fermions, and in particular, show that it would leave behind a wake of purely quantum origin. In the second example, inspired by the recent invention of topological Floquet insulators, we will see how a suitably chosen set of density measurements, repeated periodically, will induce robust chiral edge motion on a lattice of free fermions. Our examples show how quantum mechanical observation can be added as a versatile tool to the arsenal of quantum engineering in condensed matter systems.
VIDEO:
Click on the following link to attend the online colloquium:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 12, 2021
2:00 PM
Online, Room via Zoom
Note special time.
Note special room.
## "Complexity of magnetic patterns and self-induced spin-glass state"
Prof. Mikhail Katsnelson , Radboud University of Nijmegen, The Netherlands,
[Host: Dima Pesin]
ABSTRACT:
The origin of complexity remains one of the most important and, at the same time, the most controversial scientific problems. Earlier attempts were based on theory of dynamical systems but did not lead to a satisfactory solution of the problem. I believe that a deeper understanding is possible based on a recent development of statistical physics, combining it with relevant ideas from evolutionary biology and machine learning.
Using patterns in magnetic materials as the main example, I discuss some general problems such as (a) a formal definition of pattern complexity [1]; (b) self-induced spin glassiness due to competing interactions as a way to interpret chaotic patterns [2]; (c) multi-well states intermediate between glasses and ordinary ordered states and their relevance for the problem of long-term memory in complicated systems [3]; and (d) complexity of frustrated quantum spin systems [4]. I will also review a very recent experimental observation of self-induced spin-glass state in elemental neodymium [5].
[1] A. A. Bagrov, I. A. Iakovlev, A. A. Iliasov, M. I. Katsnelson, and V. V. Mazurenko, Multi-scale structural complexity of natural patterns, PNAS 117, 30241 (2020).
[2] A. Principi and M. I. Katsnelson, Spin glasses in ferromagnetic thin films, Phys. Rev. B 93, 054410 (2016); Self-induced glassiness and pattern formation in spin systems due to long-range interactions, Phys. Rev. Lett. 117, 137201 (2016).
[3] A. Kolmus, M. I. Katsnelson, A. A. Khajetoorians, and H. J. Kappen, Atom-by-atom construction of attractors in a tunable finite size spin array, New J. Phys. 22, 023038 (2020).
[4] T. Westerhout, N. Astrakhantsev, K. S. Tikhonov, M. I. Katsnelson, and A. A. Bagrov, Generalization properties of neural network approximations to frustrated magnet ground states, Nature Commun. 11, 1 (2020).
[5] U. Kamber et al, Self-induced spin glass state in elemental and crystalline neodymium, Science 368, eaay6757 (2020).
VIDEO:
Click on the following link to attend the online colloquium:
##### https://web.phys.virginia.edu/Private/Covid-19/colloquium.asp
Friday, February 5, 2021
3:30 PM
Physics Building, Room via Zoom
Note special room.
## "A fermionic triangular-lattice quantum gas microscope "
Peter Schauss , University of Virginia - Physics Dept.
[Host: Bob Jones]
ABSTRACT:
Geometrically frustrated many-body systems show many interesting emerging phenomena, ranging from kinetic frustration to exotic spin ordering and chiral spin liquid phases. Ultracold atom systems offer great tunability and flexibility to realize such systems in a wide parameter range of interactions, densities, and spin-imbalance.
In this talk, I will present our recent results on site-resolved imaging of ultracold fermionic lithium atoms on a triangular optical lattice.
Degenerate Fermi gases with about one tenth of the Fermi temperature have been realized within a crossed dipole trap and successfully loaded into a two-dimensional triangular optical lattice. To characterize this lattice, we observed Kapitza-Dirac scattering using a molecular Bose-Einstein condensate. Collecting the emitted photons during Raman sideband cooling in the triangular lattice using a high-resolution microscope objective enabled the high-fidelity imaging of individual fermionic atoms in the lattice with single-site resolution.
The next step will be the realization of a triangular lattice Hubbard model by implementing an additional optical lattice to increase interactions.
This novel experimental platform will allow us to study spin and density correlations in the triangular Hubbard model to explore signatures of frustration and spin-hole bound states and may lead to a direct observation of non-vanishing chiral correlations.
VIDEO:
To add a speaker, send an email to gdc4k@Virginia.EDU Include the seminar type (e.g. Colloquia), date, name of the speaker, title of talk, and an abstract (if available). [Please send a copy of the email to phys-speakers@Virginia.EDU.] | 2022-05-23 02:32:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3970589339733124, "perplexity": 2293.5005171846674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00157.warc.gz"} |
http://www.trodat-5211-professional.fr/jis3114-steel/5cf0c18284716.html | ### sudan vertical cylindrical tank building volume
Tank Volume CalculatorA = π r 2 where r is the radius which is equal to d/2. Therefore: V (tank) = πr2h. The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius, r, and diameter, d, but height is now the fill height or f. Therefore: V (fill) = πr2f.
Tel: 0086-371-861#518#27
Mail: [email protected]
### (PDF) HOW TO CALCULATE THE VOLUMES OF PARTIALLY
To calculate the fluid volume in a vertical or horizontal tank can be complicated depending of the fluid height and the caps. This article makes a synthesis of the calculations for the volumes of sudan vertical cylindrical tank building volume* Sloped Bottom Tank - arachnoid sudan vertical cylindrical tank building volumeThe easy part the cylindrical section above the slope, which has a volume of: (1) $\displaystyle v = \pi r^2 h$ v = volume; r = tank radius; h = cylindrical section height; More difficult the tank's sloped section, which lies between the tank's bottom and the top of the slope where the tank
### * Sloped Bottom Tank - arachnoid sudan vertical cylindrical tank building volume
The easy part the cylindrical section above the slope, which has a volume of: (1) $\displaystyle v = \pi r^2 h$ v = volume; r = tank radius; h = cylindrical section height; More difficult the tank's sloped section, which lies between the tank's bottom and the top of the slope where the tank * TankCalcR increases the tank's size in the Y (vertical) and Z (depth) dimensions. A tank with a zero R entry will not have a volume, regardless of the other entries. The left and right end caps are fully (and separately) described: with a dimensional variable r that describes the reach of the end cap along the X 01-02 Water flowing into cylindrical tank | MATHalinoProblem 02 Water flows into a vertical cylindrical tank at 12 ft 3 /min, the surface rises 6 in/min. Find the radius of the tank.. Solution 02
### Vertical Cylindrical Shaped Tank Contents Calculator
Vertical Cylindrical Shaped Tank Contents Calculator sudan vertical cylindrical tank building volume Volume of Liquid in Tank. The volume of the liquid contents held in the tank are displayed here in your preferred volumetric units. % Full Tank. This shows the percentage of tank capacity that is filled. Primary Sidebar.Calculating Tank VolumeCalculating Tank Volume Saving time, increasing accuracy By Dan Jones, Ph.D., P.E. alculating fluid volume in a horizontal or vertical cylindrical or elliptical tank can be complicated, depending on fluid height and the shape of the heads (ends) of a horizontal tank or the bottom of a vertical tank.Cylindrical Tank - an overview | ScienceDirect TopicsMeherwan P. Boyce, in Gas Turbine Engineering Handbook (Fourth Edition), 2012. Pressure Tanks. Vertical cylindrical tanks constructed with domed or coned roofs, which operate at pressures above 15 psia (1 Bar) but are still relatively close to atmospheric pressure, can be built according to API Standard 650. The pressure force acting against the roof is transmitted to the shell, which may have sudan vertical cylindrical tank building volume
### Tank Volume Calculator - Vertical Cylindrical Tanks - Metric
Vertical cylindrical tank volume calculator diagram Fraction Precision Set 1/8 1/16 1/32 1/64 Decimal All Inch inputs and dimensions are actual physical finished sizes (unless otherwise noted)Tank Volume Calculator - Inch CalculatorFor example, lets find the volume of a cylinder tank that is 36 in diameter and 72 long. radius = 36 ÷ 2 radius = 18 tank volume = × 18 2 × 72 tank volume = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches.Tank Storage | Glossary | OiltankingOil is usually stored in vertical cylindrical tanks made of steel. The appropriate type of construction and materials for storing these products is defined by DIN standards. Beyond this, the respective federal state building regulations based on the Construction Products Act, and all applicable fire protection regulations must also be observed sudan vertical cylindrical tank building volume
### Volume of a cylinder with calculator - Math Open Reference
Volume of a partially filled cylinder. One practical application is where you have horizontal cylindrical tank partly filled with liquid. Using the formula above you can find the volume of the cylinder which gives it's maximum capacity, but you often need to know the volume of liquid in the tank AC Physics Applications: Work, Force, and PressureConsider a vertical cylindrical tank of radius 2 meters and depth 6 meters. Suppose the tank is filled with 4 meters of water of mass density 1000 kg/m$$^3\text{,}$$ and the top 1 meter of water is pumped over the top of the tank. Consider a hemispherical tank with a radius of 10 feet.Behaviour of vertical cylindrical tank with local wall sudan vertical cylindrical tank building volumeThis phase of using of structures is associated with complex investigation and numerical analysis of thin-walled structures. In this paper the load bearing capacity of the steel wall of the existing over-ground vertical cylindrical tank in volume of 5,000 m3 with a single defect and with a few contiguous local defects of the shape is analyzed.
### Blank Worksheet to Calculate Secondary Containment Volume sudan vertical cylindrical tank building volume
This worksheet can be used to calculate the secondary containment volume of a rectangular or square dike or berm for a single vertical cylindrical tank. You may need a PDF reader to view some of the files on this page. See EPAs About PDF page to learn more. Blank Worksheet (PDF) (4 pp, 529 K)Compute Fluid Volumes in Vertical TanksDec 18, 2003 · The equations for fluid volumes in vertical cylindrical tanks with concave bottoms are shown on p. 30. The volume of a flat-bottom vertical cylindrical tank may be found using any of these equations and setting a = 0. Radian angular measure must be used for trigonometric functions.DESIGN RECOMMENDATION FOR STORAGE TANKS A4 Above-ground, Vertical, Cylindrical Storage Tanks ----- 154 Appendix B Assessment of Seismic Designs for Under-ground Storage Tanks ----- 160 . Chapter2 1 1. General 1.1 Scope This Design Recommendation is applied to the structural design of water storage sudan vertical cylindrical tank building volume
### Effect of settlement of foundations on the failure risk of sudan vertical cylindrical tank building volume
Sep 30, 2019 · Hotaa E., Ignatowicz R. (2018). Failure risk of bottoms of cylindrical steel tanks supported on ring foundations. Building Materials 4/2018, p. 88-90, (in polish) [5] Wu T.Y., Liu G.R. (2000). Comparison of design methods for a tank-bottom annular plate and concrete ringwall. International Journal of Pressure Vessels and Piping 77, p. 511-517 sudan vertical cylindrical tank building volumeFireguard - Highland TankFireguard ® tanks are thermally protected, double-wall steel storage tanks and are the best alternative for safe storage of motor fuels and other flammable and combustible liquids aboveground. They are used where a fire-protected tank is needed because of setback limitations or regulatory requirements. Each tank is constructed with a minimum 3 interstice around the inner tank.HOW TO CALCULATE THE VOLUMES OF PARTIALLY FULL 3.2.Hemispherical Head in a Vertical Cylindrical Tank. Fig 5. Hemispherical head in vertical tank [10] Partial volume of fluid in the lower head, L = 6 3 2 2 2 3 (12) Partial volume at the upper head, L = 12 3 2 4 2 3 (13) The depth (z) of this type of head is equivalent to 0.5 of the
### Highland Tank - Gauge Charts
Please note that these charts are theoretical and are intended as a guide for estimating tank/vessel volumes. Required Choose Tank Style Horizontal Cylindrical Horizontal Cylindrical (Elliptical Heads) Horizontal Rectangular Vertical Flat Bottom Vertical Dished Bottom Vertical Coned BottomHighland Tank - Gauge ChartsPlease note that these charts are theoretical and are intended as a guide for estimating tank/vessel volumes. Required Choose Tank Style Horizontal Cylindrical Horizontal Cylindrical (Elliptical Heads) Horizontal Rectangular Vertical Flat Bottom Vertical Dished Bottom Vertical Coned BottomHorizontal Cylindrical Tank Volume Calculator - MetricTank Volume: Vertical Cylindrical Tank Volume: Rectangular Tank Volume: Directory: Inch: Inch: Inch: Inch: Horizontal Cylindrical Tank Volume Calculator, Dip Chart and Fill Times - Metric. Hemispherical Ends Ellipsoidal Ends - adj Ellipsoidal Ends - 2:1 Flat Ends: Straight Length
### How to calculate dish volume of vertical and horizontal tank?
Aug 21, 2007 · The tank can be split into three portions: (1) The middle bit, which is a cylinder, the volume of which can be calculated by . V(cyl)= pi * R^2 * h . where 'R' is the radius (half of the diameter), and 'h' is the height (or length) of the cylinder (2 and 3) The two end portions, each of which is a segment , or zone, of a sphere (more or less).Online Conversion - Object VolumesBarrel Calculate the volume or the radius of a barrel.; Cone Calculate the volume, height, top or bottom radius of cone.; Cube Calculate the volume or the side length of a perfect cube.; Cylinder Calculate the volume, length, radius, or diameter of a cylinder or cylindrical shaped tank.; Cylinder, Partially Filled Calculate the volume, length, radius, or diameter of a partially filled laying sudan vertical cylindrical tank building volumePrestressed Storage Tanks PresentationTank Diameter = Tank Dimensions Usuable Tank Capacity (1.5% Slope d Floor)= Side W ater Depth = We lde d Stee l Ta nk Specific Assumed Freeboard = 7 FT. Total Stee l Ta nk Volume = 1.28 MG Assumed Steel Tank Cost = \$ DOLLA R S Prestressed Concre te Ta nk Specific Assumed Freeboard = 3 FT. Total Prestresse d Concrete Ta nk Volume = 1.12 MG
### Spill Prevention Control and Countermeasure (SPCC) Plan
V /V = ÷ = c is the secondary containment volume calculated in Step 1. d / e is the tank volume calculated in Step 2. c (ft 3) d or e (ft ) f % = x 100 = f g If percentage, g, is 100% or greater, the capacity of the secondary containment is sufficient to contain the shell capacity of the tank. If rain can collect in the dike or berm, continue to step 4.Storage Tank Calculator - Vertical & Horizontal Tank sudan vertical cylindrical tank building volumeFor vertical tanks, only the cylinder volume is used in calculations. The Top End Type and the Bottom End Slope of the tank are not factored into the estimated volume. In order to calculate the volume of the storage tank then, all we need is to calculate the main cylinder volume.TANK VOLUME CALCULATOR [How to Calculate Tank Jun 20, 2019 · Cylindrical Oil Tank. Lets say that I have a cylindrical oil tank which measures 7 yards in length and has a round face 5 feet in diameter (the distance across the circular end passing through the central point). I want to calculate the tank volume in cubic feet and work out how much oil will fit in the cylinder in US gallons.
### Tank Volume Calculator - Horizontal Elliptical - Metric
Tank Volume: Vertical Cylindrical Tank Volume: Rectangular Tank Volume: Directory: Inch: Inch: Inch: Inch: Horizontal Elliptical Tank Volume, Dip Chart and Fill Times - Metric. Hemispherical Ends Ellipsoidal Ends - adj Flat Ends: Side Ellipse Width Centre Width Straight Length sudan vertical cylindrical tank building volumeTank Volume Calculator - Inch CalculatorFor example, lets find the volume of a cylinder tank that is 36 in diameter and 72 long. radius = 36 ÷ 2 radius = 18 tank volume = × 18 2 × 72 tank volume = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches.Tank Volume Calculator - Vertical Cylindrical TanksVertical cylindrical tank volume calculator diagram Fraction Precision Set 1/8 1/16 1/32 1/64 Decimal All Inch inputs and dimensions are actual physical finished sizes (unless otherwise noted)
### Tank Volume Calculator for Ten Various Tank Shapes
Apr 17, 2019 · Cylindrical tank volume formula. To calculate the total volume of a cylindrical tank, all we need to know is the cylinder diameter (or radius) and the cylinder height (which may be called length, if it's lying horizontally).. Vertical cylinder tank; The total volume of a cylindrical tank may be found with the standard formula for volume - the area of the base multiplied by height.Tank Volume CalculatorA = r 2 where r is the radius which is equal to d/2. Therefore: V (tank) = r2h. The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius, r, and diameter, d, but height is now the fill height or f. Therefore: V (fill) = r2f.Vessel Volume & Level CalculationVessel Volume & Level Calculation Estimates Volume filled in a Vessel with Ellipsoidal (2:1 Elliptical), Spherical (Hemispherical), Torispherical (ASME F&D, Standard F&D, 80:10 F&D) and Flat heads.
### Volume of a cylinder with calculator - Math Open Reference
Volume of a partially filled cylinder. One practical application is where you have horizontal cylindrical tank partly filled with liquid. Using the formula above you can find the volume of the cylinder which gives it's maximum capacity, but you often need to know the volume of liquid in the tank What is the optimum volume of water tank digester?Iam working in a project that testing the local plastic water tank to be used for AD in household level. I wonder if there is an optimum volume of this from experience worldwide. The tank volumes sudan vertical cylindrical tank building volumeWhat is the optimum volume of water tank digester?Iam working in a project that testing the local plastic water tank to be used for AD in household level. I wonder if there is an optimum volume of this from experience worldwide. The tank volumes sudan vertical cylindrical tank building volume
• ### herbal stainless steel extractor extraction tank
s Steel Cbd Essential Plant Oil ...Widely used stainless steel CBD essential plant oil extraction tank. Product Description. Application: The device is used to extract the herb, flower, seed, fruit, fish etc. It can be used for food and chemical industries in normal pressure, micro-pressure, water f
• ### plastic fertilizer tank for agriculture line
n of liquids, water, and fertilizers. Our elliptical saddle tanks can be mounted to tractors, ATV's, trucks, and other ag equipment. Our leg tanks are prefect for pasture sprayer trailers, water trailers, and nurse trailers. Vertical Agricultural Tanks | Snyder IndustriesSnyder's vertical tanks are
• ### industrial fire tube oil fired swimming pool heater
h Efficiency and output. The most comprehensive oil fired pool and spa heater on the market. Oil Heaters | Pool HeatingOil-fired pool-heating is very similar to gas-fired heating ; oil is burned directly in a furnace to produce the heat which is then transferred to the pool either via a heat-exchang
• ### finland double tank heat pump circulation system technology
this control panel, the operator has a clear overview of the complete heating system and its status. Application of a multi-function solar-heat pump system in ...Feb 05, 2018 · For the heat pump circulation system, the rated compressor power was 735 W. High and low voltage switches we
• ### food grade single dual three layer stainless storage tank
Double layer tank: Three layer tank: material: SS304 or SS316L: Volume: 50L-30T (Customized) Pressure: Vacuum-1Mpa: Tank Type: Vertical type or Horizontal type: Structure : one layer: Inner layer ... stainless storage tank suppliers,exporters on 21food.comstainless storage tank products directory a
• ### all folding frame tanks husky portable containment
adder Tank containment, hazardous spill clean up and a variety of other containment needs. Our folding frame tanks are easy to set-up, easy to tear down, and easy to store. Learn more. All Portable Water Tanks | Husky Portable ContainmentFolding Frame Tanks Husky® has a large selection of Porta
### Message information
Please describe your brand size and data volume in detail to facilitate accurate quotation | 2021-03-07 12:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.702281653881073, "perplexity": 3550.4510375258706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00194.warc.gz"} |
https://www.snapxam.com/solver?p=%5Cleft%28%5Cfrac%7B2%7D%7B3%7D%2B%5Cfrac%7B2%7D%7B5%7D%5Cright%29%5Cleft%28%5Cfrac%7B3%7D%7B6%7D%2B%5Cfrac%7B3%7D%7B1%7D%5Cright%29 | Try NerdPal: our new app on iOS and Android !
# Step-by-step Solution
Go!
Go!
1
2
3
4
5
6
7
8
9
0
a
b
c
d
f
g
m
n
u
v
w
x
y
z
.
(◻)
+
-
×
◻/◻
/
÷
2
e
π
ln
log
log
lim
d/dx
Dx
|◻|
θ
=
>
<
>=
<=
sin
cos
tan
cot
sec
csc
asin
acos
atan
acot
asec
acsc
sinh
cosh
tanh
coth
sech
csch
asinh
acosh
atanh
acoth
asech
acsch
$\frac{56}{15}$$\,\,\left(\approx 3.7333333333333334\right) Got a different answer? Try our new Answer Assistant! ## Step-by-step Solution Problem to solve: \left(\frac{2}{3}+\frac{2}{5}\right)\left(\frac{3}{6}+\frac{3}{1}\right) 1 Divide 2 by 3 \left(\frac{2}{3}+\frac{2}{5}\right)\left(\frac{3}{6}+\frac{3}{1}\right) Learn how to solve multiplication of numbers problems step by step online. \left(\frac{2}{3}+\frac{2}{5}\right)\left(\frac{3}{6}+\frac{3}{1}\right) Learn how to solve multiplication of numbers problems step by step online. Multiply (2/3+2/5)(3/6+3/1). Divide 2 by 3. Divide 2 by 5. Any expression divided by one (1) is equal to that same expression. ## Final Answer \frac{56}{15}$$\,\,\left(\approx 3.7333333333333334\right)$
SnapXam A2
### beta Got another answer? Verify it!
Go!
1
2
3
4
5
6
7
8
9
0
a
b
c
d
f
g
m
n
u
v
w
x
y
z
.
(◻)
+
-
×
◻/◻
/
÷
2
e
π
ln
log
log
lim
d/dx
Dx
|◻|
θ
=
>
<
>=
<=
sin
cos
tan
cot
sec
csc
asin
acos
atan
acot
asec
acsc
sinh
cosh
tanh
coth
sech
csch
asinh
acosh
atanh
acoth
asech
acsch
$\left(\frac{2}{3}+\frac{2}{5}\right)\left(\frac{3}{6}+\frac{3}{1}\right)$ | 2022-01-27 14:45:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3423120081424713, "perplexity": 4667.346115582642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00515.warc.gz"} |
https://www.imlearningmath.com/answer-which-of-these-greek-philosophers-was-born-first/ | # Answer: Which of these Greek philosophers was born first?
Question: Which of these Greek philosophers was born first? | 2021-05-17 12:35:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381182789802551, "perplexity": 5405.427510397383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00617.warc.gz"} |
https://cforall.uwaterloo.ca/trac/changeset/8d5b9cf388030c46b0f033a5e263141fc124d268 | # Changeset 8d5b9cf
Ignore:
Timestamp:
Nov 28, 2017, 4:15:13 PM (5 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, enum, forall-pointer-decay, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, pthread-emulation, qualifiedEnum, resolv-new, with_gc
Children:
8a62d04, 9d48a17
Parents:
4e7a4e6 (diff), 6c2ba38 (diff)
Note: this is a merge changeset, the changes displayed below correspond to the merge itself.
Use the (diff) links above to see all the changes relative to each parent.
Message:
Merge branch 'master' of plg.uwaterloo.ca:/u/cforall/software/cfa/cfa-cc
Location:
doc/proposals/concurrency
Files:
1 added
1 deleted
17 edited
Unmodified
Added
Removed
• ## doc/proposals/concurrency/.gitignore
r4e7a4e6 build/*.ind build/*.ist build/*.lof build/*.log build/*.lol build/*.lot build/*.out build/*.ps
• ## doc/proposals/concurrency/Makefile
r4e7a4e6 style/cfa-format \ annex/glossary \ text/frontpgs \ text/intro \ text/basics \
• ## doc/proposals/concurrency/annex/glossary.tex
r4e7a4e6 {name={callsite-locking}} { Locking done by the calling routine. With this technique, a routine calling a monitor routine will aquire the monitor \emph{before} making the call to the actuall routine. Locking done by the calling routine. With this technique, a routine calling a monitor routine aquires the monitor \emph{before} making the call to the actuall routine. } {name={entry-point-locking}} { Locking done by the called routine. With this technique, a monitor routine called by another routine will aquire the monitor \emph{after} entering the routine body but prior to any other code. Locking done by the called routine. With this technique, a monitor routine called by another routine aquires the monitor \emph{after} entering the routine body but prior to any other code. } {name={multiple-acquisition}} { Any locking technique which allow any single thread to acquire a lock multiple times. Any locking technique that allows a single thread to acquire the same lock multiple times. } {name={user-level thread}} { Threads created and managed inside user-space. Each thread has its own stack and its own thread of execution. User-level threads are insisible to the underlying operating system. Threads created and managed inside user-space. Each thread has its own stack and its own thread of execution. User-level threads are invisible to the underlying operating system. \textit{Synonyms : User threads, Lightweight threads, Green threads, Virtual threads, Tasks.} {name={fiber}} { Fibers are non-preemptive user-level threads. They share most of the caracteristics of user-level threads except that they cannot be preempted by an other fiber. Fibers are non-preemptive user-level threads. They share most of the caracteristics of user-level threads except that they cannot be preempted by another fiber. \textit{Synonyms : Tasks.} {name={job}} { Unit of work, often send to a thread pool or worker pool to be executed. Has neither its own stack or its own thread of execution. Unit of work, often sent to a thread pool or worker pool to be executed. Has neither its own stack nor its own thread of execution. \textit{Synonyms : Tasks.} {name={cluster}} { TBD... \textit{Synonyms : None.} } \longnewglossaryentry{cfacpu} {name={processor}} { TBD... A group of \gls{kthread} executed in isolation. \textit{Synonyms : None.} {name={thread}} { TBD... User level threads that are the default in \CFA. Generally declared using the \code{thread} keyword. \textit{Synonyms : None.} {name={preemption}} { TBD... Involuntary context switch imposed on threads at a specified rate. \textit{Synonyms : None.}
• ## doc/proposals/concurrency/annex/local.bib
r4e7a4e6 keywords = {Intel, TBB}, title = {Intel Thread Building Blocks}, note = "\url{https://www.threadingbuildingblocks.org/}" } title = {TwoHardThings}, author = {Martin Fowler}, address = {https://martinfowler.com/bliki/TwoHardThings.html}, howpublished= "\url{https://martinfowler.com/bliki/TwoHardThings.html}", year = 2009 } } @misc{affinityLinux, @book{Herlihy93, title={Transactional memory: Architectural support for lock-free data structures}, author={Herlihy, Maurice and Moss, J Eliot B}, volume={21}, number={2}, year={1993}, publisher={ACM} } @manual{affinityLinux, title = "{Linux man page - sched\_setaffinity(2)}" } @misc{affinityWindows, @manual{affinityWindows, title = "{Windows (vs.85) - SetThreadAffinityMask function}" } @misc{affinityFreebsd, @manual{switchToWindows, title = "{Windows (vs.85) - SwitchToFiber function}" } @manual{affinityFreebsd, title = "{FreeBSD General Commands Manual - CPUSET(1)}" } @misc{affinityNetbsd, @manual{affinityNetbsd, title = "{NetBSD Library Functions Manual - AFFINITY(3)}" } @misc{affinityMacosx, @manual{affinityMacosx, title = "{Affinity API Release Notes for OS X v10.5}" } @misc{NodeJs, title = "{Node.js}", howpublished= "\url{https://nodejs.org/en/}", } @misc{SpringMVC, title = "{Spring Web MVC}", howpublished= "\url{https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html}", } @misc{Django, title = "{Django}", howpublished= "\url{https://www.djangoproject.com/}", }
• ## doc/proposals/concurrency/figures/ext_monitor.fig
r4e7a4e6 5250 3150 5250 2400 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 3150 3150 3750 3150 3750 2850 5850 2850 5850 1650 3150 3150 3750 3150 3750 2850 5700 2850 5700 1650 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 5850 2850 6150 3000 5700 2850 6150 3000 2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5 5100 1800 5400 1800 5400 2400 5100 2400 5100 1800 4 1 -1 0 0 0 12 0.0000 2 135 735 5100 3975 variables\001 4 0 0 50 -1 0 11 0.0000 2 165 855 4275 3150 Acceptables\001 4 0 0 50 -1 0 11 0.0000 2 120 165 5775 2700 W\001 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 2400 X\001 4 0 0 50 -1 0 11 0.0000 2 120 105 5775 2100 Z\001 4 0 0 50 -1 0 11 0.0000 2 120 135 5775 1800 Y\001
• ## doc/proposals/concurrency/figures/int_monitor.fig
r4e7a4e6 1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 1200 2850 125 125 1200 2850 1082 2809 1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 900 2850 125 125 900 2850 782 2809 1 3 0 1 -1 -1 0 0 4 0.000 1 0.0000 6225 4650 105 105 6225 4650 6330 4755 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3150 4650 80 80 3150 4650 3230 4730 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 4575 4650 105 105 4575 4650 4680 4755 1 3 0 1 -1 -1 0 0 -1 0.000 1 0.0000 6000 4650 105 105 6000 4650 6105 4755 1 3 0 1 -1 -1 0 0 20 0.000 1 0.0000 3900 4650 80 80 3900 4650 3980 4730 2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2 3900 1950 4200 2100 4 1 -1 0 0 0 12 0.0000 2 165 420 4050 1050 entry\001 4 0 0 50 -1 0 11 0.0000 2 120 705 600 2325 Condition\001 4 0 -1 0 0 0 12 0.0000 2 180 930 6450 4725 routine ptrs\001 4 0 -1 0 0 0 12 0.0000 2 135 1050 3300 4725 active thread\001 4 0 -1 0 0 0 12 0.0000 2 135 1215 4725 4725 blocked thread\001 4 0 -1 0 0 0 12 0.0000 2 135 1215 6150 4725 blocked thread\001 4 0 -1 0 0 0 12 0.0000 2 135 1050 4050 4725 active thread\001
• ## doc/proposals/concurrency/style/cfa-format.tex
r4e7a4e6 language = C, style=defaultStyle, captionpos=b, #1 } language = CFA, style=cfaStyle, captionpos=b, #1 } language = pseudo, style=pseudoStyle, captionpos=b, #1 } language = c++, style=defaultStyle, captionpos=b, #1 } language = c++, style=defaultStyle, captionpos=b, #1 } language = java, style=defaultStyle, captionpos=b, #1 } language = scala, style=defaultStyle, captionpos=b, #1 } language = sml, style=defaultStyle, captionpos=b, #1 } language = D, style=defaultStyle, captionpos=b, #1 } language = rust, style=defaultStyle, captionpos=b, #1 } language = Golang, style=defaultStyle, captionpos=b, #1 }
• ## doc/proposals/concurrency/text/basics.tex
r4e7a4e6 Execution with a single thread and multiple stacks where the thread is self-scheduling deterministically across the stacks is called coroutining. Execution with a single and multiple stacks but where the thread is scheduled by an oracle (non-deterministic from the thread perspective) across the stacks is called concurrency. Therefore, a minimal concurrency system can be achieved by creating coroutines, which instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stackfull coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a non-preemptive scheduling). The oracle/scheduler can either be a stackless or stackfull entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Mutual-exclusion and synchronisation are ways of limiting non-determinism in a concurrent system. Now it is important to understand that uncertainty is desireable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows. Therefore, a minimal concurrency system can be achieved by creating coroutines, which instead of context switching among each other, always ask an oracle where to context switch next. While coroutines can execute on the caller's stack-frame, stack-full coroutines allow full generality and are sufficient as the basis for concurrency. The aforementioned oracle is a scheduler and the whole system now follows a cooperative threading-model (a.k.a non-preemptive scheduling). The oracle/scheduler can either be a stack-less or stack-full entity and correspondingly require one or two context switches to run a different coroutine. In any case, a subset of concurrency related challenges start to appear. For the complete set of concurrency challenges to occur, the only feature missing is preemption. A scheduler introduces order of execution uncertainty, while preemption introduces uncertainty about where context-switches occur. Mutual-exclusion and synchronization are ways of limiting non-determinism in a concurrent system. Now it is important to understand that uncertainty is desirable; uncertainty can be used by runtime systems to significantly increase performance and is often the basis of giving a user the illusion that tasks are running in parallel. Optimal performance in concurrent applications is often obtained by having as much non-determinism as correctness allows. \section{\protect\CFA 's Thread Building Blocks} One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay. One of the important features that is missing in C is threading. On modern architectures, a lack of threading is unacceptable~\cite{Sutter05, Sutter05b}, and therefore modern programming languages must have the proper tools to allow users to write performant concurrent programs to take advantage of parallelism. As an extension of C, \CFA needs to express these concepts in a way that is as natural as possible to programmers familiar with imperative languages. And being a system-level language means programmers expect to choose precisely which features they need and which cost they are willing to pay. \section{Coroutines: A stepping stone}\label{coroutine} While the main focus of this proposal is concurrency and parallelism, it is important to address coroutines, which are actually a significant building block of a concurrency system. Coroutines need to deal with context-switches and other context-management operations. Therefore, this proposal includes coroutines both as an intermediate step for the implementation of threads, and a first class feature of \CFA. Furthermore, many design challenges of threads are at least partially present in designing coroutines, which makes the design effort that much more relevant. The core \acrshort{api} of coroutines revolve around two features: independent call stacks and \code{suspend}/\code{resume}. \begin{figure} \begin{table} \begin{center} \begin{tabular}{c @{\hskip 0.025in}|@{\hskip 0.025in} c @{\hskip 0.025in}|@{\hskip 0.025in} c} void fibonacci_array( int n, int * array int* array ) { int f1 = 0; int f2 = 1; int fibonacci_state( Iterator_t * it Iterator_t* it ) { int f; \end{tabular} \end{center} \caption{Different implementations of a fibonacci sequence generator in C.} \caption{Different implementations of a Fibonacci sequence generator in C.}, \label{lst:fibonacci-c} \end{figure} A good example of a problem made easier with coroutines is generators, like the fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Figure \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed. Figure \ref{lst:fibonacci-cfa} is an example of a solution to the fibonnaci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the imlpementation is very similar to the \code{fibonacci_func} example. \end{table} A good example of a problem made easier with coroutines is generators, like the Fibonacci sequence. This problem comes with the challenge of decoupling how a sequence is generated and how it is used. Table \ref{lst:fibonacci-c} shows conventional approaches to writing generators in C. All three of these approach suffer from strong coupling. The left and center approaches require that the generator have knowledge of how the sequence is used, while the rightmost approach requires holding internal state between calls on behalf of the generator and makes it much harder to handle corner cases like the Fibonacci seed. Listing \ref{lst:fibonacci-cfa} is an example of a solution to the Fibonacci problem using \CFA coroutines, where the coroutine stack holds sufficient state for the next generation. This solution has the advantage of having very strong decoupling between how the sequence is generated and how it is used. Indeed, this version is as easy to use as the \code{fibonacci_state} solution, while the implementation is very similar to the \code{fibonacci_func} example. \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Implementation of Fibonacci using coroutines},label={lst:fibonacci-cfa}] coroutine Fibonacci { int fn; //used for communication }; void ?{}(Fibonacci & this) { //constructor void ?{}(Fibonacci& this) { //constructor this.fn = 0; } //main automacically called on first resume void main(Fibonacci & this) with (this) { //main automatically called on first resume void main(Fibonacci& this) with (this) { int fn1, fn2; //retained between resumes fn = 0; } int next(Fibonacci & this) { int next(Fibonacci& this) { resume(this); //transfer to last suspend return this.fn; } \end{cfacode} \caption{Implementation of fibonacci using coroutines} \label{lst:fibonacci-cfa} \end{figure} Figure \ref{lst:fmt-line} shows the \code{Format} coroutine which rearranges text in order to group characters into blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor. Listing \ref{lst:fmt-line} shows the \code{Format} coroutine for restructuring text into groups of character blocks of fixed size. The example takes advantage of resuming coroutines in the constructor to simplify the code and highlights the idea that interesting control flow can occur in the constructor. \begin{figure} \begin{cfacode}[tabsize=3] \begin{cfacode}[tabsize=3,caption={Formatting text into lines of 5 blocks of 4 characters.},label={lst:fmt-line}] //format characters into blocks of 4 and groups of 5 blocks per line coroutine Format { }; void ?{}(Format & fmt) { void ?{}(Format& fmt) { resume( fmt ); //prime (start) coroutine } void ^?{}(Format & fmt) with fmt { void ^?{}(Format& fmt) with fmt { if ( fmt.g != 0 || fmt.b != 0 ) sout | endl; } void main(Format & fmt) with fmt { void main(Format& fmt) with fmt { for ( ;; ) { //for as many characters for(g = 0; g < 5; g++) { //groups of 5 blocks } \end{cfacode} \caption{Formatting text into lines of 5 blocks of 4 characters.} \label{lst:fmt-line} \end{figure} \subsection{Construction} One important design challenge for coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads. The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. As regular objects, constructors can leak coroutines before they are ready. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine. One important design challenge for implementing coroutines and threads (shown in section \ref{threads}) is that the runtime system needs to run code after the user-constructor runs to connect the fully constructed object into the system. In the case of coroutines, this challenge is simpler since there is no non-determinism from preemption or scheduling. However, the underlying challenge remains the same for coroutines and threads. The runtime system needs to create the coroutine's stack and more importantly prepare it for the first resumption. The timing of the creation is non-trivial since users both expect to have fully constructed objects once execution enters the coroutine main and to be able to resume the coroutine from the constructor. There are several solutions to this problem but the chosen options effectively forces the design of the coroutine. Furthermore, \CFA faces an extra challenge as polymorphic routines create invisible thunks when casted to non-polymorphic routines and these thunks have function scope. For example, the following code, while looking benign, can run into undefined behaviour because of thunks: \begin{ccode} extern void async(/* omitted */, void (*func)(void *), void *obj); void noop(/* omitted */, void *obj){} extern void async(/* omitted */, void (*func)(void*), void* obj); void noop(/* omitted */, void* obj){} void bar(){ int a; void _thunk0(int *_p0){ void _thunk0(int* _p0){ /* omitted */ noop(/* omitted */, _p0); } /* omitted */ async(/* omitted */, ((void (*)(void *))(&_thunk0)), (&a)); async(/* omitted */, ((void (*)(void*))(&_thunk0)), (&a)); } \end{ccode} The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes undefined behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks. The problem in this example is a storage management issue, the function pointer \code{_thunk0} is only valid until the end of the block, which limits the viable solutions because storing the function pointer for too long causes Undefined Behavior; i.e., the stack-based thunk being destroyed before it can be used. This challenge is an extension of challenges that come with second-class routines. Indeed, GCC nested routines also have the limitation that nested routine cannot be passed outside of the declaration scope. The case of coroutines and threads is simply an extension of this problem to multiple call-stacks. \subsection{Alternative: Composition} One solution to this challenge is to use composition/containement, where coroutine fields are added to manage the coroutine. One solution to this challenge is to use composition/containment, where coroutine fields are added to manage the coroutine. \begin{cfacode} }; void FibMain(void *) { void FibMain(void*) { //... } void ?{}(Fibonacci & this) { void ?{}(Fibonacci& this) { this.fn = 0; //Call constructor to initialize coroutine } \end{cfacode} The downside of this approach is that users need to correctly construct the coroutine handle before using it. Like any other objects, doing so the users carefully choose construction order to prevent usage of unconstructed objects. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically. The downside of this approach is that users need to correctly construct the coroutine handle before using it. Like any other objects, the user must carefully choose construction order to prevent usage of objects not yet constructed. However, in the case of coroutines, users must also pass to the coroutine information about the coroutine main, like in the previous example. This opens the door for user errors and requires extra runtime storage to pass at runtime information that can be known statically. \subsection{Alternative: Reserved keyword} }; \end{cfacode} The \code{coroutine} keyword means the compiler can find and inject code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users wantint to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases. \subsection{Alternative: Lamda Objects} For coroutines as for threads, many implementations are based on routine pointers or function objects\cite{Butenhof97, ANSI14:C++, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types: The \code{coroutine} keyword means the compiler can find and inject code where needed. The downside of this approach is that it makes coroutine a special case in the language. Users wanting to extend coroutines or build their own for various reasons can only do so in ways offered by the language. Furthermore, implementing coroutines without language supports also displays the power of the programming language used. While this is ultimately the option used for idiomatic \CFA code, coroutines and threads can still be constructed by users without using the language support. The reserved keywords are only present to improve ease of use for the common cases. \subsection{Alternative: Lambda Objects} For coroutines as for threads, many implementations are based on routine pointers or function objects~\cite{Butenhof97, ANSI14:C++, MS:VisualC++, BoostCoroutines15}. For example, Boost implements coroutines in terms of four functor object types: \begin{cfacode} asymmetric_coroutine<>::pull_type A variation of this would be to use a simple function pointer in the same way pthread does for threads : \begin{cfacode} void foo( coroutine_t cid, void * arg ) { int * value = (int *)arg; void foo( coroutine_t cid, void* arg ) { int* value = (int*)arg; //Coroutine body } } \end{cfacode} This semantics is more common for thread interfaces than coroutines works equally well. As discussed in section \ref{threads}, this approach is superseeded by static approaches in terms of expressivity. This semantics is more common for thread interfaces but coroutines work equally well. As discussed in section \ref{threads}, this approach is superseded by static approaches in terms of expressivity. \subsection{Alternative: Trait-based coroutines} \begin{cfacode} trait is_coroutine(dtype T) { void main(T & this); coroutine_desc * get_coroutine(T & this); }; forall( dtype T | is_coroutine(T) ) void suspend(T &); forall( dtype T | is_coroutine(T) ) void resume (T &); \end{cfacode} This ensures an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to only have to implement the main routine. void main(T& this); coroutine_desc* get_coroutine(T& this); }; forall( dtype T | is_coroutine(T) ) void suspend(T&); forall( dtype T | is_coroutine(T) ) void resume (T&); \end{cfacode} This ensures an object is not a coroutine until \code{resume} is called on the object. Correspondingly, any object that is passed to \code{resume} is a coroutine since it must satisfy the \code{is_coroutine} trait to compile. The advantage of this approach is that users can easily create different types of coroutines, for example, changing the memory layout of a coroutine is trivial when implementing the \code{get_coroutine} routine. The \CFA keyword \code{coroutine} only has the effect of implementing the getter and forward declarations required for users to implement the main routine. \begin{center} static inline coroutine_desc * get_coroutine( struct MyCoroutine & this coroutine_desc* get_coroutine( struct MyCoroutine& this ) { return &this.__cor; } void main(struct MyCoroutine * this); void main(struct MyCoroutine* this); \end{cfacode} \end{tabular} \end{center} The combination of these two approaches allows users new to coroutinning and concurrency to have an easy and concise specification, while more advanced users have tighter control on memory layout and initialization. The combination of these two approaches allows users new to coroutining and concurrency to have an easy and concise specification, while more advanced users have tighter control on memory layout and initialization. \section{Thread Interface}\label{threads} \end{cfacode} As for coroutines, the keyword is a thin wrapper arount a \CFA trait: As for coroutines, the keyword is a thin wrapper around a \CFA trait: \begin{cfacode} \end{cfacode} Obviously, for this thread implementation to be usefull it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread superseeds this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics using overloading to declare mains for different threads (the normal main being the main of the initial thread). As such the \code{main} routine of a thread can be defined as Obviously, for this thread implementation to be useful it must run some user code. Several other threading interfaces use a function-pointer representation as the interface of threads (for example \Csharp~\cite{Csharp} and Scala~\cite{Scala}). However, this proposal considers that statically tying a \code{main} routine to a thread supersedes this approach. Since the \code{main} routine is already a special routine in \CFA (where the program begins), it is a natural extension of the semantics using overloading to declare mains for different threads (the normal main being the main of the initial thread). As such the \code{main} routine of a thread can be defined as \begin{cfacode} thread foo {}; this.func( this.arg ); } void hello(/*unused*/ int) { sout | "Hello World!" | endl; } int main() { FuncRunner f = {hello, 42}; return 0' } \end{cfacode} \end{cfacode} This semantic has several advantages over explicit semantics: a thread is always started and stopped exaclty once, users cannot make any progamming errors, and it naturally scales to multiple threads meaning basic synchronisation is very simple. This semantic has several advantages over explicit semantics: a thread is always started and stopped exactly once, users cannot make any programming errors, and it naturally scales to multiple threads meaning basic synchronization is very simple. \begin{cfacode} //main void main(MyThread & this) { void main(MyThread& this) { //... } \end{cfacode} However, one of the drawbacks of this approach is that threads now always form a lattice, that is they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created. However, one of the drawbacks of this approach is that threads always form a lattice, i.e., they are always destroyed in the opposite order of construction because of block structure. This restriction is relaxed by using dynamic allocation, so threads can outlive the scope in which they are created, much like dynamically allocating memory lets objects outlive the scope in which they are created. \begin{cfacode} }; void main(MyThread & this) { void main(MyThread& this) { //... } void foo() { MyThread * long_lived; MyThread* long_lived; { //Start a thread at the beginning of the scope
• ## doc/proposals/concurrency/text/cforall.tex
r4e7a4e6 % ====================================================================== % ====================================================================== \chapter{Cforall Overview} \chapter{\CFA Overview} % ====================================================================== % ====================================================================== The following is a quick introduction to the \CFA language, specifically tailored to the features needed to support concurrency. \CFA is a extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object oriented system language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., this), it does have some notion of objects\footnote{C defines the term objects as : region of data storage in the execution environment, the contents of which can represent values''\cite[3.15]{C11}}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website \cite{www-cfa} \CFA is an extension of ISO-C and therefore supports all of the same paradigms as C. It is a non-object-oriented system-language, meaning most of the major abstractions have either no runtime overhead or can be opt-out easily. Like C, the basics of \CFA revolve around structures and routines, which are thin abstractions over machine code. The vast majority of the code produced by the \CFA translator respects memory-layouts and calling-conventions laid out by C. Interestingly, while \CFA is not an object-oriented language, lacking the concept of a receiver (e.g., {\tt this}), it does have some notion of objects\footnote{C defines the term objects as : region of data storage in the execution environment, the contents of which can represent values''~\cite[3.15]{C11}}, most importantly construction and destruction of objects. Most of the following code examples can be found on the \CFA website~\cite{www-cfa} % ====================================================================== \section{References} Like \CC, \CFA introduces rebindable references providing multiple dereferecing as an alternative to pointers. In regards to concurrency, the semantic difference between pointers and references are not particularly relevant, but since this document uses mostly references, here is a quick overview of the semantics: Like \CC, \CFA introduces rebind-able references providing multiple dereferencing as an alternative to pointers. In regards to concurrency, the semantic difference between pointers and references are not particularly relevant, but since this document uses mostly references, here is a quick overview of the semantics: \begin{cfacode} int x, *p1 = &x, **p2 = &p1, ***p3 = &p2, *p3 = ...; //change p2 int y, z, & ar[3] = {x, y, z}; //initialize array of references typeof( ar[1]) p; //is int, i.e., the type of referenced object typeof(&ar[1]) q; //is int &, i.e., the type of reference sizeof( ar[1]) == sizeof(int); //is true, i.e., the size of referenced object sizeof(&ar[1]) == sizeof(int *); //is true, i.e., the size of a reference typeof( ar[1]) p; //is int, referenced object type typeof(&ar[1]) q; //is int &, reference type sizeof( ar[1]) == sizeof(int); //is true, referenced object size sizeof(&ar[1]) == sizeof(int *); //is true, reference size \end{cfacode} The important take away from this code example is that references offer a handle to an object, much like pointers, but which is automatically dereferenced for convinience. The important take away from this code example is that a reference offers a handle to an object, much like a pointer, but which is automatically dereferenced for convenience. % ====================================================================== \section{Overloading} Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada\cite{Ada}. For routines with multiple parameters and returns, the selection is complex. Another important feature of \CFA is function overloading as in Java and \CC, where routines with the same name are selected based on the number and type of the arguments. As well, \CFA uses the return type as part of the selection criteria, as in Ada~\cite{Ada}. For routines with multiple parameters and returns, the selection is complex. \begin{cfacode} //selection based on type and number of parameters This feature is particularly important for concurrency since the runtime system relies on creating different types to represent concurrency objects. Therefore, overloading is necessary to prevent the need for long prefixes and other naming conventions that prevent name clashes. As seen in chapter \ref{basics}, routine \code{main} is an example that benefits from overloading. % ====================================================================== \section{Operators} Overloading also extends to operators. The syntax for denoting operator-overloading is to name a routine with the symbol of the operator and question marks where the arguments of the operation occur, e.g.: While concurrency does not use operator overloading directly, this feature is more important as an introduction for the syntax of constructors. % ====================================================================== \section{Constructors/Destructors} Object life-time is often a challenge in concurrency. \CFA uses the approach of giving concurrent meaning to object life-time as a mean of synchronization and/or mutual exclusion. Since \CFA relies heavily on the life time of objects, constructors and destructors are a core feature required for concurrency and parallelism. \CFA uses the following syntax for constructors and destructors : } int main() { S x = {10}, y = {100}; //implict calls: ?{}(x, 10), ?{}(y, 100) S x = {10}, y = {100}; //implicit calls: ?{}(x, 10), ?{}(y, 100) ... //use x and y ^x{}; ^y{}; //explicit calls to de-initialize x{20}; y{200}; //explicit calls to reinitialize ... //reuse x and y } //implict calls: ^?{}(y), ^?{}(x) } //implicit calls: ^?{}(y), ^?{}(x) \end{cfacode} The language guarantees that every object and all their fields are constructed. Like \CC, construction of an object is automatically done on allocation and destruction of the object is done on deallocation. Allocation and deallocation can occur on the stack or on the heap. delete(s); //deallocation, call destructor \end{cfacode} Note that like \CC, \CFA introduces \code{new} and \code{delete}, which behave like \code{malloc} and \code{free} in addition to constructing and destructing objects, after calling \code{malloc} and before calling \code{free} respectively. Note that like \CC, \CFA introduces \code{new} and \code{delete}, which behave like \code{malloc} and \code{free} in addition to constructing and destructing objects, after calling \code{malloc} and before calling \code{free}, respectively. % ====================================================================== \section{Parametric Polymorphism} Routines in \CFA can also be reused for multiple types. This capability is done using the \code{forall} clause which gives \CFA its name. \code{forall} clauses allow separately compiled routines to support generic usage over multiple types. For example, the following sum function works for any type that supports construction from 0 and addition : Routines in \CFA can also be reused for multiple types. This capability is done using the \code{forall} clause, which gives \CFA its name. \code{forall} clauses allow separately compiled routines to support generic usage over multiple types. For example, the following sum function works for any type that supports construction from 0 and addition : \begin{cfacode} //constraint type, 0 and + \end{cfacode} Note that the type use for assertions can be either an \code{otype} or a \code{dtype}. Types declares as \code{otype} refer to complete'' objects, i.e., objects with a size, a default constructor, a copy constructor, a destructor and an assignment operator. Using \code{dtype} on the other hand has none of these assumptions but is extremely restrictive, it only guarantees the object is addressable. % ====================================================================== \section{with Clause/Statement} Since \CFA lacks the concept of a receiver, certain functions end-up needing to repeat variable names often. To remove this inconvenience, \CFA provides the \code{with} statement, which opens an aggregate scope making its fields directly accessible (like Pascal). struct S { int i, j; }; int mem(S & this) with (this) //with clause i = 1; //this->i j = 2; //this->j i = 1; //this->i j = 2; //this->j } int foo() { struct S1 { ... } s1; struct S2 { ... } s2; with (s1) //with statement with (s1) //with statement { //access fields of s1 //without qualification //access fields of s1 without qualification with (s2) //nesting { //access fields of s1 and s2 //without qualification //access fields of s1 and s2 without qualification } } with (s1, s2) //scopes open in parallel with (s1, s2) //scopes open in parallel { //access fields of s1 and s2 //without qualification //access fields of s1 and s2 without qualification } }
• ## doc/proposals/concurrency/text/concurrency.tex
r4e7a4e6 % ====================================================================== % ====================================================================== Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels\cite{CSP,Go} for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the librairy still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desireable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentioning because it is gaining in popularity is transactionnal memory~\cite{Dice10}[Check citation]. While this approach is even pursued by system languages like \CC\cite{Cpp-Transactions}, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems language, which is why it was rejected as the core paradigm for concurrency in \CFA. Several tool can be used to solve concurrency challenges. Since many of these challenges appear with the use of mutable shared-state, some languages and libraries simply disallow mutable shared-state (Erlang~\cite{Erlang}, Haskell~\cite{Haskell}, Akka (Scala)~\cite{Akka}). In these paradigms, interaction among concurrent objects relies on message passing~\cite{Thoth,Harmony,V-Kernel} or other paradigms closely relate to networking concepts (channels~\cite{CSP,Go} for example). However, in languages that use routine calls as their core abstraction-mechanism, these approaches force a clear distinction between concurrent and non-concurrent paradigms (i.e., message passing versus routine call). This distinction in turn means that, in order to be effective, programmers need to learn two sets of designs patterns. While this distinction can be hidden away in library code, effective use of the library still has to take both paradigms into account. Approaches based on shared memory are more closely related to non-concurrent paradigms since they often rely on basic constructs like routine calls and shared objects. At the lowest level, concurrent paradigms are implemented as atomic operations and locks. Many such mechanisms have been proposed, including semaphores~\cite{Dijkstra68b} and path expressions~\cite{Campbell74}. However, for productivity reasons it is desirable to have a higher-level construct be the core concurrency paradigm~\cite{HPP:Study}. An approach that is worth mentioning because it is gaining in popularity is transactional memory~\cite{Herlihy93}. While this approach is even pursued by system languages like \CC~\cite{Cpp-Transactions}, the performance and feature set is currently too restrictive to be the main concurrency paradigm for systems language, which is why it was rejected as the core paradigm for concurrency in \CFA. One of the most natural, elegant, and efficient mechanisms for synchronization and communication, especially for shared-memory systems, is the \emph{monitor}. Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}. Many programming languages---e.g., Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Modula~\cite{Modula-2}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, NeWS~\cite{NeWS}, Emerald~\cite{Emerald}, \uC~\cite{Buhr92a} and Java~\cite{Java}---provide monitors as explicit language constructs. In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as semaphores or locks to simulate monitors. For these reasons, this project proposes monitors as the core concurrency-construct. \section{Basics} Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronisation. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads. Non-determinism requires concurrent systems to offer support for mutual-exclusion and synchronization. Mutual-exclusion is the concept that only a fixed number of threads can access a critical section at any given time, where a critical section is a group of instructions on an associated portion of data that requires the restricted access. On the other hand, synchronization enforces relative ordering of execution and synchronization tools provide numerous mechanisms to establish timing relationships among threads. \subsection{Mutual-Exclusion} As mentionned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Ease of use comes by either guaranteeing some problems cannot occur (e.g., being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic} offers an easy way to express mutual-exclusion on a restricted set of operations (e.g.: reading/writing large types atomically). Another challenge with low-level locks is composability. Locks have restricted composability because it takes careful organising for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer. As mentioned above, mutual-exclusion is the guarantee that only a fix number of threads can enter a critical section at once. However, many solutions exist for mutual exclusion, which vary in terms of performance, flexibility and ease of use. Methods range from low-level locks, which are fast and flexible but require significant attention to be correct, to higher-level mutual-exclusion methods, which sacrifice some performance in order to improve ease of use. Ease of use comes by either guaranteeing some problems cannot occur (e.g., being deadlock free) or by offering a more explicit coupling between data and corresponding critical section. For example, the \CC \code{std::atomic} offers an easy way to express mutual-exclusion on a restricted set of operations (e.g.: reading/writing large types atomically). Another challenge with low-level locks is composability. Locks have restricted composability because it takes careful organizing for multiple locks to be used while preventing deadlocks. Easing composability is another feature higher-level mutual-exclusion mechanisms often offer. \subsection{Synchronization} As for mutual-exclusion, low-level synchronisation primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering simpler solution to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronisation happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property called barging. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic exmaple is the thread that finishes using a ressource and unblocks a thread waiting to use the resource, but the unblocked thread must compete again to acquire the resource. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use status flags and other flag variables to detect barging threads are said to be using barging avoidance while algorithms that baton-passing locks between threads instead of releasing the locks are said to be using barging prevention. As for mutual-exclusion, low-level synchronization primitives often offer good performance and good flexibility at the cost of ease of use. Again, higher-level mechanism often simplify usage by adding better coupling between synchronization and data, e.g.: message passing, or offering a simpler solution to otherwise involved challenges. As mentioned above, synchronization can be expressed as guaranteeing that event \textit{X} always happens before \textit{Y}. Most of the time, synchronization happens within a critical section, where threads must acquire mutual-exclusion in a certain order. However, it may also be desirable to guarantee that event \textit{Z} does not occur between \textit{X} and \textit{Y}. Not satisfying this property is called barging. For example, where event \textit{X} tries to effect event \textit{Y} but another thread acquires the critical section and emits \textit{Z} before \textit{Y}. The classic example is the thread that finishes using a resource and unblocks a thread waiting to use the resource, but the unblocked thread must compete again to acquire the resource. Preventing or detecting barging is an involved challenge with low-level locks, which can be made much easier by higher-level constructs. This challenge is often split into two different methods, barging avoidance and barging prevention. Algorithms that use flag variables to detect barging threads are said to be using barging avoidance, while algorithms that baton-pass locks~\cite{Andrews89} between threads instead of releasing the locks are said to be using barging prevention. % ====================================================================== \begin{cfacode} typedef /*some monitor type*/ monitor; int f(monitor & m); int f(monitor& m); int main() { % ====================================================================== % ====================================================================== The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable objects. The above monitor example displays some of the intrinsic characteristics. First, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important, because at their core, monitors are implicit mutual-exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copy-able objects (\code{dtype}). Another aspect to consider is when a monitor acquires its mutual exclusion. For example, a monitor may need to be passed through multiple helper routines that do not acquire the monitor mutual-exclusion on entry. Pass through can occur for generic helper routines (\code{swap}, \code{sort}, etc.) or specific helper routines like the following to implement an atomic counter : \end{tabular} \end{center} Notice how the counter is used without any explicit synchronisation and yet supports thread-safe semantics for both reading and writting, which is similar in usage to \CC \code{atomic} template. Here, the constructor(\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation. For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, figure \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree. Notice how the counter is used without any explicit synchronization and yet supports thread-safe semantics for both reading and writing, which is similar in usage to \CC \code{atomic} template. Here, the constructor (\code{?\{\}}) uses the \code{nomutex} keyword to signify that it does not acquire the monitor mutual-exclusion when constructing. This semantics is because an object not yet con\-structed should never be shared and therefore does not require mutual exclusion. Furthermore, it allows the implementation greater freedom when it initializes the monitor locking. The prefix increment operator uses \code{mutex} to protect the incrementing process from race conditions. Finally, there is a conversion operator from \code{counter_t} to \code{size_t}. This conversion may or may not require the \code{mutex} keyword depending on whether or not reading a \code{size_t} is an atomic operation. For maximum usability, monitors use \gls{multi-acq} semantics, which means a single thread can acquire the same monitor multiple times without deadlock. For example, listing \ref{fig:search} uses recursion and \gls{multi-acq} to print values inside a binary tree. \begin{figure} \label{fig:search} \begin{cfacode} \begin{cfacode}[caption={Recursive printing algorithm using \gls{multi-acq}.},label={fig:search}] monitor printer { ... }; struct tree { } \end{cfacode} \caption{Recursive printing algorithm using \gls{multi-acq}.} \end{figure} Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, given a routine without qualifiers \code{void foo(counter_t & this)}, then it is reasonable that it should default to the safest option \code{mutex}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. In fact, \code{nomutex} is the normal'' parameter behaviour, with the \code{nomutex} keyword effectively stating explicitly that this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}. Having both \code{mutex} and \code{nomutex} keywords is redundant based on the meaning of a routine having neither of these keywords. For example, it is reasonable that it should default to the safest option (\code{mutex}) when given a routine without qualifiers \code{void foo(counter_t & this)}, whereas assuming \code{nomutex} is unsafe and may cause subtle errors. On the other hand, \code{nomutex} is the normal'' parameter behaviour, it effectively states explicitly that this routine is not special''. Another alternative is making exactly one of these keywords mandatory, which provides the same semantics but without the ambiguity of supporting routines with neither keyword. Mandatory keywords would also have the added benefit of being self-documented but at the cost of extra typing. While there are several benefits to mandatory keywords, they do bring a few challenges. Mandatory keywords in \CFA would imply that the compiler must know without doubt whether or not a parameter is a monitor or not. Since \CFA relies heavily on traits as an abstraction mechanism, the distinction between a type that is a monitor and a type that looks like a monitor can become blurred. For this reason, \CFA only has the \code{mutex} keyword and uses no keyword to mean \code{nomutex}. The next semantic decision is to establish when \code{mutex} may be used as a type qualifier. Consider the following declarations: \begin{cfacode} int f1(monitor & mutex m); int f1(monitor& mutex m); int f2(const monitor & mutex m); int f3(monitor ** mutex m); int f4(monitor * mutex m []); int f3(monitor** mutex m); int f4(monitor* mutex m []); int f5(graph(monitor*) & mutex m); \end{cfacode} The problem is to indentify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to indentify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then making sure objects are only acquired once becomes none-trivial. This problem can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is be acquired, passing an array to this routine would be type safe and yet result in undefined behavior because only the first element of the array is acquired. However, this ambiguity is part of the C type-system with respects to arrays. For this reason, \code{mutex} is disallowed in the context where arrays may be passed: \begin{cfacode} int f1(monitor & mutex m); //Okay : recommanded case int f2(monitor * mutex m); //Okay : could be an array but probably not int f3(monitor mutex m []); //Not Okay : Array of unkown length int f4(monitor ** mutex m); //Not Okay : Could be an array int f5(monitor * mutex m []); //Not Okay : Array of unkown length The problem is to identify which object(s) should be acquired. Furthermore, each object needs to be acquired only once. In the case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entry. Adding indirections (\code{f3}) still allows the compiler and programmer to identify which object is acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths are not necessarily known in C, and even then, making sure objects are only acquired once becomes none-trivial. This problem can be extended to absurd limits like \code{f5}, which uses a graph of monitors. To make the issue tractable, this project imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter with at most one level of indirection (ignoring potential qualifiers). Also note that while routine \code{f3} can be supported, meaning that monitor \code{**m} is be acquired, passing an array to this routine would be type safe and yet result in undefined behaviour because only the first element of the array is acquired. However, this ambiguity is part of the C type-system with respects to arrays. For this reason, \code{mutex} is disallowed in the context where arrays may be passed: \begin{cfacode} int f1(monitor& mutex m); //Okay : recommended case int f2(monitor* mutex m); //Okay : could be an array but probably not int f3(monitor mutex m []); //Not Okay : Array of unknown length int f4(monitor** mutex m); //Not Okay : Could be an array int f5(monitor* mutex m []); //Not Okay : Array of unknown length \end{cfacode} Note that not all array functions are actually distinct in the type system. However, even if the code generation could tell the difference, the extra information is still not sufficient to extend meaningfully the monitor call semantic. f(a,b); \end{cfacode} While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capacity to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of aquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors in the way is safe from deadlock. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects aquiring order: \begin{cfacode} void foo(A & mutex a, B & mutex b) { //acquire a & b While OO monitors could be extended with a mutex qualifier for multiple-monitor calls, no example of this feature could be found. The capability to acquire multiple locks before entering a critical section is called \emph{\gls{bulk-acq}}. In practice, writing multi-locking routines that do not lead to deadlocks is tricky. Having language support for such a feature is therefore a significant asset for \CFA. In the case presented above, \CFA guarantees that the order of acquisition is consistent across calls to different routines using the same monitors as arguments. This consistent ordering means acquiring multiple monitors is safe from deadlock when using \gls{bulk-acq}. However, users can still force the acquiring order. For example, notice which routines use \code{mutex}/\code{nomutex} and how this affects acquiring order: \begin{cfacode} void foo(A& mutex a, B& mutex b) { //acquire a & b ... } void bar(A & mutex a, B & /*nomutex*/ b) { //acquire a void bar(A& mutex a, B& /*nomutex*/ b) { //acquire a ... foo(a, b); ... //acquire b } void baz(A & /*nomutex*/ a, B & mutex b) { //acquire b void baz(A& /*nomutex*/ a, B& mutex b) { //acquire b ... foo(a, b); ... //acquire a } The \gls{multi-acq} monitor lock allows a monitor lock to be acquired by both \code{bar} or \code{baz} and acquired again in \code{foo}. In the calls to \code{bar} and \code{baz} the monitors are acquired in opposite order. However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle mistake means that calling these routines concurrently may lead to deadlock and is therefore undefined behavior. As shown\cite{Lister77}, solving this problem requires: However, such use leads to the lock acquiring order problem. In the example above, the user uses implicit ordering in the case of function \code{foo} but explicit ordering in the case of \code{bar} and \code{baz}. This subtle difference means that calling these routines concurrently may lead to deadlock and is therefore Undefined Behavior. As shown~\cite{Lister77}, solving this problem requires: \begin{enumerate} \item Dynamically tracking of the monitor-call order. \item Implement rollback semantics. \end{enumerate} While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex \cite{Dice10}. In \CFA, users simply need to be carefull when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. While \CFA provides only a partial solution, many system provide no solution and the \CFA partial solution handles many useful cases. While the first requirement is already a significant constraint on the system, implementing a general rollback semantics in a C-like language is still prohibitively complex~\cite{Dice10}. In \CFA, users simply need to be careful when acquiring multiple monitors at the same time or only use \gls{bulk-acq} of all the monitors. While \CFA provides only a partial solution, most systems provide no solution and the \CFA partial solution handles many useful cases. For example, \gls{multi-acq} and \gls{bulk-acq} can be used together in interesting ways: } \end{cfacode} This example shows a trivial solution to the bank-account transfer-problem\cite{BankTransfer}. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires carefull engineering. This example shows a trivial solution to the bank-account transfer-problem~\cite{BankTransfer}. Without \gls{multi-acq} and \gls{bulk-acq}, the solution to this problem is much more involved and requires careful engineering. \subsection{\code{mutex} statement} \label{mutex-stmt} The call semantics discussed aboved have one software engineering issue, only a named routine can acquire the mutual-exclusion of a set of monitor. \CFA offers the \code{mutex} statement to workaround the need for unnecessary names, avoiding a major software engineering problem\cite{2FTwoHardThings}. Listing \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitor is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters. \begin{figure} The call semantics discussed above have one software engineering issue, only a named routine can acquire the mutual-exclusion of a set of monitor. \CFA offers the \code{mutex} statement to workaround the need for unnecessary names, avoiding a major software engineering problem~\cite{2FTwoHardThings}. Table \ref{lst:mutex-stmt} shows an example of the \code{mutex} statement, which introduces a new scope in which the mutual-exclusion of a set of monitor is acquired. Beyond naming, the \code{mutex} statement has no semantic difference from a routine call with \code{mutex} parameters. \begin{table} \begin{center} \begin{tabular}{|c|c|} \begin{cfacode}[tabsize=3] monitor M {}; void foo( M & mutex m ) { void foo( M & mutex m1, M & mutex m2 ) { //critical section } void bar( M & m ) { foo( m ); void bar( M & m1, M & m2 ) { foo( m1, m2 ); } \end{cfacode}&\begin{cfacode}[tabsize=3] monitor M {}; void bar( M & m ) { mutex(m) { void bar( M & m1, M & m2 ) { mutex(m1, m2) { //critical section } \caption{Regular call semantics vs. \code{mutex} statement} \label{lst:mutex-stmt} \end{figure} \end{table} % ====================================================================== }; \end{cfacode} Note that the destructor of a monitor must be a \code{mutex} routine. This requirement ensures that the destructor has mutual-exclusion. As with any object, any call to a monitor, using \code{mutex} or otherwise, is Undefined Behaviour after the destructor has run. Note that the destructor of a monitor must be a \code{mutex} routine to prevent deallocation while a thread is accessing the monitor. As with any object, calls to a monitor, using \code{mutex} or otherwise, is Undefined Behaviour after the destructor has run. % ====================================================================== % ====================================================================== % ====================================================================== In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronisation. With monitors, this capability is generally achieved with internal or external scheduling as in \cite{Hoare74}. Since internal scheduling within a single monitor is mostly a solved problem, this thesis concentrates on extending internal scheduling to multiple monitors. Indeed, like the \gls{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side. First, here is a simple example of such a technique: In addition to mutual exclusion, the monitors at the core of \CFA's concurrency can also be used to achieve synchronization. With monitors, this capability is generally achieved with internal or external scheduling as in~\cite{Hoare74}. Since internal scheduling within a single monitor is mostly a solved problem, this thesis concentrates on extending internal scheduling to multiple monitors. Indeed, like the \gls{bulk-acq} semantics, internal scheduling extends to multiple monitors in a way that is natural to the user but requires additional complexity on the implementation side. First, here is a simple example of internal-scheduling : \begin{cfacode} } void foo(A & mutex a) { void foo(A& mutex a1, A& mutex a2) { ... //Wait for cooperation from bar() wait(a.e); wait(a1.e); ... } void bar(A & mutex a) { void bar(A& mutex a1, A& mutex a2) { //Provide cooperation for foo() ... //Unblock foo signal(a.e); } \end{cfacode} There are two details to note here. First, the \code{signal} is a delayed operation, it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantees offers the benefit of not having to loop arount waits in order to guarantee that a condition is still met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency. signal(a1.e); } \end{cfacode} There are two details to note here. First, the \code{signal} is a delayed operation, it only unblocks the waiting thread when it reaches the end of the critical section. This semantic is needed to respect mutual-exclusion, i.e., the signaller and signalled thread cannot be in the monitor simultaneously. The alternative is to return immediately after the call to \code{signal}, which is significantly more restrictive. Second, in \CFA, while it is common to store a \code{condition} as a field of the monitor, a \code{condition} variable can be stored/created independently of a monitor. Here routine \code{foo} waits for the \code{signal} from \code{bar} before making further progress, effectively ensuring a basic ordering. An important aspect of the implementation is that \CFA does not allow barging, which means that once function \code{bar} releases the monitor, \code{foo} is guaranteed to resume immediately after (unless some other thread waited on the same condition). This guarantee offers the benefit of not having to loop around waits to recheck that a condition is met. The main reason \CFA offers this guarantee is that users can easily introduce barging if it becomes a necessity but adding barging prevention or barging avoidance is more involved without language support. Supporting barging prevention as well as extending internal scheduling to multiple monitors is the main source of complexity in the design of \CFA concurrency. % ====================================================================== % ====================================================================== % ====================================================================== It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition as paremeter and explicitly names the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a set of monitors on first use (called branding) which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors. It is easier to understand the problem of multi-monitor scheduling using a series of pseudo-code examples. Note that for simplicity in the following snippets of pseudo-code, waiting and signalling is done using an implicit condition variable, like Java built-in monitors. Indeed, \code{wait} statements always use the implicit condition variable as parameter and explicitly names the monitors (A and B) associated with the condition. Note that in \CFA, condition variables are tied to a \emph{group} of monitors on first use (called branding), which means that using internal scheduling with distinct sets of monitors requires one condition variable per set of monitors. The example below shows the simple case of having two threads (one for each column) and a single monitor A. \begin{multicols}{2} \end{pseudo} \end{multicols} The example shows the simple case of having two threads (one for each column) and a single monitor A. One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling. It is important to note here that both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This semantic is a logical requirement for barging prevention. One thread acquires before waiting (atomically blocking and releasing A) and the other acquires before signalling. It is important to note here that both \code{wait} and \code{signal} must be called with the proper monitor(s) already acquired. This semantic is a logical requirement for barging prevention. A direct extension of the previous example is a \gls{bulk-acq} version: \begin{multicols}{2} \begin{pseudo} release A & B \end{pseudo} \columnbreak \begin{pseudo} acquire A & B This version uses \gls{bulk-acq} (denoted using the {\sf\&} symbol), but the presence of multiple monitors does not add a particularly new meaning. Synchronization happens between the two threads in exactly the same way and order. The only difference is that mutual exclusion covers more monitors. On the implementation side, handling multiple monitors does add a degree of complexity as the next few examples demonstrate. While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well known deadlock problem is the Nested Monitor Problem \cite{Lister77}, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem : While deadlock issues can occur when nesting monitors, these issues are only a symptom of the fact that locks, and by extension monitors, are not perfectly composable. For monitors, a well known deadlock problem is the Nested Monitor Problem~\cite{Lister77}, which occurs when a \code{wait} is made by a thread that holds more than one monitor. For example, the following pseudo-code runs into the nested-monitor problem : \begin{multicols}{2} \begin{pseudo} \end{pseudo} \end{multicols} The \code{wait} only releases monitor \code{B} so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting release of all acquired monitors at the \code{wait} results in another set of problems such as releasing monitor \code{C}, which has nothing to do with the \code{signal}. However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the nested monitor problem. The \code{wait} only releases monitor \code{B} so the signalling thread cannot acquire monitor \code{A} to get to the \code{signal}. Attempting release of all acquired monitors at the \code{wait} introduces a different set of problems, such as releasing monitor \code{C}, which has nothing to do with the \code{signal}. However, for monitors as for locks, it is possible to write a program using nesting without encountering any problems if nesting is done correctly. For example, the next pseudo-code snippet acquires monitors {\sf A} then {\sf B} before waiting, while only acquiring {\sf B} when signalling, effectively avoiding the Nested Monitor Problem~\cite{Lister77}. \begin{multicols}{2} \end{multicols} This simple refactoring may not be possible, forcing more complex restructuring. % ====================================================================== % ====================================================================== % ====================================================================== A larger example is presented to show complex issuesfor \gls{bulk-acq} and all the implementation options are analyzed. Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code which implements the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA-code any method of introducing monitor into context, other than a \code{mutex} parameter, is acceptable, e.g., global variables, pointer parameters or using locals with the \code{mutex}-statement. \begin{figure}[!b] A larger example is presented to show complex issues for \gls{bulk-acq} and all the implementation options are analyzed. Listing \ref{lst:int-bulk-pseudo} shows an example where \gls{bulk-acq} adds a significant layer of complexity to the internal signalling semantics, and listing \ref{lst:int-bulk-cfa} shows the corresponding \CFA code to implement the pseudo-code in listing \ref{lst:int-bulk-pseudo}. For the purpose of translating the given pseudo-code into \CFA-code, any method of introducing a monitor is acceptable, e.g., \code{mutex} parameter, global variables, pointer parameters or using locals with the \code{mutex}-statement. \begin{figure}[!t] \begin{multicols}{2} Waiting thread release A \end{pseudo} \columnbreak Signalling thread \begin{pseudo}[numbers=left, firstnumber=10] \begin{pseudo}[numbers=left, firstnumber=10,escapechar=|] acquire A //Code Section 5 acquire A & B //Code Section 6 signal A & B |\label{line:signal1}|signal A & B //Code Section 7 release A & B //Code Section 8 release A |\label{line:lastRelease}|release A \end{pseudo} \end{multicols} \caption{Internal scheduling with \gls{bulk-acq}} \label{lst:int-bulk-pseudo} \end{figure} \begin{figure}[!b] \begin{cfacode}[caption={Internal scheduling with \gls{bulk-acq}},label={lst:int-bulk-pseudo}] \end{cfacode} \begin{center} \begin{cfacode}[xleftmargin=.4\textwidth] } \end{cfacode} \columnbreak Signalling thread \begin{cfacode} \end{cfacode} \end{multicols} \caption{Equivalent \CFA code for listing \ref{lst:int-bulk-pseudo}} \label{lst:int-bulk-cfa} \end{figure} The complexity begins at code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired and is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should release \code{A & B}'' (line 16), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership trasnfer is required in order to prevent barging. Since the signalling thread still needs monitor \code{A}, simply waking up the waiting thread is not an option because it violates mutual exclusion. There are three options. \subsubsection{Delaying signals} The obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from mutiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups. \begin{cfacode}[caption={Equivalent \CFA code for listing \ref{lst:int-bulk-pseudo}},label={lst:int-bulk-cfa}] \end{cfacode} \begin{multicols}{2} Waiter Signaller \begin{pseudo}[numbers=left, firstnumber=6] \begin{pseudo}[numbers=left, firstnumber=6,escapechar=|] acquire A acquire A & B signal A & B release A & B //Secretly keep B here |\label{line:secret}|//Secretly keep B here release A //Wakeup waiter and transfer A & B \end{pseudo} \end{multicols} However, this solution can become much more complicated depending on what is executed while secretly holding B (at line 10). Indeed, nothing prevents signalling monitor A on a different condition variable: \begin{cfacode}[caption={Listing \ref{lst:int-bulk-pseudo}, with delayed signalling comments},label={lst:int-secret}] \end{cfacode} \end{figure} The complexity begins at code sections 4 and 8, which are where the existing semantics of internal scheduling need to be extended for multiple monitors. The root of the problem is that \gls{bulk-acq} is used in a context where one of the monitors is already acquired and is why it is important to define the behaviour of the previous pseudo-code. When the signaller thread reaches the location where it should release \code{A & B}'' (listing \ref{lst:int-bulk-pseudo} line \ref{line:signal1}), it must actually transfer ownership of monitor \code{B} to the waiting thread. This ownership transfer is required in order to prevent barging into \code{B} by another thread, since both the signalling and signalled threads still need monitor \code{A}. There are three options. \subsubsection{Delaying signals} The obvious solution to solve the problem of multi-monitor scheduling is to keep ownership of all locks until the last lock is ready to be transferred. It can be argued that that moment is when the last lock is no longer needed because this semantics fits most closely to the behaviour of single-monitor scheduling. This solution has the main benefit of transferring ownership of groups of monitors, which simplifies the semantics from multiple objects to a single group of objects, effectively making the existing single-monitor semantic viable by simply changing monitors to monitor groups. The naive approach to this solution is to only release monitors once every monitor in a group can be released. However, since some monitors are never released (i.e., the monitor of a thread), this interpretation means groups can grow but may never shrink. A more interesting interpretation is to only transfer groups as one but to recreate the groups on every operation, i.e., limit ownership transfer to one per \code{signal}/\code{release}. However, this solution can become much more complicated depending on what is executed while secretly holding B (listing \ref{lst:int-secret} line \ref{line:secret}). The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, listing \ref{lst:dependency} shows a slightly different example where a third thread is waiting on monitor \code{A}, using a different condition variable. Because the third thread is signalled when secretly holding \code{B}, the goal becomes unreachable. Depending on the order of signals (listing \ref{lst:dependency} line \ref{line:signal-ab} and \ref{line:signal-a}) two cases can happen : \paragraph{Case 1: thread $\alpha$ goes first.} In this case, the problem is that monitor \code{A} needs to be passed to thread $\beta$ when thread $\alpha$ is done with it. \paragraph{Case 2: thread $\beta$ goes first.} In this case, the problem is that monitor \code{B} needs to be retained and passed to thread $\alpha$ along with monitor \code{A}, which can be done directly or possibly using thread $\beta$ as an intermediate. \\ Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line \ref{line:signal-a} before line \ref{line:signal-ab} and get the reverse effect for listing \ref{lst:dependency}. In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogeneous group and therefore effectively precludes this approach. \subsubsection{Dependency graphs} \begin{figure} \begin{multicols}{3} release A \end{pseudo} \columnbreak Thread $\gamma$ \begin{pseudo}[numbers=left, firstnumber=1] \begin{pseudo}[numbers=left, firstnumber=6, escapechar=|] acquire A acquire A & B signal A & B release A & B signal A release A \end{pseudo} |\label{line:signal-ab}|signal A & B |\label{line:release-ab}|release A & B |\label{line:signal-a}|signal A |\label{line:release-a}|release A \end{pseudo} \columnbreak Thread $\beta$ \begin{pseudo}[numbers=left, firstnumber=1] \begin{pseudo}[numbers=left, firstnumber=12, escapechar=|] acquire A wait A release A \end{pseudo} |\label{line:release-aa}|release A \end{pseudo} \end{multicols} \caption{Dependency graph} \label{lst:dependency} \end{figure} The goal in this solution is to avoid the need to transfer ownership of a subset of the condition monitors. However, this goal is unreacheable in the previous example. Depending on the order of signals (line 12 and 15) two cases can happen. \paragraph{Case 1: thread 1 goes first.} In this case, the problem is that monitor A needs to be passed to thread 2 when thread 1 is done with it. \paragraph{Case 2: thread 2 goes first.} In this case, the problem is that monitor B needs to be passed to thread 1, which can be done directly or using thread 2 as an intermediate. \\ Note that ordering is not determined by a race condition but by whether signalled threads are enqueued in FIFO or FILO order. However, regardless of the answer, users can move line 15 before line 11 and get the reverse effect. In both cases, the threads need to be able to distinguish, on a per monitor basis, which ones need to be released and which ones need to be transferred, which means monitors cannot be handled as a single homogenous group and therefore effectively precludes this approach. \subsubsection{Dependency graphs} In the listing \ref{lst:int-bulk-pseudo} pseudo-code, there is a solution which statisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem it encounters is that it effectively boils down to resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner closer to polynomial. For example, the following code, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions: \begin{multicols}{2} \begin{pseudo} acquire A acquire B acquire C wait A & B & C release C release B release A \end{pseudo} \columnbreak \begin{pseudo} acquire A acquire B acquire C signal A & B & C release C release B release A \end{pseudo} \end{multicols} \begin{figure} \begin{cfacode}[caption={Pseudo-code for the three thread example.},label={lst:dependency}] \end{cfacode} \begin{center} \input{dependency} \end{figure} Listing \ref{lst:dependency} is the three thread example rewritten for dependency graphs. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graph being a complex and expensive endeavour, this solution is not the preffered one. In the listing \ref{lst:int-bulk-pseudo} pseudo-code, there is a solution that satisfies both barging prevention and mutual exclusion. If ownership of both monitors is transferred to the waiter when the signaller releases \code{A & B} and then the waiter transfers back ownership of \code{A} back to the signaller when it releases it, then the problem is solved (\code{B} is no longer in use at this point). Dynamically finding the correct order is therefore the second possible solution. The problem is effectively resolving a dependency graph of ownership requirements. Here even the simplest of code snippets requires two transfers and it seems to increase in a manner close to polynomial. This complexity explosion can be seen in listing \ref{lst:explosion}, which is just a direct extension to three monitors, requires at least three ownership transfer and has multiple solutions. Furthermore, the presence of multiple solutions for ownership transfer can cause deadlock problems if a specific solution is not consistently picked; In the same way that multiple lock acquiring order can cause deadlocks. \begin{figure} \begin{multicols}{2} \begin{pseudo} acquire A acquire B acquire C wait A & B & C release C release B release A \end{pseudo} \columnbreak \begin{pseudo} acquire A acquire B acquire C signal A & B & C release C release B release A \end{pseudo} \end{multicols} \begin{cfacode}[caption={Extension to three monitors of listing \ref{lst:int-bulk-pseudo}},label={lst:explosion}] \end{cfacode} \end{figure} Listing \ref{lst:dependency} is the three threads example used in the delayed signals solution. Figure \ref{fig:dependency} shows the corresponding dependency graph that results, where every node is a statement of one of the three threads, and the arrows the dependency of that statement (e.g., $\alpha1$ must happen before $\alpha2$). The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependency unfolds. Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one. \subsubsection{Partial signalling} \label{partial-sig} Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor B at lines 10 but does not wake the waiting thread since it is still using monitor A. Only when it reaches line 11 does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be release and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any downsides worth mentionning. Finally, the solution that is chosen for \CFA is to use partial signalling. Again using listing \ref{lst:int-bulk-pseudo}, the partial signalling solution transfers ownership of monitor \code{B} at lines \ref{line:signal1} to the waiter but does not wake the waiting thread since it is still using monitor \code{A}. Only when it reaches line \ref{line:lastRelease} does it actually wakeup the waiting thread. This solution has the benefit that complexity is encapsulated into only two actions, passing monitors to the next owner when they should be released and conditionally waking threads if all conditions are met. This solution has a much simpler implementation than a dependency graph solving algorithm, which is why it was chosen. Furthermore, after being fully implemented, this solution does not appear to have any significant downsides. While listing \ref{lst:dependency} is a complicated problem for previous solutions, it can be solved easily with partial signalling : \begin{itemize} \item When thread $\gamma$ reaches line \ref{line:release-ab} it transfers monitor \code{B} to thread $\alpha$ and continues to hold monitor \code{A}. \item When thread $\gamma$ reaches line \ref{line:release-a} it transfers monitor \code{A} to thread $\beta$ and wakes it up. \item When thread $\beta$ reaches line \ref{line:release-aa} it transfers monitor \code{A} to thread $\alpha$ and wakes it up. \item Problem solved! \end{itemize} % ====================================================================== % ====================================================================== % ====================================================================== \begin{figure} \begin{table} \begin{tabular}{|c|c|} \code{signal} & \code{signal_block} \\ \end{tabular} \caption{Dating service example using \code{signal} and \code{signal_block}. } \label{lst:datingservice} \end{figure} An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine\footnote{name to be discussed}. The example in listing \ref{lst:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour requires additional synchronisation when a two-way handshake is needed. To avoid this extraneous synchronisation, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor neither before nor after the call. \label{tbl:datingservice} \end{table} An important note is that, until now, signalling a monitor was a delayed operation. The ownership of the monitor is transferred only when the monitor would have otherwise been released, not at the point of the \code{signal} statement. However, in some cases, it may be more convenient for users to immediately transfer ownership to the thread that is waiting for cooperation, which is achieved using the \code{signal_block} routine. The example in table \ref{tbl:datingservice} highlights the difference in behaviour. As mentioned, \code{signal} only transfers ownership once the current critical section exits, this behaviour requires additional synchronization when a two-way handshake is needed. To avoid this explicit synchronization, the \code{condition} type offers the \code{signal_block} routine, which handles the two-way handshake as shown in the example. This feature removes the need for a second condition variables and simplifies programming. Like every other monitor semantic, \code{signal_block} uses barging prevention, which means mutual-exclusion is baton-passed both on the frond-end and the back-end of the call to \code{signal_block}, meaning no other thread can acquire the monitor either before or after the call. % ====================================================================== \end{tabular} \end{center} This method is more constrained and explicit, which helps users tone down the undeterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occuring. External scheduling can generally be done either in terms of control flow (e.g., \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strenghts and weaknesses but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s. This method is more constrained and explicit, which helps users reduce the non-deterministic nature of concurrency. Indeed, as the following examples demonstrates, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring. External scheduling can generally be done either in terms of control flow (e.g., Ada with \code{accept}, \uC with \code{_Accept}) or in terms of data (e.g., Go with channels). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control-flow semantics were chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines. The previous example shows a simple use \code{_Accept} versus \code{wait}/\code{signal} and its advantages. Note that while other languages often use \code{accept}/\code{select} as the core external scheduling keyword, \CFA uses \code{waitfor} to prevent name collisions with existing socket \acrshort{api}s. For the \code{P} member above using internal scheduling, the call to \code{wait} only guarantees that \code{V} is the last routine to access the monitor, allowing a third routine, say \code{isInUse()}, acquire mutual exclusion several times while routine \code{P} is waiting. On the other hand, external scheduling guarantees that while routine \code{P} is waiting, no routine other than \code{V} can acquire the monitor. % ====================================================================== % ====================================================================== In \uC, monitor declarations include an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: In \uC, a monitor class declaration includee an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: \begin{cfacode} \end{cfacode} Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Indeed, since there is no hard limit to the number of threads trying to acquire a monitor concurrently, performance is a significant concern. Here is the pseudo code for the entering phase of a monitor: Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Here is the pseudo code for the entering phase of a monitor: \begin{center} \begin{tabular}{l} \end{tabular} \end{center} For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instruction. However, a fast check for \pscode{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in the following figure: \end{figure} There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This technique cannot be used in \CFA because it relies on the fact that the monitor type enumerates (declares) all the acceptable routines. For OO languages this does not compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this is not the case; routines can be added to a type anywhere after its declaration. It is important to note that the bitmask approach does not actually require an exhaustive list of routines, but it requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units. The alternative is to alter the implementeation like this: There are other alternatives to these pictures, but in the case of this picture, implementing a fast accept check is relatively easy. Restricted to a fixed number of mutex members, N, the accept check reduces to updating a bitmask when the acceptor queue changes, a check that executes in a single instruction even with a fairly large number (e.g., 128) of mutex members. This approach requires a dense unique ordering of routines with an upper-bound and that ordering must be consistent across translation units. For OO languages these constraints are common, since objects only offer adding member routines consistently across translation units via inheritence. However, in \CFA users can extend objects with mutex routines that are only visible in certain translation unit. This means that establishing a program-wide dense-ordering among mutex routines can only be done in the program linking phase, and still could have issues when using dynamically shared objects. The alternative is to alter the implementation like this: \begin{center} \end{center} Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function-pointers replaces the single instruction bitmask compare with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additionnal searches on calls to \code{waitfor} statement to check if a routine is already queued in. Here, the mutex routine called is associated with a thread on the entry queue while a list of acceptable routines is kept seperately. Generating a mask dynamically means that the storage for the mask information can vary between calls to \code{waitfor}, allowing for more flexibility and extensions. Storing an array of accepted function-pointers replaces the single instruction bitmask compare with dereferencing a pointer followed by a linear search. Furthermore, supporting nested external scheduling (e.g., listing \ref{lst:nest-ext}) may now require additional searches for the \code{waitfor} statement to check if a routine is already queued. \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Example of nested external scheduling},label={lst:nest-ext}] monitor M {}; void foo( M & mutex a ) {} \end{cfacode} \caption{Example of nested external scheduling} \label{lst:nest-ext} \end{figure} Note that in the second picture, tasks need to always keep track of which routine they are attempting to acquire the monitor and the routine mask needs to have both a function pointer and a set of monitors, as will be discussed in the next section. These details where omitted from the picture for the sake of simplifying the representation. At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be prohibitively hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA. Note that in the second picture, tasks need to always keep track of the monitors associated with mutex routines, and the routine mask needs to have both a function pointer and a set of monitors, as is be discussed in the next section. These details are omitted from the picture for the sake of simplicity. At this point, a decision must be made between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost. Here however, the cost of flexibility cannot be trivially removed. In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write. This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problems than writing locks that are as flexible as external scheduling in \CFA. % ====================================================================== void g(M & mutex b, M & mutex c) { waitfor(f); //two monitors M => unkown which to pass to f(M & mutex) } \end{cfacode} waitfor(f); //two monitors M => unknown which to pass to f(M & mutex) } \end{cfacode} The obvious solution is to specify the correct monitor as follows: void g(M & mutex a, M & mutex b) { waitfor( f, b ); } \end{cfacode} This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behavior can be extended to multi-monitor \code{waitfor} statement as follows. //wait for call to f with argument b waitfor(f, b); } \end{cfacode} This syntax is unambiguous. Both locks are acquired and kept by \code{g}. When routine \code{f} is called, the lock for monitor \code{b} is temporarily transferred from \code{g} to \code{f} (while \code{g} still holds lock \code{a}). This behaviour can be extended to the multi-monitor \code{waitfor} statement as follows. \begin{cfacode} void g(M & mutex a, M & mutex b) { waitfor( f, a, b); //wait for call to f with argument a and b waitfor(f, a, b); } \end{cfacode} Note that the set of monitors passed to the \code{waitfor} statement must be entirely contained in the set of monitors already acquired in the routine. \code{waitfor} used in any other context is Undefined Behaviour. An important behavior to note is when a set of monitors only match partially : An important behaviour to note is when a set of monitors only match partially : \begin{cfacode} void bar() { f(a2, b); //fufill cooperation } \end{cfacode} While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling, as for routine calls, the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting condition. f(a2, b); //fulfill cooperation } \end{cfacode} While the equivalent can happen when using internal scheduling, the fact that conditions are specific to a set of monitors means that users have to use two different condition variables. In both cases, partially matching monitor sets does not wake-up the waiting thread. It is also important to note that in the case of external scheduling the order of parameters is irrelevant; \code{waitfor(f,a,b)} and \code{waitfor(f,b,a)} are indistinguishable waiting condition. % ====================================================================== % ====================================================================== Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expression, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitor passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usage of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading. Syntactically, the \code{waitfor} statement takes a function identifier and a set of monitors. While the set of monitors can be any list of expression, the function name is more restricted because the compiler validates at compile time the validity of the function type and the parameters used with the \code{waitfor} statement. It checks that the set of monitors passed in matches the requirements for a function call. Listing \ref{lst:waitfor} shows various usage of the waitfor statement and which are acceptable. The choice of the function type is made ignoring any non-\code{mutex} parameter. One limitation of the current implementation is that it does not handle overloading but overloading is possible. \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Various correct and incorrect uses of the waitfor statement},label={lst:waitfor}] monitor A{}; monitor B{}; waitfor(f4, a1); //Incorrect : f4 ambiguous waitfor(f2, a1, b2); //Undefined Behaviour : b2 may not acquired } \end{cfacode} \caption{Various correct and incorrect uses of the waitfor statement} \label{lst:waitfor} waitfor(f2, a1, b2); //Undefined Behaviour : b2 not mutex } \end{cfacode} \end{figure} Finally, for added flexibility, \CFA supports constructing complex \code{waitfor} mask using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} can be chained together using \code{or}; this chain forms a single statement that uses baton-pass to any one function that fits one of the function+monitor set passed in. To eanble users to tell which accepted function is accepted, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement. When multiple \code{waitfor} are chained together, only the statement corresponding to the accepted function is executed. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, that is only check of a matching function call already arrived and return immediately otherwise. Any and all of these clauses can be preceded by a \code{when} condition to dynamically construct the mask based on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones. Finally, for added flexibility, \CFA supports constructing a complex \code{waitfor} statement using the \code{or}, \code{timeout} and \code{else}. Indeed, multiple \code{waitfor} clauses can be chained together using \code{or}; this chain forms a single statement that uses baton-pass to any one function that fits one of the function+monitor set passed in. To enable users to tell which accepted function executed, \code{waitfor}s are followed by a statement (including the null statement \code{;}) or a compound statement, which is executed after the clause is triggered. A \code{waitfor} chain can also be followed by a \code{timeout}, to signify an upper bound on the wait, or an \code{else}, to signify that the call should be non-blocking, which checks for a matching function call already arrived and otherwise continues. Any and all of these clauses can be preceded by a \code{when} condition to dynamically toggle the accept clauses on or off based on some current state. Listing \ref{lst:waitfor2}, demonstrates several complex masks and some incorrect ones. \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Various correct and incorrect uses of the or, else, and timeout clause around a waitfor statement},label={lst:waitfor2}] monitor A{}; } \end{cfacode} \caption{Various correct and incorrect uses of the or, else, and timeout clause around a waitfor statement} \label{lst:waitfor2} \end{figure} An interesting use for the \code{waitfor} statement is destructor semantics. Indeed, the \code{waitfor} statement can accept any \code{mutex} routine, which includes the destructor (see section \ref{data}). However, with the semantics discussed until now, waiting for the destructor does not make any sense since using an object after its destructor is called is undefined behaviour. The simplest approach is to disallow \code{waitfor} on a destructor. However, a more expressive approach is to flip execution ordering when waiting for the destructor, meaning that waiting for the destructor allows the destructor to run after the current \code{mutex} routine, similarly to how a condition is signalled. \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Example of an executor which executes action in series until the destructor is called.},label={lst:dtor-order}] monitor Executer {}; struct Action; } \end{cfacode} \caption{Example of an executor which executes action in series until the destructor is called.} \label{lst:dtor-order} \end{figure} For example, listing \ref{lst:dtor-order} shows an example of an executor with an infinite loop, which waits for the destructor to break out of this loop. Switching the semantic meaning introduces an idiomatic way to terminate a task and/or wait for its termination via destruction.
• ## doc/proposals/concurrency/text/future.tex
r4e7a4e6 \chapter{Conclusion} As mentionned in the introduction, this thesis provides a minimal concurrency \acrshort{api} that is simple, efficient and usable as the basis for higher-level features. The approach presented is based on a lighweight thread system for parallelism which sits on top of clusters of processors. This M:N model is jugded to be both more efficient and allow more flexibility for users. Furthermore, this document introduces monitors as the main concurrency tool for users. This thesis also offers a novel approach which allows using multiple monitors at once without running into the Nested Monitor Problem~\cite{Lister77}. It also offers a full implmentation of the concurrency runtime wirtten enterily in \CFA, effectively the largest \CFA code base to date. % ====================================================================== % ====================================================================== \chapter{Future Work} \section{Future Work} % ====================================================================== % ====================================================================== \section{Flexible Scheduling} \label{futur:sched} An important part of concurrency is scheduling. Different scheduling algorithm can affact peformance (both in terms of average and variation). However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs. One solution is to offer various tweaking options to users, allowing the scheduler to be adjusted the to requirements of the workload. However, in order to be truly flexible, it would be interesting to allow users to add arbitrary data and arbirary scheduling algorithms to the scheduler. For example, a web server could attach Type-of-Service information to threads and have a ToS aware'' scheduling algorithm tailored to this specific web server. This path of flexible schedulers will be explored for \CFA. \subsection{Performance} \label{futur:perf} This thesis presents a first implementation of the \CFA runtime. Therefore, there is still significant work to do to improve performance. Many of the data structures and algorithms will change in the future to more efficient versions. For example, \CFA the number of monitors in a single \gls{bulk-acq} is only bound by the stack size, this is probably unnecessarily generous. It may be possible that limiting the number help increase performance. However, it is not obvious that the benefit would be significant. \section{Non-Blocking IO} \label{futur:nbio} While most of the parallelism tools However, many modern workloads are not bound on computation but on IO operations, an common case being webservers and XaaS (anything as a service). These type of workloads often require significant engineering around amortising costs of blocking IO operations. While improving throughtput of these operations is outside what \CFA can do as a language, it can help users to make better use of the CPU time otherwise spent waiting on IO operations. The current trend is to use asynchronous programming using tools like callbacks and/or futurs and promises\cite. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear \subsection{Flexible Scheduling} \label{futur:sched} An important part of concurrency is scheduling. Different scheduling algorithm can affect performance (both in terms of average and variation). However, no single scheduler is optimal for all workloads and therefore there is value in being able to change the scheduler for given programs. One solution is to offer various tweaking options to users, allowing the scheduler to be adjusted to the requirements of the workload. However, in order to be truly flexible, it would be interesting to allow users to add arbitrary data and arbitrary scheduling algorithms to the scheduler. For example, a web server could attach Type-of-Service information to threads and have a ToS aware'' scheduling algorithm tailored to this specific web server. This path of flexible schedulers will be explored for \CFA. \section{Other concurrency tools} \label{futur:tools} While monitors offer a flexible and powerful concurent core for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package. Example of such tools can include simple locks and condition variables, futures and promises\cite{promises}, and executors. These additional features are useful when monitors offer a level of abstraction which is indaquate for certain tasks. \subsection{Non-Blocking IO} \label{futur:nbio} While most of the parallelism tools are aimed at data parallelism and control-flow parallelism, many modern workloads are not bound on computation but on IO operations, a common case being web-servers and XaaS (anything as a service). These type of workloads often require significant engineering around amortizing costs of blocking IO operations. At its core, Non-Blocking IO is a operating system level feature that allows queuing IO operations (e.g., network operations) and registering for notifications instead of waiting for requests to complete. In this context, the role of the language make Non-Blocking IO easily available and with low overhead. The current trend is to use asynchronous programming using tools like callbacks and/or futures and promises, which can be seen in frameworks like Node.js~\cite{NodeJs} for JavaScript, Spring MVC~\cite{SpringMVC} for Java and Django~\cite{Django} for Python. However, while these are valid solutions, they lead to code that is harder to read and maintain because it is much less linear. \section{Implicit threading} \label{futur:implcit} Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The cannonical example of implcit parallelism is parallel for loops, which are the simplest example of a divide and conquer algorithm\cite{uC++book}. Listing \ref{lst:parfor} shows three different code examples that accomplish pointwise sums of large arrays. Note that none of these example explicitly declare any concurrency or parallelism objects. \subsection{Other concurrency tools} \label{futur:tools} While monitors offer a flexible and powerful concurrent core for \CFA, other concurrency tools are also necessary for a complete multi-paradigm concurrency package. Example of such tools can include simple locks and condition variables, futures and promises~\cite{promises}, executors and actors. These additional features are useful when monitors offer a level of abstraction that is inadequate for certain tasks. \begin{figure} \subsection{Implicit threading} \label{futur:implcit} Simpler applications can benefit greatly from having implicit parallelism. That is, parallelism that does not rely on the user to write concurrency. This type of parallelism can be achieved both at the language level and at the library level. The canonical example of implicit parallelism is parallel for loops, which are the simplest example of a divide and conquer algorithm~\cite{uC++book}. Table \ref{lst:parfor} shows three different code examples that accomplish point-wise sums of large arrays. Note that none of these examples explicitly declare any concurrency or parallelism objects. \begin{table} \begin{center} \begin{tabular}[t]{|c|c|c|} \caption{For loop to sum numbers: Sequential, using library parallelism and language parallelism.} \label{lst:parfor} \end{figure} \end{table} Implicit parallelism is a general solution and therefore has its limitations. However, it is a quick and simple approach to parallelism which may very well be sufficient for smaller applications and reduces the amount of boiler-plate that is needed to start benefiting from parallelism in modern CPUs. Implicit parallelism is a restrictive solution and therefore has its limitations. However, it is a quick and simple approach to parallelism, which may very well be sufficient for smaller applications and reduces the amount of boiler-plate needed to start benefiting from parallelism in modern CPUs.
• ## doc/proposals/concurrency/text/internals.tex
r4e7a4e6 \chapter{Behind the scene} There are several challenges specific to \CFA when implementing concurrency. These challenges are a direct result of \gls{bulk-acq} and loose object-definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs. This is to avoid the chicken and egg problem \cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal, means that memory management is a constant concern in the design of the system. The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues. The queue design needs to be intrusive\cite{IntrusiveData} to avoid the need for memory allocation, which entails that all the nodes need specific fields to keep track of all needed information. Since many concurrency operations can use an unbound amount of memory (depending on \gls{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient. The only variable sized container that does not require memory allocation is the callstack, which is heavily used in the implementation of internal scheduling. Particularly variable length arrays, which are used extensively. Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable length. The threads and the condition both allow a fixed amount of memory to be stored, while mutex-routines and the actual blocking call allow for an unbound amount (though the later is preferable in terms of performance). Note that since the major contributions of this thesis are extending monitor semantics to \gls{bulk-acq} and loose object definitions, any challenges that are not resulting of these characteristiques of \CFA are considered as solved problems and therefore not discussed further. There are several challenges specific to \CFA when implementing concurrency. These challenges are a direct result of \gls{bulk-acq} and loose object-definitions. These two constraints are the root cause of most design decisions in the implementation. Furthermore, to avoid contention from dynamically allocating memory in a concurrent environment, the internal-scheduling design is (almost) entirely free of mallocs. This approach avoids the chicken and egg problem~\cite{Chicken} of having a memory allocator that relies on the threading system and a threading system that relies on the runtime. This extra goal means that memory management is a constant concern in the design of the system. The main memory concern for concurrency is queues. All blocking operations are made by parking threads onto queues and all queues are designed with intrusive nodes, where each not has pre-allocated link fields for chaining, to avoid the need for memory allocation. Since several concurrency operations can use an unbound amount of memory (depending on \gls{bulk-acq}), statically defining information in the intrusive fields of threads is insufficient.The only way to use a variable amount of memory without requiring memory allocation is to pre-allocate large buffers of memory eagerly and store the information in these buffers. Conveniently, the callstack fits that description and is easy to use, which is why it is used heavily in the implementation of internal scheduling, particularly variable-length arrays. Since stack allocation is based around scope, the first step of the implementation is to identify the scopes that are available to store the information, and which of these can have a variable-length array. The threads and the condition both have a fixed amount of memory, while mutex-routines and the actual blocking call allow for an unbound amount, within the stack size. Note that since the major contributions of this thesis are extending monitor semantics to \gls{bulk-acq} and loose object definitions, any challenges that are not resulting of these characteristics of \CFA are considered as solved problems and therefore not discussed. % ====================================================================== % ====================================================================== The first step towards the monitor implementation is simple mutex-routines using monitors. In the single monitor case, this is done using the entry/exit procedure highlighted in listing \ref{lst:entry1}. This entry/exit procedure does not actually have to be extended to support multiple monitors, indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlocks\cite{Havender68}. In \CFA, ordering of monitor relies on memory ordering, this is sufficient because all objects are guaranteed to have distinct non-overlaping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is undefined behavior. When a mutex call is made, the concerned monitors are agregated into a variable-length pointer array and sorted based on pointer values. This array presists for the entire duration of the mutual-exclusion and its ordering reused extensively. The first step towards the monitor implementation is simple mutex-routines. In the single monitor case, mutual-exclusion is done using the entry/exit procedure in listing \ref{lst:entry1}. The entry/exit procedures do not have to be extended to support multiple monitors. Indeed it is sufficient to enter/leave monitors one-by-one as long as the order is correct to prevent deadlock~\cite{Havender68}. In \CFA, ordering of monitor acquisition relies on memory ordering. This approach is sufficient because all objects are guaranteed to have distinct non-overlapping memory layouts and mutual-exclusion for a monitor is only defined for its lifetime, meaning that destroying a monitor while it is acquired is Undefined Behavior. When a mutex call is made, the concerned monitors are aggregated into a variable-length pointer-array and sorted based on pointer values. This array persists for the entire duration of the mutual-exclusion and its ordering reused extensively. \begin{figure} \begin{multicols}{2} \end{pseudo} \end{multicols} \caption{Initial entry and exit routine for monitors} \label{lst:entry1} \begin{pseudo}[caption={Initial entry and exit routine for monitors},label={lst:entry1}] \end{pseudo} \end{figure} Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support. However, it is shown that entry-point locking solves most of the issues. First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options : callsite and entry-point locking, i.e. acquiring the monitors before making a mutex routine call or as the first operation of the mutex routine-call. For example: \begin{figure}[H] First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore, the main question is how to support \code{dtype} polymorphism. It is important to present the difference between the two acquiring options : \glspl{callsite-locking} and entry-point locking, i.e., acquiring the monitors before making a mutex routine-call or as the first operation of the mutex routine-call. For example: \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|} \end{center} \caption{Call-site vs entry-point locking for mutex calls} \label{fig:locking-site} \end{figure} Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor routine is desired, writing the mutex routine is possible with the proper trait, for example: \label{tbl:locking-site} \end{table} Note the \code{mutex} keyword relies on the type system, which means that in cases where a generic monitor-routine is desired, writing the mutex routine is possible with the proper trait, e.g.: \begin{cfacode} //Incorrect: T may not be monitor \end{cfacode} Both entry-point and callsite locking are feasible implementations. The current \CFA implementations uses entry-point locking because it requires less work when using \gls{raii}, effectively transferring the burden of implementation to object construction/destruction. The same could be said of callsite locking, the difference being that the later does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e.: the function body. Furthermore, entry-point locking requires less code generation since any useful routine is called at least as often as it is define, there can be only one entry-point but many callsites. Both entry-point and \gls{callsite-locking} are feasible implementations. The current \CFA implementations uses entry-point locking because it requires less work when using \gls{raii}, effectively transferring the burden of implementation to object construction/destruction. It is harder to use \gls{raii} for call-site locking, as it does not necessarily have an existing scope that matches exactly the scope of the mutual exclusion, i.e.: the function body. For example, the monitor call can appear in the middle of an expression. Furthermore, entry-point locking requires less code generation since any useful routine multiple times, but there is only one entry-point for many call-sites. % ====================================================================== % ====================================================================== Figure \ref{fig:system1} shows a high-level picture if the \CFA runtime system in regards to concurrency. Each component of the picture is explained in details in the fllowing sections. Figure \ref{fig:system1} shows a high-level picture if the \CFA runtime system in regards to concurrency. Each component of the picture is explained in details in the flowing sections. \begin{figure} \subsection{Context Switching} As mentionned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading. This is because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch is always inside the same function. Threads however do not context-switch between each other directly. They context-switch to the scheduler. This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operation happen. Obiously, this has the cost of doubling the context-switch cost because threads must context-switch to an intermediate stack. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield}(see section \ref{results}). additionally, for users in need for optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch to use manually (or as part of monitors). This option is not currently present in \CFA but the changes required to add it are strictly additive. As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks. To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific function call. This assumption means that the context-switch only has to copy the callee-saved registers onto the stack and then switch the stack registers with the ones of the target coroutine/thread. Note that the instruction pointer can be left untouched since the context-switch is always inside the same function. Threads however do not context-switch between each other directly. They context-switch to the scheduler. This method is called a 2-step context-switch and has the advantage of having a clear distinction between user code and the kernel where scheduling and other system operation happen. Obviously, this doubles the context-switch cost because threads must context-switch to an intermediate stack. The alternative 1-step context-switch uses the stack of the from'' thread to schedule and then context-switches directly to the to'' thread. However, the performance of the 2-step context-switch is still superior to a \code{pthread_yield} (see section \ref{results}). Additionally, for users in need for optimal performance, it is important to note that having a 2-step context-switch as the default does not prevent \CFA from offering a 1-step context-switch (akin to the Microsoft \code{SwitchToFiber}~\cite{switchToWindows} routine). This option is not currently present in \CFA but the changes required to add it are strictly additive. \subsection{Processors} Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically pthreads in the current implementation of \CFA. Indeed, any parallelism must go through operating-system librairies. However, \glspl{uthread} are still the main source of concurrency, processors are simply the underlying source of parallelism. Indeed, processor \glspl{kthread} simply fetch a \glspl{uthread} from the scheduler and run, they are effectively executers for user-threads. The main benefit of this approach is that it offers a well defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics. Parallelism in \CFA is built around using processors to specify how much parallelism is desired. \CFA processors are object wrappers around kernel threads, specifically pthreads in the current implementation of \CFA. Indeed, any parallelism must go through operating-system libraries. However, \glspl{uthread} are still the main source of concurrency, processors are simply the underlying source of parallelism. Indeed, processor \glspl{kthread} simply fetch a \gls{uthread} from the scheduler and run it; they are effectively executers for user-threads. The main benefit of this approach is that it offers a well defined boundary between kernel code and user code, for example, kernel thread quiescing, scheduling and interrupt handling. Processors internally use coroutines to take advantage of the existing context-switching semantics. \subsection{Stack management} One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all pthreads created also have a stack created with them, which should be used as much as possible. Normally, coroutines also create there own stack to run on, however, in the case of the coroutines used for processors, these coroutines run directly on the kernel thread stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e. the initial kernel thread that is given to any program. In order to respect user expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor. One of the challenges of this system is to reduce the footprint as much as possible. Specifically, all pthreads created also have a stack created with them, which should be used as much as possible. Normally, coroutines also create there own stack to run on, however, in the case of the coroutines used for processors, these coroutines run directly on the \gls{kthread} stack, effectively stealing the processor stack. The exception to this rule is the Main Processor, i.e. the initial \gls{kthread} that is given to any program. In order to respect C user-expectations, the stack of the initial kernel thread, the main stack of the program, is used by the main user thread rather than the main processor, which can grow very large \subsection{Preemption} \label{preemption} Finally, an important aspect for any complete threading system is preemption. As mentionned in chapter \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, preemption is desireable because it adds a degree of isolation among threads. In a fully cooperative system, any thread that runs into a long loop can starve other threads, while in a preemptive system starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which reduces significantly a programmer burden. Obviously, preemption is not optimal for every workload, however any preemptive system can become a cooperative system by making the time-slices extremely large. Which is why \CFA uses a preemptive threading system. Preemption in \CFA is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it sorts these expiration times in a list and sets a kernel timer for the closest one, effectively stepping between preemption events on each signals sent by the timer. These timers use the linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread. This results in an implementation problem,because when delivering signals to a process, the kernel documentation states that the signal can be delivered to any kernel thread for which the signal is not blocked i.e. : Finally, an important aspect for any complete threading system is preemption. As mentioned in chapter \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution. Indeed, preemption is desirable because it adds a degree of isolation among threads. In a fully cooperative system, any thread that runs a long loop can starve other threads, while in a preemptive system, starvation can still occur but it does not rely on every thread having to yield or block on a regular basis, which reduces significantly a programmer burden. Obviously, preemption is not optimal for every workload, however any preemptive system can become a cooperative system by making the time-slices extremely large. Therefore, \CFA uses a preemptive threading system. Preemption in \CFA is based on kernel timers, which are used to run a discrete-event simulation. Every processor keeps track of the current time and registers an expiration time with the preemption system. When the preemption system receives a change in preemption, it inserts the time in a sorted order and sets a kernel timer for the closest one, effectively stepping through preemption events on each signal sent by the timer. These timers use the Linux signal {\tt SIGALRM}, which is delivered to the process rather than the kernel-thread. This results in an implementation problem, because when delivering signals to a process, the kernel can deliver the signal to any kernel thread for which the signal is not blocked, i.e. : \begin{quote} A process-directed signal may be delivered to any one of the threads that does not currently have the signal blocked. If more than one of the threads has the signal unblocked, then the kernel chooses an arbitrary thread to which to deliver the signal. SIGNAL(7) - Linux Programmer's Manual \end{quote} For the sake of simplicity and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every thread except one. Now because of how involontary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread. Involuntary context-switching is done by sending signal {\tt SIGUSER1} to the corresponding processor and having the thread yield from inside the signal handler. Effectively context-switching away from the signal-handler back to the kernel and the signal-handler frame is eventually unwound when the thread is scheduled again. This approach means that a signal-handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal-handlers save and restore signal masks because user-thread migration can cause signal mask to migrate from one kernel thread to another. This behaviour is only a problem if all kernel threads among which a user thread can migrate differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distiguishes async-signal-safe'' functions from other functions}. However, since the kernel thread hanlding preemption requires a different signal mask, executing user threads on the kernel alarm thread can cause deadlocks. For this reason, the alarm thread is on a tight loop around a system call to \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g., on thread termination). This unblocking is also done using {\tt SIGALRM}, but sent throught the \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel. For the sake of simplicity and in order to prevent the case of having two threads receiving alarms simultaneously, \CFA programs block the {\tt SIGALRM} signal on every kernel thread except one. Now because of how involuntary context-switches are handled, the kernel thread handling {\tt SIGALRM} cannot also be a processor thread. Involuntary context-switching is done by sending signal {\tt SIGUSER1} to the corresponding proces\-sor and having the thread yield from inside the signal handler. This approach effectively context-switches away from the signal-handler back to the kernel and the signal-handler frame is eventually unwound when the thread is scheduled again. As a result, a signal-handler can start on one kernel thread and terminate on a second kernel thread (but the same user thread). It is important to note that signal-handlers save and restore signal masks because user-thread migration can cause a signal mask to migrate from one kernel thread to another. This behaviour is only a problem if all kernel threads, among which a user thread can migrate, differ in terms of signal masks\footnote{Sadly, official POSIX documentation is silent on what distinguishes async-signal-safe'' functions from other functions.}. However, since the kernel thread handling preemption requires a different signal mask, executing user threads on the kernel-alarm thread can cause deadlocks. For this reason, the alarm thread is in a tight loop around a system call to \code{sigwaitinfo}, requiring very little CPU time for preemption. One final detail about the alarm thread is how to wake it when additional communication is required (e.g., on thread termination). This unblocking is also done using {\tt SIGALRM}, but sent through the \code{pthread_sigqueue}. Indeed, \code{sigwait} can differentiate signals sent from \code{pthread_sigqueue} from signals sent from alarms or the kernel. \subsection{Scheduler} Finally, an aspect that was not mentionned yet is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling. Further discussion on scheduling is present in section \label{futur:sched}. Finally, an aspect that was not mentioned yet is the scheduling algorithm. Currently, the \CFA scheduler uses a single ready queue for all processors, which is the simplest approach to scheduling. Further discussion on scheduling is present in section \ref{futur:sched}. % ====================================================================== \end{center} \caption{Traditional illustration of a monitor} \label{fig:monitor} \end{figure} This picture has several components, the two most important being the entry-queue and the AS-stack. The entry-queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor-signalor (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next. For \CFA, this picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it is non longer helpful to attach the condition to a single monitor. Secondly, the thread waiting on the conditions has to be seperated multiple monitors, which yields : \end{figure} This picture has several components, the two most important being the entry-queue and the AS-stack. The entry-queue is an (almost) FIFO list where threads waiting to enter are parked, while the acceptor-signaler (AS) stack is a FILO list used for threads that have been signalled or otherwise marked as running next. For \CFA, this picture does not have support for blocking multiple monitors on a single condition. To support \gls{bulk-acq} two changes to this picture are required. First, it is no longer helpful to attach the condition to \emph{a single} monitor. Secondly, the thread waiting on the condition has to be separated across multiple monitors, seen in figure \ref{fig:monitor_cfa}. \begin{figure}[H] \end{figure} This picture and the proper entry and leave algorithms is the fundamental implementation of internal scheduling (see listing \ref{lst:entry2}). Note that when threads are moved from the condition to the AS-stack, it splits the thread into to pieces. The thread is woken up when all the pieces have moved from the AS-stacks to the active thread seat. In this picture, the threads are split into halves but this is only because there are two monitors in this picture. For a specific signaling operation every monitor needs a piece of thread on its AS-stack. This picture and the proper entry and leave algorithms (see listing \ref{lst:entry2}) is the fundamental implementation of internal scheduling. Note that when a thread is moved from the condition to the AS-stack, it is conceptually split the thread into N pieces, where N is the number of monitors specified in the parameter list. The thread is woken up when all the pieces have popped from the AS-stacks and made active. In this picture, the threads are split into halves but this is only because there are two monitors. For a specific signaling operation every monitor needs a piece of thread on its AS-stack. \begin{figure}[b] \end{pseudo} \end{multicols} \caption{Entry and exit routine for monitors with internal scheduling} \label{lst:entry2} \end{figure} Some important things to notice about the exit routine. The solution discussed in \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a seperate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock safe as well as preventing any potential barging. The data structure used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the callstack of the \code{wait} and \code{signal_block} routines. \begin{pseudo}[caption={Entry and exit routine for monitors with internal scheduling},label={lst:entry2}] \end{pseudo} \end{figure} Some important things to notice about the exit routine. The solution discussed in \ref{intsched} can be seen in the exit routine of listing \ref{lst:entry2}. Basically, the solution boils down to having a separate data structure for the condition queue and the AS-stack, and unconditionally transferring ownership of the monitors but only unblocking the thread when the last monitor has transferred ownership. This solution is deadlock safe as well as preventing any potential barging. The data structure used for the AS-stack are reused extensively for external scheduling, but in the case of internal scheduling, the data is allocated using variable-length arrays on the call-stack of the \code{wait} and \code{signal_block} routines. \begin{figure}[H] \end{figure} Figure \ref{fig:structs} shows a high level representation of these data-structures. The main idea behind them is that, while figure \ref{fig:monitor_cfa} is a nice illustration in theory, in practice breaking a threads into multiple pieces to put unto intrusive stacks does not make sense. The \code{condition node} is the data structure that is queued into a condition variable and, when signaled, the condition queue is popped and each \code{condition criterion} are moved to the AS-stack. Once all the criterion have be popped from their respective AS-stacks, the thread is woken-up, which is what is shown in listing \ref{lst:entry2}. Figure \ref{fig:structs} shows a high-level representation of these data-structures. The main idea behind them is that, a thread cannot contain an arbitrary number of intrusive stacks for linking onto monitor. The \code{condition node} is the data structure that is queued onto a condition variable and, when signaled, the condition queue is popped and each \code{condition criterion} are moved to the AS-stack. Once all the criterion have be popped from their respective AS-stacks, the thread is woken-up, which is what is shown in listing \ref{lst:entry2}. % ====================================================================== % ====================================================================== % ====================================================================== Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentionned in section \ref{extsched}. For internal scheduling, these queues are part of condition variables which are still unique for a given scheduling operation (e.g., no single statment uses multiple conditions). However, in the case of external scheduling, there is no equivalent object which is associated with \code{waitfor} statements. This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired. The monitors being the only objects that have sufficient lifetime and are available on both sides of the \code{waitfor} statment. This requires an algorithm to choose which monitor holds the relevant queue. It is also important that said algorithm be independent of the order in which users list parameters. The proposed algorithm is to fall back on monitor lock ordering and specify that the monitor that is acquired first is the one with the relevant wainting queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. Similarly to internal scheduling, external scheduling for multiple monitors relies on the idea that waiting-thread queues are no longer specific to a single monitor, as mentioned in section \ref{extsched}. For internal scheduling, these queues are part of condition variables, which are still unique for a given scheduling operation (e.g., no signal statement uses multiple conditions). However, in the case of external scheduling, there is no equivalent object which is associated with \code{waitfor} statements. This absence means the queues holding the waiting threads must be stored inside at least one of the monitors that is acquired. These monitors being the only objects that have sufficient lifetime and are available on both sides of the \code{waitfor} statement. This requires an algorithm to choose which monitor holds the relevant queue. It is also important that said algorithm be independent of the order in which users list parameters. The proposed algorithm is to fall back on monitor lock ordering (sorting by address) and specify that the monitor that is acquired first is the one with the relevant waiting queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects but that is a reasonable constraint. This algorithm choice has two consequences : \begin{itemize} \item The queue of the highest priority monitor is no longer a true FIFO queue because threads can be moved to the front of the queue. These queues need to contain a set of monitors for each of the waiting threads. Therefore, another thread whose set contains the same highest priority monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing. \item The queue of the lowest priority monitor is both required and potentially unused. Indeed, since it is not known at compile time which monitor will be the lowest priority monitor, every monitor needs to have the correct queues even though it is possible that some queues will go unused for the entire duration of the program, for example if a monitor is only used in a specific pair. \item The queue of the monitor with the lowest address is no longer a true FIFO queue because threads can be moved to the front of the queue. These queues need to contain a set of monitors for each of the waiting threads. Therefore, another thread whose set contains the same lowest address monitor but different lower priority monitors may arrive first but enter the critical section after a thread with the correct pairing. \item The queue of the lowest priority monitor is both required and potentially unused. Indeed, since it is not known at compile time which monitor is the monitor with have the lowest address, every monitor needs to have the correct queues even though it is possible that some queues go unused for the entire duration of the program, for example if a monitor is only used in a specific pair. \end{itemize} Therefore, the following modifications need to be made to support external scheduling : \begin{itemize} \item The threads waiting on the entry-queue need to keep track of which routine is trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack so the thread only needs to keep a pointer to that information. \item The monitors need to keep a mask of acceptable routines. This mask contains for each acceptable routine, a routine pointer and an array of monitors to go with it. It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. Note that the complete mask can be pushed to any owned monitors, regardless of \code{when} statements, the \code{waitfor} statement is used in a context where the thread already has full ownership of (at least) every concerned monitor and therefore monitors will refuse all calls no matter what. \item The threads waiting on the entry-queue need to keep track of which routine it is trying to enter, and using which set of monitors. The \code{mutex} routine already has all the required information on its stack so the thread only needs to keep a pointer to that information. \item The monitors need to keep a mask of acceptable routines. This mask contains for each acceptable routine, a routine pointer and an array of monitors to go with it. It also needs storage to keep track of which routine was accepted. Since this information is not specific to any monitor, the monitors actually contain a pointer to an integer on the stack of the waiting thread. Note that if a thread has acquired two monitors but executes a \code{waitfor} with only one monitor as a parameter, setting the mask of acceptable routines to both monitors will not cause any problems since the extra monitor will not change ownership regardless. This becomes relevant when \code{when} clauses affect the number of monitors passed to a \code{waitfor} statement. \item The entry/exit routine need to be updated as shown in listing \ref{lst:entry3}. \end{itemize} \subsection{External scheduling - destructors} Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call order inversion. Indeed, when waiting for the destructors, storage is need for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the callstack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed on to the AS-stack for later. The waitfor semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor} Finally, to support the ordering inversion of destructors, the code generation needs to be modified to use a special entry routine. This routine is needed because of the storage requirements of the call order inversion. Indeed, when waiting for the destructors, storage is need for the waiting context and the lifetime of said storage needs to outlive the waiting operation it is needed for. For regular \code{waitfor} statements, the call-stack of the routine itself matches this requirement but it is no longer the case when waiting for the destructor since it is pushed on to the AS-stack for later. The waitfor semantics can then be adjusted correspondingly, as seen in listing \ref{lst:entry-dtor} \begin{figure} \end{pseudo} \end{multicols} \caption{Entry and exit routine for monitors with internal scheduling and external scheduling} \label{lst:entry3} \begin{pseudo}[caption={Entry and exit routine for monitors with internal scheduling and external scheduling},label={lst:entry3}] \end{pseudo} \end{figure} \end{pseudo} \end{multicols} \caption{Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors} \label{lst:entry-dtor} \end{figure} \begin{pseudo}[caption={Pseudo code for the \code{waitfor} routine and the \code{mutex} entry routine for destructors},label={lst:entry-dtor}] \end{pseudo} \end{figure}
• ## doc/proposals/concurrency/text/parallelism.tex
r4e7a4e6 % # # # # # # # ####### ####### ####### ####### ### ##### # # \chapter{Parallelism} Historically, computer performance was about processor speeds and instructions count. However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is not longer reasonnable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization. The lowest-level approach of parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \glspl{kthread} are now mostly used as an implementation tool rather than a user oriented one. There are several alternatives to solve these issues that all have strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads. Historically, computer performance was about processor speeds and instructions count. However, with heat dissipation being a direct consequence of speed increase, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}. In this decade, it is not longer reasonable to create a high-performance application without caring about parallelism. Indeed, parallelism is an important aspect of performance and more specifically throughput and hardware utilization. The lowest-level approach of parallelism is to use \glspl{kthread} in combination with semantics like \code{fork}, \code{join}, etc. However, since these have significant costs and limitations, \glspl{kthread} are now mostly used as an implementation tool rather than a user oriented one. There are several alternatives to solve these issues that all have strengths and weaknesses. While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics, they simply move costs in order to achieve better performance for certain workloads. \section{Paradigm} \section{Paradigms} \subsection{User-level threads} A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provide but can be used on a much larger scale. This approach is the most powerfull solution as it allows all the features of multi-threading, while removing several of the more expensive costs of kernel threads. The down side is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong garantees but the parallelism toolkit offers very little to reduce complexity in itself. A direct improvement on the \gls{kthread} approach is to use \glspl{uthread}. These threads offer most of the same features that the operating system already provide but can be used on a much larger scale. This approach is the most powerful solution as it allows all the features of multi-threading, while removing several of the more expensive costs of kernel threads. The down side is that almost none of the low-level threading problems are hidden; users still have to think about data races, deadlocks and synchronization issues. These issues can be somewhat alleviated by a concurrency toolkit with strong guarantees but the parallelism toolkit offers very little to reduce complexity in itself. Examples of languages that support \glspl{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}. \subsection{Fibers : user-level threads without preemption} \label{fibers} A popular varient of \glspl{uthread} is what is often refered to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantical differences with \glspl{uthread}. The significant difference between \glspl{uthread} and \glspl{fiber} is the lack of \gls{preemption} in the later one. Advocates of \glspl{fiber} list their high performance and ease of implementation as majors strenghts of \glspl{fiber} but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers. A popular variant of \glspl{uthread} is what is often referred to as \glspl{fiber}. However, \glspl{fiber} do not present meaningful semantical differences with \glspl{uthread}. The significant difference between \glspl{uthread} and \glspl{fiber} is the lack of \gls{preemption} in the latter. Advocates of \glspl{fiber} list their high performance and ease of implementation as majors strengths but the performance difference between \glspl{uthread} and \glspl{fiber} is controversial, and the ease of implementation, while true, is a weak argument in the context of language design. Therefore this proposal largely ignores fibers. An example of a language that uses fibers is Go~\cite{Go} \subsection{Paradigm performance} While the choice between the three paradigms listed above may have significant performance implication, it is difficult to pindown the performance implications of chosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantess that the \gls{pool} based system has the best performance thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilisation, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough the paradigm choice is largely amortised by the actual work done. While the choice between the three paradigms listed above may have significant performance implication, it is difficult to pin-down the performance implications of choosing a model at the language level. Indeed, in many situations one of these paradigms may show better performance but it all strongly depends on the workload. Having a large amount of mostly independent units of work to execute almost guarantees that the \gls{pool} based system has the best performance thanks to the lower memory overhead (i.e., no thread stack per job). However, interactions among jobs can easily exacerbate contention. User-level threads allow fine-grain context switching, which results in better resource utilization, but a context switch is more expensive and the extra control means users need to tweak more variables to get the desired performance. Finally, if the units of uninterrupted work are large enough the paradigm choice is largely amortized by the actual work done. \section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel} A \gls{cfacluster} is a group of \gls{kthread} executed in isolation. \Glspl{uthread} are scheduled on the \glspl{kthread} of a given \gls{cfacluster}, allowing organization between \glspl{uthread} and \glspl{kthread}. It is important that \glspl{kthread} belonging to a same \glspl{cfacluster} have homogeneous settings, otherwise migrating a \gls{uthread} from one \gls{kthread} to the other can cause issues. A \gls{cfacluster} also offers a plugable scheduler that can optimize the workload generated by the \glspl{uthread}. \Glspl{cfacluster} have not been fully implmented in the context of this thesis, currently \CFA only supports one \gls{cfacluster}, the initial one. The objective of \gls{cfacluster} is to group \gls{kthread} with identical settings together. \Glspl{uthread} can be scheduled on a \glspl{kthread} of a given \gls{cfacluster}, allowing organization between \glspl{kthread} and \glspl{uthread}. It is important that \glspl{kthread} belonging to a same \glspl{cfacluster} have homogenous settings, otherwise migrating a \gls{uthread} from one \gls{kthread} to the other can cause issues. \Glspl{cfacluster} have not been fully implemented in the context of this thesis, currently \CFA only supports one \gls{cfacluster}, the initial one. \subsection{Future Work: Machine setup}\label{machine} While this was not done in the context of this thesis, another important aspect of clusters is affinity. While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heteregenous setups. For example, system using \acrshort{numa} configurations may benefit from users being able to tie clusters and\/or kernel threads to certains CPU cores. OS support for CPU affinity is now common \cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx} which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity. While this was not done in the context of this thesis, another important aspect of clusters is affinity. While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups. For example, a system using \acrshort{numa} configurations may benefit from users being able to tie clusters and\/or kernel threads to certain CPU cores. OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx} which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity. % \subsection{Paradigms}\label{cfaparadigms} % Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} is the default paradigm in \CFA. However, disabling \gls{preemption} on the \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policy can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}. \subsection{Paradigms}\label{cfaparadigms} Given these building blocks, it is possible to reproduce all three of the popular paradigms. Indeed, \glspl{uthread} is the default paradigm in \CFA. However, disabling \gls{preemption} on the \gls{cfacluster} means \glspl{cfathread} effectively become \glspl{fiber}. Since several \glspl{cfacluster} with different scheduling policy can coexist in the same application, this allows \glspl{fiber} and \glspl{uthread} to coexist in the runtime of an application. Finally, it is possible to build executors for thread pools from \glspl{uthread} or \glspl{fiber}, which includes specialize jobs like actors~\cite{Actors}.
• ## doc/proposals/concurrency/text/results.tex
r4e7a4e6 % ====================================================================== \section{Machine setup} Table \ref{tab:machine} shows the characteristiques of the machine used to run the benchmarks. All tests where made on this machine. \begin{figure}[H] Table \ref{tab:machine} shows the characteristics of the machine used to run the benchmarks. All tests where made on this machine. \begin{table}[H] \begin{center} \begin{tabular}{| l | r | l | r |} Operating system & Ubuntu 16.04.3 LTS & Kernel & Linux 4.4.0-97-generic \\ \hline Compiler & gcc 6.3.0 & Translator & CFA 1.0.0 \\ Compiler & GCC 6.3.0 & Translator & CFA 1.0.0 \\ \hline Java version & OpenJDK-9 & Go version & 1.9.2 \\ \hline \end{tabular} \caption{Machine setup used for the tests} \label{tab:machine} \end{figure} \end{table} \section{Micro benchmarks} \begin{pseudo} #define BENCH(run, result) gettime(); before = gettime(); run; gettime(); after = gettime(); result = (after - before) / N; \end{pseudo} The method used to get time is \code{clock_gettime(CLOCK_THREAD_CPUTIME_ID);}. Each benchmark is using many interations of a simple call to measure the cost of the call. The specific number of interation dependes on the specific benchmark. The method used to get time is \code{clock_gettime(CLOCK_THREAD_CPUTIME_ID);}. Each benchmark is using many iterations of a simple call to measure the cost of the call. The specific number of iteration depends on the specific benchmark. \subsection{Context-switching} The first interesting benchmark is to measure how long context-switches take. The simplest approach to do this is to yield on a thread, which executes a 2-step context switch. In order to make the comparison fair, coroutines also execute a 2-step context-switch, which is a resume/suspend cycle instead of a yield. Listing \ref{lst:ctx-switch} shows the code for coroutines and threads. All omitted tests are functionally identical to one of these tests. The results can be shown in table \ref{tab:ctx-switch}. The first interesting benchmark is to measure how long context-switches take. The simplest approach to do this is to yield on a thread, which executes a 2-step context switch. In order to make the comparison fair, coroutines also execute a 2-step context-switch (\gls{uthread} to \gls{kthread} then \gls{kthread} to \gls{uthread}), which is a resume/suspend cycle instead of a yield. Listing \ref{lst:ctx-switch} shows the code for coroutines and threads whith the results in table \ref{tab:ctx-switch}. All omitted tests are functionally identical to one of these tests. \begin{figure} \begin{multicols}{2} \end{cfacode} \end{multicols} \caption{\CFA benchmark code used to measure context-switches for coroutines and threads.} \label{lst:ctx-switch} \end{figure} \begin{figure} \begin{center} \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |} \cline{2-4} \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\ \hline Kernel Threads & 239 & 242.57 & 5.54 \\ \CFA Coroutines & 38 & 38 & 0 \\ \CFA Threads & 102 & 102.39 & 1.57 \\ \uC Coroutines & 46 & 46.68 & 0.47 \\ \uC Threads & 98 & 99.39 & 1.52 \\ \hline \end{tabular} \end{center} \caption{Context Switch comparaison. All numbers are in nanoseconds(\si{\nano\second})} \begin{cfacode}[caption={\CFA benchmark code used to measure context-switches for coroutines and threads.},label={lst:ctx-switch}] \end{cfacode} \end{figure} \begin{table} \begin{center} \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |} \cline{2-4} \multicolumn{1}{c |}{} & \multicolumn{1}{c |}{ Median } &\multicolumn{1}{c |}{ Average } & \multicolumn{1}{c |}{ Standard Deviation} \\ \hline Kernel Thread & 239 & 242.57 & 5.54 \\ \CFA Coroutine & 38 & 38 & 0 \\ \CFA Thread & 102 & 102.39 & 1.57 \\ \uC Coroutine & 46 & 46.68 & 0.47 \\ \uC Thread & 98 & 99.39 & 1.52 \\ Goroutine & 148 & 148.0 & 0 \\ Java Thread & 271 & 271.0 & 0 \\ \hline \end{tabular} \end{center} \caption{Context Switch comparison. All numbers are in nanoseconds(\si{\nano\second})} \label{tab:ctx-switch} \end{figure} \end{table} \subsection{Mutual-exclusion} The next interesting benchmark is to measure the overhead to enter/leave a critical-section. For monitors, the simplest appraoch is to measure how long it takes enter and leave a monitor routine. Listing \ref{lst:mutex} shows the code for \CFA. To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a pthread mutex lock are also mesured. The results can be shown in table \ref{tab:mutex}. \begin{figure} \begin{cfacode} The next interesting benchmark is to measure the overhead to enter/leave a critical-section. For monitors, the simplest approach is to measure how long it takes to enter and leave a monitor routine. Listing \ref{lst:mutex} shows the code for \CFA. To put the results in context, the cost of entering a non-inline function and the cost of acquiring and releasing a pthread mutex lock are also measured. The results can be shown in table \ref{tab:mutex}. \begin{figure} \begin{cfacode}[caption={\CFA benchmark code used to measure mutex routines.},label={lst:mutex}] monitor M {}; void __attribute__((noinline)) call( M & mutex m /*, m2, m3, m4*/ ) {} } \end{cfacode} \caption{\CFA benchmark code used to measure mutex routines.} \label{lst:mutex} \end{figure} \begin{figure} \end{figure} \begin{table} \begin{center} \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |} \hline C routine & 2 & 2 & 0 \\ FetchAdd + FetchSub & 2 & 2 & 0 \\ Pthreads Mutex Lock & 31 & 31.86 & 0.99 \\ \uC \code{monitor} member routine & 30 & 30 & 0 \\ \CFA \code{mutex} routine, 2 argument & 82 & 83 & 1.93 \\ \CFA \code{mutex} routine, 4 argument & 165 & 161.15 & 54.04 \\ \hline \end{tabular} \end{center} \caption{Mutex routine comparaison. All numbers are in nanoseconds(\si{\nano\second})} Java synchronized routine & 165 & 161.15 & 54.04 \\ \hline \end{tabular} \end{center} \caption{Mutex routine comparison. All numbers are in nanoseconds(\si{\nano\second})} \label{tab:mutex} \end{figure} \end{table} \subsection{Internal scheduling} The Internal scheduling benchmark measures the cost of waiting on and signaling a condition variable. Listing \ref{lst:int-sched} shows the code for \CFA. The results can be shown in table \ref{tab:int-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. \begin{figure} \begin{cfacode} The internal-scheduling benchmark measures the cost of waiting on and signalling a condition variable. Listing \ref{lst:int-sched} shows the code for \CFA, with results table \ref{tab:int-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. \begin{figure} \begin{cfacode}[caption={Benchmark code for internal scheduling},label={lst:int-sched}] volatile int go = 0; condition c; } \end{cfacode} \caption{Benchmark code for internal scheduling} \label{lst:int-sched} \end{figure} \begin{figure} \end{figure} \begin{table} \begin{center} \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |} \CFA \code{signal}, 2 \code{monitor} & 1531 & 1550.75 & 32.77 \\ \CFA \code{signal}, 4 \code{monitor} & 2288.5 & 2326.86 & 54.73 \\ \hline \end{tabular} \end{center} \caption{Internal scheduling comparaison. All numbers are in nanoseconds(\si{\nano\second})} Java \code{notify} & 2288.5 & 2326.86 & 54.73 \\ \hline \end{tabular} \end{center} \caption{Internal scheduling comparison. All numbers are in nanoseconds(\si{\nano\second})} \label{tab:int-sched} \end{figure} \end{table} \subsection{External scheduling} The Internal scheduling benchmark measures the cost of the \code{waitfor} statement (\code{_Accept} in \uC). Listing \ref{lst:ext-sched} shows the code for \CFA. The results can be shown in table \ref{tab:ext-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. \begin{figure} \begin{cfacode} The Internal scheduling benchmark measures the cost of the \code{waitfor} statement (\code{_Accept} in \uC). Listing \ref{lst:ext-sched} shows the code for \CFA, with results in table \ref{tab:ext-sched}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. \begin{figure} \begin{cfacode}[caption={Benchmark code for external scheduling},label={lst:ext-sched}] volatile int go = 0; monitor M {}; } \end{cfacode} \caption{Benchmark code for external scheduling} \label{lst:ext-sched} \end{figure} \begin{figure} \end{figure} \begin{table} \begin{center} \begin{tabular}{| l | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] | S[table-format=5.2,table-number-alignment=right] |} \end{tabular} \end{center} \caption{External scheduling comparaison. All numbers are in nanoseconds(\si{\nano\second})} \caption{External scheduling comparison. All numbers are in nanoseconds(\si{\nano\second})} \label{tab:ext-sched} \end{figure} \end{table} \subsection{Object creation} Finaly, the last benchmark measured is the cost of creation for concurrent objects. Listing \ref{lst:creation} shows the code for pthreads and \CFA threads. The results can be shown in table \ref{tab:creation}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. The only note here is that the callstacks of \CFA coroutines are lazily created, therefore without priming the coroutine, the creation cost is very low. \begin{figure} \begin{multicols}{2} Finally, the last benchmark measurs the cost of creation for concurrent objects. Listing \ref{lst:creation} shows the code for pthreads and \CFA threads, with results shown in table \ref{tab:creation}. As with all other benchmarks, all omitted tests are functionally identical to one of these tests. The only note here is that the call-stacks of \CFA coroutines are lazily created, therefore without priming the coroutine, the creation cost is very low. \begin{figure} \begin{center} pthread \begin{cfacode} \begin{ccode} int main() { BENCH( for(size_t i=0; i
• ## doc/proposals/concurrency/text/together.tex
r4e7a4e6 \section{Threads as monitors} As it was subtely alluded in section \ref{threads}, \code{threads} in \CFA are in fact monitors, which means that all monitor features are available when using threads. For example, here is a very simple two thread pipeline that could be used for a simulator of a game engine : \begin{cfacode} As it was subtly alluded in section \ref{threads}, \code{thread}s in \CFA are in fact monitors, which means that all monitor features are available when using threads. For example, here is a very simple two thread pipeline that could be used for a simulator of a game engine : \begin{figure}[H] \begin{cfacode}[caption={Toy simulator using \code{thread}s and \code{monitor}s.},label={lst:engine-v1}] // Visualization declaration thread Renderer {} renderer; void draw( Renderer & mutex this, Frame * frame ); // Simualation loop // Simulation loop void main( Simulator & this ) { while( true ) { } \end{cfacode} \end{figure} One of the obvious complaints of the previous code snippet (other than its toy-like simplicity) is that it does not handle exit conditions and just goes on forever. Luckily, the monitor semantics can also be used to clearly enforce a shutdown order in a concise manner : \begin{cfacode} \begin{figure}[H] \begin{cfacode}[caption={Same toy simulator with proper termination condition.},label={lst:engine-v2}] // Visualization declaration thread Renderer {} renderer; void draw( Renderer & mutex this, Frame * frame ); // Simualation loop // Simulation loop void main( Simulator & this ) { while( true ) { // Call destructor for renderer to signify shutdown \end{cfacode} \end{figure} \section{Fibers \& Threads} As mentionned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand. Currently, using fibers is done by adding the following line of code to the program~: As mentioned in section \ref{preemption}, \CFA uses preemptive threads by default but can use fibers on demand. Currently, using fibers is done by adding the following line of code to the program~: \begin{cfacode} unsigned int default_preemption() { } \end{cfacode} This function is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice i.e. no preemption. However, once clusters are fully implemented, it will be possible to create fibers and uthreads in on the same system : This function is called by the kernel to fetch the default preemption rate, where 0 signifies an infinite time-slice, i.e., no preemption. However, once clusters are fully implemented, it will be possible to create fibers and \glspl{uthread} in the same system, as in listing \ref{lst:fiber-uthread} \begin{figure} \begin{cfacode} \begin{cfacode}[caption={Using fibers and \glspl{uthread} side-by-side in \CFA},label={lst:fiber-uthread}] //Cluster forward declaration struct cluster;
• ## doc/proposals/concurrency/thesis.tex
r4e7a4e6 \rfoot{v\input{version}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %====================================================================== % L O G I C A L D O C U M E N T -- the content of your thesis %====================================================================== \begin{document} % \linenumbers \title{Concurrency in \CFA} \author{Thierry Delisle \\ School of Computer Science, University of Waterloo, \\ Waterloo, Ontario, Canada } % For a large document, it is a good idea to divide your thesis % into several files, each one containing one chapter. % To illustrate this idea, the "front pages" (i.e., title page, % declaration, borrowers' page, abstract, acknowledgements, % dedication, table of contents, list of tables, list of figures, % nomenclature) are contained within the file "thesis-frontpgs.tex" which is % included into the document by the following statement. %---------------------------------------------------------------------- % FRONT MATERIAL %---------------------------------------------------------------------- \input{frontpgs} \maketitle \tableofcontents %---------------------------------------------------------------------- % MAIN BODY %---------------------------------------------------------------------- \input{intro} \input{future} \chapter{Conclusion} \section*{Acknowledgements} \clearpage \printglossary[type=\acronymtype]
• ## doc/proposals/concurrency/version
r4e7a4e6 0.11.129 0.11.278
Note: See TracChangeset for help on using the changeset viewer. | 2023-02-05 01:14:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5263717174530029, "perplexity": 3208.3003801620175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00072.warc.gz"} |
https://www.nature.com/articles/s41598-023-28761-3?error=cookies_not_supported&code=62f3dadc-5baf-49a1-87e5-e8787326c652 | ## Introduction
Pollution caused by OCPs and heavy metals discharged into the waterbodies from food, tannery, personal care product, malting, textile, pesticide, brewery, mining, paint, cement, fertilizer and pharmaceutical industries is on the increase and has posed danger to the well-being of man and the environment1,2. Organochlorine pesticides (OCPs) are known to sustain their toxicity for long period in the environment3. Meanwhile, long-term exposure to OCPs and their metabolites have been reported to cause devastating health implications such as reproductive system dysfunction, neurological impairment, dysfunctional immune system, birth defect and cancer4,5,6. On the other hand, heavy metals have demonstrated the capacity to induce illnesses such as disorders of the nervous system, cancer, organ damage, and in extreme cases, death7,8. Hence, it is essential to eliminate these classes of water contaminants from wastewater before discharge. To achieve this, water treatment techniques such as solvent extraction, and ion-exchange processes9, chemical precipitation10, chemical oxidation or reduction11, membrane technology12, filtration13, electrochemical treatment14, adsorption15,16,17,18, foam separation19 and photocatalysis20,21 have been used for the remediation of contaminated water. Among the aforementioned techniques, adsorption is economical, user-friendly and effective for contaminants sequestration. Adsorbents such as molecular sieve22, rice husk23, granite24 Scots pine25, silica gel26, kaolinite clay27 and Al/SrTiO328 amongst others have been used for the removal of these contaminants.
Nanomaterials, nanoparticles and nanocomposites have become a fast-growing and rapidly expanding area of scientific research due to their diverse applications in many areas of scientific and technical endeavour. Environmental concerns have also led to a growing interest in the green or biological synthesis of metal nanoparticles since the process reduces the use of chemical raw materials leading to lower disposal and incidence of chemicals in the environment. Nanoparticles are natural or engineered substances which have structural components whose sizes are less than 100 nm in three dimensions29,30,31. Nanoparticles are used in diverse fields which include medicine and drug delivery, environmental remediation, electronics and metallurgy32,33. Several works have reported successful nanoparticle biosynthesis with plant extracts34,35,36,37. Meanwhile, the application of nanometals in the wastewater treatment process has been extensively assessed38,39,40,41,42. A nanocomposite is a composite material made by combining two or more phases which contain different compositions or structures with at least one of the phases in the nanoscale range43,44. Nanocomposites enhance the macroscopic properties of the resultant products but the properties of nanocomposites are a function of the properties of the individual components. Bio-based nanocomposites are made with biodegradable or renewable materials like cellulose45. Dalium guineense is a woody plant of the rainforest zone of West Africa that may grow up to 10 to 20 m. Its common names include Black Velvet Tarimand in English, Icheku in Igbo, Awin in Yoruba and Tamarinier noir in French. The mature tree has a grey-coloured bark, dense green leaves and whitish flowers which bear the velvet black-coloured fruits which are seasonal and popular in West Africa and are a rich source of vitamins46. Cellulose is the most abundant natural polymer and numerous types of modified cellulose nanomaterials have been made using different methods47,48,49.
In this research, magnetite nanoparticles were synthesised chemically using iron (III) chloride hexahydrate. Similarly, the ethanolic and aqueous extracts of the stem bark of D. guineense were used for the biosynthesis of magnetite nanoparticles. The nanoparticles were incorporated in situ into carboxymethyl cellulose (CMC) to generate the nanocomposites. The bio-nanoparticles and nanocomposites were characterised and applied for the removal of identified metals (Cd, Cr and Pb) and OCPs (alpha(α)-BHC, beta(β)-BHC, gamma(γ)-BHC, delta(δ)-BHC, heptachlor, heptachlor epoxide, aldrin, gamma(γ)-chlordane, alpha(α)-chlordane, endosulfan I, endosulfan II, endosulfan sulfate, P,p'-DDE (dichloro-diphenyl chloroethane), dieldrin, endrin, endrin ketone, P,P′-DDD (dichlorodiphenyldichloroethane), P,P′-DDT (dichlorodiphenyltrichloroethane), endrin aldehyde and methoxychlor) from 13 different wastewaters.
## Materials and methods
### Magnetite nanoparticles (MNPs)
Magnetite nanoparticles were prepared by chemical precipitation as described by Khalil50; FeCl3·6H2O (19.46 g, 0.799 M) was completely dissolved in 150 mL distilled water to prepare aqueous solution A. Further, 6.584 g (0.792 M) of potassium iodide was dissolved in 50.0 mL of distilled water to prepare aqueous solution B. Solutions A and B were then mixed at room temperature, stirred and allowed to attain equilibrium for 1 h. The precipitate of iodine was filtered out and washed with distilled water. The washing was added to the filtrate and the whole volume was then hydrolysed using 25% ammonia solution which was added drop-wise with continuous stirring until complete precipitation of the black magnetite at a pH of 10. The suspension was allowed to settle, filtered, and washed with distilled water and the solid material was dried at 100 °C for 2 h.
### Collection and identification of Dalium guineense stem bark samples
The stem bark samples were collected within and around Michael Okpara University of Agriculture, Umudike, Abia State, Nigeria (see Fig. 1). They were tightly packed into plastic bags and transferred to the laboratory and identified at the Forestry Department of the University. The voucher specimens were deposited in the herbarium of the Plant Science and Biotechnology (PBS) Department of the same University and were declared not endangered. Samples were washed thoroughly 3 times with distilled water and were shade dried for 14 days. The dry samples (stem bark of Dalium guineense) were pulverized into powder with a wooden mortar and pestle. The ethanol extract was made with the modified cold maceration method of Azwanida51 by soaking 40 g of the powder in 200 mL of absolute ethanol for 20 h followed by filtration under vacuum through Whatman no 1 filter paper spread on a fitting Buchner funnel. The aqueous extract was made by heating 40 g of the powder in 200 mL of distilled-deionized water on a hot plate at 60 °C for 20 min, after which it was cooled and filtered through Whatman no. 1 filter paper. The extracts were used for nanoparticle biosynthesis.
### Biosynthesis of magnetite nanoparticles (Bio-Mag)
This was carried out with the modified method of Karnan52. FeCl3·6H2O and FeSO4·4H2O (1:2 molar ratios) were dissolved in 100 mL of double deionized water (DDW) in a 250 mL beaker and heated to 80 °C with mild stirring using a magnetic stirrer. The ethanol extract of the plant was added after 10 min resulting in a dark colour change. Aqueous NaOH (1 M, 10 mL) was also added after 10 min at the rate of 3 mL min−1 with continuous stirring to allow the magnetite to precipitate uniformly. The dark mixture changed to black suspended particles from the first addition of NaOH. The mixture was allowed to cool to room temperature and the residue (magnetite) was obtained by filtration through Whatman no 1 filter paper. The residue was washed 3 times each, with double distilled water and ethanol separately and dried at 80 °C for 2 h52.
### Chemical synthesis of magnetite-CMC nanocomposite (Mag-CMC)
Carboxymethyl cellulose (CMC, 1 g) was dissolved in 150 mL of 0.799 M solution of FeCl3·6H2O which was subsequently mixed with 50 mL of 0.792 M solution of KI with constant stirring. The mixture was allowed to equilibrate for 1 h and was filtered through Whatman no 1 filter paper and washed with distilled water. The filtrate was hydrolyzed with aq. NH3 to precipitate modified magnetite nanoparticles (MNPs) at pH 10. The precipitate was allowed to settle, the upper layer was decanted and the residue was filtered and washed with distilled water and dried at 60 °C for 5 h.
### Biosynthesis of biomagnetite-CMC nanocomposite (BioMag-CMC)
FeCl3·6H2O and FeSO4·4H2O (1:2 molar ratios) were dissolved in 100 mL of DDW in a 250 mL beaker and heated to 80 °C with mild stirring using a magnetic stirrer. Ethanol extract (20 mL) was added after 10 min with constant stirring followed by the addition of 1 g of CMC with constant stirring. After another 10 min, 20 mL of 1 M NaOH solution was added at the rate of 2 mL min−1. The precipitate was cooled to room temperature and dried in an oven at 80 °C for 2 h.
### Characterisation of nanoparticles
The synthesised nanomaterials were further characterised and confirmed. X-ray diffraction (XRD) measurements were performed using a multi-purpose X-ray diffractometer (D8-Advance from Bruker, USA) with LynxEye position sensitive detector operated in a continuous θ–θ scan in locked coupled mode with Cu-Kα (λKα1 = 1.5406 Å) radiation. Infrared spectra were obtained using a Fourier transform infrared spectrometer (FTIR, PerkinElmer Spectrum RX 1 spectrometer, PerkinElmer Inc., USA). High-resolution transmission electron microscopy (HRTEM) analysis was used to assess the surface structure and morphologies of the synthesised nanoparticles. Measurements with HRTEM (JEOL TEM 2100, JEOL Ltd, Japan) were carried out at an accelerating voltage of 100 kV. The specific surface area of the nanoparticles was acquired by making use of the Brunauer–Emmett–Teller (BET) nitrogen sorption–desorption method (Micromeritics Instruments Corp., USA). The pore volume and pore diameter were estimated by making use of the Barrett–Joyner–Halenda (BJH) theory.
### Sample collection
Wastewater samples were collected in triplicate at the sampling points (wastewater tanks or discharge pipes) of each of the 13 industries (Table 1) with 10 L acid-washed polyethylene gallons. An aliquot of each sample was measured into 1 L polyethylene bottle acidified with concentrated HNO3 (2 mL L−1) and reserved for metal analysis. Another set of aliquots was collected in pretreated 2.5 L amber reagent bottles, which were cleaned by making use of distilled water, acetone, and n-hexane. The amber reagent bottles were corked tightly, labelled appropriately and packed into ice chests before being transported to the lab for the relevant pesticide analysis. To prevent the degradation of the analyte, samples were stored at 4 °C prior to extraction.
### Preparation of stock solutions
The stock solution (1000 mg L−1) of each metal (Cd, Cr and Pb) was prepared with the appropriate metallic salts[Cd(NO3)2·4H2O, K2Cr2O7 and Pb(NO3)2]. These were used to prepare five standard solutions (0.25, 0.5, 5, 15 and 30 mg dm−3) of each metal by serial dilution. The standard solutions were aspirated into a flame atomic absorption spectrophotometer (AAS) (Varian SpectrAA 100) and the absorbances obtained were used in plotting calibration curves. Wavelengths for AAS analyses were 228.9 nm for Cd (R2 = 0.9997), 357.9 nm for Cr (R2 = 0.9989) and 283.2 nm for Pb (R2 = 0.9999). Detection limits for the metals were 0.001 mg L−1 for Cd, 0.1 for Cr and 0.01 mg L−1 for Pb53.
### Digestion of samples
Wastewater samples were digested according to Birtukan and Gebregziabher method54. A 50 mL filtered aliquot of water sample was pipetted into a digestion flask and digested with 3 mL concentrated HNO3 and 3 mL H2O2 at 70 °C for 1 h until a clear solution was obtained. The cooled clear solution was filtered through a Whatman no. 42 filter paper into a 50 mL volumetric flask which was made up to mark with distilled-deionized water. Blank digestion was also carried out in the same way with distilled-deionized water and the acids. The digested samples were analysed on a Varian SpectrAA 100 flame atomic absorption spectrophotometer54.
### Sample extraction
Prior to the extraction step, all glassware was cleaned and stored in the oven at 120 °C. The liquid–liquid extraction technique as provided by USEPA method 3510-C was used for analytes extraction55. Suspended dirt in the industrial wastewater was eliminated using the filtration technique. Thereafter, a 500 mL aliquot was extracted using 40 mL dichloromethane (DCM). The process was repeated three times and the extract fractions were combined and concentrated to 2 mL by making use of a rotary evaporator.
### Separation (clean‑up)
A silica gel (1000 mg/6 mL) column that is capped with anhydrous sodium sulphate (2 g, Na2SO4) conditioned with 6 mL dichloromethane was employed for the clean-up step. The concentrated extracts were loaded and eluted with a total volume of 50 mL of n-hexane:DCM:toluene in the ratio 2.5:1.5:1. The fractions of eluents collected were combined and concentrated to dryness using rotary evaporation. Thereafter, the extracts were re-dissolved in 2 mL n-hexane prior to GC–MS quantification.
### GC–MS analysis
An Agilent 6890N gas chromatograph equipped with an autosampler connected to an Agilent Mass Spectrophotometric Detector was used. 1 µL of the sample was injected into the pulsed spitless mode onto a 30 m × 0.25 mm ID DB 5MS coated fused silica column with a film thickness of 0.15 µL. Helium gas was used as a carrier gas and the column head pressure was maintained at 20 psi to give a constant of 1 mL min−1. Other operating conditions were pre-set. The injector and detector temperatures were set at 270 and 300 °C respectively. The oven temperature was set as follows: 70 °C held for 2 min, ramp at 25 °C min−1 to 180 °C, held for 1 min, and finally, ramp at 5 °C min−1 to 300 °C. All the quality control steps were followed.
### Determination of physicochemical parameters
The Hanna multi-parameter test meter (model, HI98194) was used to determine pH, temperature, electrical conductivity (EC) and total dissolved solids (TDS) after the probes were calibrated separately with the respective calibration solution. Each probe was dipped into the sample and readings were allowed to stabilize before being recorded for the different parameters. Phosphate was determined by measuring 10 mL of the sample into a 10 mL cuvette and placing it in the cuvette holder of a Hanna HI83399 photometer. The instrument was zeroed followed by the addition of 10 drops of phosphate high range (HR) reagent A and 1 packet of phosphate HR reagent B was added followed by gentle shaking of the cuvette. This mixture was placed in the photometer, allowed to stand for 5 min and phosphate concentration was read off. Nitrate was determined by measuring 10 mL of diluted aliquot (tenfold dilution) of the sample into a 10 mL cuvette and inserting it into the cuvette holder of the photometer followed by pressing the zero button. The cuvette was removed and 1 packet of nitrate reagent was added. The cuvette was capped, shaken vigorously for 10 s, inverted alternately for 50 s and placed into the cuvette holder of the photometer before the nitrate concentration was read off after 4.5 min. Chloride, total nitrogen and total phosphorus were also determined with the same photometer using the methods and reagents outlined in the instruction manual. Biochemical oxygen demand (BOD5) after a 5-day incubation period was determined with a modification of the method56. Dilution water was prepared by mixing 10 mL of each phosphate buffer, MgSO4, CaCl, FeCl3, Na2SO3 and NH4Cl with 10 L of distilled water. Two 10 mL aliquots of each effluent were transferred separately into two 300 mL BOD bottle which was then filled up with the dilution water. Two 300 mL BOD bottles were filled with the dilution water and served as blanks. One BOD bottle with effluent and one blank bottle was incubated at 20 °C for 10 days while the dissolved oxygen (DO) content of the effluent in the second BOD bottle (DO1) and blank were determined immediately with the Hanna multi-parameter test meter. After 5 days, the DO of the incubated sample (DO5) and blank were also determined with the same test meter and BOD5 was calculated as:
$$\mathrm{BOD}_5\,(\mathrm{mg L}^{-1})=\frac{(\mathrm{DO}1 -\mathrm{ DO}5) \times \mathrm{ Volume\, of \,BOD\, bottle}}{\mathrm{Volume\,of\, sample}}$$
The Hanna photometer (model: HI83399), Hanna COD medium range (MR) and high range (HR) reagents contained in factory-prepared vials were used for the determination of Chemical Oxygen Demand (COD). The MR and HR reagents were used for determining the COD of samples. Two reagent vials were used for one test; 2 mL (0.2 mL for HR) of deionized water was added to one vial (blank) while 2 mL (0.2 mL for HR) of the sample was added to the second vial. After mixing, the contents were digested in a Hanna reactor (pre-heated to 150 °C) for 2 h and allowed to cool for 20 min. Each vial was inverted 5 times while still warm and allowed to cool to room temperature followed by the zeroing of the Photometer with the blank vial and determination of COD in mg L−1 in the sample vial.
### Application of nanoparticles for metal and pesticide removal
The synthesised nanoparticles (10 mg) were separately added to 100 mL aliquots of the wastewater samples and were agitated in a fixed-speed rotator for 180 min at 400 rpm. At the end of the contact time, the samples were filtered through Whatman no. 1 filter paper and analysed with AAS and GC–MS as appropriate. The uptake capacity (mg g−1) was determined using the:
$$\mathrm{q}_\mathrm{e}=\frac{\mathrm{Co }-\mathrm{ Ce}}{\mathrm{W}} \times \mathrm{ V}$$
(1)
where qe represents the uptake capacity of the adsorbents, Co (mg L−1) is the initial concentration of analyte in solution, and Ce (mg L−1) is the residual/equilibrium concentration of the analyte.
### Statistical analysis
Analysis of the physicochemical parameters of each effluent sample was carried out in triplicate. The means and standard deviations of triplicate determinations were calculated and the values obtained were analysed using single-factor analysis of variance (ANOVA). A comparison of means was carried out using Duncan Multiple Range Test. Statistical analysis was carried out with SPSS software (Version 22.0).
### Ethical approval
This experimental research upon plants complies with relevant institutional, national and international guideline and legislation.
## Results and discussion
### Characterisation of nanomaterials
The phase and crystallinity of BioMag-CMC, MagNPs-CMC, BioMag and MagNPs were assessed using X-ray diffraction patterns and displayed in Fig. 2. The XRD patterns acquired for MagNPs correspond to JCPDS No.160653. However, the application of a biological route may be responsible for the shift in peaks as observed in the XRD patterns of BioMag-CMC, MagNPs-CMC and BioMag (see Fig. 2). The TEM images for Biomag-CMC, MagNPs-CMC and MagNPs are shown in Fig. 3a,b,d. From the TEM micrograph, well-dispersed spherically shaped NPs with a significantly smaller distribution were observed for MagNPs (see Fig. 3d). The addition of CMC induced agglomeration of spherical NPs acquired for Biomag-CMC and MagNPs-CMC. On the other hand, an irregularly stacked clumps morphology was recorded for BioMag (see Fig. 3c). The average particle sizes of BioMag-CMC, MagNPs and MagNPs-CMC were estimated as 11.95 ± 4.95 nm, 4.20 ± 0.42 nm and 9.85 ± 1.62 nm respectively (see Fig. 4). From the particle size analysis, the inclusion of CMC as an adsorbent modifier was noticed to enhance the particle size distribution as observed with the nanoparticles of the magnetite.
The specific surface area and pore volume of BioMag-CMC and MagNPs-CMC were assessed using the BET nitrogen adsorption–desorption technique. These physical parameters increased in the order BioMag-CMC > MagNPs-CMC (see Table 2). Meanwhile, the isotherms profile for BioMag-CMC and MagNPs-CMC exhibited a characteristic type-IV curve with a hysteresis loop within a relative pressure (P/P0) > 0.45 and > 0.9 (see Fig. 5). This could be credited to the fact that the capillary condensation and evaporation occurred at different pressure. To exhibit a type-IV isotherm profile shows that the material (BioMag-CMC and MagNPs-CMC) under investigation sustained mesoporous characteristics with a pore diameter in the range of 2–50 nm and this is in close agreement with the values obtained from TEM measurement. On the other hand, the pore diameter of BioMag-CMC and MagNPs-CMC was assessed by making use of the Barrett–Joyner–Halenda (BJH) theory (see Table 2).
The spectra acquired for BioMag, MagNPs, BioMag-CMC and MagNPs-CMC showed close similarities in their bands with a slight variance in the intensity of their peaks (see Fig. 6). The broad peak between 3350 and 3450 cm−1 may be attributed to the hydroxyl (–CHOH) on the surface of the nanoparticles. The sharp peak at 1608–1612 cm−1 and 1031–1033 cm−1 are assigned to the stretching vibrations of the C=O and –OH groups, respectively57,58,59,60,61. The modification of the magnetite was observed to cause a shift in bands and a slight adjustment in the peak intensities. The implication of using CMC as a modifier was observed on the spectra of BioMag-CMC and MagNPs-CMC having enhanced broad peaks between 3350 and 3450 cm−1 (see Fig. 6).
### Physicochemical parameters of the raw effluents
The mean values for the physicochemical parameters of the 13 raw effluent samples are shown in Tables S1S4. Biochemical oxygen demand (BOD) describes the amount of oxygen needed by microorganisms to biodegrade organic matter. Meanwhile, a high level of BOD indicates a high discharge of biodegradable waste into effluents. The assessment of BOD in the industrial effluents was performed and the results reflect a low level of BOD (see Table 3). Chemical oxygen demand describes the amount of oxidant needed to break down inorganic and organic matter. As displayed in Table 4, the studied effluents were noticed to be higher than the recommended maximum concentration (RMC) by USEPA with the exception of SUWK, PTWK, PTK, CMTWM and CTWA. PCUWA was observed to have the highest concentration of COD and this could be attributed to a high amount of oxidant used in the production of some personal care products. However, MagNPs-CMC demonstrated a 62.48% reduction of COD when applied to PCUWA (see Table 4). TUKW was the most acidic with a pH of 3.73 ± 0.05 while FTWR and FCTWE had similar basic pH values which were significantly higher (P < 0.05) than pH values acquired for other effluents. The variation in effluent pH is influenced by the quality of the industrial activity. Solution pH can influence the availability of micronutrients and may pose danger to aquatic life.
As shown in Tables S1 and S2, electrical conductivity (7049.55 ± 16.26 µS cm−1) and TDS (3614.66 ± 126.57 mg L−1) values of FTWR were significantly higher (P < 0.05) than those of other effluents (see Supplementary information). Meanwhile, after the remediation step, PUWA was noticed to have the lowest mean value for electrical conductivity, EC (99.18 ± 0.44 µS cm−1) and TDS (99.18 ± 0.44). On the other hand, the electrical conductivity (EC) of TUWK, SUWK, MUWA, PTWK, TUKW, BTWR, PUWA and FTWR were higher than the USEPA’s RMC of 2500 µS cm−1. This could be attributed to a high amount of dissolved ions from decomposed materials (see Table S7) (Supplementary information). However, nitrate (83.91 ± 6.77 mg L−1) and total nitrogen (127.04 ± 10.55 mg L−1) of treated PUWA were significantly higher (P < 0.05) compared to values of the same parameters for the other effluents (see Table S4). Phosphate concentrations in the MUWA and BTWR were similar (P > 0.05) but the lowest concentrations were recorded in the TUWK and CTWA. On the other hand, a total P concentration of 0.32 ± 0.07 mg L−1 was recorded for SUWK while the highest significant (P < 0.05) concentration of 27.11 ± 3.84 mg L−1 was recorded for BTWR.
The high value of sulphate could be attributed to a high volume of detergents used during clean-up operations in the brewery. However, high levels of nitrates and phosphate may boost the growth of vegetation in the aquatic ecosystem and can directly increase oxygen demand. Meanwhile, the mean concentrations of chloride (56.12 ± 4.84) and sulphate (505.61 ± 27.40) were significantly higher (P < 0.05) in TUWK and FCTWE respectively. However, the concentration of chloride for all the effluents was below the RMC by USEPA (see Tables S3 and S7). A similar trend with the exception of PTWK, TUKW and TUWK was noticed for sulphate (see Table S3) (see Supplementary information). Finally, EC, pH and TDS of PCUWA, TUWK, SUWK, MUWA, PTWK, TUKW, BTWR, PUWA, PTK, FTWR, FCTWE, CMTWM and CTWA were significantly reduced after treatment with MagNPs, BioMag, MagNPs-CMC and BioMag-CMC as shown in Tables S1 and S2 (see Supplementary information). Hence, effective pre-treatment of effluent can ensure a safe aquatic ecosystem for man.
### Assessment of pesticides and heavy metal ions in industrial wastewater samples
The concentration of heavy metals in industrial wastewater samples collected from five states in Nigeria was quantified and presented in Table 2. All the targeted heavy metals (Cd, Cr and Pb) were significantly detected in all the effluent samples. The concentration of Cd ranged from 0.008 ± 0.003 mg L−1 for samples SUWK and PTK to 0.056 ± 0.010 mg L−1 for PUWA. Cadmium concentrations in the samples collected from 13 different industries across five states in Nigeria were in the order SUWK = PTK > TUKW > PTWK > MUWA > FTWR > TUWK > BTWR > FCTWE > CMTWM > PCUWA > CTWA > PUWA. All samples reported in this study were higher than WHO’s RMC of 0.003 mg L−162. The high level of Cd in some of the samples was expected but estimating a high concentration of Cd in SUWK, MUWA, BTWR and PTK were not expected because these companies deal mainly with foods and drugs (see Table 5). Hence, it is imperative to identify and eliminate cadmium-leaching materials from the vicinity and processes of these industries. This hazardous metal ion commonly finds its way into the water bodies via fertilizer runoff from farmlands, waste batteries, paints, alloys, coal combustion, printing, pulp, refineries, steel smelting and electroplating industries63. Different sicknesses caused by medium and acute cadmium exposure include hypertension, renal damage, liver and kidney damage, lung inefficiency, initiation of cancer growth and calcium depletion in bones64,65. These concentrations were lower than the levels of Cd (0.065 ± 0.001 mg L−1) as reported by Bawa‑Allah66 and higher than the concentration of Cd (0.12 mg L−1) that was reported by Agoro67.
The concentrations of Cr ranged from 0.011 ± 0.003 mg L−1 for sample MUWA to 0.635 ± 0.240 mg L−1 for CMTWM. The elevated concentrations observed were recorded in the order MUWA > SUWK > FCTWE > PUWA > FTWR > BTWR > PCUWA > PTK > TUWK > TUKW > PTWK > CTWA > CMTWM for the samples investigated. The highest level of Cr was recorded in sample CMTWM. Meanwhile, the concentration of Cr estimated for the 13 samples was noticed to be higher than the RMC of 0.01 mg L−1 and 0.015 mg L−1 as given by WHO and USEPA in surface water and effluent water, respectively. High exposure to Cr may lead to severe effects such as perforation of the nasal septum, asthma, bronchitis, pneumonitis, inflammation of the larynx and liver, and increased occurrence of bronchogenic carcinoma68,69. On the other hand, skin contact with chromium compounds have been associated with some skin problems, such as skin allergies, dermal necrosis, dermatitis, and dermal decay70. Hence, it is important to devise an effective means of eliminating this recalcitrant water contaminant.
The observed Pb concentrations in the samples collected from five states were noticed to range from 0.009 ± 0.007 for SUWK to 2.215 ± 0.841 for CMTWM. Meanwhile, the elevated concentrations observed were recorded in the order SUWK > PUWA > PTK > MUWA > TUWK > PTWK > TUKW > PCUWA > FCTWE > FTWR > BTWR > CTWA > CMTWM. With the exception of SUWK, the concentrations of Pb in the samples were found to be higher than the RMC of 0.01 mg L−1 and 0.015 mg L−1 as given by WHO and USEPA in surface water and effluent water, respectively. The main route of Pb in wastewater is runoff from mining, leather tanning, metal processing and electroplating industries. Meanwhile, lead toxicity might pose a minor or major health challenge as it has been reported to cause learning and behavioural difficulty in children, malaise, loss of appetite, anaemia and organ failure71,72,73.
The evaluation step revealed a significant amount of metal ion contaminants (Cd, Cr and Pb) in the industrial effluent collected from five states. Hence, it is imperative to eliminate these water contaminants from the water bodies or reduce the level of these toxic metal ions to RMC using a cheap and effective removal technology. The application of Biomag, MagNPs, Biomag-CMC and MagNPs-CMC as adsorbent for the removal of Cd, Cr and Pb from industrial effluent (PCUWA, TUWK, SUWK, MUWA, PTWK, TUKW, BTWR, PUWA, PTK, FTWR, FCTWE, CMTWM and CTWA) was observed to be effective in the remediation step. The adsorbent demonstrated an affinity for the contaminant in the order Pb > Cr > Cd (see Table 5). The residual concentration of the contaminants was close to the RMC as given by WHO and USEPA after the sorbent-sorbate interaction. To further understand the effectiveness of the synthesised nanocomposites and nanoparticles, the uptake capacities of these materials were estimated and presented in Table S5 (see Supplementary information). The average uptake capacities of Biomag-CMC, Biomag, MagNPs and MagNPs-CMC, are 0.180 ± 0.015, 0.180 ± 0.016, 0.176 ± 0.016 and 0.173 ± 0.029, respectively. Hence, BioMag-CMC has demonstrated superior potential to sequester metal ions from industrial wastewater regardless of the interference from other analytes. The FTIR assessment of BioMag-CMC, BioMag, MagNPs and MagNPs-CMC revealed the presence of functional groups (–OH, –NH and C=O) that have the capacity to trap metal ions via ion exchange or electrostatic interactions.
Table 6 displays the level of pesticide residues in the PTWK wastewater sample. In the raw pesticide wastewater, dieldrin (1.76 ng L−1) and endrin (0.89 ng L−1) were noticed to have the highest and least concentration. This could be attributed to the kind of activities within the industry. A relatively low level of pesticides was detected in the wastewater sample and this could be associated with the hydrophobic characteristics of these analytes. Trace or ultra-trace amounts of these analytes can pose a danger to both aquatic organisms and man. A significant amount of chlordane, endosulfan, DDT, DDE and heptachlor were detected in the sample. Meanwhile, the aforementioned analytes possess the capacity to negatively impact the biodiversity of the aquatic ecosystem. This is attributed to the endocrine and estrogenic disrupting properties of these pesticides74. Hence, it is imperative to pre-treat these industrial effluents before discharge. The residual concentrations of the pesticides were evaluated after treatment with BioMag, MagNPs, BioMag-CMC and MagNPs-CMC nanocomposites and results are presented in Table 6. Meanwhile, the uptake capacity of the adsorbent was assessed and reported in Table S6 (see supplementary information). The nanocomposite materials were observed to effectively reduce the level of pesticides via the batch adsorption technique. MagNPs-CMC with an average removal capacity of 11.6 ± 8.202 mg g−1 was observed to reduce the concentration of β-BHC, heptachlor, α-chlordane, endosulfan I, P,P′-DDD, endosulfan II, P,P′-DDT, endrin aldehyde, endosulfan sulphate, methoxychlor and nndrin ketone to a level of no detection. The uptake potential of these materials could be due to the characteristic nature (hydrophobicity and electrostatic interaction) of the nanocomposite. Finally, uptake capacities of BioMag-CMC and MagNPs-CMC in the removal of OCPs and heavy metals were > 68% after the fifth cycle.
## Conclusion
The concentrations of pesticides and metal ions were evaluated in the industrial wastewater collected from five states in Nigeria. The level of these analytes in the samples is a function of the quality of industrial activities and the waste management systems of these industries. SUWK and PTK contained the least level of metal ions (Cd, 0.008 ± 0.003 mg L−1), while, the highest concentration of metal ions was found in CMTWM (Pb, 2.215 ± 0.841 mg L−1). A significant amount of OCPs was recorded in the samples collected from pesticide industrial effluent (PTWK). The concentration of the OCPs ranged from 1.76 ng L−1 (Dieldrin) to 0.89 ng L−1 (endrin). The levels of contaminants in most of the samples exceeded the recommended maximum concentrations by the stakeholders. Thus, the presence of heavy metal ions and pesticide residues in the industrial effluents may pose potential hazards to aquatic organisms and man if they find their way into the surrounding water bodies. The application of BioMag, MagNPs, BioMag-CMC and MagNPs-CMC as adsorbents was effective in pollutants removal. MagNPs-CMC and BioMag-CMC with average uptake capacity of 11.6 ± 8.202 mg g−1 and 0.181 ± 0.015 mg g−1 demonstrated outstanding potential in pesticide and heavy metal removal, respectively. The nanomaterials were observed to effectively reduce the physicochemical property (pH, EC and TDS) of the effluents. Hence, the removal process of contaminants and other unidentified dissolved solids/ions from wastewater will positively impact the biochemical characteristics of wastewater and NPs. Finally, the routine monitoring and pre-treatment of industrial effluents to prevent OCP and heavy metal ions contamination are essential for the sustainability of a clean and healthy aquatic ecosystem. | 2023-04-01 14:17:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.60732501745224, "perplexity": 5018.02661814076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00566.warc.gz"} |
https://brilliant.org/problems/minimising-can-be-difficult/ | # Minimising can be Difficult!
For real numbers $x,y,z$, subject to $x+y+z>6$ and
$x^2+y^2+z^2-xyz=(x+y)(y+z)(z+x)-8(xy+yz+zx),$
what is the minimum value of $xy+yz+zx$?
×
Problem Loading...
Note Loading...
Set Loading... | 2022-09-26 19:02:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3255312144756317, "perplexity": 7380.076846472482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00060.warc.gz"} |
https://solvedlib.com/n/problem-6-points-for-each-otxern-selell-ile-besl-resdonsc,7177612 | # Problem 6.points) For each [otxern, seleLl Ile besl reSDOnsc (a} To Inake _ boxplol 0l a dtstrbulion, YOu IuSt kIne
###### Question:
Problem 6. points) For each [otxern, seleLl Ile besl reSDOnsc (a} To Inake _ boxplol 0l a dtstrbulion, YOu IuSt k Ine (e-nuber sumnMary all ol Ihe indtv iual cbsuivations C.Ihe Meari and Ilie slandaid devlalin Norie 0f Ihe above (DJWta distnbution ahao Ihe rqht, Ine inewn Vrealet Ihan Ihe medlan Inc McIn less than Ihe median Ine mean and median are equal Nore above (C) Which 0f Ihe %c loxing Ieasi atected If an exlreie high Qullier added Vout dala? Tne median TTThe mean
#### Similar Solved Questions
##### Instruments; in the case of our ruler, another tool called caliper uses special techniques that can measure accurately down t0 as small as 01 mm; This could also be adding more checks to make sure your tools themselves aren t introducing any problems: maybe we measure with two or three rulers to be sure one of them wasn' made incorrectly. For as many things as you might guess at first are introducing uncertainty into an experiment, there are probably 10 times as many more that you aren'
instruments; in the case of our ruler, another tool called caliper uses special techniques that can measure accurately down t0 as small as 01 mm; This could also be adding more checks to make sure your tools themselves aren t introducing any problems: maybe we measure with two or three rulers to be ...
##### Assignment 5: Separation Scheme Correction On a page titled Incorrect Separation Scheme print the incorrect...
Assignment 5: Separation Scheme Correction On a page titled Incorrect Separation Scheme print the incorrect separation scheme provided for your molecule on Blackboard. The top of the separation scheme shows what other compound is mixed with your molecule. Assume for the purposes of this ass...
##### Nutrition for Patients with Upper Gastrointestinal Disorders: Chapter 17 Learning objectives: Dudek's Nutrition essentials for nursing...
Nutrition for Patients with Upper Gastrointestinal Disorders: Chapter 17 Learning objectives: Dudek's Nutrition essentials for nursing practice: 5 Describe nutrition therapy recommendations for someone with gastroesophageal reflux disease (GERD). 6 Teach a patient about the role of nutrition the...
##### Transactions for the month of June were: Purchases: Sales: June 1: 800 @ 3.20 June 2:...
Transactions for the month of June were: Purchases: Sales: June 1: 800 @ 3.20 June 2: 600 @ 5.50 June 3: 2200 @ 3.10 June 6: 1600 @ 5.50 June 7: 1200 @ 3.30 June 9: 1000 @ 5.50 June 15: 1800 @ 3.40 June 10: 400 @ 6.00 June 22: 500 @ 3.5 June 18: 1400 @ 6.00 June 25: 200 @ 6.00 Assuming that perpetua...
##### What is electron configuration? Describe the roles that the Pauli exclusion principle and Hund's rule play in writing the electron configuration of elements.
What is electron configuration? Describe the roles that the Pauli exclusion principle and Hund's rule play in writing the electron configuration of elements....
##### 4. Here are some facts about Washington State. Source: State Health Facts 6 pts) e Washington...
4. Here are some facts about Washington State. Source: State Health Facts 6 pts) e Washington total population: 7,297,300 Number of children (0-18) in Washington State: 1,720,900 Percentage of Washington population that is 65 years or older: 15% What percentage of Washington State's population i...
##### Copyright & the US Sup.Ct. Oil States v. Greene Energy. Did Congress violate Article III; Review...
Copyright & the US Sup.Ct. Oil States v. Greene Energy. Did Congress violate Article III; Review the case, and analysis by the Court. How did Justice Soromayer background perhaps lend contribution here on the position extended?...
##### Arcnethexidence MrinteharociepothesistGtronrKlmte etretmln ALot? AtT Deni thasitualioncan YOU concludc comclallan ccellicient 0.93, what vanlblcs hate variables have cale and-elcce rclationship?[tio Ahcntbena=There There UnerDenna Cilgcand-Lllecl (ciationship definitely NOT 4 cause-and-clfect relationship May nor causc and cffect relatiarishi?Supoose that YOu select a simple random smola of 200 houses currently for sale Anccics Lounce Can wc assurne that the dlstnbution the mcin nOUsC picct from
arcneth exidence Mrinte harocie pothesist Gtronr Klmte etretmln ALot? AtT Deni thasitualion can YOU concludc comclallan ccellicient 0.93, what vanlblcs hate variables have cale and-elcce rclationship? [tio Ahcntbena= There There Uner Denna Cilgcand-Lllecl (ciationship definitely NOT 4 cause-and-clfe...
##### Everyone at BU (all 40,000 individuals!) must roll their own fair 4-sided die once_ Consider the average of the rolls of all those people. Find the probability that this average falls between 2.51 and 2.53, justifying any approximations you may need to make_[Do NOT use any software. Show your work in details]
Everyone at BU (all 40,000 individuals!) must roll their own fair 4-sided die once_ Consider the average of the rolls of all those people. Find the probability that this average falls between 2.51 and 2.53, justifying any approximations you may need to make_ [Do NOT use any software. Show your work ...
##### 10 wild boars are released inside a forest. Their populationgrows at a rate proportional to its size. After 22 years, thepopulation increases to 1000 (one thousand) boars.(a) Find a formula for the boar population aftert years.(b) After how many years will the population reach 10000(ten thousand) boars? You may express your answer in terms of alogarithm.
10 wild boars are released inside a forest. Their population grows at a rate proportional to its size. After 22 years, the population increases to 1000 (one thousand) boars. (a) Find a formula for the boar population after t years. (b) After how many years will the population reach 10000 (ten thousa...
##### To etimate the amount of usable lumber in & trce; Chitra must first estimate the height of the tree: From Points _ Aand Bon the ground, she dctermined thar the angles of clevation for = certain tree were 418 and 529. respectively: The angle formed Jt the base of the trcc betwcen points_ and Bis 90*. and Aand Bare 30 m 'part. If the tree is perpendicular [0 the ground, what is its hcight to the nearest metre?
To etimate the amount of usable lumber in & trce; Chitra must first estimate the height of the tree: From Points _ Aand Bon the ground, she dctermined thar the angles of clevation for = certain tree were 418 and 529. respectively: The angle formed Jt the base of the trcc betwcen points_ and Bis ...
##### Set uP_ but do not evaluate, Integral for the length of the curve x=Y+y 2 $y$ V1+( + 3y)7 (1+ (1 + 3x292) dxY +y dy["(+(+apl)4y Vi+ (1 + 3*)2 dx~/6 points
Set uP_ but do not evaluate, Integral for the length of the curve x=Y+y 2 $y$ V1+( + 3y)7 (1+ (1 + 3x292) dx Y +y dy ["(+(+apl)4y Vi+ (1 + 3*)2 dx ~/6 points...
##### D 0 -epangi J Inttna V 1 1 W1 pi1 3 oan 1 8 pherved?
d 0 -epangi J Inttna V 1 1 W 1 pi 1 3 oan 1 8 pherved?...
##### Exercise (3) Variable Commission MMM Company pays its employees on graduated commission sacle: 3% on the first RO1O,000 sale, 5% on sales from RO1O,OOlto RO2O,000, 5% on sales irom RO2O,001 to RO3O,000. During the last month, Saeed s and Ameer's sales were RO25,000 and RO18,000 respectively. Calculate their gross pay?
Exercise (3) Variable Commission MMM Company pays its employees on graduated commission sacle: 3% on the first RO1O,000 sale, 5% on sales from RO1O,OOlto RO2O,000, 5% on sales irom RO2O,001 to RO3O,000. During the last month, Saeed s and Ameer's sales were RO25,000 and RO18,000 respectively. Ca... | 2022-07-06 06:31:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4478958547115326, "perplexity": 14512.712490781449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00023.warc.gz"} |
https://www.groundai.com/project/parseit-a-question-answer-based-tool-to-learn-parsing-techniques/ | ParseIT: A Question-Answer based Tool to Learn Parsing Techniques
# ParseIT: A Question-Answer based Tool to Learn Parsing Techniques
###### Abstract
Parsing (also called syntax analysis) techniques cover a substantial portion of any undergraduate Compiler Design course. We present ParseIT, a tool to help students understand the parsing techniques through question-answering. ParseIT automates the generation of tutorial questions based on the Context Free Grammar provided by the student and generates feedback for the student solutions. The tool generates multiple-choice questions (MCQs) and fill in the blank type questions, and evaluates students’ attempts. It provides hints for incorrect attempts, again in terms of MCQs. The hints questions are generated for any correct choice that is missed or any incorrect choice that is selected. Another interesting form of hint generated is an input string that helps the students identify incorrectly filled cells of a parsing table. We also present results of a user study conducted to measure the effectiveness of ParseIT.
ParseIT: A Question-Answer based Tool to Learn Parsing Techniques
Amey Karkare Indian Institute of Technology Kanpur Kanpur, UP, India karkare@cse.iitk.ac.in
Nimisha Agarwal Indian Institute of Technology Kanpur Kanpur, UP, India nimisha@cse.iitk.ac.in
\@float
\end@float
Intelligent Tutoring ; Education; Programming; Compilers; E-Learning
Compiler design is an important subject in the computer science curriculum for undergraduates [?]. Compilers are one of the success stories of Computer Science, where sound theoretical concepts (e.g. Automata, Grammars, Graph Theory, Lattice Theory etc.) are backed by practical implementations (Lexical analyzers, Parsers, Code Optimizers etc.) to solve the real world problem of fast and resource-efficient compilation. Most existing compiler courses [?, ?, ?, ?] divide the curriculum into modules corresponding to the phases of compilation. Instructors discuss the theory in lectures while students typically work on a semester-long project implementing a compiler for some small language.
In a typical course, about 15%-22% of the total time is spent on syntax analysis phase (also called parsing techniques, see Table ParseIT: A Question-Answer based Tool to Learn Parsing Techniques). A number of concepts are introduced to explain the internals of parsers, for example first sets, follow sets, item set, goto and closure sets, parse tables and the parsing algorithms [?], making the understanding difficult. While parser generators (YACC and its variants) allow the students to experiment with grammars, the working of the parser generated by the tools is still opaque111The generated parsers do produce debugging information when used with appropriate options, but this is of little didactical value as one needs to know the parsing algorithms to understand it..
Recent development in technologies has enabled institutions to offer courses to large number of students. These massive-open-online courses (MOOCs) [?, ?, ?] digitize the contents of the topics (lecture videos, notes etc), and allow students to access the contents beyond physical boundaries of classrooms. The increase in number of students has added challenges for the instructor for the tutoring aspects, such as the creation of new problems for assignments, solving these problems, grading, and helping the students master a concept through hands-on exercises. These challenges have prompted researchers to develop automated tutoring systems to help the student to explore a course based on his skills and learning speed [?, ?, ?, ?, ?].
In this paper, we present ParseIT, a tool for teaching parsing techniques. ParseIT helps students to understand the parsing concepts through automatically generated problems and hints. Problems are generated based on a Context Free Grammar (CFG) given as input. The tool evaluates the solutions attempted by the user for these problems. Upon evaluation, if the solutions provided by the users are incorrect, it generates hint questions. The problems generated by the tool follow a general Multiple Choice Question (MCQ) pattern, where a user is given a problem with a set of possible choices, 1 or more of which are correct. The incorrect solutions are the ones where a correct option is not chosen, or an incorrect option is chosen, or both. The hints are generated in the forms of (simplified) questions to direct student toward the correct solution. Hint generation procedures involve different types of algorithms, of which the input string generation algorithm is notable. For an incorrect parse table provided by the user, this algorithm enables the creation of an input string that distinguishes a successful parse from an unsuccessful one.
We describe some of the systems developed by other for teaching compiler concepts in Sec. ParseIT: A Question-Answer based Tool to Learn Parsing Techniques. The tool itself is described in Sec. ParseIT: A Question-Answer based Tool to Learn Parsing Techniques. Input string generation algorithms for LL and LR parsers are given in Sec. LABEL:sec:input. We present a summary of the user study in Sec. ParseIT: A Question-Answer based Tool to Learn Parsing Techniques, and conclude in Sec. ParseIT: A Question-Answer based Tool to Learn Parsing Techniques.
Several efforts exist to automate teaching phases of compilers and to help developing a compiler as a course project. LISA [?] helps students learn compiler technology through animations and visualizations. The tool uses animations to explain the working of 3 phases of compilers, namely, lexical analysis, syntax analysis, and semantic analysis. Lexical analysis is taught using animations in DFAs. For syntax analysis, animations are shown for the construction of syntax trees and for semantic analysis, animations are shown for the node visits of the semantic tree and evaluation of attributes. Students understand the working of phases by modifying the specification and observing the corresponding changes in the animation.
Lorenzo et. al. [?] present a system for test-case based automated evaluation of compiler projects. Test cases (inputs and corresponding desired outputs) designed by the instructor are given as input to students’ compilers. The tool then assesses the compiler in three distinct steps–compilation, execution, and correction. The system automatically generates different reports (for instructors and students) by analyzing the logs generated at each of these steps.
Demaille et. al. [?, ?] introduce several tools to improve the teaching of compiler construction projects and make it relevant to the core curriculum. They made changes to Bison [?] to provide detailed textual and graphical descriptions of the LALR automata, allow the use of named symbols in actions (instead of $1,$2, etc.), and use Generalized LR (GLR) as backend. Waite [?] proposed 3 strategies for teaching compilers–software project, application of theory and support for communicating with a computer. Various other tools are also available to teach different phases of compiler like understanding code generation [?], and understanding symbol tables through animations [?].
Our work is different in that we use question-answering as means to explain the working of parsing technology and to guide the students towards the construction of correct parse table.
ParseIT takes as input a context free grammar and uses it as a basis for generating questions. These questions are in the form of MCQ 222MCQs have their advantages as well as disadvantages [?]. We chose MCQ as it is easier for the system to evaluate student choices as compared to the free form text answers. and deal with various concepts related to parsing. The normal workflow involves the following steps:
1. The user provides an input grammar and the choice of topic. The topics refer to the concepts related to parsing such as FIRST set, FOLLOW set, LL Parsing Table, LL Parsing Moves, LR(0) Item-sets, SLR Parsing Table, SLR Parsing Moves, etc.
2. A primary multiple choice question is generated based on the above two pieces of information.
3. If the user answers the problem incorrectly, then hints are generated for the same question in the form of questions.
4. When a correct solution to the problem is received, another question for the same topic is generated and presented to the user.
In the preprocessing step, the system takes a grammar as input and generates the information required for correct solutions. In particular, the tool generates the FIRST set and the FOLLOW set for all non-terminals, LL Parsing Table, LR(0) items, canonical set of items for SLR parser, and SLR parsing table.
For primary problem for the selected topic, ParseIT uses the data-structures to form MCQs having multiple correct answers. Users have to select all valid options, and no invalid option, for the answer to be deemed correct. The options are also generated using the preprocessed data.
In the answer evaluation step, the solution given by the user is compared with the solution computed by the tool in the preprocessing step. If the solutions match, then the control transfers back to the primary problem generation step to generate the next question. However, if the solution is wrong, the tool collects: a) the incorrect options which are selected, and b) the correct options which are not selected by the user and passes them to the hint generation step.333In the rest of the paper, unless specified otherwise, we use the term incorrect choice for both the types of mistakes, i.e., the missing valid choice and the selected invalid choice.
For hints, ParseIT generates multiple hint questions for each of the incorrect choices. These questions are MCQs having a single correct choice. These questions help the user to revise the concept required to get correct solution to the primary question.
Parsing techniques require solving three main types of problems: a) computation of sets of elements, for example, FIRST, FOLLOW, LR Items, GOTO, CLOSURE, b) computation of entries in a parse table, and c) steps of a parser on a given input string.
Since all the sets and tables are computed by ParseIT in the preprocessing step, generation of questions is easy. The details are given in a technical report ´=´
To verify the effectiveness of ParseIT, we implemented the tool in Java. The prototype implementation is available as a JAR file from anonymous Dropbox link [?]. A web interface was created for the user study.
The user study was conducted with with 16 students who have already done an introductory course on Compiler Design. We used 2 grammars and created 22 questions of 1 mark each related to various sub-topics in parsing. The question papers are code named P1 and P2. The students were randomly divided into 4 groups of 4 students each, G1–G4.
Each group solved one question paper using ParseIT, and the other using pen and paper (offline mode). To maintain equality between the two approaches, we provided a cheat sheet containing the required rules to each student for offline mode. Further, the sequence of ParseIT mode and offline mode was alternated. In particular, the groups solved the grammar in the following order:
G1: P2 using offline followed by P1 using ParseIT P1 using ParseIT followed by P2 using offline P2 using ParseIT followed by P1 using offline P1 using offline followed by P2 using ParseIT
The students were asked to fill a survey about the effectiveness of ParseIT after solving both the papers.
Fig. LABEL:fig:avg shows the average of marks for groups while Fig. LABEL:fig:indiv shows average marks for individuals with and without ParseIT. Comparing the average marks across sessions, we found that average marks for G4 remain unchanged while for G1 it reduced by 0.5. For both G2 and G3, the average marks increased by 1. If we include the correct answer after a hint is taken during ParseIT mode, we found that most student could get nearly full marks across the groups (average over all students improved to 21.75 from 18.50 for ParseIT without hints and 19.18 for offline). The biggest improvement was of 7 marks, for 3 students.
Even though the data set is small, it shows that online platform itself does not make a big difference in the understanding of parser concepts, but the hints’ mechanism that results in improvements in marks. The hints allow students to correct their mistakes early. It is also easy to figure out the source of confusion for students, which can be of help to the instructor. The post study survey also corroborated our inference: 15 students accepted that the hints provided a better understanding of parsing, and helped reach the correct solution. One student had a negative feedback as he commented that “Hints are produced as questions, which increases confusion as the user has already answered it wrong.”. However, generating hints in other forms (say natural language sentence) is an area of future long-term research.
In this paper, we described ParseIT for teaching parsing techniques. Our approach is question-answering based: problems are generated automatically and given to students to explain the working of a parser. Further, the hints provided by the tool are also in forms of targeted questions that help a student discover her mistake and revise the concept at the same time. ParseIT allows students to learn the techniques at their own pace, according to their convenience. The user study shows that the interactive nature of ParseIT helps users to learn from their own mistakes through experiments, and reduce the burden on teachers and teaching assistants.
Similar tools exist to teach few other phases of a compiler. In future, we plan to integrate these tools with ParseIT, and develop new tools to automate tutoring of all the phases of the compiler. We also plan to build animations around these concepts to improve student experience and understanding. An interesting question that will require user study over a longer period is whether the hints just helps the students select the correct answer during the exam, or do they have a lasting learning effect. Our plan is to deploy ParseIT in a large class teaching Compilers, to understand its impact on learning.
• 1 N. Agrawal. A tool for teaching parsing techniques. Master’s thesis, IIT Kanpur, 2015. http://www.cse.iitk.ac.in/ karkare/MTP/2014-15/nimisha2015parsing.pdf.
• 2 A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman. Compilers: Princiles, Techniques, and Tools. Pearson Education, Inc, 2006.
• 3 R. Alur, L. D’Antoni, S. Gulwani, D. Kini, and M. Viswanathan. Automated grading of DFA constructions. In International Joint Conference on Artificial Intelligence, IJCAI, pages 1976–1982, 2013.
• 4 GNU Bison.
• 5 Coursera.
• 6 Compilers.
• 7 Introduction to Compilers.
• 8 Computer Science Curricula 2013. http://www.acm.org/education/CS2013-final-report.pdf, December 2013.
• 9 Compilers.
• 10 Principles of Compiler Design. http://www.cse.iitk.ac.in/ karkare/courses/2011/cs335.
• 11 L. D’Antoni, D. Kini, R. Alur, S. Gulwani, M. Viswanathan, and B. Hartmann. How can automatic feedback help students construct automata? ACM Trans. Comput.-Hum. Interact., 22(2):9:1–9:24, 2015.
• 12 A. Demaille. Making compiler construction projects relevant to core curriculums. In Innovation and Technology in Computer Science Education, ITiCSE, 2005.
• 13 A. Demaille, R. Levillain, and B. Perrot. A set of tools to teach compiler construction. In Innovation and Technology in Computer Science Education, ITiCSE, 2008.
• 14 edX.
• 15 S. Gulwani. Example-based learning in computer- aided STEM education. Commun. ACM, 57(8):70–80, 2014.
• 16 E. J. Lorenzo, J. Velez, and A. Peñas. A proposal for automatic evaluation in a compiler construction course. In Innovation and Technology in Computer Science Education, ITiCSE, pages 308–312, 2011.
• 17 Computer Science Curricula 2013. https://facultyinnovate.utexas.edu/teaching/assess-learning/question-types/multiple-choice, Last Accessed October 2016.
• 18 M. Mernik and V. Zumer. An educational tool for teaching compiler construction. IEEE Trans. Education, 46(1):61–68, 2003.
• 19 NPTEL: National Programme on Technology Enhanced Learning.
• 20 ParseIT Implementation.
• 21 R. Singh, S. Gulwani, and A. Solar-Lezama. Automated feedback generation for introductory programming assignments. In Programming Language Design and Implementation, PLDI, pages 15–26, 2013.
• 22 T. Sondag, K. L. Pokorny, and H. Rajan. Frances: A tool for understanding code generation. In ACM Technical Symposium on Computing Science Education, SIGCSE, 2010.
• 23 J. Urquiza-Fuentes, F. Manso, J. A. Velázquez-Iturbide, and M. Rubio-Sánchez. Improving compilers education through symbol tables animations. In Innovation and Technology in Computer Science Education, ITiCSE, 2011.
• 24 W. M. Waite. The compiler course in today’s curriculum: Three strategies. SIGCSE Bull., 38(1):87–91, Mar. 2006.
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minimum 40 characters and the title a minimum of 5 characters | 2020-05-31 16:54:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.374170184135437, "perplexity": 2047.9449159030744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00160.warc.gz"} |
https://gauravtiwari.org/introduction-to-universe-part-i-star-and-their-types/ | ## Stars
Universe is vast and amazing. There are many millions of trillions of stars in the sky, most of them being bigger than our sun. Stars account for 98% of the matter in a galaxy. The rest of 2% consists of interstellar or galactic gas & dust in a very attenuated form. The normal density of interstellar gas throughout the galaxy is about one tenth of a hydrogen atom per cm³ volume. Stars tend to form groups.
### Major types of Stars according to their grouping
Lone Stars : Being alone, i.e., no planets around, and going on their own, lone stars are exceptional. They do not follow the condition to be in a galaxy and hence they live outside of a galaxy.
Single Stars: These are found in galaxy & are single but with planets around (for example Our Sun). These do not number more than 25% of Stellar (Star) Population.
Binary Stars or Double Stars: These exist as pair of stars (e.g., Antares in Scorpio is actually ‘two stars’ combination). They are some 33% of stellar population.
Multiple Stars live among many stars around them. Capella & Alpha Centauri comprise 3 stars each, while Castor consists of 6 stars.
Note that, stars which appear single to the naked eyes are sometimes double stars. There are two stars revolving around a common center of gravity. They are also found in orbital motion round each other, in periods varying from about one year to many thousands of years.
### Vivid types of Stars according to their nature
#### Red Giants
When the hydrogen, the main element in a star, is depleted, its outer regions swell and redden. This is the first sign of age. Such stars are called Red Giants.
Our Star, the Sun, is expected to turn into a red giant in another 5 billion years. Red giants are dying stars that has expanded greatly from its original size and gives off red light. They have gigantic dimensions.
#### Black Dwarf
via Wikipedia
It is the blackened corpse of a star. Ultimately it disappears into the blackness of the space.
#### White Dwarf
via Wikipedia
It is a tiny, dense, hot star, representing a late stage in the life of a star. The matter in it is so incredibly dense that a single teaspoonful of it would weigh several tonnes.
#### Supergiants
These are huge stars, with all their hydrogen fuel used up in their core, but continue to expand hundred of times bigger than its original size before they finally die.
#### Novae & SuperNovae
Artistic imagination
These are kind of stars, whose brightness increases suddenly by 10 to 20 times are more and then fades gradually into normal brightness. The sudden increase in brightness is attributed to a partial or outright explosion. In Nova, it seems that only the outer shell explodes, whereas in SuperNova the entire star explodes.
### Variable Stars
There are stars that show varying degrees of luminosity.
#### Quasars
These are variable stars and are powerful quasi stellar sources of radio radiations.
#### Pulsars
These are also variable stars which emit regular pulses of electromagnetic waves of very short duration.
# Solar System
Solar system Imaginated
The Solar System means system of the Sun. All bodies under the gravitational influence of our local star, the Sun, together with the Sun, form the Solar System.
# Bodies? What kind of bodies?
•The largest bodies, orbiting the Sun, including Earth are called planets.
•Often smaller cool bodies called satellites or moons, orbiting a planet.
•Bodies smaller than the planets that orbit the sun are classed as asteroids if they are rocky or metallic, comets if they are mostly ice and dust, and meteoroids if they are very small. Most comets release gases as they near the heat of sun, producing a luminous cloud called coma & often a long tail. A meteoroid that burns in Earth’s atmosphere is a meteor, while one that reaches Earth without burning completely becomes a meteorite.
• After the exclusion of Pluto from the planet category, a new category is formed: Dwarf Planet.
# Elements of Solar System
Stars: (1) The Sun
Planets: (8) Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune.
Dwarf Planets: (4) Pluto, Charon, Eris, Ceres- along with the numerous satellites that travels around most of the planets.
Others: Asteroids, Interplanetary Dust, Plasma.
# The Sun
• Sun is one of more than 100 billion stars in the giant spiral galaxy called the Milky Way.
• Sun is the center of the solar system. Its mass is about 740 times as much as that of all the planets combined.
• It continuously gives off energy in several forms- visible light; invisible infrared, ultra-violet, X-rays and $\gamma$ -rays, cosmic rays, radio waves and plasma.
•The Sun generally move in almost circular orbits around the galactic center at an average speed of about 250 km per second.
•It takes 250 million years to complete one revolution round the center. This period is called a Cosmic Year.
•It’s energy is generated by nuclear fusion in its interior. It is calculated that the Sun consumes about 4 million tonnes of hydrogen every second. At this rate, it is expected to burn out its stock of hydrogen in about 5 billion years and turn into a red giant.
# Solar Statistics
Absolute Visual Magnitude: 4.75
Diameter: 1,384,000 km
Time of one Rotation as seen from the Earth: 25.38 days (at equator) to 33 days (at poles).
Chemical Composition:
• Hydrogen 71%
• Helium 26.5%
• Other Elements 2.5%
Age: 4.5 billion years, estimated.
Mean distance from Earth 8.2 light seconds i.e., about 150 million km.
# Structure of the Sun
We may divide the internals of the Sun into following major parts:
• Corona
• Chromosphere
• Photosphere
• Solar Envelope
• The Core
### Corona
Corona is the outermost part of the sun & you may see it when Full Solar Eclipses occur. The temperature of corona is about 2.7 million °C, which is hot enough to emit ultraviolet and X-rays. The corona extends millions of kilometers into space above the photosphere.
### Chromosphere
In a solar eclipse, a red circle around the outside of the Sun can be seen sometime. This is the chromosphere. The chromosphere is made up of the gases that extend away from the photosphere.Chromosphere is of red color, caused by the abundance of hydrogen. It has a greater (than Photosphere) temperature of about 10000°C. The Chromosphere merges into Corona & Photosphere.
### Photosphere
The photosphere is the zone from where the sunlight we see is emitted. The photosphere is a layer of low pressure gasses surrounding the envelope. It is 400 km thick, with a temperature of 4500°C to 6000°C.
### The Core:
The innermost layer of the sun is the core with a density of 160g/cm³ (10 times that of lead). The core might be expected to be solid. However, the core’s temperature of 15 million°C keeps it in a gaseous state. In the core, fusion reactions produce energy in the form of $\gamma$ rays and neutrinos.
From the photosphere of the sun to the chromosphere and to the Corona , the temperature increases, while the same procedure follows up from the photosphere to the core of the sun (I mean temperature increases). Thus you may say in other words that the photosphere is the coolest place in the sun.
# Spots in Sun? — SunSpots
The sun has enormous organized magnetic fields that reach from pole to pole. Loops of the magnetic field oppose convection in the convective envelope and stop the flow of energy to the surface. This results in cool spots (i.e. SUNSPOTS) at the surface which produce less light than the warmer part.
Sunspots are dark spots on the photosphere, typically with the same diameter as the Earth.
Sunspots have even lower temperatures than the photosphere. The center of a spot is called the umbra, looks dark gray if heavily filtered & is only 4200°C (as compared to the photosphere at 6000°C). Penumbra is the portion around the umbra, which looks lighter gray (if filtered).
Sunspots come in cycles, increasing sharply (in numbers) & then decreasing sharply. The period of this solar cycle is about eleven years. The largest spot ever measured (April 1974) covered 18130 million km² i.e., 0.7% of the Sun’s visible surface. The life periods of these spots also vary—from a few hours to many weeks.
# Polar Auroras: Beauty of Sunlight on Earth
Polar auroras appear only on the poles and since there are two poles on the Earth, there exist two type of auroras, Aurora Borealis or northern Lights on north pole and the Aurora Australis or Southern Lights on south pole. These are lights that sweep across the sky in waves or streamers or folds. They are very often multicolored and provide one of the finest spectacles in nature. They occur in Arctic and the Antarctic regions (respectively), but the Northern lights can be seen as far south as New Orleans in America and the Southern lights as far north as Australia.
Updated: April 5th, 2014
#### Gaurav Tiwari
A designer by profession, a mathematician by education but a Blogger by hobby. Loves reading and writing. Just that.
### This Post Has One Comment
1. I AM IN 10TH STANDARD, BUT I LOVE LEARNING ABOUT SPACE AT A VERY YOUNG AGE. THE PICTURES PUT ON THIS WEBSITE ARE REALLY AMAZING. I LOVE THE PICTURES!!!!!!!!!!!!!!!!!!!!! | 2018-02-25 15:29:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4450840950012207, "perplexity": 1931.3729780888448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816647.80/warc/CC-MAIN-20180225150214-20180225170214-00158.warc.gz"} |
http://crypto.stackexchange.com/questions?page=1&sort=active&pagesize=30 | # All Questions
7k views
### What is the difference between MAC and HMAC?
In reference to this question, what are the "stronger security properties" that HMAC provides over MAC. I got that MAC requires an IV whereas HMAC doesn't. I also understood that MAC may reveal ...
24 views
### Encryption algorithms larger than 256 Bit for “big data” encryption?
I'm somewhat new to encryption. When looking at encryption programs for big data, I frequently see a maximum of 256 bits. Why do we generally restrict our (symmetric) keys to 256 bits? Can more ...
11 views
2k views
### Why should I make my cipher public?
As I understand it, the less people know about the internals of my protocol or cipher, the more secure the protocol is. However Kerckhoffs's principle states that A cryptosystem should be secure ...
8 views
### Are there any paper/human strong crypto systems? [duplicate]
Are there any paper/human executed crypto systems that have reasonable strength? I am aware of a few candidates so far Solitare from Cryptonomicon https://en.wikipedia.org/wiki/Solitaire_(cipher) ...
219 views
### What are the practical implications of ciphertext distinguishability?
Commonly there are four ways to "break" a secrecy-focused cryptosystem: Recover the secret key Recover the message Distinguish an encryption from random noise Distinguish the encryption of two ...
19 views
### One time pad ciphers in emails
In order to achieve very high security for privacy, would it be cryptographically secure to use one time pad ciphers in emails? The distribution of the keyword would pose no problems since I would ...
37 views
What is the advantage of AEAD ciphers? Why is the TLS working group pushing for them? I thought modern cipher suites require SHA256 for authentication. What advantage is there to including Poly1305?
645 views
64 views
### Point addition in NaCl/libsodium (Curve25519)
In NaCl and libsodium, the crypto_scalarmult function implements the operation $Q = kP$ (scalar/point multiplication). There doesn't seem to be a function for point ...
114 views
+50
### What if the p and q used in keys generation of Pailler cryptosystem are composite?
I've seen a few implementations of Paillier cryptosystem that uses probable primes to choose $p$ and $q$. Assuming that a keypair is generated with $p$ and $q$ that are coprime and that $pq$ is ...
13 views
### What is the definition of “the correlation” and “the difference propagation probability”? [on hold]
Non linearity has two properties: 1. the correlation 2. the difference propagation probability what is the definition of each concept?
30 views
### parallel shifters [closed]
suppose there is two registers each of size 8 bit and the inputs to these register are in parallel so can we combine this 2 register so that new combined register will have input 16 bits and all are ...
20 views
### how can eve guess a pseudo-random key has been used for encryption [on hold]
I am stuck in my project and earnestly need suggestions for the above question
251 views
### AES-CTR with ephemeral keys vs IV
Let's say that we have a key exchange of some sort, which leaves A and B sharing the long-term secret $S$. Then, A and B want to use AES-CTR for the communication, which leaves us with several ...
62 views
### Attacks on elliptic-curve based cryptosystems through solving the Decisional Diffie-Hellman Problem with the Weil Pairing
Are there any examples of practical attacks on cryptosystems set over elliptic curves which utilize the easiness of DDH for certain choices of curves $E(\textbf{F}_q)$, and as such their lack of ...
16 views
### Light weight crypto library implementations [migrated]
Are there any open source crypto libraries that can be used in embedded systems or in other memory constrained places like a boot rom? I am looking for libraries that I can compile only the algorithms ...
26 views
### Basic cryptography [on hold]
{First of all I'd like to apologise about my English} I'm interested in most popular cryptography encodements, but ones, where the substitution cipher is known (for example Caesar's code), not created ...
180 views
### PGP String to Key specifiers
I've been reading through the PGP Standard and here I'm a little confused. This section is discussing converting string data to a session key. I'm confused about the paragraph in bold. First off, what ...
64 views
### Example: pre-image resistance to second pre-image resistance
It is possible to convert a pre-image resistant function $f:\{0,1\}^{n}\rightarrow \{0,1\}^{n}$ to a second-preimage resistant function? I am thinking to use a pseudo-random generator and construct ...
26 views
### What consequences do the plaintext space size has on the performances in the BGV scheme?
In the BGV paper [1], the authors say in §5.4 that you can have $\mathbb{Z}_p$ as plaintext size with a large $p$. What is the impact of the size of $p$ on the ciphertext size and computational work ...
28 views
### Help me find out what type of encryption this is. [on hold]
Alright this may not be an easy one. May not be something anyone knows. What I do know already (and I have made a password recovery tool from what I have learned) The password is translated into a ...
20 views
### need help for decrypting a cipher.. i am new to this so was wondering if someone could tell me how to go about it [on hold]
how to begin to decrypt any numeric cypher? what are the basic steps to solve easy numeric cyphers?
95 views
### Is there any such thing as “proof of location”?
I'm looking for a way for someone to prove to me their geographical location without a trusted third-party. Imagine wanting to prove to someone you're actually physically in a specific place. I'd ...
171 views
### Why calculate pi to estimate randomness?
Why do testing suites calculate pi using the Monte Carlo method to determine if a series of numbers are random? As far as I can tell, the Monte Carlo method can be used to estimate pi itself (as a ...
68 views
### I don't know whattype of cipher this is [on hold]
I am trying to solve the following sipher Nosdhiibotidcylhrdeovedljuetre I have no experience with ciphers but substitution didn't seem to work, and since its a long string without breaks process of ...
118 views | 2015-08-01 07:56:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8135536313056946, "perplexity": 2067.094126718548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988598.68/warc/CC-MAIN-20150728002308-00189-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3071980/patterns-in-division-graphs-modulo-n | # Patterns in division graphs modulo $n$
(I made an edit due to hints from Alex Ravsky. Thanks to him.)
General division graphs with nodes $$1,2,\dots N$$ and an edge between $$n$$ and $$m$$ when $$n$$ divides $$m$$ or $$m$$ divides $$n$$ are sparse and give rise to one interesting visual pattern.
In turn, division graphs modulo $$N$$ with nodes $$1,2,\dots N-1$$ and an edge between $$n$$ and $$m$$ when $$n|m$$ modulo $$N$$ or $$m|n$$ modulo $$N$$, i.e. when it exists a $$k$$ with $$k\cdot n \equiv m \pmod{N}$$ or vice versa, are dense. Especially when $$N$$ is prime the division graph modulo $$N$$ is isomorphic to the complete $$N-1$$ graph.
So it makes sense to consider the non-division graphs modulo $$N$$ with an edge between $$n$$ and $$m$$ when $$n$$ doesn't divide $$m$$ or $$m$$ doesn't divide $$n$$. (Note that this is not the complement of the division graph which has an edge between $$n$$ and $$m$$ when $$n$$ doesn't divide $$m$$ and $$m$$ doesn't divide $$n$$.) These graphs are relatively sparse, and especially isomorphic to the empty $$N-1$$ graph when $$N$$ is prime.
For most composite numbers, these graphs don't reveal anything interesting, e.g. for $$N=8$$, $$N=26$$, and $$N=90$$:
But eventually interesting patterns emerge, e.g. for $$N=77$$, $$N=91$$, and $$N=121$$:
The distinctness of the observed patterns - nodes with a significantly higher degree than others, seen as dark spots with bright lines between them - is highest when $$N$$ is a product of two not too small primes, e.g. $$77 = 7\cdot 11$$, $$91 = 7\cdot 13$$, , $$121= 11\cdot 11$$.
My questions are:
1. How can the latter be explained?
2. How can the specific high-degree numbers be explained?
The high-degree numbers for $$N=77$$ are $$7, 11, 14,21,22, 21, 28, 33, 35,\dots$$, in general the multiples of the prime factors of $$N$$.
• Why is there no edge between $1$ and the even numbers in the graph for $N=8$? While $1$ divides all of them, it is not divisible by any them, or is it? – Peter Košinár Jan 17 at 19:52
It is easy to check that a number $$n$$ divides $$m$$ modulo $$N$$ iff $$(N,n)|m$$ iff $$(N,n)|(N,m)$$, where $$(N,n)$$ (resp. $$(N,m)$$) is the greatest common divisor of the numbers $$N$$ and $$n$$ (resp., $$m$$). Thus $$n$$ and $$m$$ are adjacent in the graph iff not $$(N,n)|(N,m)$$ or not $$(N,m)|(N,n)$$, that is when $$(N,n)\ne (N,m)$$. Thus the graph is a complete $$\tau(N)-1$$-partite graph, where $$\tau(N)$$ is the number of divisors of the number $$N$$, parts of the partition of the graph are indexed by divisors of $$N$$ (which are less than $$N$$), and a part $$[d]$$ corresponding to a divisor $$d$$ consists of all numbers $$n$$ between $$1$$ and $$N-1$$ such that $$(N,n)=d$$. For instance, for the graph for $$N=8$$ at the first picture should be $$3$$-partite with the parts $$[1]=\{1,3,5,7\}$$, $$[2]=\{2,6\}$$, and $$[4]=\{4\}$$. In particular, in the first figure $$1$$ also has to be adjacent to $$2$$, $$4$$, and $$6$$, because not $$(2|1)$$, not $$(4|1)$$, and $$(6|1)$$. Since a vertex of the graph is adjacent exactly to vertises of all parts but its own, the set of vertices of large(st) degree are vertices from small(est) parts. For instance, for even $$N$$ the largest degree has a unique vertex $$N/2$$ (see the pictures for vertex $$4$$ for $$N=8$$, vertex $$13$$ for $$N=26$$, and vertex $$45$$ for $$N=90$$); this is so because for a divisor $$n\ne N/2$$ of $$N$$ part $$[n]$$ contains at least two vertices $$n$$ and $$N-n$$. In general, part $$[d]$$ consists of $$\varphi(N/d)$$ vertices, where $$\varphi$$ is Euler function. That is a vertex $$n$$ has a largest degree iff $$\varphi(N/(N,n))$$ is the smallest, that is iff $$N/(N,n)$$ is the smallest prime divisor $$p(N)$$ of $$N$$. There are $$p(N)-1$$ such vertices $$n$$, namely, vertices $$kN/p(N)$$ with $$k=1,\dots, p(N)-1$$. When $$N=p_1p_2$$ is a product of two prime numbers then the graph is $$3$$-partite with parts $$[1]$$ of size $$N-p_1-p_2+1$$, $$[p_1]$$ of size $$p_2-1$$, and $$[p_2]$$ of size $$p_1-1$$. For instance, for $$N=77$$, $$[7]=7,14,21,35,\dots, 70$$ and $$[11]=11,22,33,\dots, 66$$, see the fourth and fifth pictures. If $$N=p^2$$ for a prime number $$p$$ then the graph is $$2$$-partite, with the part $$[1]$$ of size $$[N-p]$$, and $$[p]$$ of size $$p-1$$, see the last picture for $$p=11$$.
• The graphs are defined as symmetric (undirected) graphs, right. I wanted to make this clear by saying "draw a line between $n$ and $m$" instead of "draw an arrow from $n$ to $m$". The full definition so would have to be: $n$ and $m$ are related when $n|m$ modulo $N$ or $m|n$ modulo $N$. – Hans-Peter Stricker Jan 16 at 8:28
• @HansStricker OK, but then in a complement of a division graph modulo $N$ there is an edge between $n$ and $m$ iff $n$ does not divide $m$ modulo $N$ and $n$ does not divide $m$ modulo $N$. In particular, the graph at the first picture is not complement of a division graph, because it has an edge between $3$ and $6$, whereas $3$ divides $6$, right? – Alex Ravsky Jan 16 at 9:35
• The graphs I'm talking about are not the complements of division graphs: There's not an edge between $n$ and $m$ when neither $n|m$ nor $m|n$ (modulo $N$ omitted), i.e. NOT($n|m$ OR $m|n$) = NOT($n|m$) AND NOT($m|n$), but when NOT($n|m$) OR NOT($m|n$). Of course this makes a huge difference conceptually, thanks for pointing this out! So you don't get the "complement" by just swapping edges and non-edges, but you have to calculate it from sratch. (Should I edit my question, what do you suggest?) – Hans-Peter Stricker Jan 16 at 10:13 | 2019-05-19 17:09:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 130, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285140633583069, "perplexity": 128.09818856484597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255071.27/warc/CC-MAIN-20190519161546-20190519183546-00043.warc.gz"} |
https://marcofrasca.wordpress.com/2011/04/20/ | ## Chiral condensates in a magnetic field: Accepted!
20/04/2011
As my readers could know, I have had a paper written in collaboration with Marco Ruggieri (see here). Marco is currently working at Yukawa Institute in Kyoto (Japan). The great news is that our paper has been accepted for publication on Physical Review D. I am really happy for this very good result of a collaboration that I hope will endure. Currently, we are working on a proof of existence of the critical endpoint for QCD using the Nambu-Jona-Lasinio model. This is an open question that has serious difficulties to get an answer also for a fundamental problem encountered on the lattice: The so-called sign problem. So, a mathematical proof will make a breakthrough in the field with the possibility to be experimentally confirmed at laboratory facilities.
Marco himself proposed a novel approach to get a proof of the critical endpoint to bypass the sign problem (see here).
So, I hope to have had the chance to transmit to my readership the excitement of this line of research and how it is strongly entangled with the understanding of low-energy QCD and the more deep question of the mass gap in the Yang-Mills theory. Surely, I will keep posting on this. Just stay tuned!
Marco Frasca, & Marco Ruggieri (2011). Magnetic Susceptibility of the Quark Condensate and Polarization from
Chiral Models arXiv arXiv: 1103.1194v1
Philippe de Forcrand (2010). Simulating QCD at finite density PoS (LAT2009)010, 2009 arXiv: 1005.0539v2
## A simpler explanation for the CDF bump
20/04/2011
A lot of fuss arose about the recent almost finding of a new particle at Tevatron (see here). Several exotic hypotheses were put forward mostly looking for physics beyond Standard Model. Of course, being there such a bump at about $3\sigma$, we cannot yet cry out for a discovery and more mundane explanations could exist.
Indeed, this is the content of this paper appeared on arXiv. These authors point out some weak points in the analysis done by CDF that amount in the end at an imperfect estimation of the background. This is also my claim as strong interactions are not completely under control. I give here authors’ conclusions for your considerations:
In conclusion, we observe that the dijet invariant mass peak seen in the recent CDF Wjj cross section is completely consistent with the excess observed in the CDF single-top-quark analysis. Both may be explained by an upward fluctuation in the CDF data set of s-channel single-top-quark production, and t-channel production accompanied by an additional low-energy jet. The latter process is poorly modeled by Monte Carlo, and the apparent t-channel excess could simply be an artifact of theoretical uncertainty. Given the modest excess observed by the D0 Collaboration in their single-top-quark data set , we predict the D0 Collaboration would not see a significant dijet invariant mass peak if they follow the CDF procedure.
So, Standard Model strikes back again.
CDF Collaboration, & T. Aaltonen (2011). Invariant Mass Distribution of Jet Pairs Produced in Association with a
W boson in ppbar Collisions at sqrt(s) = 1.96 TeV arXiv arXiv: 1104.0699v1
Zack Sullivan, & Arjun Menon (2011). A standard model explanation of a CDF dijet excess in Wjj arXiv arXiv: 1104.3790v1 | 2022-01-18 05:22:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539243936538696, "perplexity": 1333.1293677263175}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00508.warc.gz"} |
http://uncyclopedia.wikia.com/wiki/User:Nikosai | # User:Nikosai
Nikosai, also known as Niko, is that annoying little guy who follows other people around and hijacks all their edits, adding random extra bits on the end. He is also the founder of the Happy Teddy cult, although he does not in fact worship the I'M HAPPY TEDDY, himself.
And just remember, kiddies:
$E = mc^{2}$
Now if only the bloody LaTeX renderer would work... update: It's not going to work... and if I ever get off my lazy bum and use LaTeX2HTML to render the above (unlikely), then I'll replace it with a nice, pretty image. | 2014-04-19 08:13:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5718321800231934, "perplexity": 3557.5907949646175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/computing-the-area-under-a-curve-with-matlab.285458/ | # Computing the area under a curve with Matlab
1. Jan 17, 2009
### Faiza Mustafa
how to find the area under the curve by using matlab
i mean to say what is the command for this
2. Jan 17, 2009
### Faiza Mustafa
Re: difficult integral.....
i am worried
Thanks
3. Jan 17, 2009
### qntty
The command to evaluate
$\int_a^b f(x) dx$
is
Code (Text):
4. Jan 18, 2009
### Faiza Mustafa
Thanks alot
I shall check it.
5. Jan 18, 2009
### Faiza Mustafa
Hi qntty
hw r u?
in the previous problem i am actually confused between "quad" and "quad1" command
and is the f(x) is y=f(x)? i mean to say the function that relate the y and x.
Do you if there is some data values in excel then how
can we plot these values in matlab.
thnx
6. Jan 18, 2009
### ialbrekht
Hi Faiza,
If you have only few point of your curve, than you may use two ways:
1. Find approximation polynomial and than evaluate integral;
2. Just use following formula (by integral definition):
$$\sum_{i=1}^{n-1} f_i(x_{i+1}-x_i) , where f_i=f(x_i)$$
There are some other methods you may find in the Internet.
Note: Both methods gives not exact solutions. For exact solution you have to know f(x) | 2018-03-19 19:27:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6083021759986877, "perplexity": 5187.1675500245965}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00499.warc.gz"} |
https://www.albert.io/learn/ap-physics-c-mechanics/question/mass-of-a-planet-with-non-constant-density | Limited access
A particular planet is spherically symmetric, but has a non-constant density given by $\rho \left( r \right) ={ \rho }_{ 0 }\left( 1-\frac { r }{ R } \right)$, where $R$ is the radius of the planet.
What is the total mass of the planet?
A
$\cfrac { 4}{ 3 } \pi { R }^{ 3 }$
B
$\cfrac { 4 }{ 3 } {\rho}_{0} \pi { R }^{ 3 }$
C
$\cfrac { 1 }{ 3 } {\rho}_{0} \pi { R }^{ 3 }$
D
$\cfrac { 1 }{ 4 } {\rho}_{0} \pi { R }^{ 3 }$
E
$\cfrac { 1 }{ 12} {\rho}_{0} \pi { R }^{ 3 }$
Select an assignment template | 2017-03-24 20:04:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24569137394428253, "perplexity": 437.7711551918053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188553.49/warc/CC-MAIN-20170322212948-00563-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/84762-find-combined-cost.html | 1. Find Combined Cost
Three shirts and 2 neckties cost $69. At the same prices, 2 shirts and 3 neckties cost$61. What is the combined cost of one shirt and one necktie?
I realize this is a system of linear equations in 2 variables.
Let s = shirts and n = neckties
I came up with the following two equations:
3s + 2n = 69
2s + 3n = 61
Is this right so far?
I came up with a combined cost of $35. The right answer is$26. What did I do wrong?
How do I get $26? 2. Hallo mangentarita! Originally Posted by magentarita Three shirts and 2 neckties cost$69. At the same prices, 2 shirts and 3 neckties cost $61. What is the combined cost of one shirt and one necktie? I realize this is a system of linear equations in 2 variables. Okay. Originally Posted by magentarita Let s = shirts and n = neckties Well, s = price of one shirt and n = price of one necktie. Originally Posted by magentarita I came up with the following two equations: 3s + 2n = 69 2s + 3n = 61 Is this right so far? Yes, it is. Originally Posted by magentarita I came up with a combined cost of$35.
The right answer is $26. What did I do wrong? I don't know. Misscalculation? The equations 3s + 2n = 69 2s + 3n = 61 are solved by n=$9 and s = $17 Originally Posted by magentarita How do I get$26?
9+17 = 26
You failed in solving the two equations
Try again?
Regards, Rapha
3. Hello, magentarita!
Three shirts and 2 neckties cost $69. At the same prices, 2 shirts and 3 neckties cost$61.
What is the combined cost of one shirt and one necktie?
I realize this is a system of linear equations in 2 variables.
Let $s$ = shirts and $n$ = neckties
I came up with the following two equations: . $\begin{array}{ccc}3s + 2n &=& 69 \\ 2s + 3n &=& 61 \end{array}$
Is this right so far? . Yes, good work!
It's hard to see your work from here,
. . but it looks you played the $8\heartsuit$ instead of the $K\spadesuit.$
But seriously, there is a back-door solution to this problem.
Add the two equations: . $5s + 5n \:=\:130$
. . Therefore: . $\boxed{s + n \:=\:26}$
4. ok...
Originally Posted by Rapha
Hallo mangentarita!
Okay.
Well, s = price of one shirt and n = price of one necktie.
Yes, it is.
I don't know. Misscalculation?
The equations
3s + 2n = 69
2s + 3n = 61
are solved by n=$9 and s =$17
9+17 = 26
You failed in solving the two equations
Try again?
Regards, Rapha
I thank you. I got the wrong answer for n. I somehow got n = 18.
5. ok...
Originally Posted by Soroban
Hello, magentarita!
It's hard to see your work from here,
. . but it looks you played the $8\heartsuit$ instead of the $K\spadesuit.$
But seriously, there is a back-door solution to this problem.
Add the two equations: . $5s + 5n \:=\:130$
. . Therefore: . $\boxed{s + n \:=\:26}$
I miss your replies. Thank you. How have you been? | 2016-09-25 15:58:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7944959402084351, "perplexity": 2063.6718162182046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660242.52/warc/CC-MAIN-20160924173740-00066-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://mattermodeling.stackexchange.com/questions/111/in-vasp-how-is-the-chemical-potential-of-elements-calculated | In VASP, how is the chemical potential of elements calculated?
I would like to calculate the chemical potential of elements having different environmental condition (rich or poor) using VASP. How is this accomplished?
The chemical potential in VASP or any other computational chemistry software is calculated based on the theory of chemical potential relation to Helmholtz energy functional as:
$$\mu_{i} = \Bigg(\frac{\partial F(V,T,N_{1},N_{2},...,N_{i}...,N_{n})}{\partial N_{i}}\Bigg)_{V,T,N_{j \neq i}}$$
You can approximate the above equation up to first order as:
$$\mu_{i} \simeq F(V,T,N_{i}+1,N_{j \neq i}) - F(V,T,N_{i},N_{j \neq i})$$
Here we omitted the terms of $$\mathcal{O}\Big( \frac{\partial^{2} F}{\partial N_{i}^{2}} \Big)$$.
I call $$F(V,T,N_{i}+1,N_{j \neq i}) - F(V,T,N_{i},N_{j \neq i}) = \Delta F_{N_{i} \rightarrow N_{i}+1}$$
So, you have:
$$\mu_{i} = \Delta F_{N_{i} \rightarrow N_{i}+1} = \mu_{\text{XC}}^{i}+\mu_{\text{ideal}}^{i}$$
Where $$\mu^{i}_{\text{ideal}}$$ is the chemical potential of the ideal gas and $$\mu^{i}_{\text{XC}}$$ is the chemical potential of exchange-correlation. The ideal gas chemical potential is trivial:
$$\mu^{i}_{\text{ideal}} = -k_{B}T\ln{\Bigg( \frac{V}{\Lambda^{3} (N_{i}+1)}\Bigg)}$$
Where $$\Lambda$$ is the Broglie wavelength: $$\Lambda = \sqrt{\frac{2\pi\hbar^{2}}{mk_{B}T}}$$.
The exchange-correlation chemical potential is calculated as:
$$\mu^{i}_{\text{XC}} = -k_{B}T \ln{\Bigg ( \frac{1}{V} \frac{\int \exp{\Big(-\frac{U(\mathbf{r}^{N_{i}+1})}{k_{B}T}\Big)} d^{3}\mathbf{r}^{N_{i}+1}}{\int \exp{\Big(-\frac{U(\mathbf{r}^{N_{i}})}{k_{B}T}\Big)} d^{3}\mathbf{r}^{N_{i}}} \Bigg )}$$
Or in ensemble form:
$$\mu^{i}_{\text{XC}} = -k_{B}T \ln{\Bigg ( \frac{1}{V} \Bigg \langle \int \exp{\Big( -\frac{\Delta U_{N_{i} \rightarrow N_{i}+1}}{k_{B}T} \Big)} d^{3}\mathbf{r}^{N_{i}+1} \Bigg \rangle \Bigg )}$$ | 2021-05-06 04:04:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459541440010071, "perplexity": 673.9031653823239}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00408.warc.gz"} |
https://fineview.academy/SyllabusContent?syllabusContentId=43 | # Edexcel A Level - Trapezium Rule
## Integration as Limit of Sum & Trapezium Rule
##### Trapezium Rule Approximation for Difficult Integrals 1
The method of finding the area under a curve by splitting it up into strips is often referred to as a "Riemann sum". The value of a definite integral can be estimated using various numerical methods. This is particularly useful when the integral is difficult or impossible to integrate. The method consists of dividing the area into strips. The trapezium rule (or trapezoidal rule in the USA) is one such method. The area of each strip is estimated by taking it to be approximately a trapezium.
The trapezium method uses this formula:
With the display, you should see how the accuracy of the estimate increases as the number of intervals increases.
With the display, you should also see how the trapezium rule overestimates where the curve is concave upwards, and underestimates where the curve is convex upwards.
##### Trapezium Rule Approximation for Difficult Integrals 2
Functions such as $\color{blue}{y = {e^{ - {x^2}}}}$ and $\color{blue}{y = \sin \left( {x^2} \right)}$ cannot be integrated by basic methods.
However, their definite integrals can be estimated using numerical methods such as the trapezium rule (or trapezoidal rule in the USA) which is presented here.
The method consists of dividing the area into strips. The area of each strip is estimated by taking it to be approximately a trapezium.
With the display, you should see how the accuracy of the estimate increases as the number of intervals increases.
With the display, you should also see how the trapezium rule overestimates where the curve is concave upwards, and underestimates where the curve is convex upwards. | 2023-04-01 10:26:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786973595619202, "perplexity": 321.02347799234104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00186.warc.gz"} |
https://www.nature.com/articles/s41467-021-25756-4 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Efficient generative modeling of protein sequences using simple autoregressive models
## Abstract
Generative models emerge as promising candidates for novel sequence-data driven approaches to protein design, and for the extraction of structural and functional information about proteins deeply hidden in rapidly growing sequence databases. Here we propose simple autoregressive models as highly accurate but computationally efficient generative sequence models. We show that they perform similarly to existing approaches based on Boltzmann machines or deep generative models, but at a substantially lower computational cost (by a factor between 102 and 103). Furthermore, the simple structure of our models has distinctive mathematical advantages, which translate into an improved applicability in sequence generation and evaluation. Within these models, we can easily estimate both the probability of a given sequence, and, using the model’s entropy, the size of the functional sequence space related to a specific protein family. In the example of response regulators, we find a huge number of ca. 1068 possible sequences, which nevertheless constitute only the astronomically small fraction 10−80 of all amino-acid sequences of the same length. These findings illustrate the potential and the difficulty in exploring sequence space via generative sequence models.
## Introduction
The impressive growth of sequence databases is prompted by increasingly powerful techniques in data-driven modeling, helping to extract the rich information hidden in raw data. In the context of protein sequences, unsupervised learning techniques are of particular interest: only about 0.25% of the more than 200 million amino-acid sequences currently available in the Uniprot database1 have manual annotations, which can be used for supervised methods.
Unsupervised methods may benefit from evolutionary relationships between proteins: while mutations modify amino-acid sequences, selection keeps their biological functions and their three-dimensional structures remarkably conserved. The Pfam protein family database2, e.g., lists more than 19,000 families of homologous proteins, offering rich datasets of sequence-diversified but functionally conserved proteins.
In this context, generative statistical models are rapidly gaining interest. The natural sequence variability across a protein family is captured via a probability P(a1,..., aL) defined for all amino-acid sequences (a1,..., aL). Sampling from P(a1,..., aL) can be used to generate new, non-natural amino-acid sequences, which is an ideal case that should be statistically indistinguishable from the natural sequences. However, the task of learning P(a1,..., aL) is highly non-trivial: the model has to assign probabilities to all 20L possible amino-acid sequences. For typical proteins of lengths, L = 50 − 500, this accounts for 1065−10650 values, to be learned from the M = 103−106 sequences contained in most protein families. Selecting adequate generative model architectures is thus of outstanding importance.
The currently best explored generative models for proteins are so-called coevolutionary models3, such as those constructed by the Direct Coupling Analysis (DCA)4,5,6 (a more detailed review of the state of the art is provided below). They explicitly model the usage of amino acids in single positions (i.e., residue conservation) and correlations between pairs of positions (i.e., residue coevolution). The resulting models are mathematically equivalent to Potts models7 in statistical physics, or to Boltzmann machines in statistical learning8. They have found numerous applications in protein biology.
The effect of amino-acid mutations is predicted via the log-ratio $${{{{{{\mathrm{log}}}}}}}\,\{P({{{{{{{\rm{mutant}}}}}}}})/P({{{{{{{\rm{wildtype}}}}}}}})\}$$ between mutant and wildtype probabilities. Strong correlations to mutational effects determined experimentally via deep mutational scanning have been reported9,10. Promising applications are the data-driven design of mutant libraries for protein optimization11,12,13, and the use of Potts models as sequence landscapes in quantitative models of protein evolution14,15.
Contacts between residues in the protein fold are extracted from the strongest epistatic couplings between double mutations, i.e., from the direct couplings giving the name to DCA6. These couplings are essential input features in the wave of deep-learning (DL) methods, which currently revolutionize the field of protein-structure prediction16,17,18,19.
The generative implementation bmDCA5 is able to generate artificial but functional amino-acid sequences20,21. Such observations suggest novel but almost unexplored approaches towards data-driven protein design, which complement current approaches based mostly on large-scale experimental screening of randomized sequence libraries or time-intensive bio-molecular simulation, typically followed by sequence optimization using directed evolution, cf. refs. 22,23 for reviews.
Here we propose a simple model architecture called arDCA, based on a shallow (one-layer) autoregressive model paired with generalized logistic regression. Such models are computationally very efficient, they can be learned in few minutes, as compared to days for bmDCA and more involved architectures. Nevertheless, we demonstrate that arDCA provides highly accurate generative models, comparable to the state of the art in mutational-effect and residue-contact prediction. Their simple structure makes them more robust in the case of limited data. Furthermore, and this may have important applications in homology detection24, our autoregressive models are the only generative models we know about, which allow for calculating exact sequence probabilities, and not only non-normalized sequence weights. Thereby arDCA enables the comparison of the same sequence in different models for different protein families. Last but not least, the entropy of arDCA models, which is related to the size of the functional sequence space associated with a given protein family, can be computed much more efficiently than in bmDCA.
Before proceeding, we provide here a short review of the state of the art in generative protein modeling. The literature is extensive and rapidly growing, so we will concentrate on the methods being most directly relevant as compared to the scope of our work.
We focus on generative models purely based on sequence data. The sequences belonging to homologous protein families and are given in form of multiple sequence alignments (MSA), i.e., as a rectangular matrix $${{{{{{{\mathcal{D}}}}}}}}=({a}_{i}^{m}| i=1,...,L;m=1,...,M)$$ containing M aligned proteins of length L. The entries $${a}_{i}^{m}$$ equal either one of the standard 20 amino acids or the alignment gap “–”. In total, we have q = 21 possible different symbols in $${{{{{{{\mathcal{D}}}}}}}}$$. The aim of unsupervised generative modeling is to learn a statistical model P(a1,..., aL) of (aligned) full-length sequences, which faithfully reflects the variability found in $${{{{{{{\mathcal{D}}}}}}}}$$: sequences belonging to the protein family of interest should have comparably high probabilities, unrelated sequences very small probabilities. Furthermore, a new artificial MSA $${{{{{{{\mathcal{D}}}}}}}}^{\prime}$$ sampled sequence by sequence from model P(a1,..., aL) should be statistically and functionally indistinguishable from the natural aligned MSA $${{{{{{{\mathcal{D}}}}}}}}$$ given as input.
A way to achieve this goal is the above-mentioned use of Boltzmann-machine learning based on conservation and coevolution, which leads to pairwise-interacting Potts models, i.e., bmDCA5, and related methods25,26,27. An alternative implementation of bmDCA, including the decimation of statistically irrelevant couplings, has been presented in28 and is the one used as a benchmark in this work; the Mi3 package29 also provides a GPU-based accelerated implementation.
However, Potts models or Boltzmann machines are not the only generative-model architectures explored for protein sequences. Latent-variable models like Restricted Boltzmann machines30 or Hopfield-Potts models31 learn dimensionally reduced representations of proteins; using sequence motifs, they are able to capture groups of collectively evolving residues32 better than DCA models, but are less accurate in extracting structural information from the learning MSA31.
An important class of generative models based on latent variables are variational autoencoders (VAE), which achieve dimensional reduction, but in the flexible and powerful set of deep learning. The DeepSequence implementation33 was originally designed and tested for predicting the effects of mutations around a given wild type. It currently provides one of the best mutational-effect predictors, and we will show below that arDCA provides comparable quality of prediction for this specific task. The DeepSequence code has been modified in34 to explore its capacities in generating artificial sequences being statistically indistinguishable from the natural MSA; it was shown that its performance was substantially less accurate than bmDCA. Another implementation of a VAE was reported in35; also in this case the generative performances are inferior to bmDCA, but the organization of latent variables was shown to carry significant information on functionality. Furthermore, some generated mutant sequences were successfully tested experimentally. Interestingly, it was also shown that learning VAE on unaligned sequences decreases the performance as compared to pre-aligned MSA as used by all before-mentioned models. This observation was complemented by Ref. 36, which reported a VAE implementation trained on non-aligned sequences from UniProt, with length 10 < L < 1000. The VAE had good reconstruction accuracy for small L < 200, which however dropped significantly for larger L. The latent space also in this case shows an interesting organization in terms of function, which was used to generate in silico proteins with desired properties, but no experimental test was provided. The paper does not report any statistical test of the generative properties (such as a Pearson correlation of two-point correlations), and the publicly not yet available code makes a quantitative comparison to our results currently impossible.
Another interesting DL architecture is that of a Generative Adversarial Network (GAN), which was explored in37 on a single family of aligned homologous sequences. While the model has a very large number of trainable parameters (~60 M), it seems to reproduce well the statistics of the training MSA, and most importantly, the authors could generate an enzyme with only 66% identity to the closest natural one, which was still found to be functional in vitro. An alternative implementation of the same architecture was presented in38, and applied to the design of antibodies; also in this case the resulting sequences were validated experimentally.
Not all generative models for proteins are based on sequence ensembles. Several research groups explored the possibility of generating sequences with given three-dimensional structure39,40,41, e.g. via a VAE42 or a Graph Neural Network43, or by inverting structural prediction models44,45,46,47. It is important to stress that this is a very different task from ours (our work does not use structure), so it is difficult to perform a direct comparison between our work and these ones. It would be interesting to explore, in future work, the possibility to unify the different approaches and to use sequence and structure jointly for constructing improved generative models.
In summary, for the specific task of interest here, namely, generate an artificial MSA statistically indistinguishable from the natural one, one can take as reference models bmDCA28,5 in the context of Potts-model-like architectures, and DeepSequence33 in the context of deep networks. We will show in the following that arDCA performs comparably to bmDCA, and better than DeepSequence, at a strongly reduced computational cost. From anecdotal evidence in the works mentioned above, and in agreement with general observations in machine learning, it appears that deep architectures may be more powerful than shallow architectures, provided that very large datasets and computational resources are available33. Indeed, we will show that for the related task of single-mutation predictions around a wild type, DeepSequence outperforms arDCA on rich datasets, while the inverse is true on small datasets.
## Results
### Autoregressive models for protein families
Here we propose a computationally efficient approach based on autoregressive models, cf. Fig. 1 for an illustration of the approach and the model architecture. We start from the exact decomposition
$$P({a}_{1},...,{a}_{L})=P({a}_{1})\cdot P({a}_{2}| {a}_{1})\cdots P({a}_{L}| {a}_{L-1},...,{a}_{1})\,,$$
(1)
of the joint probability of a full-length sequence into a product of more and more involved conditional probabilities P(aiai−1,..., a1) of the amino acids ai in single positions, conditioned to all previously seen positions ai−1,..., a1. While this decomposition is a direct consequence of Bayes’ theorem, it suggests an important change in viewpoint on generative models: while learning the full P(a1,..., aL) from the input MSA $${{{{{{{\mathcal{D}}}}}}}}$$ is a task of unsupervised learning (sequences are not labeled), learning the factors P(aiai−1,..., a1) becomes a task of supervised learning, with (ai−1,..., a1) being the input (feature) vector, and ai the output label (in our case a categorical q-state label). We can thus build on the full power of supervised learning, which is methodologically more explored than unsupervised learning48,49,50.
In this work, we choose the following parameterization, previously used in the context of statistical mechanics of classical51 and quantum52 systems:
$$P({a}_{i}| {a}_{i-1},...,{a}_{1})=\frac{\exp \left\{{h}_{i}({a}_{i})+\mathop{\sum }\limits_{j=1}^{i-1}{J}_{ij}({a}_{i},{a}_{j})\right\}}{{z}_{i}({a}_{i-1},...,{a}_{1})}\,,$$
(2)
with $${z}_{i}({a}_{i-1},...,{a}_{1})={\sum }_{{a}_{i}}\exp \{{h}_{i}({a}_{i})+\mathop{\sum }\nolimits_{j = 1}^{i-1}{J}_{ij}({a}_{i},{a}_{j})\}$$ being a normalization factor. In machine learning, this parameterization is known as soft-max regression, the generalization of logistic regression to multi-class labels50. This choice, as detailed in the section “Methods”, enables a particularly efficient parameter learning by likelihood maximization, and leads to a speedup of 2–3 orders of magnitude over bmDCA, as is reported in Table 1. Because the resulting model is parameterized by a set of fields hi(a) and couplings Jij(a, b) as in DCA, we dub our method as arDCA.
Besides comparing the performance of this model to bmDCA and DeepSequence, we will also use simple "fields-only” models, also known as profile models or independent-site models. In these models, the joint probability of all positions in a sequence factorizes overall positions, P(a1,..., aL) = ∏i=1,...,Lfi(ai), without any conditioning to the sequence context. Using maximum-likelihood inference, each factor fi(ai) equals the empirical frequency of amino acid ai in column i of the input MSA $${{{{{{{\mathcal{D}}}}}}}}$$.
A few remarks are needed.
Eq. (2) has striking similarities to standard DCA4, but also important differences. The two have exactly the same number of parameters, but their meaning is quite different. While DCA has symmetric couplings Jij(a, b) = Jji(b, a), the parameters in Eq. (2) are directed and describe the influence of site j on site i for j < i only, i.e., only one triangular part of the J-matrix is filled.
The inference in arDCA is very similar to plmDCA53, i.e., to DCA based on pseudo-likelihood maximization54. In particular, both in arDCA and plmDCA the gradient of the likelihood can be computed exactly from the data, while in bmDCA it has to be estimated via Monte Carlo Markov Chain (MCMC), which requires the introduction of additional hyperparameters (such as the number of chains, the mixing time, etc.) that can have an important impact on the quality of the inference, see55 for a recent detailed study.
In plmDCA each ai is, however, conditioned to all other aj in the sequence, and not only by partial sequences. The resulting directed couplings are usually symmetrized akin to standard Potts models. On the contrary, the Jij(a, b) that appear in arDCA cannot be interpreted as “direct couplings” in the DCA sense, cf. below for details on arDCA-based contact prediction. However, plmDCA has limited capacities as a generative model5: symmetrization moves parameters away from their maximum-likelihood value, probably causing a loss in model accuracy. No such symmetrization is needed for arDCA.
arDCA, contrary to all other DCA methods, allows for calculating the probabilities of single sequences. In bmDCA, we can only determine sequence weights, but the normalizing factor, i.e., the partition function, remains inaccessible for exact calculations; expensive thermodynamic integration via MCMC sampling is needed to estimate it. The conditional probabilities in arDCA are individually normalized; instead of summing over qL sequences, we need to sum L-times over the q states of individual amino acids. This may turn out as a major advantage when the same sequence in different models shall be compared, as in homology detection and protein family assignment56,57, cf. the example given below.
The ansatz in Eq. (2) can be generalized to more complicated relations. We have tested a two-layer architecture but did not observe advantages over the simple soft-max regression, as will be discussed at the end of the paper.
Thanks, in particular, to the possibility of calculating the gradient exactly, arDCA models can be inferred much more efficiently than bmDCA models. Typical inference times are given in Table 1 for five representative families, and show a speedup of about 2–3 orders of magnitude with respect to the bmDCA implementation of28, both running on a single Intel Xeon E5-2620 v4 2.10 GHz CPU. We also tested the Mi3 package29, which is able to learn similar bmDCA models in a time of about 60 min for the PF00014 family and 900 min for the PF00595 family, while running on two TITAN RTX GPUs, thus remaining much more computationally demanding than arDCA.
### The positional order matters
Eq. (1) is valid for any order of the positions, i.e., for any permutation of the natural positional order in the amino-acid sequences. This is no longer true when we parameterize the P(aiai−1,..., a1) according to Eq. (2). Different orders may give different results. In Supplementary Note 1, we show that the likelihood depends on the order and that we can optimize over orders. We also find that the best orders are correlated to the entropic order, where we select first the least entropic, i.e. most conserved, variables, progressing successively towards the most variable positions of highest entropy. The site entropy $${s}_{i}=-{\sum }_{a}\,{f}_{i}(a){{{{{{\mathrm{log}}}}}}}\,\,{f}_{i}(a)$$ can be directly calculated from the empirical amino-acid frequencies fi(a) of all amino acids a in site i.
Because the optimization over the possible L! site orderings is very time-consuming, we use the entropic order as a practical heuristic choice. In all our tests, described in the next sections, the entropic order does not perform significantly worse than the best-optimized order we found.
A close-to-entropic order is also attractive from the point of view of interpretation. The most conserved sites come first. If the amino acid on those sites is the most frequent one, basically no information is transmitted further. If, however, a sub-optimal amino acid is found in a conserved position, this has to be compensated by other mutations, i.e., necessarily by more variable (more entropic) positions. Also, the fact that variable positions come last, and are modeled as depending on all other amino acids, is well interpretable: these positions, even if highly variable, are not necessarily unconstrained, but they can be used to finely tune the sequence to any sub-optimal choices done in earlier positions.
For this reason, all coming tests are done using increasing entropic order, i.e., with sites ordered before model learning by increasing empirical si values. Supplementary Figs. 13 shows a comparison with alternative orderings, such as the direct one (from 1 to L), several random ones, and the optimized one, cf. also Table 1 for some results.
### arDCA provides accurate generative models
To check the generative property of arDCA, we compare it with bmDCA5, i.e., the most accurate generative version of DCA obtained via Boltzmann machine learning. bmDCA was previously shown to be generative not only in a statistical sense, but also in a biological one: sequences generated by bmDCA were shown to be statistically indistinguishable from natural ones, and most importantly, functional in vivo for the case of chorismate mutase enzymes20. We also compare the generative property of arDCA with DeepSequence33,34 as a prominent representative of deep generative models.
To this aim, we compare the statistical properties of natural sequences with those of independently and identically distributed (i.i.d.) samples drawn from the different generative models P(a1,..., aL). At this point, another important advantage of arDCA comes into play: while generating i.i.d. samples from, e.g., a Potts model requires MCMC simulations, which in some cases may have very long decorrelation times and thus become tricky and computationally expensive28,55 (cf. also Supplementary Note 2 and Supplementary Fig. 4), drawing a sequence from the arDCA model P(a1,..., aL) is very simple and does not require any additional parameter. The factorized expression Eq. (1) allows for sampling amino acids position by position, following the chosen positional order, cf. the detailed description in Supplementary Note 2.
Figures 2a–c show the comparison of the one-point amino-acid frequencies fi(a), and the connected two-point and three-point correlations
$${C}_{ij}(a,b)=\; {f}_{ij}(a,b)-{f}_{i}(a){f}_{j}(b)\,,\\ {C}_{ijk}(a,b,c)=\; {f}_{ijk}(a,b,c)-{f}_{ij}(a,b){f}_{k}(c)-{f}_{ik}(a,c){f}_{j}(b)\\ -{f}_{jk}(b,c){f}_{i}(a)+2{f}_{i}(a){f}_{j}(b){f}_{k}(c),$$
(3)
of the data with those estimated from a sample of the arDCA model. Results are shown for the response-regulator Pfam family PF000722. Other proteins are shown in Table 1 and Supplementary Note 3, Supplementary Figs. 56. We find that, for these observables, the empirical and model averages coincide very well, equally well, or even slightly better than for the bmDCA case. In particular, for the one-point and two-point quantities, this is quite surprising: while bmDCA fits them explicitly, i.e., any deviation is due to the imperfect fitting of the model, arDCA does not fit them explicitly, and nevertheless obtains higher precision.
In Table 1, we also report the results for sequences sampled from DeepSequence33. While its original implementation aims at scoring individual mutations, cf. Section “Predicting mutational effects via in-silico deep mutational scanning”, we apply the modification of ref. 34 allowing for sequence sampling. We observe that for most families, the two-point and three-point correlations of the natural data are significantly less well reproduced by DeepSequence than by both DCA implementations, confirming the original findings of ref. 34. Only in the largest family, PF00072 with more than 800,000 sequences, DeepSequence reaches comparable or, in the case of the three-point correlations, even superior performance.
A second test of the generative property of arDCA is given by Fig. 2d–g. Panel d shows the natural sequences projected onto their first two principal components (PC). The other three panels show generated data projected onto the same two PCs of the natural data. We see that both arDCA and bmDCA reproduce quite well the clustered structure of the response-regulator sequences (both show a slightly broader distribution than the natural data, probably due to the regularized inference of the statistical models). On the contrary, sequences generated by a profile model Pprof(a1,..., aL) = ∏i fi(ai) assuming independent sites, do not show any clustered structure: the projections are concentrated around the origin in PC space. This indicates that their variability is almost unrelated to the first two principal components of the natural sequences.
From these observations, we conclude that arDCA provides excellent generative models, of at least the same accuracy as bmDCA. This suggests fascinating perspectives in terms of data-guided statistical sequence design: if sequences generated from bmDCA models are functional, also arDCA-sampled sequences should be functional. But this is obtained at a much lower computational cost, cf. Table 1 and without the need to check for convergence of MCMC, which makes the method scalable to much bigger proteins.
### Predicting mutational effects via in-silico deep mutational scanning
The probability of a sequence is a measure of its goodness. For high-dimensional probability distributions, it is generally convenient to work with log probabilities. Using inspiration from statistical physics, we introduce a statistical energy
$$E({a}_{1},...,{a}_{L})=-{{{{{{\mathrm{log}}}}}}}\,P({a}_{1},...,{a}_{L})\,,$$
(4)
as the negative log probability. We thus expect functional sequences to have very low statistical energies, while unrelated sequences show high energies. In this sense, statistical energy can be seen as a proxy of (negative) fitness. Note that in the case of arDCA, the statistical energy is not a simple sum over the model parameters as in DCA, but contains also the logarithms of the local partition functions zi(ai−1,..., a1), cf. Eq. (2).
Now, we can easily compare two sequences differing by one or few mutations. For a single mutation ai → bi, where amino acid ai in position i is substituted with amino acid bi, we can determine the statistical-energy difference
$${{\Delta }}E({a}_{i}\to {b}_{i})=-{{{{{{\mathrm{log}}}}}}}\,\frac{P({a}_{1},...,{a}_{i-1},{b}_{i},{a}_{i+1},....,{a}_{L})}{P({a}_{1},...,{a}_{i-1},{a}_{i},{a}_{i+1},....,{a}_{L})}\,.$$
(5)
If negative, the mutant sequence has lower statistical energy; the mutation ai → bi is thus predicted to be beneficial. On the contrary, a positive ΔE predicts a deleterious mutation. Note that, even if not explicitly stated on the left-hand side of Eq. (5), the mutational score ΔE(ai → bi) depends on the whole sequence background (a1,..., ai−1, ai+1,....,aL) it appears in, i.e., on all other amino acids aj in all positions j ≠ i.
It is now easy to perform an in-silico deep mutational scan, i.e., to determine all mutational scores ΔE(ai → bi) for all positions i = 1, ..., L and all target amino acids bi relative to some reference sequence. In Fig. 3a, we compare our predictions with experimental data over more than 30 distinct experiments and wildtype proteins, and with state-of-the-art mutational-effect predictors. These contain in particular the predictions using plmDCA (aka evMutation10), variational autoencoders (DeepSequence33), evolutionary distances between wildtype, and the closest homologs showing the considered mutation (GEMME58)—all of these methods take, in technically different ways, the context-dependence of mutations into account. We also compare it to the context-independent prediction using the above-mentioned profile models.
It can be seen that the context-dependent predictors outperform systematically the context-independent predictor, in particular for large MSA in prokaryotic and eukaryotic proteins. The four context-dependent models perform in a very similar way. There is a little but systematic disadvantage for plmDCA, which was the first published predictor of the ones considered here.
The situation is different in the typically smaller and less diverged viral protein families. In this case, DeepSequence, which relies on data-intensive deep learning, becomes unstable. It becomes also harder to outperform profile models, e.g., plmDCA does not achieve this. arDCA performs similarly or, in one out of four cases, substantially better than the profile model.
To go into more detail, we have compared more quantitatively the predictions of arDCA and DeepSequence, currently considered as the state-of-the-art mutational predictor. In Fig. 3b, we plot the performance of the two predictors against each other, with the symbol size being proportional to the number of sequences in the training MSA of natural homologs. Almost all dots are close to the diagonal (apart from few viral datasets), with 15/32 datasets having a better arDCA prediction and 17/32 giving an advantage to DeepSequence. The figure also shows that arDCA tends to perform better on smaller datasets, while DeepSequence takes over on larger datasets. In Supplementary Fig. 7, we have also measured the correlations between the two predictors. Across all prokaryotic and eukaryotic datasets, the two show high correlations in the range of 82–95%. These values are larger than the correlations between predictions and experimental results, which are in the range of 50–60% for most families. This observation illustrates that both predictors extract a highly similar signal from the original MSA, but this signal may be quite different from the experimentally measured phenotype. Many experiments actually provide only rough proxies for protein fitness, like e.g. protein stability or ligand-binding affinity. To what extent such variable underlying phenotypes can be predicted by unsupervised learning based on homologous MSA thus remains an open question.
We thus conclude that arDCA permits a fast and accurate prediction of mutational effects, in line with some of the state-of-the-art predictors. It systematically outperforms profile models and plmDCA and is more stable than DeepSequence in the case of limited datasets. This observation, together with the better computational efficiency of arDCA, suggests that DeepSequence should be used for predicting mutational effects for individual proteins represented by very large homologous MSA, while arDCA is the method of choice for large-scale studies (many proteins) or small families. GEMME, based on phylogenetic information, astonishingly performs very similarly to arDCA, even if the information taken into account seems different.
### Extracting epistatic couplings and predicting residue-residue contacts
The best-known application of DCA is the prediction of residue-residue contacts via the strongest direct couplings6. As argued before, the arDCA parameters are not directly interpretable in terms of direct couplings. To predict contacts using arDCA, we need to go back to the biological interpretation of DCA couplings: they represent epistatic couplings between pairs of mutations59. For a double mutation ai → bi, aj → bj, epistasis is defined by comparing the effect of the double mutation with the sum of the effects of the single mutations, when introduced individually into the wildtype background:
$${{\Delta }}{{\Delta }}E({b}_{i},{b}_{j})= \; {{\Delta }}E({a}_{i}\to {b}_{i},{a}_{j}\to {b}_{j})\\ -{{\Delta }}E({a}_{i}\to {b}_{i})-{{\Delta }}E({a}_{j}\to {b}_{j}),$$
(6)
where the ΔE in arDCA is defined in analogy to Eq. (5). The epistatic effect ΔΔE(bi, bj) provides an effective direct coupling between amino acids bi, bj in sites i, j. In standard DCA, ΔΔE(bi, bj) is actually given by the direct coupling Jij(bi, bj) − Jij(bi, aj) − Jij(ai, bj) + Jij(ai, aj) between sites i and j.
For contact prediction, we can treat these effective couplings in the standard way (compute the Frobenius norm in zero-sum gauge, apply the average product correction, cf. Supplementary Note 5 for details). The results are represented in Fig. 4 (cf. also Supplementary Figs. 810). The contact maps predicted by arDCA and bmDCA are very similar, and both capture very well the topological structure of the native contact map. The arDCA method gives in this case a few more false positives, resulting in a slightly lower positive predictive value (panel c). However, note that the majority of the false positives for both predictors are concentrated in the upper right corner of the contact maps, in a region where the largest subfamily of response-regulators domains, characterized by the coexistence with a Trans_reg_C DNA-binding domain (PF00486) in the same protein, has a homo-dimerization interface.
One difference should be noted: for arDCA, the definition of effective couplings via epistatic effects depends on the reference sequence (a1,..., aL), in which the mutations are introduced; this is not the case in DCA. So, in principle, each sequence might give a different contact prediction, and accurate contact prediction in arDCA might require a computationally heavy averaging over a large ensemble of background sequences. Fortunately, as we have checked, the predicted contacts hardly depend on the reference sequence chosen. It is therefore possible to take any arbitrary reference sequence belonging to the homologous family and determine epistatic couplings relative to this single sequence. This observation causes an enormous speedup by a factor M, with M being the depths of the MSA of natural homologs.
The aim of this section was to compare the performance of arDCA in contact prediction when compared to established methods using exactly the same data, i.e., a single MSA of the considered protein family. We have chosen bmDCA in coherence to the rest of the paper, but apart from little quantitative differences, the conclusions remain unchanged when looking to DCA variants based on mean-field or pseudo-likelihood approximations, cf. Supplementary Fig. 9. The recent success of Deep-Learning–based contact prediction has shown that the performance can be substantially improved if coevolution-based contact prediction for thousands of families is combined with supervised learning based on known protein structures, as done by popular methods like RaptorX, DeepMetaPSICOV, AlphaFold, or trRosetta16,17,18,19. We expect that the performance of arDCA could equally be boosted by supervised learning, but this goes clearly beyond the scope of our work, which concentrates on generative modeling.
### Estimating the size of a family’s sequence space
The MSA of natural sequences contains only a tiny fraction of all sequences, which would have the functional properties characterizing a protein family under consideration, i.e., which might be found in newly sequenced species or be reached by natural evolution. Estimating this number $${{{{{{{\mathcal{N}}}}}}}}$$ of possible sequences, or their entropy $$S={{{{{{\mathrm{log}}}}}}}\,{{{{{{{\mathcal{N}}}}}}}}$$, is quite complicated in the context of DCA-type pairwise Potts models. It requires advanced sampling techniques60,61.
In arDCA, we can explicitly calculate the sequence probability P(a1, ..., aL). We can therefore estimate the entropy of the corresponding protein family via
$$S =-{\sum }_{{a}_{1},...,{a}_{L}}P({a}_{1},...,{a}_{L})\,{{{{{{\mathrm{log}}}}}}}\,P({a}_{1},...,{a}_{L})\\ ={\langle E({a}_{1},...,{a}_{L})\rangle }_{P},$$
(7)
where the second line uses Eq. (4). The ensemble average 〈P can be estimated via the empirical average over a large sequence sample drawn from P. As discussed before, extracting i.i.d. samples from arDCA is particularly simple due to their particular factorized form.
Results for the protein families studied here are given in Table 1. As an example, the entropy density equals S/L = 1.4 for PF00072. This corresponds to $${{{{{{{\mathcal{N}}}}}}}} \sim 1.25\cdot 1{0}^{68}$$ sequences. While being an enormous number, it constitutes only a tiny fraction of all qL ~ 1.23 10148 possible sequences of length L = 112. Interestingly, the entropies estimated using bmDCA are systematically higher than those of arDCA. On the one hand, this is no surprise: both reproduce accurately the empirical one-residue and two-residue statistics, but bmDCA is a maximum entropy model, which maximizes the entropy given these statistics4. On the other hand, our observation implies that the effective multi-site couplings in E(a1,..,aL) resulting from the local partition functions zi(ai−1,..., a1) lead to a non-trivial entropy reduction.
## Discussion
We have presented a class of simple autoregressive models, which provide highly accurate and computationally very efficient generative models for protein-sequence families. While being of comparable or even superior performance to bmDCA across a number of tests including the sequence statistics, the sequence distribution in dimensionally reduced principal-component space, the prediction of mutational effects, and residue-residue contacts, arDCA is computationally much more efficient than bmDCA. The particular factorized form of autoregressive models allows for exact likelihood maximization.
It allows also for the calculation of exact sequence probabilities (instead of sequence weights for Potts models). This fact is of great potential interest in homology detection using coevolutionary models, which requires comparing probabilities of the same sequence in distinct models corresponding to distinct protein families. To illustrate this idea in a simple, but the instructive case, we have identified two subfamilies of the PF00072 protein family of response regulators. The first subfamily is characterized by the existence of a DNA-binding domain of the Trans_reg_C protein family (PF00486), the second is by a DNA-binding domain of the GerE protein family (PF00196). For each of the two subfamilies, we have extracted randomly 6000 sequences used to train sub-family specific profile and arDCA models, with P1 being the model for the Trans_reg_C and P2 for the GerE sub-family. Using the log-odds ratio $${{{{{{\mathrm{log}}}}}}}\,\{{P}_{1}({{{{{\mathrm{seq}}}}}})/{P}_{2}({{{{{\mathrm{seq}}}}}})\}$$ to score all remaining sequences from the two subfamilies, the profile model was able to assign 98.6% of all sequences to the correct sub-family, and 1.4% to the wrong one. arDCA has improved this to 99.7% of correct, and only 0.3% of incorrect assignments, reducing the gray-zone in sub-family assignment by a factor of 3–4. Furthermore, some of the false assignments of the profile model had quite large scores, cf. the histograms in Supplementary Fig. 11, while the false annotations of the arDCA model had scores closer to zero. Therefore, if we consider that a prediction is reliable only if there are no wrong predictions for a larger log-odds ratio score, then the score of arDCA is 97.5% while one of the profile models is only 63.7%.
The importance of accurate generative models becomes also visible via our results on the size of sequence space (or sequence entropy). For the response regulators used as an example throughout the paper (and similar observations are true for all other protein families we analyzed), we find that "only” about 1068 out of all possible 10148 amino-acid sequences of the desired length are compatible with the arDCA model, and thus suspected to have the same functionality and the same 3D structure of the proteins collected in the Pfam MSA. This means that a random amino-acid sequence has a probability of about 10−80 to be actually a valid response-regulator sequence. This number is literally astronomically small, corresponding to the probability of hitting one particular atom when selecting randomly in between all atoms in our universe. The importance of good coevolutionary modeling becomes even more evident when considering all proteins being compatible with the amino-acid conservation patterns in the MSA: the corresponding profile model still results in an effective sequence number of 1094, i.e., a factor of 1026 larger than the sequence space respecting also coevolutionary constraints. As was verified in experiments, conservation provides insufficient information for generating functional proteins, while taking coevolution into account leads to finite success probabilities.
Reproducing the statistical features of natural sequences does not necessarily guarantee the sampled sequences to be fully functional protein sequences. To enhance our confidence in these sequences, we have performed two tests.
First, we have reanalyzed the bmDCA-generated sequences of ref. 20, which were experimentally tested for their in-vivo chorismate-mutase activity. Starting from the same MSA of natural sequences, we have trained an arDCA model and calculated the statistical energies of all non-natural and experimentally tested sequences. As is shown in Supplementary Fig. 12, the statistical energies have a Pearson correlation of 97% with the bmDCA energies reported in ref. 20. In both cases, functional sequences are restricted to the region of low statistical energies.
Furthermore, we have used small samples of 10 artificial or natural response-regulator sequences as inputs for trRosetta19, in a setting that allows for protein-structure prediction based only on the user-provided MSA, i.e., no homologous sequences are added by trRosetta, and no structural templates are used. As is shown in Supplementary Fig. 13, the predicted structures are very similar to each other, and within a root-mean-square deviation of less than 2 Å from an exemplary PDB structure. The contacts maps extracted from the trRosetta predictions are close to identical.
While these observations do not prove that arDCA-generated sequences are functional or fold into the correct tertiary structure, they are coherent with this conjecture.
Autoregressive models can be easily extended by adding hidden layers in the ansatz for the conditional probabilities P(aiai−1,..., a1), with the aim to increase the expressive power of the overall model. For the families explored here, we found that the one-layer model Eq. (2) is already so accurate, that adding more layers only results in similar, but not superior performance, cf. Supplementary Note 6. However, in longer or more complicated protein families, the larger expressive power of deeper autoregressive models could be helpful. Ultimately, the generative performance of such extended models should be assessed by testing the functionality of the generated sequences in experiments similar to ref. 20.
## Methods
### Inference of the parameters
We first describe the inference of the parameters via likelihood maximization. In a Bayesian setting, with a uniform prior (we discuss regularization below), the optimal parameters are those that maximize the probability of the data, given as an MSA $${{{{{{{\mathcal{D}}}}}}}}=({a}_{i}^{m}| i=1,...,L;m=1,...,M)$$ of M sequences of aligned length L:
$$\{{{{{{{{{\bf{J}}}}}}}}}^{* },{{{{{{{{\bf{h}}}}}}}}}^{* }\} =\arg \mathop{{{{{{{{\rm{max}}}}}}}}}\limits_{\{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\}}P({{{{{{{\mathcal{D}}}}}}}}| \{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\})\\ =\arg \mathop{{{{{{{{\rm{max}}}}}}}}}\limits_{\{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\}}{{{{{{\mathrm{log}}}}}}}\,P({{{{{{{\mathcal{D}}}}}}}}| \{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\})\\ =\arg \mathop{{{{{{{{\rm{max}}}}}}}}}\limits_{\{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\}}\mathop{\sum }\limits_{m=1}^{M}{{{{{{\mathrm{log}}}}}}}\,\mathop{\prod }\nolimits_{i = 1}^{L}P({a}_{i}^{m}| {a}_{i-1}^{m},...,{a}_{1}^{m})\\ =\arg \mathop{{{{{{{{\rm{max}}}}}}}}}\limits_{\{{{{{{{{\bf{J}}}}}}}},{{{{{{{\bf{h}}}}}}}}\}}\mathop{\sum }\limits_{m=1}^{M}\mathop{\sum }\limits_{i=1}^{L}{{{{{{\mathrm{log}}}}}}}\,P({a}_{i}^{m}| {a}_{i-1}^{m},...,{a}_{1}^{m})\,.$$
(8)
Each parameter hi(a) or Jij(a,b) appears in only one conditional probability P(aiai−1,..., a1), and we can thus maximize independently each conditional probability in Eq. (8):
$$\{{{{{{\bf{J}}}}}}_{ij}^{*},{h}_{i}^{*}\} =\; \arg \mathop{{{{{{\rm{max}}}}}}}\limits_{\{{{{{{\bf{J}}}}}}_{ij},{{{{{\bf{h}}}}}}_{i}\}}\mathop{\sum }\limits_{m=1}^{M}{{{{{\mathrm{log}}}}}}\,P({a}_{i}^{m}| {a}_{i-1}^{m},...,{a}_{1}^{m})\\ =\; \arg \mathop{{{{{{\rm{max}}}}}}}\limits_{\{{{{{{\bf{J}}}}}}_{ij},{{{{{\bf{h}}}}}}_{i}\}}\mathop{\sum }\limits_{m=1}^{M}\bigg[{h}_{i}({a}_{i}^{m})+\mathop{\sum }\limits_{j=1}^{i-1}{J}_{ij}({a}_{i}^{m},{a}_{j}^{m})\\ -{{{{{\mathrm{log}}}}}}\,{z}_{i}({a}_{i-1}^{m},...{a}_{1}^{m})\bigg]$$
where
$${z}_{i}({a}_{i-1},...{a}_{1})=\mathop{\sum}\limits_{{a}_{i}}\exp \left\{{h}_{i}({a}_{i})+\mathop{\sum }\limits_{j=1}^{i-1}{J}_{ij}({a}_{i},{a}_{j})\right\}$$
(9)
is the normalization factor of the conditional probability of variable ai.
Differentiating with respect to hi(a) or to Jij(a, b), with j = 1,..., i − 1, we get the set of equations:
$$0 =\frac{1}{M}\mathop{\sum }\limits_{m=1}^{M}\left[{\delta }_{a,{a}_{i}^{m}}-\frac{\partial {{{{{{\mathrm{log}}}}}}}\,{z}_{i}({a}_{i-1}^{m},...{a}_{1}^{m})}{\partial {h}_{i}(a)}\right]\,,\\ 0 =\frac{1}{M}\mathop{\sum }\limits_{m=1}^{M}\left[{\delta }_{a,{a}_{i}^{m}}{\delta }_{b,{a}_{j}^{m}}-\frac{\partial {{{{{{\mathrm{log}}}}}}}\,{z}_{i}({a}_{i-1}^{m},...{a}_{1}^{m})}{\partial {J}_{ij}(a,b)}\right],$$
(10)
where δa,b is the Kronecker symbol. Using Eq. (9) we find
$$\frac{\partial \,{{{{{{\mathrm{log}}}}}}}\,{z}_{i}({a}_{i-1}^{m},...{a}_{1}^{m})}{\partial {h}_{i}(a)} =P({a}_{i}=a| {a}_{i-1}^{m},...,{a}_{1}^{m})\,,\\ \frac{\partial \,{{{{{{\mathrm{log}}}}}}}\,{z}_{i}({a}_{i-1}^{m},...{a}_{1}^{m})}{\partial \,{J}_{ij}(a,b)} =P({a}_{i}=a| {a}_{i-1}^{m},...,{a}_{1}^{m}){\delta }_{{a}_{j}^{m},b}.$$
(11)
The set of equations thus reduces to a very simple form:
$${f}_{i}(a) ={\left\langle P({a}_{i} = a| {a}_{i-1}^{m},...,{a}_{1}^{m})\right\rangle }_{{{{{{{{\mathcal{D}}}}}}}}}\,,\\ {f}_{ij}(a,b) ={\left\langle P({a}_{i} = a| {a}_{i-1}^{m},...,{a}_{1}^{m}){\delta }_{{a}_{j}^{m},b}\right\rangle }_{{{{{{{{\mathcal{D}}}}}}}}},$$
(12)
where $${\left\langle \bullet \right\rangle }_{{{{{{{{\mathcal{D}}}}}}}}}=\frac{1}{M}\mathop{\sum }\nolimits_{m = 1}^{M}{\bullet }^{m}$$ denotes the empirical data average and fi(a), fij(a, b) is the empirical one-point and two-point amino-acid frequencies. Note that for the first variable (i = 1), which is unconditioned, there is no equation for the couplings, and the equation for the field takes the simple form f1(a) = P(a1 = a), which is solved by $${h}_{1}(a)={{{{{{\mathrm{log}}}}}}}\,\,{f}_{1}(a)+\,{{\mbox{const.}}}\,$$
Unlike the corresponding equations for the Boltzmann learning of a Potts model5, there is a mix between probabilities and empirical averages in Eq. (12), and there is no explicit equality between one-point and two-point marginals and empirical one and two-point frequencies. This means that the ability to reproduce the empirical one-point and two-point frequencies are already a statistical test for the generative properties of the model, and not only for the fitting quality of the current parameter values.
The inference can be done very easily with any algorithm using gradient descent, which updates the fields and couplings proportionally to the difference of the two sides of Eq. (12). We used the Low Storage BFGS method to do the inference. We also add an L2 regularization, with a regularization strength of 0.0001 for the generative tests and 0.01 for mutational effects and contact prediction. A small regularization leads to better results on generative tests, but a larger regularization is needed for contact prediction of mutational effects. Contact prediction can indeed suffer from too large parameters, and therefore a larger regularization was chosen, coherently with the one used in plmDCA. Note that the gradients are computed exactly at each iteration, as an explicit average over the data, and hence without the need of MCMC sampling. This provides an important advantage over Boltzmann-machine learning.
Finally, in order to partially compensate for the phylogenetic structure of the MSA, which induces correlations among sequences, each sequence is reweighted by a coefficient wm4:
$$\{{{{{{{{\bf{{J}}}}}}}_{ij}^{* },{h}_{i}^{* }}}\}=\arg \mathop{{{{{{{{\rm{max}}}}}}}}}\limits_{\{{{{{{{{{\bf{J}}}}}}}}}_{ij},{{{{{{{{\bf{h}}}}}}}}}_{i}\}}\frac{1}{{M}_{{{{{{{{\rm{eff}}}}}}}}}}\mathop{\sum }\limits_{m=1}^{M}{w}_{m}\,{{{{{{\mathrm{log}}}}}}}\,P({{{{{{{{\bf{a}}}}}}}}}^{m}| \{{{{{{{{{\bf{J}}}}}}}}}_{ij},{{{{{{{{\bf{h}}}}}}}}}_{i}\}),$$
(13)
which leads to the same equations as above with the only modification of the empirical average as $${\left\langle \bullet \right\rangle }_{{{{{{\mathrm{data}}}}}}}=\frac{1}{{M}_{{{{{{{{\rm{eff}}}}}}}}}}\mathop{\sum }\nolimits_{m = 1}^{M}{w}_{m}\ {\bullet }^{m}$$. Typically, wm is given by the inverse of the number of sequences having least 80% sequence identity with sequence m, and Meff = ∑mwm denotes the effective number of independent sequences. The goal is to remove the influence of very closely related sequences. Note however that such reweighting cannot fully capture the hierarchical structure of phylogenetic relations between proteins.
### Sampling from the model
Once the model parameters are inferred, a sequence can be iteratively generated by the following procedure:
• Sample the first residue from P(a1)
• Sample the second residue from P(a2a1) where a1 is sampled in the previous step.
... L. Sample the last residue from P(aLaL−1,aL−2,..., a2,a1) Each step is very fast because there are only 21 possible values for each probability. Both training and sampling are therefore extremely simple and computationally efficient in arDCA.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Code availability
Codes in Python and Julia are available at https://github.com/pagnani/ArDCA.git.
## Data availability
Data is available at https://github.com/pagnani/ArDCADataand was elaborated using source data freely downloadable from the Pfam database (http://pfam.xfam.org/)2, cf. Supplementary Table 1. The repository contains also sample MSA generated by arDCA. The input data for Figure 3 are provided by the GEMME paper58, cf. also Supplementary Table 2.
## References
1. 1.
UniProt Consortium. Uniprot: a worldwide hub of protein knowledge. Nucleic Acids Res. 47, D506–D515 (2019).
2. 2.
El-Gebali, S., Mistry, J., Bateman, A., Eddy, S. R. & Luciani, A. et al. The Pfam protein families database in 2019. Nucleic Acids Res. 47, D427–D432 (2019).
3. 3.
De Juan, D., Pazos, F. & Valencia, A. Emerging methods in protein co-evolution. Nat. Rev. Genet. 14, 249–261 (2013).
4. 4.
Cocco, S., Feinauer, C., Figliuzzi, M., Monasson, R. & Weigt, M. Inverse statistical physics of protein sequences: a key issues review. Rep. Prog. Phys. 81, 032601 (2018).
5. 5.
Figliuzzi, M., Barrat-Charlaix, P. & Weigt, M. How pairwise coevolutionary models capture the collective residue variability in proteins? Mol. Biol. Evol. 35, 1018–1027 (2018).
6. 6.
Morcos, F. et al. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proc. Natl Acad. Sci. USA 108, E1293–E1301 (2011).
7. 7.
Levy, R. M., Haldane, A. & Flynn, W. F. Potts hamiltonian models of protein co-variation, free energy landscapes, and evolutionary fitness. Curr. Opin. Struct. Biol. 43, 55–62 (2017).
8. 8.
Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. A learning algorithm for Boltzmann machines. Cogn. Sci. 9, 147–169 (1985).
9. 9.
Figliuzzi, M., Jacquier, H., Schug, A., Tenaillon, O. & Weigt, M. Coevolutionary landscape inference and the context-dependence of mutations in beta-lactamase tem-1. Mol. Biol. Evol. 33, 268–280 (2016).
10. 10.
Hopf, T. A., Ingraham, J. B., Poelwijk, F. J., Schärfe, C. P. & Springer, M. et al. Mutation effects predicted from sequence co-variation. Nat. Biotechnol. 35, 128–135 (2017).
11. 11.
Cheng, R. R., Morcos, F., Levine, H. & Onuchic, J. N. Toward rationally redesigning bacterial two-component signaling systems using coevolutionary information. Proc. Natl Acad. Sci. USA 111, E563–E571 (2014).
12. 12.
Cheng, R. R., Nordesjö, O., Hayes, R. L., Levine, H. & Flores, S. C. et al. Connecting the sequence-space of bacterial signaling proteins to phenotypes using coevolutionary landscapes. Mol. Biol. Evol. 33, 3054–3064 (2016).
13. 13.
Reimer, J. M. et al. Structures of a dimodular nonribosomal peptide synthetase reveal conformational flexibility. Science 366, eaaw4388 (2019).
14. 14.
Bisardi, M., Rodriguez-Rivas, J., Zamponi, F. & Weigt, M. Modeling sequence-space exploration and emergence of epistatic signals in protein evolution. Preprint at arXiv: 2106.02441 (2021).
15. 15.
de la Paz, J. A., Nartey, C. M., Yuvaraj, M. & Morcos, F. Epistatic contributions promote the unification of incompatible models of neutral molecular evolution. Proc. Natl Acad. Sci. USA 117, 5873–5882 (2020).
16. 16.
Greener, J. G., Kandathil, S. M. & Jones, D. T. Deep learning extends de novo protein modelling coverage of genomes using iteratively predicted structural constraints. Nat. Commun. 10, 1–13 (2019).
17. 17.
Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).
18. 18.
Wang, S., Sun, S., Li, Z., Zhang, R. & Xu, J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLoS Comput. Biol. 13, e1005324 (2017).
19. 19.
Yang, J. et al. Improved protein structure prediction using predicted interresidue orientations. Proc. Natl Acad. Sci. USA 117, 1496–1503 (2020).
20. 20.
Russ, W. P. et al. An evolution-based model for designing chorismate mutase enzymes. Science 369, 440–445 (2020).
21. 21.
Tian, P., Louis, J. M., Baber, J. L., Aniana, A. & Best, R. B. Co-evolutionary fitness landscapes for sequence design. Angew. Chem. Int. Ed. 57, 5674–5678 (2018).
22. 22.
Huang, P.-S., Boyken, S. E. & Baker, D. The coming of age of de novo protein design. Nature 537, 320–327 (2016).
23. 23.
Jäckel, C., Kast, P. & Hilvert, D. Protein design by directed evolution. Annu. Rev. Biophys. 37, 153–173 (2008).
24. 24.
Wilburn, G. W. & Eddy, S. R. Remote homology search with hidden potts models. PLoS Comput. Biol. 16, e1008085 (2020).
25. 25.
Barton, J. P., De Leonardis, E., Coucke, A. & Cocco, S. Ace: adaptive cluster expansion for maximum entropy graphical model inference. Bioinformatics 32, 3089–3097 (2016).
26. 26.
Sutto, L., Marsili, S., Valencia, A. & Gervasio, F. L. From residue coevolution to protein conformational ensembles and functional dynamics. Proc. Natl Acad. Sci. USA 112, 13567–13572 (2015).
27. 27.
Vorberg, S., Seemayer, S. & Söding, J. Synthetic protein alignments by ccmgen quantify noise in residue-residue contact prediction. PLoS Comput. Biol. 14, e1006526 (2018).
28. 28.
Barrat-Charlaix, P., Muntoni, A. P., Shimagaki, K., Weigt, M. & Zamponi, F. Sparse generative modeling via parameter reduction of Boltzmann machines: application to protein-sequence families. Phys. Rev. E104, 024407 (2021).
29. 29.
Haldane, A. & Levy, R. M. Mi3-gpu: MCMC-based inverse ising inference on GPUs for protein covariation analysis. Computer Phys. Commun. 260, 107312 (2021).
30. 30.
Tubiana, J., Cocco, S. & Monasson, R. Learning protein constitutive motifs from sequence data. Elife 8, e39397 (2019).
31. 31.
Shimagaki, K. & Weigt, M. Selection of sequence motifs and generative hopfield-potts models for protein families. Phys. Rev. E 100, 032128 (2019).
32. 32.
Rivoire, O., Reynolds, K. A. & Ranganathan, R. Evolution-based functional decomposition of proteins. PLoS Comput. Biol. 12, e1004817 (2016).
33. 33.
Riesselman, A. J., Ingraham, J. B. & Marks, D. S. Deep generative models of genetic variation capture the effects of mutations. Nat. Methods 15, 816–822 (2018).
34. 34.
McGee, F., Novinger, Q., Levy, R. M., Carnevale, V. & Haldane, A., Generative capacity of probabilistic protein sequence models. Preprint at arXiv: 2012.02296 (2020).
35. 35.
Hawkins-Hooker, A., Depardieu, F., Baur, S., Couairon, G. & Chen, A. et al. Generating functional protein variants with variational autoencoders. PLoS Comput. Biol. 17, e1008736 (2021).
36. 36.
Costello, Z. & Martin, H. G. How to hallucinate functional proteins. arXiv 1903.00458 (2019).
37. 37.
Repecka, D. et al. Expanding functional protein sequence spaces using generative adversarial networks. Nat. Mach. Intell. 3, 324–333 (2021).
38. 38.
Amimeur, T., Shaver, J. M., Ketchem, R. R., Taylor, J. A., Clark, R. H. et al. Designing feature-controlled humanoid antibody discovery libraries using generative adversarial networks. bioRxiv 2020.04.12.024844 (2020).
39. 39.
Anand-Achim, N., Eguchi, R. R., Derry, A., Altman, R. B. & Huang, P. Protein sequence design with a learned potential. bioRxiv 2020.01.06.895466 (2020).
40. 40.
Ingraham, J., Garg, V. K., Barzilay, R. & Jaakkola, T. S. Generative models for graph-based protein design. In Neural Information Processing Systems (NeurIPS) (2019).
41. 41.
Jing, B., Eismann, S., Suriana, P., Townshend, R. J. & Dror, R., Learning from protein structure with geometric vector perceptrons. Preprint at arXiv: 2009.01411 (2020).
42. 42.
Greener, J. G., Moffat, L. & Jones, D. T. Design of metalloproteins and novel protein folds using variational autoencoders. Sci. Rep. 8, 1–12 (2018).
43. 43.
Strokach, A., Becerra, D., Corbi-Verge, C., Perez-Riba, A. & Kim, P. M. Fast and flexible protein design using deep graph neural networks. Cell Syst. 11, 402–411 (2020).
44. 44.
Anishchenko, I., Chidyausiku, T. M., Ovchinnikov, S., Pellock, S. J. & Baker, D. De novo protein design by deep network hallucination. bioRxiv 2020.07.22.211482 (2020).
45. 45.
Fannjiang, C. & Listgarten, J. Autofocused oracles for model-based design. Preprint at arXiv: 2006.08052 (2020).
46. 46.
Linder, J. & Seelig, G., Fast differentiable DNA and protein sequence optimization for molecular design. Preprint at arXiv: 2005.11275 (2020).
47. 47.
Norn, C. et al. Protein sequence design by conformational landscape optimization. Proc. Natl Acad. Sci. USA 118, e2017228118 (2021).
48. 48.
Bishop, C. M. Pattern Recognition and Machine Learning. (Springer, 2006).
49. 49.
Goodfellow, I., Bengio, Y., Courville, A. & Bengio, Y. Deep Learning. Vol. 1. (MIT Press, Cambridge, 2016).
50. 50.
Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, And Prediction. (Springer Science & Business Media, 2009).
51. 51.
Wu, D., Wang, L. & Zhang, P. Solving statistical mechanics using variational autoregressive networks. Phys. Rev. Lett. 122, 080602 (2019).
52. 52.
Sharir, O., Levine, Y., Wies, N., Carleo, G. & Shashua, A. Deep autoregressive models for the efficient variational simulation of many-body quantum systems. Phys. Rev. Lett. 124, 020503 (2020).
53. 53.
Ekeberg, M., Lövkvist, C., Lan, Y., Weigt, M. & Aurell, E. Improved contact prediction in proteins: using pseudolikelihoods to infer potts models. Phys. Rev. E 87, 012707 (2013).
54. 54.
Balakrishnan, S., Kamisetty, H., Carbonell, J. G., Lee, S.-I. & Langmead, C. J. Learning generative models for protein fold families. Proteins 79, 1061–1078 (2011).
55. 55.
Decelle, A., Furtlehner, C. & Seoane, B. Equilibrium and non-equilibrium regimes in the learning of restricted Boltzmann machines. Preprint at arXiv: 2105.13889 (2021).
56. 56.
Eddy, S. R. A new generation of homology search tools based on probabilistic inference. In Genome Informatics 2009: Genome Informatics Series. Vol. 23, 205–211. (World Scientific, 2009).
57. 57.
Söding, J. Protein homology detection by hmm–hmm comparison. Bioinformatics 21, 951–960 (2005).
58. 58.
Laine, E., Karami, Y. & Carbone, A. Gemme: a simple and fast global epistatic model predicting mutational effects. Mol. Biol. Evol. 36, 2604–2619 (2019).
59. 59.
Starr, T. N. & Thornton, J. W. Epistasis in protein evolution. Protein Sci. 25, 1204–1218 (2016).
60. 60.
Barton, J. P., Chakraborty, A. K., Cocco, S., Jacquin, H. & Monasson, R. On the entropy of protein families. J. Stat. Phys. 162, 1267–1293 (2016).
61. 61.
Tian, P. & Best, R. B. How many protein sequences fold to a given structure? a coevolutionary analysis. Biophys. J. 113, 1719–1730 (2017).
## Acknowledgements
We thank Indaco Biazzo, Matteo Bisardi, Elodie Laine, Anna-Paola Muntoni, Edoardo Sarti, and Kai Shimagaki for helpful discussions and assistance with the data. We especially thank Francisco McGee and Vincenzo Carnevale for providing generated samples from DeepSequence as in ref. 34. Our work was partially funded by the EU H2020 Research and Innovation Programme MSCA-RISE-2016 under Grant Agreement No. 734439 InferNet (M.W.), and by a grant from the Simons Foundation (#454955, F.Z.). J.T. is supported by a Ph.D. Fellowship of the i-Bio Initiative from the Idex Sorbonne University Alliance.
## Author information
Authors
### Contributions
A.P., F.Z., and M.W. designed research; J.T., G.U., and A.P. performed research; J.T., G.U., A.P., F.Z., and M.W. analyzed the data; J.T., F.Z., and M.W. wrote the paper.
### Corresponding author
Correspondence to Martin Weigt.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Trinquier, J., Uguzzoni, G., Pagnani, A. et al. Efficient generative modeling of protein sequences using simple autoregressive models. Nat Commun 12, 5800 (2021). https://doi.org/10.1038/s41467-021-25756-4
• Accepted:
• Published: | 2021-12-03 05:32:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.712255597114563, "perplexity": 2661.143427081267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00058.warc.gz"} |
http://mathhelpforum.com/geometry/15391-circle-geometry-question.html | Math Help - Circle Geometry Question
1. Circle Geometry Question
I absolutely hate circle geometry. I normally THINK i have found out the answer, but it ALWAYS asks 'give a reason for your answer'. This really annoys me, as i never know how to phrase the answer - whether to state a rule i have used or just how i did it!
Anyway, i got a question here that i want someone to do for me as i cannot do it and find this really hard
a) Find the size of angle PQR
b) Find the size of angle PRQ
c) Find the size of angle POQ
Can someone please give this a shot?
Thanks
2. solution for circles problem
angle in a semicircle is a rightangle hence angle PQR is a rightangle
Originally Posted by Danielisew
I absolutely hate circle geometry. I normally THINK i have found out the answer, but it ALWAYS asks 'give a reason for your answer'. This really annoys me, as i never know how to phrase the answer - whether to state a rule i have used or just how i did it!
Anyway, i got a question here that i want someone to do for me as i cannot do it and find this really hard
a) Find the size of angle PQR
b) Find the size of angle PRQ
c) Find the size of angle POQ
Can someone please give this a shot?
Thanks
3. Well that answers part A, but i still don't understand your reason - wouldn't it be 'the angle in between two semi-circles is 90°' ?
Anyone do parts b and c please?
4. Hello, Danielisew!
Wouldn't it be 'the angle in between two semi-circles is 90°' ?
I have no idea what this means . . .
You expected to know about inscribed angles:
. . An inscribed angle is measure by one-half its intercepted arc.
(a) Since PR is a diameter, angle PQR intercepts an arc of 180°.
. . .Therefore: . $\angle PQR = 90^o$
(b) $\angle PSQ = 56^o$ . . . Hence: . $\text{arc }PQ = 112^p$
. . .Angle PRQ intercepts $\text{arc }PQ = 112^o$
. . .Therefore: . $\angle PRQ = 56^o$
(c) Angle POQ is a central angle, measured by its intercepted arc.
. . .Its intercepted arc is: $\text{arc }PQ = 112^o$
. . .Therefore: . $\angle POQ = 112^o$
5. solution for circles problem
Originally Posted by Danielisew
Well that answers part A, but i still don't understand your reason - wouldn't it be 'the angle in between two semi-circles is 90°' ?
Anyone do parts b and c please?
angle in a semi circle is a right angle the reason is if QR is the diameter and P is on the circumference then find slope of QP multiplied with slope of RP gives -1 You can take center as origin, radius as a units then Q(a,0) and R(-a,0) and P(x,y)
Angle at the centre is double the at the circumference that gives answer for the 2nd question
Angles in the same segments are equal that gives the answer for the third.
6. answer to ur circle geometry question
a) will equal 45 degrees, because Q and R create a right angle triangle, which is divided by the line of SQ.
b) will be 56 degrees, because it is the same as angle S.
c) 92 degrees, because triangle POQ is equal to 180 degrees and is also an equalateral triangle, with angles P and Q equaling 44 degrees each, 180 degrees-88 degrees=92 degrees. i would ask around to confirm my answers, for this question stumbled me. however i am fairly certain of these answers. | 2015-01-26 02:29:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8031132817268372, "perplexity": 1167.9777235400074}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122086301.64/warc/CC-MAIN-20150124175446-00236-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://mathematica.stackexchange.com/questions/23192/whats-the-simplest-way-to-plot-an-errorlistplot-with-only-y-error-bars/23195 | # What's the simplest way to plot an ErrorListPlot with only y error bars?
Sometimes I get really tired of Preparing data for making an ErrorListPlot using Thread and friends. Is there a simpler way to plot an ErrorListPlot with only y error bars?
Nothing is mentioned in the documentation.
-
Perhaps I should change this into a self answered Q/A? What's the simplest way to plot an ErrorListPlot?:) – Ajasja Apr 12 '13 at 13:56
I agree with the Q/A idea. – Mr.Wizard Apr 12 '13 at 14:12
@Mr.Wizard done – Ajasja Apr 12 '13 at 14:43
Indeed it seems there is a simpler way:
ErrorListPlot[{{{x1, y1}, ErrorBar[err1]}, {{x2, y2}, ErrorBar[err2]}, ...}]
Just do
ErrorListPlot[{{x1, y1, dy1}, {x2, y2, dy2} ...}, ...}]
Here is an example:
Needs["ErrorBarPlots"]
ErrorListPlot[{{1, 2, 0.5}, {3, 4, 0.1}, {5, 6, 0}},
PlotRange -> All, Frame -> True, Axes -> False]
Why this is not documented is beyond me.
-
If you type ??ErrorListPlotyou get the transformation being used: {{ErrorBarPlotsPrivatex_?NumericQ, ErrorBarPlotsPrivatey_?NumericQ, ErrorBarPlotsPrivatee_? NumericQ} :> (ErrorBarPlotsPrivateerror[ N[{ErrorBarPlotsPrivatex, ErrorBarPlotsPrivatey}]] = ErrorBar[{0, 0}, {-ErrorBarPlotsPrivatee, ErrorBarPlotsPrivatee}]; – belisarius Apr 12 '13 at 17:36 | 2015-07-03 17:38:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2004767805337906, "perplexity": 9627.525112943862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096208.17/warc/CC-MAIN-20150627031816-00288-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://www.aot-math.org/article_51119.html | # Operator algebras associated to modules over an integral domain
Document Type: Original Article
Author
Department of Mathematics, North Dakota State University, Fargo, North Dakota, USA
Abstract
We use the Fock semicrossed product to define an operator algebra associated to a module over an integral domain. We consider the $C^*$-envelope of the semicrossed product, and then consider properties of these algebras as models for studying general semicrossed products.
Keywords
### Histroty
• Receive Date: 15 June 2017
• Revise Date: 19 October 2017
• Accept Date: 20 October 2017 | 2019-08-25 09:21:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.614898145198822, "perplexity": 2429.5090473716914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323246.35/warc/CC-MAIN-20190825084751-20190825110751-00087.warc.gz"} |
https://planetmath.org/matrixrepresentationofabilinearform | # matrix representation of a bilinear form
Given a bilinear form, $B:U\times V\rightarrow K$, we show how we can represent it with a matrix, with respect to a particular pair of bases for $U$ and $V$
Suppose $U$ and $V$ are finite-dimensional and we have chosen bases, ${{\cal B}_{1}}=\{e_{1},\ldots\}$ and ${{\cal B}_{2}}=\{f_{1},\ldots\}$. Now we define the matrix $C$ with entries $C_{ij}=B(e_{i},f_{j})$. This will be the matrix associated to $B$ with respect to this basis as follows; If we write $x,y\in V$ as column vectors in terms of the chosen bases, then check $B(x,y)=x^{T}Cy$. Further if we choose the corresponding dual bases for $U^{\ast}$ and $V^{\ast}$ then $C$ and $C^{T}$ are the corresponding matrices for $B_{R}$ and $B_{L}$, respectively (in the sense of linear maps). Thus we see that a symmetric bilinear form is represented by a symmetric matrix, and similarly for skew-symmetric forms.
Let ${{\cal B}_{1}^{\prime}}$ and ${{\cal B}_{2}^{\prime}}$ be new bases, and $P$ and $Q$ the corresponding change of basis matrices. Then the new matrix is $C^{\prime}=P^{T}CQ$.
Title matrix representation of a bilinear form MatrixRepresentationOfABilinearForm 2013-03-22 14:56:22 2013-03-22 14:56:22 vitriol (148) vitriol (148) 5 vitriol (148) Definition msc 15A63 msc 11E39 msc 47A07 | 2021-01-21 10:29:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 23, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887068271636963, "perplexity": 156.2162060985906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524743.61/warc/CC-MAIN-20210121101406-20210121131406-00323.warc.gz"} |
http://www.eurandom.tue.nl/events/workshops/2013/PCA/PCA.html | European Institute for Statistics, Probability, Stochastic Operations Research and their Applications
About | Research | Events | People | Reports | Alumni | ContactHome
June 10-11-12, 2013
Workshop on
Probabilistic Cellular Automata:
Theory, Applications and Future Perspectives
SUMMARY REGISTRATION SPEAKERS PROGRAMME ABSTRACTS
SUMMARY
The workshop aims at exploring Probabilistic Cellular Automata (PCA) from the point of view of Statistical Mechanics, Probability Theory, Computational Biology, Computer Science and Discrete Dynamical Systems.
PCA revealed to be a fruitful tool in those fields nevertheless many challenges remain open. There is a recent growing interest from these different fields and agreement on the emergency of a turning point: interactions has to be strengthened. This workshop will give an opportunity for the different communities to interact. We welcome contributed talks and posters. Doctoral and post-doctoral researchers are invited to participate. Interested advanced Master students will benefit from the creative interdisciplinary atmosphere we want to promote.
The aim of the workshop is to explore the Probabilistic Cellular Automata field from different point of view.
Cellular Automata (CA) are discrete dynamical systems consisting of simple elementary elements interacting according to some local rules. Simple update rules may produce extremely complex behaviour. They have been used to model a wide range of physical phenomena including traffic flow, disease epidemics, invasion of populations, and dynamics of stock markets.
PCA build a bridge among different scientific disciplines such as Probability Theory, Statistical Mechanics, Theoretical Computer Science, Complex Systems and Computational Life Sciences, and more. Indeed, in recent years there have been active research efforts on the following briefly outlined three directions:
Computer Science and Discrete Dynamical Systems e.g. robustness of PCA when going from synchronous to asynchronous updating scheme, deterministic CA with random initial condition, density classification, synchronous / asynchronous updating.
Probability and Statistical Mechanics, e.g. PCA as discrete-time interacting particle system, non-equilibrium statistical mechanics, metastability, cut-off phenomena and abrupt convergence, phase transitions, Gibbs/Non--Gibbs transitions, PCA and stochastic algorithms
Applications mainly in computational (cell) biology e.g. Cellular Potts Model and stability of emerging patterns, time to stationarity in simulation algorithms, transient regimes
The wide interest of the recent results beside the among related scientific communities confirm the urgency to break the walls and put in touch scientists with different backgrounds but all sharing a common interest for PCA. We expect that the interaction between these different fields and approaches will produce cross fertilization both at theoretical and applicative levels, exchange of point of views and challenges, joint collaborations.
ORGANISERS
E. Cirillo La Sapienza, Roma N. Fatès INRIA, Nancy R. Fernandez University of Utrecht P.-Y. Louis University of Poitiers R. Merks CWI, Leiden University, NCSB-NISB F. Nardi TU Eindhoven - Eurandom W. Ruszel TU Delft C. Spitoni University of Utrecht
LIST OF SPEAKERS
Franco Bagnoli Firenze, Italy Paolo Dai Pra Padova, Italy Andreas Deutsch Dresden, Germany cancelled Lucas Gérin Paris 10, France Yi Jiang Georgia State Univ, USA Kerry Landman Melbourne, Australia Christian Maes Leuven, Belgium Jean Mairesse CNRS, Paris, France Danuta Makowiec Gdansk, Poland Irène Marcovici Paris 7, France Markus Owen Nottingham, United Kingdom cancelled Fernando Peruani MPI, Dresden/ Univ. Nice Damien Regnault Evry, France Benedetto Scoppola Roma, Italy Siamak Taati Utrecht, The Netherlands Anja Voß-Böhme Dresden, Germany
Each day, an opening talk will give an introductory lecture (40 minutes) intended to present to a broad audience the chosen days perspectives. There will be long talk of 40 minutes and short ones of 20 minutes. Relatively long breaks will give to the participants the opportunity for discussions.
Poster sessions create the opportunity to explore each others work. These sessions prove to be a great success
MONDAY JUNE 10
09.00 - 09.50 Registration R.van der Hofstad (Eurandom) and P.-Y. Louis (orgainzer) 09.50 - 10.00 Opening 10.00 - 10.40 C. Maes University of Leuven Physical modeling and MINEP for PCA 10.40 - 11.00 P. Slowinski University of Warwick 11.00 - 11.20 Coffee/tea break 11.20 - 12.00 E. Cirillo University of Sapienza Metastable behavior of reversible Probabilistic Cellular Automata 12.00 - 14.00 Lunch 14.00 - 14.40 P. Dai Pra University of Padova Strategic interaction in trend-driven dynamics 14.40 - 15.00 S. Sené University of Evry Nonlinear threshold PCA in ZxZ: the central role of boundaries 15.00 - 15.20 I. Minelli University of l'Aquila Synchronization via interacting reinforcement 15.20 - 15.40 Coffee/tea break 15.40 - 16.20 F. Bagnoli University of Florence Topological phase transitions in cellular automata 16.20 - 16.50 Poster flash session F. Collet (Bologna); C. Lancia (TU/e + Roma); L. Taggi (Leipzig); H.v.d.Bosch (Louvain); I. Nicolescu (Utrecht) 16.50 - 17.30 Poster session 17.30 - 18.30 Welcome drinks Eurandom lounge
TUESDAY JUNE 11
09.00 - 09.40 R. Merks CWI Amsterdam Stochastic self-organization of branched organs: on the growth of blood vessels, glands, and kidneys 09.40 - 10.20 K. Landman University of Melbourne Modelling development and disease in our “second brain” 10.20 - 10.40 Coffee/tea break 10.40 - 11.20 Y. Jiang University of Atlanta in the Eye: the Good and the Bad 11.20 - 11.40 C. Mente Univeristy of Dresden Individual cell dynamics in cellular automaton models of interacting cell systems 11.40 - 12.00 N. Maric University of Missouri Fleming--Viot particle system driven by a random walk on naturals 12.00 - 14.00 Lunch 14.00 - 14.40 A. Voß-Böhme University of Dresden PCA for modeling interacting cell systems 14.40 - 15.00 F. Peruani University of Nice Optimal noise maximizes collective motion in heterogeneous media 15.00 - 15.20 P. Arrighi University of Grenoble Stochastic Cellular Automata: Correlations, Decidability and Simulations 15.20 - 15.40 Coffee/tea break 15.40 - 16.20 D. Makowiec University of Gdansk Pacemaker rhythm by cellular automata 16.20 - 16.40 Poster flash session S. Boas(CWI + NCSB-NISB); O. Bouré (INRIA); J. Dorrestijn (CWI); D. Crommelin (CWI); M. Palm (CWI + NCSB-NISB) 16.40 - 17.20 Poster session 18.30 - Conference dinner Restaurant "Vlijtig Liesje"
WEDNESDAY JUNE 12
09.00 - 09.40 J. Mairesse University of Paris 7 Around Probabilistic Cellular Automata 09.40 - 10.20 B. Scoppola University of Rome 2 Equilibrium and non-equilibrium statistical mechanics by means of PCA 10.20 - 10.40 Coffee/tea break 10.40 - 11.20 D. Regnault University of Evry Several aspects of probabilistic cellular automata 11.20 - 11.40 S. Taati University of Utrecht Statistical equilibrium in deterministic cellular automata 11.40 - 12.00 I. Marcovici University of Paris 7 The envelope PCA, a tool for sampling the invariant measure of a PCA 12.00 - 14.00 Lunch 14.00 - 14.40 J. Bricmont University of Louvain Phase transitions: from equilibrium models to PCA 14.40 - 15.00 L. Gérin University of Paris Ouest A connection between 2d percolation and the synchronous TASEP 15.00 - 15.20 L. Ponselet University of Louvain Phase transitions in PCA: erosion versus errors 15.20 - 15.40 Coffee/tea break 15.40 - 16.20 A. van Enter University of Groningen Anisotropic bootstrap percolation 16.20 - 17.00 N. Fatès INRIA, Nancy Introductive talk to open problems and discussions 17.00 - R. Fernandez Univeristy of Utrecht Closing words
ABSTRACTS
Pablo Arrighi
Stochastic Cellular Automata: Correlations, Decidability and Simulations
We introduce a simple formalism for dealing with deterministic, non- deterministic and stochastic cellular automata in an unified and composable manner. This formalism allows for local probabilistic correlations, a feature which is not present in usual definitions. We show that this feature allows for strictly more behaviors (for instance, number conserving stochastic cellular automata require these local probabilistic correlations). We also show that several problems which are deceptively simple in the usual definitions, become undecidable when we allow for local probabilistic correlations, even in dimension one. Armed with this formalism, we extend the notion of intrinsic simulation between deterministic cellular automata, to the non-deterministic and stochas- tic settings. Although the intrinsic simulation relation is shown to become undecidable in dimension two and higher, we provide explicit tools to prove or disprove the existence of such a simulation between any two given stochastic cellular automata. Those tools rely upon a characterization of equality of stochastic global maps, shown to be equivalent to the existence of a stochastic coupling between the random sources. We apply them to prove that there is no universal stochastic cellular automaton. Yet we provide stochastic cellular automata achieving optimal partial universality, as well as a universal non-deterministic cellular automaton.
(joint work with
Nicolas Schabanel (LIAFA), Guillaume Theyssier (LAMA)
Franco Bagnoli
Topological phase transitions in cellular automata
Cellular automata are successful modeling tools, but in many cases the classical regular lattice is not adequate to the problem under investigation. By changing the topology of the lattice, several interesting phenomena occurs. We illustrate an example of a phase transition that can be induced by a change in parameters or in the topology of the lattice. We show also how one can map the change in the topology onto the change in the parameters.
Jean Bricmont
Phase transitions: from equilibrium models to PCA
I will review some of the techniques used to prove the existence of phase transitions in equilibrium models and the problems that one encounters if one tries to extend those techniques to PCA.
Emilio Cirillo
Metastable behavior of reversible Probabilistic Cellular Automata
Metastability is a relevant phenomenon in many different applied sciences. Its full mathematical description is quite recent and still incomplete. In this framework Probabilistic Celluar Automata pose changelling problems and show unexpected behaviors. In this talk some results will be reviewd.
Paolo Dai Pra
Strategic interaction in trend-driven dynamics
We propose a stochastic dynamics in which N agents update their state simultaneously but not independently. At each time step agents aim at maximizing their individual payoff, depending on their action, on the global trend of the system and on a random noise. In the limit of infinitely many agents, a law of large numbers is obtained; the limit dynamics consist in an implicit dynamical system, possibly multiple valued. For a special model, we determine the phase diagram for the long time behavior of these limit dynamics and we show the existence of a phase, where a locally stable fixed point coexists with a locally stable periodic orbit.
Andreas Deutsch
(cancelled) Analyzing emergent behaviour in cellular automaton models of cancer invasion
While molecular biology methods are required for a better characterization and identification of individual cancer cells, mathematical modelling and computer simulation is needed for investigating collective effects of cancer invasion. Here, we demonstrate how lattice-gas cellular automaton (LGCA) models allow for an adequate description of individual cancer cell behaviour [1]. We will then show how analysis of the LGCA models allows for prediction of emerging properties (in particular of the invasion speed) [2]. Furthermore, we propose that the transition to invasive phenotypes can be explained on the basis of the microscopic ‘Go or Grow’ mechanism (migration/proliferation dichotomy) and oxygen shortage, i.e. hypoxia, in the environment of a growing tumour. We test this hypothesis again with the help of a lattice-gas cellular automaton. Finally, we use our LGCA models for the interpretation of data from in vitro glioma cancer cell invasion assays [3].
References: [1] A. Deutsch, S. Dormann: Cellular Automaton Modeling of Biological Pattern Formation: Characterization, Applications, and Analysis Birkhäuser, Boston, 2005 [2] H. Hatzikirou, D. Basanta, M. Simon, K. Schaller, A. Deutsch: ‘Go or Grow’: the key to the emergence of invasion in tumour progression?
Math. Med. Biol., 29, 1, 49-65, 2012 [3] M. Tektonidis, H. Hatzikirou, A. Chauviere, M. Simon, K. Schaller, A. Deutsch: Identification of intrinsic mechanisms for glioma invasion J. Theor. Biol., 287, 131-147, 2011
Aernout van Enter
Anisotropic bootstrap percolation
Bootstrap percolation models are Cellular Automata with probabilistic initial conditions. We discuss some results and open problems on the influence of anisotropy on properties of bootstrap percolation models in two and three dimensions. In particular we discuss finite-size scaling behaviour and sharp thresholds.
(Joint work with Tim Hulshof, Anne Fey, Hugo Duminil-Copin)
Nazim Fatès
Modeling natural phenomena or computing, do we need to choose ? On the landscape of randomness in cellular automata
I will discuss some questions in order to introduce the open problems session.
Lucas Gérin
A connection between 2d percolation and the synchronous TASEP
The aim of this talk is to describe a connection between the geometry of the 2d percolation infinite cluster, an important object in statistical mechanics, and the discrete-time and synchronous TASEP, a 1d interacting particle system modeling non-equilibrium phenomena (and which is quite known in the PCA community).
We will point out some consequences and possible extensions.
(joint work with A.L.Basdevant, N.Enriquez, J.B.Gouere)
Yi Jiang
Angiogenesis in the Eye: the Good and the Bad
Angiogenesis, or blood vessel growth from existing ones, is an important physiological process that occur during development, wound healing, as well as diseases such as cancer and diabetes. I will report our recent progress in modeling angiogenesis in the eye in two scenarios. The good refers to healthy blood vessel growth in the retina in mouse embryos, which is a perfect experimental model for understanding the molecular mechanism of angiogenesis. The bad is the
pathological blood vessel growth in age related macular degeneration, which is the leading cause of vision loss in the elderly and a looming epidemic in our aging society. We develop cell-based, multiscale models that include intracellular, cellular, and extracellular scale dynamics, and show that biomechanics of cell-cell and cell-matrix interactions play crucial roll in determining the dynamics of blood vessel growth initiation as well as vascular network formation. Such
models show great potential as in silico Petri-dishes for predictive studies of mechanisms as well as therapies.
Kerry Landman
Modelling development and disease in our “second brain”
The enteric nervous system (ENS) in our gastrointestinal tract, nicknamed the second brain'', is responsible for normal gut function and peristaltic contraction. Embryonic development of the ENS involves the colonization of the gut wall from one end to the other by a population of proliferating neural crest cells. Failure of these cells to invade the whole gut results in the relatively common, potentially fatal condition known as Hirschsprung disease (HSCR). Probabilistic cellular automata models provide insight into the colonization process at both the individual cell-level and population-level. Our models generate experimentally testable predictions, which have subsequently been confirmed. These results have important implications for HSCR and highlight the significance of stochastic effects.
Chistian Maes
Physical modeling and MINEP for PCA
Being interested in describing and understanding physical phenomena one is often confronted with the question what effective models to choose as sufficiently realistic. That is true in general
when taking serious models of interacting particle systems such as probabilistic cellular automata (PCA). What specifies the dynamical ensemble when moving outside thermodynamic equilibrium?
We propose to consider the condition of local detailed balance and to introduce the frenetic contribution as a freedom in waiting time distributions. We then show that the minimum entropy production principle (MINEP) in general fails for PCA.
Jean Mairesse
Around Probabilistic Cellular Automata
In this introductory talk, I will first survey how PCA appear in various contexts ranging from combinatorics, to statistical physics and theoretical computer science. I will focus on two problems : the ergodicity of positive rates PCA, and the density classification.
Danuta Makowiec
Pacemaker rhythm by cellular automata
The sinoatrial node is the primary pacemaker of the heart. Nodal dysfunction can lead to a variety of pathological clinical syndromes. Although the basic mechanisms underlying the self-excitation of each
individual nodal cell are accepted, there is still a lot of controversy on how the cells organize themselves to produce the periodic signal, which is strong enough to drive the contraction of the heart tissue.
Approach based on Greenberg-Hastings cellular automata is redrafted to take account of the essential characteristics of both the physiology of a nodal cell and the known facts about the organization between cells. So, the model is based on cells that cycle through firing, refractory and activity stages. If sufficiently many neighbors of a cell are firing then a cell jumps directly from the activity stage to firing stage, or prolong
its refractory stage. These interactions cause the cell synchronize their stages, as in the real pacemaker tissue. The neighborhood connections are created by a stochastic wrinkling algorithm to make the network of
interactions three dimensional and heterogeneous. The synchronization in the system is studied by Kuramoto order parameters. We show that these parameters lead to the consistent description of the system stationary states, that is quantify frequencies emerging in the system. Finally, we will use the model to explain some changes that occur due to aging in the human pacemaker.
Nevana Maric
Fleming--Viot particle system driven by a random walk on naturals
Random walk on naturals with negative drift and absorption at $0$, when conditioned on survival, has uncountably many invariant measures (quasi--stationary distributions, \textit{qsd}) $\nu_c$. We study a Fleming--Viot (FV) particle system driven by this process. In this particle system there are $N$ particles where each particle evolves as the random walk described above. As soon as one particle is absorbed, it reappears, choosing a new position
according to the empirical measure at that time. Between the absorptions, the particles move independently of each other. Our focus is in the relation of empirical measure of the FV process with \textit{qsd}'s of the random walk.
Firstly, mean normalized densities of the FV unique stationary measure converge to the minimal \textit{qsd}, $\nu_0$, as $N$ goes to infinity. Moreover, every other \textit{qsd} of the random walk ($\nu_c, c > 0$) corresponds to a metastable state of the FV particle system.
Irène Marcovici
The envelope PCA, a tool for sampling the invariant measure of a PCA
We propose a perfect sampling algorithm for the invariant measure of an ergodic PCA. A PCA is a finite state space Markov chain. Therefore, coupling from the past from all possible initial configurations provides a basic perfect sampling procedure. But it is a very inefficient one since the number of configurations is exponential. Here, the contribution consists in simplifying the procedure. We define a new PCA on an extended alphabet, called the envelope PCA (EPCA). We obtain a perfect sampling procedure for the original PCA by running the EPCA on a single initial configuration. Our algorithm does not assume any monotonicity property of the local rule.
(joint work with A. Busic and J. Mairesse)
Carsten Mente
Individual cell dynamics in cellular automaton models of interacting cell systems
Lattice-gas cellular automaton (LGCA) models have proven successful in the analysis of collective behavior arising from populations of moving and interacting cells. Examples of collective cell behavior at a macroscopic level include the formation of cell density patterns and the dynamics of moving cell fronts. However, important microscopic observables which emerge as a consequence of collective cell behavior, especially individual cell trajectories, can not be simulated and analyzed with LGCA models so far since these models cannot distinguish individual cells. Here, we introduce an extension of the classical LGCA model, which allows labeling and tracking of individual cells. We name these extended LGCA models "individual-based lattice-gas cellular automata"(IB-LGCA). Furthermore, we derive stochastic differential equations (SDE) corresponding to specific IB-LGCA models, which permit the investigation of individual cell trajectories and the approximate description of IB-LGCA models by systems of SDEs. This
approach allows computationally efficient simulations and analytical treatment of individual cell trajectories in populations of interacting cells. Finally, we present IB-LGCA examples demonstrating the analysis of individual cell trajectories in populations of interacting cells: random cell motion and the motion of cells exposed to an external gradient.
Roeland Merks
Stochastic self-organization of branched organs: on the growth of blood vessels, glands, and kidneys
Morphogenesis, the formation of biological shape and pattern during embryonic development, is a topic of intensive experimental investigation, so the participating cell types and molecular signals continue to be characterized in great detail. Yet this data only partly tells biologists how molecules and cells interact dynamically to construct a biological tissue. Probabilistic cellular automata are a great help in analyzing the mechanisms of biological morphogenesis.
I will discuss some recent developments on a lattice-based, stochastic model for the formation of blood vessel networks (Merks et al. PLoS Comput Biol 2008), which is based on the cellular Potts model. In this model, we have identified a stochastic mechanism for branching growth that, in a modified form, may play a key role in the formation of branched organs of epithelial origin, e.g., mammary glands and kidneys. I will discuss this model in detail and conclude by suggesting some interesting continuum and stochastic mathematical problems that our simulations suggest.
Ida Minelli
Synchronization via interacting reinforcement
We consider a system of urns of Polya--type, with balls of two colours; the reinforcement of each urn depends both on the content of the same urn and on the average content of all urns. We show that the urns synchronize almost surely, in the sense that the fraction of balls of a given colour converges almost surely, as the time goes to infinity, to the same limit for all urns. A normal approximation for a large number of urns is also obtained.
Ioana Niculescu (poster)
Explaining many biological phenomena require a multiscale approach in which the cell is often the natural level of separation between the intracellular regulatory mechanisms and the emerging tissue level. For many tissue level phenomena, the internal mechanism that generates a certain cell behaviour may not be that important, as long as morphodynamically the cells behave realistic enough to serve the purpose of the model trying to explain those phenomena. Cell migration is a vital process in morphogenesis, tissue repair, disease fighting but also disease progression. We propose a phenomenological model for cell migration based on the CPM framwork, that bypasses the complex internal mechanism that drives the cell to move. We show that this simple and computational light method can be calibrated to fit many migration-shape deformation patterns (morphodynamics) including the amoeboid and keratocyte-like migration. The method is suited for random as well as directional migration and is easily applied in the context of crowded multicellular and heterogeneous tissue where cells need to interact.
Markus Owen
Hybrid multiscale and partial differential equation models for cancer immunotherapy
Cancer is a heterogeneous disease governed by interconnected processes at multiple spatial and temporal scales. For example, variations in vascular density and blood flow within tumours can have significant effects on nutrient distributions. In addition, such heterogeneities can have significant implications for the delivery and efficacy of drugs and other therapies. We have developed multiscale mathematical models for vascular tumour growth, based upon an extended cellular automata model for cell populations overlaid with networks of blood vessels and the distributions of nutrients, cytokines and therapies [1]. We have used these models to predict the efficacy of novel hypoxia-targeted macrophage-based therapies, conventional therapies, and combination therapies. We find that combination therapies can be highly synergistic, depending on their relative timing, but that host tissue and tumour variability can have important implications for therapeutic efficacy [2]. We have also begun to explore the relationships between our hybrid cellular automaton models and more traditional partial differential equation models.
[1] M R Owen, T Alarcón, P K Maini and H M Byrne: Angiogenesis and vascular remodelling in normal and cancerous tissues, J. Math. Biol. 58:689-721 (2009)
[2] M R Owen, I J Stamper, M Muthana, G W Richardson, J Dobson, C E Lewis and H M Byrne:
Mathematical modeling predicts synergistic antitumor effects of combining a macrophage-based, hypoxia-targeted gene therapy with chemotherapy, Cancer Res. 71(8) 2826-37 (2011)
Fernando Peruani
Optimal noise maximizes collective motion in heterogeneous media
We study the effect of spatial heterogeneity on the collective motion of self-propelled particles (SPPs). The heterogeneity is modelled as a random distribution of either static or diffusive obstacles, which the SPPs avoid while trying to align their movements. We find that such obstacles have a dramatic effect on the collective dynamics of usual SPP models. In particular, we report about the existence of an optimal (angular) noise amplitude that maximizes collective motion. We also show that while at low obstacle densities the system exhibits long--range order, in strongly heterogeneous media collective motion is quasi--long--range and exists only for noise values in between two critical noise values, with
the system being disordered at both, large and low noise amplitudes. Since most real system have spatial heterogeneities, the finding of an optimal noise intensity has immediate practical and fundamental implications
for the design and evolution of collective motion strategies.
Lise Ponselet
Phase transitions in PCA: erosion versus errors
We consider a class of probabilistic cellular automata (PCA) of interest both in statistical physics and in computer science. They are perturbations of cellular automata (CA) that have the property of eroding blocks of impurities in an almost homogeneous configuration. A stochastic perturbation turns the CA into PCA by admitting errors in the states of the cells with some probability distribution. If the erosion is sufficient to correct the effects of errors, the PCA process can have several stationary states, providing an example of non-equilibrium phase transition. We study some properties of these stationary states when the probability of errors is small.
Damien Regnault
Several aspects of probabilistic cellular automata
From the point of view of a computer scientist, deterministic cellular automata are known as a parallel computation model. Different studies have introduced randomness in this model by considering probabilistic transitions. In this talk, I will present the different motivations of these studies as well as the current results and open questions.
Benedetto Scoppola
Equilibrium and non-equilibrium statistical mechanics by means of PCA
The aim of this talk is to introduce a class of PCA with some interesting features:
1) the equilibrium measure of the PCA tends to the Gibbs measure of Ising model in the thermodynamical limit.
2) in certain cases it is possible to introduce a unified description of reversible (equilibrium statistical mechanics) and irreversible dynamics (non equilibrium statistical mechanics).
3) Some cases are solved by exact computations.
(joint work with Elisabetta Scoppola and Paolo Dai Pra)
Sylvain Sené
Nonlinear threshold PCA in ZxZ: the central role of boundaries
The general question of the influence of the environment on dynamical systems has already been widely studied in the past decades. One of the best known example comes from mathematical physics and is that of the characterisation of phase transitions in the “classical” Ising Model, shown by Dobrushin and Ruelle independently. However, this question remains of particular interest in other contexts, closer to theoretical computer science and biology. For instance, now that cellular automata, and more generally automata networks, are more and more studied as dynamical systems to model and analyse the dynamics of biological regulation networks, such as genetic networks, going further in the understanding of the substantial influence of their environment actually is important.
In this presentation, to make a step in this direction, I propose to tackle from a theoretical point of view the question of the structural instability (in the sense of Thom) of a particular class of two-dimensional finite threshold Boolean cellular automata when the latter are subjected to distinct fixed boundary instances. More precisely, focusing on a nonlinear probabilistic version of the classical threshold function governing the evolution of formal neural networks, I will show the existence of a necessary condition under which attractive cellular automata of this form become boundary sensitive, i.e., a condition without which a cellular automaton hits the same asymptotic dynamical behaviour whatever its boundary conditions are.
Then will be given an explicit formula for this necessary condition, whose sufficiency will be highlighted by simulations.
Piotr Slowinski
Probabilistic cellular automata with non-unique space-time phases
I will use space-time phases to describe some properties of probabilistic cellular automata (PCA). Space-time phases are probability distributions over state as a function of space and time that arise from initial probabilities in the past. In particular, I will focus on PCA with non-unique phase and show how space-time phases can be used to analyse emergence in such systems. To illustrate the most interesting phenomena I will use numerical demonstration. Furthermore, I will present examples of emergence in PCA used in ecology and economy.
(this research is supported by the Alfred P. Sloan Foundation (New York). (joint work with R.S.MacKay)
Siamak Taati
Statistical equilibrium in deterministic cellular automata
Some deterministic cellular automata have long been observed to demonstrate thermodynamic behavior: starting from a random configuration, they undergo a transient dynamics until they reach a state of macroscopic equilibrium. An example is the Ising cellular automaton which can be seen as a deterministic and microscopically reversible variant of a Gibbs sampler (or a micro-canonical sampler). I will discuss some results and open problems regarding (approach to)
macroscopic equilibrium in reversible (and more generally surjective) cellular automata.
(joint work with Jarkko Kari)
Anja Voss-Böhme
PCA for modeling interacting cell systems
Understanding the mechanisms that control tissue organization during development belongs to the most fundamental goals in developmental biology. Quantitative approaches and mathematical models are essential to deduce the consequences of existing morphogenetic hypotheses, thus providing the basis for experimental testing and theoretical understanding. One approach to questions concerning patterning in developing organisms is to consider tissues as huge populations of cells which behave according to certain rules that depend on their genetic programs and inner structure as well as the states and actions of directly neighboring cells. Then, tissue organization can be understood as emergent behavior that results from local intercellular interaction.
PCA provide a spatiotemporal modeling framework to describe and analyze interacting cell populations. They have been successfully applied to study characteristic collective cell behaviors that result from specific cellular interaction rules. However, there are considerable differences in the construction of these models. While cell differentiation, cell death and proliferation can be covered by classical PCA rules, a proper implementation of cell motility is challenging. In the talk, we will compare exemplarily PCA models where one cell occupies one lattice node to spatially more resolved models, such as the CPM. We will expose the mechanistic structures of these models and discuss their implications for analysis and knowledge gain.
PRACTICAL INFORMATION
● Venue
Eurandom, Mathematics and Computer Science Dpt, TU Eindhoven,
Den Dolech 2, 5612 AZ EINDHOVEN, The Netherlands
Eurandom is located on the campus of Eindhoven University of Technology, in the TU/e (4th floor) (about the building). The university is located at 10 minutes walking distance from Eindhoven main railway station (take the exit north side and walk towards the tall building on the right with the sign TU/e).
Accessibility TU/e campus and map.
The conference will be held at the Eindhoven Technical University. The TU/e is a relatively young university. It was founded some 50 years ago and is situated in the southern part of The Netherlands in the city of Eindhoven, well known as the hometown of the giant in Electronics, the Philips Company, and the famous football club, PSV Eindhoven. The TU/e intends to be a research driven, design oriented university of technology at an international level, with the primary objective of providing young people with an academic education within the ‘engineering science & technology’ domain.
● Registration
Deadline for contribution (talk/poster) : closed
● Accommodation / Funding
Some limited funds are available to contribute to local and travel costs. Participants have to arrange their own hotel booking.
For hotels around the university, please see: Hotels (please note: prices listed are "best available").
More hotel options can be found on the webpages of the , Postbus 7, 5600 AA Eindhoven.
Remark: Note that due to a at the Eindhoven Philips Stadion on June 7-8-9, it may be difficult to find accommodation before the June 9 in the city centre of Eindhoven.
● Travel
For those arriving by plane, there is a convenient direct train connection between Amsterdam Schiphol airport and Eindhoven. This trip will take about one and a half hour. For more detailed information, please consult the NS travel information pages or see Eurandom web page location.
Many low cost carriers also fly to Eindhoven Airport. There is a bus connection to the Eindhoven central railway station from the airport. (Bus route number 401) For details on departure times consult http://www.9292ov.nl
The University can be reached easily by car from the highways leading to Eindhoven (for details, see our route descriptions or consult our map with highway connections.
● Conference facilities : Conference room, Metaforum Building MF11&12
The meeting-room is equipped with a data projector, an overhead projector, a projection screen and a blackboard. Please note that speakers and participants making an oral presentation are kindly requested to bring their own laptop or their presentation on a memory stick.
● Conference Secretariat
Upon arrival, participants should register with the workshop officer, and collect their name badges. The workshop officer will be present for the duration of the conference, taking care of the administrative aspects and the day-to-day running of the conference: registration, issuing certificates and receipts, etc.
● Contact
Mrs. Patty Koorn, Workshop Officer, Eurandom/TU Eindhoven, koorn@eurandom.tue.nl
The organisers acknowledge the financial support/sponsorship of:
Other events of interest around this meeting
Mark Kac Seminar, June 14, 2013, Utrecht
Links to meetings on related topics
Automata 2013 (Sept 2013)
Legal notice: privacy statement
Your Name, academic position (Doctoral/Post-doctoral researcher, Faculty), institutional affiliation, address and email will be saved. According to European regulation, You are allowed to access, modify, correct and delete your concerned data.
Last updated 08-07-13, by PK
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
tel. +31 40 2478100
e-mail: info@eurandom.tue.nl | 2014-11-22 23:20:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32075896859169006, "perplexity": 2256.119774857357}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378815.18/warc/CC-MAIN-20141119123258-00010-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://campercom.it/pnln/symmetric-matrix-example-3x3.html | If A = (a ij) is skew-symmetric, a ij = −a ji; hence a ii = 0. A determinant is a real number associated with every square matrix. Following tradition, we present this method for symmetric/self-adjoint matrices, and later expand it for arbitrary matrices. The 3x3 matrix can be thought of as an operator - it takes a vector, operates on it, and returns a new vector. As an example, in the case of a 3 x 3 Matrix and a 3-entry column vector,. 2 Definiteness of Quadratic Forms. 231 Diagonalization of non-Hermitian matrices • Let D be the diagonal matrix whose. Write a C program to read elements in a matrix and check whether the given matrix is symmetric matrix or not. Now the next step to take the determinant. z y ' = b 1 z 1 +b 2 z 2. The non-symmetric problem of finding eigenvalues has two different formulations: finding vectors x such that Ax = λx, and finding vectors y such that y H A = λy H (y H implies a complex conjugate transposition of y). If there exists a square matrix B of order n such that. Diagonalize the matrix. Antisymmetric matrices are commonly called "skew symmetric matrices" by mathematicians. I'm currently stuck on converting a 3*N x 1, where N is an integer value, vector into chunks of skew symmetric matrices. An n×n matrix B is called skew-symmetric if B = −BT. However, I am failing to see how it can be done specifically for a 3x3 matrix using only row and column interchanging. The matrix U is called an orthogonal matrix if UTU= I. The set of four transformation matrices forms a matrix representation of the C2hpoint group. The task is to find a matrix P which will let us convert A into D. Throughout this paper, I nand 1 ndenote the n nidentity matrix and the n-dimensional column vector consisting of all ones, respectively. Determining the eigenvalues of a 3x3 matrix If you're seeing this message, it means we're having trouble loading external resources on our website. Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes. The eigenvalue for the 1x1 is 3 = 3 and the normalized eigenvector is (c 11) = (1). Lim (Algebra Seminar) Symmetric tensor decompositions January 29, 2009 1 / 29. In the solution given in the post " Diagonalize the 3 by 3. 2 Eigenvectors of circulant matrices One amazing property of circulant matrices is that the eigenvectors are always the same. A square matrix $A=(a_{ij})$ is a symmetric matrix if its entries opposite the main diagonal are the same, that is, if $a_{ij}=a_{ji}$ for all $i$ and j. The output matrix has the form of A = [ A 11 A 12 A 13 A 21 A 22 A 23 A 31 A 32 A 33 ]. 369) EXAMPLE 1 Orthogonally diagonalize. I For real symmetric matrices we have the following two crucial properties: I All eigenvalues of a real symmetric matrix are real. The matrix = [− − −] is skew-symmetric because − = [− − −] =. A is called upper triangular if a ij = 0 for i > j and called lower triangular if a ij = 0 for i < j. G o t a d i f f e r e n t a n s w e r? C h e c k i f i t ′ s c o r r e c t. Examples of symmetric beams z z x I y M − = σ For the 1-D case (M y = 0) For planes of arbitrary cross-section, it is always possible to determine special y-z axes which act equivalent to planes of symmetry, and therefore allow us to apply these forms of the equations. The result is a 3x1 (column) vector. • examples • the Cholesky factorization • solving Ax = b with A positive definite • inverse of a positive definite matrix • permutation matrices • sparse Cholesky factorization 5-1 Positive (semi-)definite matrices • A is positive definite if A is symmetric and xTAx > 0 for all x 6= 0 • A is positive semidefinite if A is. Theorem 1 Any quadratic form can be represented by symmetric matrix. A diagonal matrix A is called an identity matrix if a ij = 1 for i = j and is denoted by I n. I have chosen these from some book or books. Note : Let A be square matrix of order n. A rank one matrix yxT is positive semi-de nite i yis a positive scalar multiple of x. To apply the method of diagonalisation to evaluate the power of a given symmetric matrix. Therefore, there are nonzero vectors x such that A x = x (the eigenvectors corresponding to the eigenvalue λ = −1), and there are nonzero vectors x such that A x = −2 x (the eigenvectors corresponding to the eigenvalue λ = −2). There are other methods of finding the inverse matrix, like augmenting the matrix by the identity matrix and then trying to make the original matrix into the identity matrix by applying row and column operations to the augmented matrix, and so on. Square Root of a Symmetric Matrices The square root of a 31 by 31 matrix with 6"s down the main diagonal and 1"s elsewhere is a symmetric binary matrix with six 1's in each row and column. For example, the eigenvalues of the matrix are the entries on the diagonal of the diagonal matrix. The diagonal elements are always real numbers. columns and rows is the same as defining width and height for image. Matrix norm the maximum gain max x6=0 kAxk kxk is called the matrix norm or spectral norm of A and is denoted kAk max x6=0 kAxk2 kxk2 = max x6=0 xTATAx. Solving a non-symmetric problem of finding eigenvalues is performed in some steps. Thus A = LDLT = LD1/2D1/2LT = RTR where R = D1/2LT is non-singular. And also those matrices should be defined by letters, because after generation, all of those matrices place in an equation. 2 Some examples { An n nidentity matrix is positive semide nite. MATH 340: EIGENVECTORS, SYMMETRIC MATRICES, AND ORTHOGONALIZATION Let A be an n n real matrix. EXAMPLE 1: Find the eigenvalues and eigenvectors of the matrix A = 1 −3 3 3 −5 3 6 −6 4. ) Dimension is the number of vectors in any basis for the space to be spanned. The 3x3 matrix can be thought of as an operator - it takes a vector, operates on it, and returns a new vector. I am just trying to create many number of random 3x3 matrices in the array (-100,100) for a statistical research and I am stuck to create symmetric many number of matrices. I know that I can convert a single vector of size 3 in a skew symmetric matrix of size 3x3 as follows: X = [ 0 -x(3) x(2) ;. Your overall recorded score is 0%. Later, we will look at how to rotate a stress matrix in the general case. The method returns the solution vector ‘x’. For our example: rank{A} ˘2. Get the free "Eigenvalues Calculator 3x3" widget for your website, blog, Wordpress, Blogger, or iGoogle. I For real symmetric matrices we have the following two crucial properties: I All eigenvalues of a real symmetric matrix are real. (1) Any real matrix with real eigenvalues is symmetric. Joachim Kopp developed a optimized "hybrid" method for a 3x3 symmetric matrix, which relays on the analytical mathod, but falls back to QL algorithm. We will also understand how to find the adjoint of matrix with an order 3x3. Theorem 1 Any quadratic form can be represented by symmetric matrix. Note that all the main diagonal elements in the skew-symmetric matrix are zero. It is the only matrix with all eigenvalues 1 (Prove it). Matrix exponential. A A real symmetric matrix [A] can be diagonalized (converted to a matrix with zeros for all elements off the main diagonal) by pre-multiplying by the inverse of the matrix of its eigenvectors and post-multiplying by the matrix of its eigenvectors. Example The zero matrix is. (11) Show that inverse of an invertible symmetric matrix is also symmetric. So a diagonal matrix has at most n different numbers other than 0. The matrices must all be defined on dense sets. Assume that the eigenvalues and eigenvectors of symmetric matrix [E] (or equivalently [E']) are known. In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). Read the instructions. It is a singular matrix. Imports System Imports CenterSpace. >> X Linear matrix variable 6 x6 ( symmetric , real , 9 variables ). This is because the size of the array can be initialized dynamically. I am just trying to create many number of random 3x3 matrices in the array (-100,100) for a statistical research and I am stuck to create symmetric many number of matrices. For our example: rank{A} ˘2. A = 1 0 0 1 0 0 2 3 3 You have attempted this problem 10 times. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. Matrices Worksheets: Addition, Subtraction, Multiplication, Division, and determinant of Matrices Worksheets for High School Algebra. 3 Now what? First, a matrix might have repeated eigenvalues and still be diagonalizable. Example solving for the eigenvalues of a 2x2 matrix. As with symmetric matrices, we can easily recognize Hermitian matrices by. All main diagonal entries of a skew-symmetric matrix must be zero, so the trace is zero. If a matrix A is idempotent, A 2 = A. [email protected] In our example, the matrix is () Find the determinant of this 2x2 matrix. symmetric positive definite matrices If A is symmetric, AT = A, and positive semidefinite: for all x: xTAx 0; then we can compute a Cholesky factorization A = LLT, where L is a lower triangular matrix. Invertible matrices are very important in many areas of science. asked by Jenny on April 1, 2015; linear algebra. Example 22 Express the matrix B = [ 8(2&−2&−[email protected][email protected]&−2&−3)] as the sum of a symmetric and a skew symmetric matrix. Matrix multiplication could be described as finding the scalar product of each row in the first matrix by each column in the second. Any positive semidefinite matrix h can be factored in the form h = kk′ for some real square matrix k, which we may think of as a matrix square root of h. The Cholesky decomposition of a Pascal symmetric matrix is the Pascal lower-triangle matrix of the same size. edu Linear Regression Models Lecture 11, Slide 5 leaving J is matrix of all ones, do 3x3 example. Show that every square matrix is uniquely a sum of a symmetric and skew-symmetric matrix. , The sum of the numbers along each matrix diagonal (the character) gives a shorthand version of the matrix representation, called Γ:. rotation_from_axis_angle (…) Computes the rotation matrix from the (axis,angle) representation using Rodriguez’s formula. This problem has been solved!. the inverse of an n x n matrix See our text ( Rolf, Page 163) for a discussion of matrix inverses. 4: If A and B are symmetric matrices with the same size, and if k is any scalar, then: (a) AT is symmetric (b) A+. Where I is known as identity matrix. These yield complicated formu-lae for the singular value decomposition (SVD), and hence the polar decomposition. 3 Pure Strategies and Mixed Strategies. (iv) Theorem 2: Any square matrix A can be expressed as the sum of a symmetric matrix and a skew symmetric matrix, that is (A+A ) (A A )T T A = + 2 2 − 3. Example: This matrix is 2×3 (2 rows by 3 columns): When we do multiplication: The number of columns of the 1st matrix must equal the number of rows of the 2nd matrix. The available eigenvalue subroutines seemed rather heavy weapons to turn upon this little problem, so an explicit solution was developed. An analogous result holds for matrices. 10 Invertible Matrices (i) If A is a square matrix of order m × m, and if there exists another square. In [16] it is explained how to obtain analytic formulae for the eigendecomposition of a symmetric 3 × 3 matrix. The eigenvectors belonging to the largest eigenvalues indicate the main direction'' of the data. Example Consider the matrix A= 1 4 4 1 : Then Q A(x;y) = x2 + y2 + 8xy and we have Q A(1; 1) = 12 + ( 1)2 + 8(1)( 1) = 1 + 1 8. The matrix inverse is equal to the inverse of a transpose matrix. Square matrix A is said to be skew-symmetric if aij =−aji for all i and j. The output matrix has the form of A = [ A 11 A 12 A 13 A 21 A 22 A 23 A 31 A 32 A 33 ]. Symmetric eigenvalue decompositions for symmetric tensors Lek-Heng Lim University of California, Berkeley January 29, 2009 (Contains joint work with Pierre Comon, Jason Morton, Bernard Mourrain, Berkant Savas) L. matrix explicitly. A rank one matrix yxT is positive semi-de nite i yis a positive scalar multiple of x. Note that is a real symmetric matrix: i. In a symmetric matrix, aij = aji for each pair (i, j). In our example, the matrix is () Find the determinant of this 2x2 matrix. By assumption, A has full pivots, so it is non-singular. The task is to find a matrix P which will let us convert A into D. CONTENTS: [4] MATRIX ADDITION [5] MATRIX NOTATION [6] TRANSPOSE [7] SYMMETRIC MATRICES [8] BASIC FACTS ABOUT MATRICES [4] MATRIX ADDITION. The matrix V will have a positive determinant, and the three eigenvectors will be aligned as closely as possible with the x, y, and z axes. Invertible matrices are very important in many areas of science. Simple example: A = I. The Symmetric Inertia Tensor block creates an inertia tensor from moments and products of inertia. Computes all eigenvalues of a real symmetric tridiagonal matrix, using a root-free variant of the QL or QR algorithm: sstebz, dstebz: Computes selected eigenvalues of a real symmetric tridiagonal matrix by bisection: sstein, dstein cstein, zstein: Computes selected eigenvectors of a real symmetric tridiagonal matrix by inverse iteration. The eigenvalue for the 1x1 is 3 = 3 and the normalized eigenvector is (c 11) = (1). Solving a non-symmetric problem of finding eigenvalues is performed in some steps. B = [ 8(2&−2&−[email protected]−1&3. Symmetric matrices are in many ways much simpler to deal with than general matrices. The first step into solving for eigenvalues, is adding in a along the main diagonal. When can we add them, and what is the answer? We define matrix addition by adding componentwise. Shio Kun for Chinese translation. The inverse of a permutation matrix is again a permutation matrix. Transformation Matrix 4x4. Consider the following matrix over : Find bases for the row space, column space, and null space. \begingroup Yes, reduced row echelon form is also called row canonical form, and obviously there are infinitely many symmetric matrix that are not diagonal and can be reduced to anon diagonal reduced row echelon form, but note that the row canonical form is not given by a similarity transformation, but the jordan form is. Two apologies on quality: 1. For any positive integers m and n, Mm×n(R), the set of m by n matrices with real entries, is a vector space over R with componentwise addition and scalar multiplication. Back substitution of these eigenvalues into relation (1) or (2) allows determination of the corresponding eigenvectors. Skew symmetric matrices mean that A (transpose) = -A, So since you know 3 elements of the matrix, you know the 3 symmetric to them over the main diagonal mut be the negatives of those elements. The examples helps [5] 2018/01/20 05:51 Male / 50 years old. D: a symmetric 3x3 ufl matrix giving the bending stiffness in Voigt notation. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. Any square matrix can be expressed as the sum of a symmetric matrix and. Not very random but very fun!. Question: Tag: algorithm,matrix,fft,polynomials I was trying to implement a FFT-based multiplication algorithm in M2(R). Symmetric matrices are in many ways much simpler to deal with than general matrices. R = I + sin(\theta) K + (1 - cos(\theta)) K 2. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. 2 Some examples { An n nidentity matrix is positive semide nite. For example: It is indicated as #I_n# where #n# representes the size of the unit matrix. 3x3 skew symmetric matrices can be used to represent cross products as matrix multiplications. Two examples of symmetric matrices appear below. 2 Orthogonal matrix A matrix Mis orthogonal if MMT = MT M= I. (6) If v and w are two column vectors in Rn, then. For any positive integers m and n, Mm×n(R), the set of m by n matrices with real entries, is a vector space over R with componentwise addition and scalar multiplication. Rank Theorem : If a matrix "A" has "n" columns, then dim Col A + dim Nul A = n and Rank A = dim Col A. Hence we have the means to nd the eigenvectors. Note that whereas C is a 3× 2 matrix, its transpose, CT, is a 2× 3 matrix. Where I is known as identity matrix. Question: (1 Point) Give An Example Of A 3 × 3 Skew-symmetric Matrix A That Is Not Diagonal. can have vector or matrix elements). It is denoted by adj A. Zero matrix and identity matrix are symmetric (any diagonal matrix is sym-metric) 2. • 12 22 ‚; 2 4 1¡10 ¡102 023 3 5 are symmetric, but 2 4 122 013 004 3 5; 2 4 01¡1 ¡102 1¡20 3 5 are not. n maths a square matrix that is equal to its transpose, being symmetrical about its main diagonal. 1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Find the variable for the following 3x3 matrix. A , in addition to being magic, has the property that "the sum of the twosymmetric magic square numbers in any two cells symmetrically placed with respect to the center cell is the same" (12, p. In this paper, we establish a bijection between the set of mutation classes of mutation-cyclic skew-symmetric integral 3x3-matrices and the set of triples of integers (a,b,c) which are all greater than 1 and where the product of the two smaller numbers is greater than or equal to the maximal number. This matrix is a 3x3 matrix because it has three rows and three columns. The resulting diagonal matrix [Λ] contains eigenvalues along the main diagonal. For the lid-driven cavity flow, the implicit matrix is not symmetric. Fortran 90 package for solving linear systems of equations of the form A*x = b, where the matrix A is sparse and can be either unsymmetric, symmetric positive definite, or general symmetric. The determinant of A will be denoted by either jAj or det(A). Since the symmetric matrix is taken as A, the inverse symmetric matrx is written as A-1, such that it becomes. Show (or simply note) that the left side is symmetric and the right side is skew-symmetric. The given matrix does not have an inverse. Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes. To give another example, this time for a non-simultaneous game, let us look at the normal form of VNM POKER(2,4,2,3) discussed in the previous two chapters. A real (n\times n)-matrix is symmetric if and only if the associated operator \mathbf R^n\to\mathbf R^n (with respect to the standard basis) is self-adjoint (with respect to the standard inner product). Symmetric matrices have special properties which are at the basis for these discussions and solutions. AAT = 17 8 8 17. A = 2: 1+j: 2-j, 1-j: 1: j: 2+j-j: 1 = 2: 1-j: 2+j (j 2 = -1) 1+j: 1-j: 2-j: j: 1: Now A T = => A is Hermitian (the ij-element is conjugate to the ji-element). The diagonal elements of a skew-symmetric matrix are all 0. ; Calculating off-diagonal elements g i,j i > j (steps 2, 3 and 5) entails dividing some number by the last-calculated diagonal element. Determine whether the matrix A is diagonalizable. The two matrices must be the same size, i. I have always found the common definition of the generalized inverse of a matrix quite unsatisfactory, because it is usually defined by a mere property, , which does not really give intuition on when such a matrix exists or on how it can be constructed, etc… But recently, I came across a much more satisfactory definition for the case of symmetric (or more general, normal) matrices. A diagonal matrix is a symmetric matrix with all of its entries equal to zero except may be the ones on the diagonal. Example, , and In other words, transpose of Matrix A is equal to matrix A itself which means matrix A is symmetric. An other solution for 3x3 symmetric matrices can be found here (symmetric tridiagonal QL algorithm). A rank one matrix yxT is positive semi-de nite i yis a positive scalar multiple of x. 61803398875 learning how to norm matrix for my work. ) (Remark 2: Given a linear system, fundamental matrix solutions are not unique. Determinant Of 3 x 3 Matrix - Core Java Questions - Arrays and Loops In Java : Arrays are very useful in reducing the number of variables created and in reducing the code complexity. If the matrix is not symmetric, a message as well as the top of the matrix is printed. Non-iterative method of solving for the eigenvalues and eigenvectors of a symmetric matrix defined by the components a00, a01, a02, a11, a12, a22. Doublely link list with create, insert, delete and display operations using structure pointer. The covariance matrix is a math concept that occurs in several areas of machine learning. det (A) = det (AT) by property 1 = det. The given matrix does not have an inverse. ) Dimension is the number of vectors in any basis for the space to be spanned. 369) EXAMPLE 1 Orthogonally diagonalize. Only then will A = XΛX−1 which is also QΛQT coincide with A = UΣVT. The diagonal elements of a Hermitian matrix are real. Structural Analysis IV Chapter 4 – Matrix Stiffness Method 3 Dr. At each point in the ground, you get a different Hooke's law (81 component symmetric rank-3 tensor) then do a tensor contraction with the direction you are interested in to create the 3x3 Christoffel matrix, whose eigenvalues are the squares of the phase velocity of the waves (qP, qSH, qSV) in that particular direction. Here is another example: If C = 7 1 −3 2 4 4 then CT = 7 −3 4 1 2 4!. You will find examples of 2x2 and 3x3 matrices. Example for Skew Symmetric Matrix : Here we are going to see some example problems on skew symmetric matrix. There are other methods of finding the inverse matrix, like augmenting the matrix by the identity matrix and then trying to make the original matrix into the identity matrix by applying row and column operations to the augmented matrix, and so on. Where I is known as identity matrix. In [16] it is explained how to obtain analytic formulae for the eigendecomposition of a symmetric 3 × 3 matrix. The leftmost column is column 1. The eigenvectors of the covariance matrix are the principal axes, and can be thought of as a new basis for describing the data (x’,y’). Description. Example 22 Express the matrix B = [ 8(2&−2&−[email protected][email protected]&−2&−3)] as the sum of a symmetric and a skew symmetric matrix. Principal minors De niteness and principal minors Theorem Let A be a symmetric n n matrix. Only then will A = XΛX−1 which is also QΛQT coincide with A = UΣVT. ; If − exists, it is symmetric if and only if is symmetric. The individual values in the matrix are called entries. These yield complicated formu-lae for the singular value decomposition (SVD), and hence the polar decomposition. Here AT is the transpose of A. I know that I can convert a single vector of size 3 in a skew symmetric matrix of size 3x3 as follows:. Let be an eigenvector corresponding to the eigenvalue 3. Jordan decomposition. The Stiffness Method – Spring Example 1 To avoid the expansion of the each elemental stiffness matrix, we can use a more direct, shortcut form of the stiffness matrix. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. This implies that UUT = I, by uniqueness of inverses. 3 Removing Dominated Strategies. A = [1 1 1 1 1 1 1 1 1]. Class MatrixFunctions provides the static Product() method for calculating the inner product of a matrix and a vector: Code Example – C# matrix. Prove that the determinant of an n × n skew-symmetric matrix is zero if n is odd. Then we have: A is positive de nite ,D k >0 for all leading principal minors A is negative de nite ,( 1)kD k >0 for all leading principal minors A is positive semide nite , k 0 for all principal minors A is negative semide nite ,( 1)k k 0 for all principal minors In the rst two cases, it is enough to. Orthogonal matrix multiplication can be used to represent rotation, there is an equivalence with quaternion multiplication as described here. Join 90 million happy users! Sign Up free of charge:. If Ais an m nmatrix, then its transpose is an n m matrix, so if these are equal, we must have m= n. The leading coefficients occur in columns 1 and 3. Theory The SVD is intimately related to the familiar theory of diagonalizing a symmetric matrix. One way to think about a 3x3 orthogonal matrix is, instead of a 3x3 array of scalars, as 3 vectors. Here AT is the transpose of A. On the other hand, the Jacobi method can exploit a known approximate eigenvector matrix, whereas the symmetric QRalgorithm cannot. Our algorithm entails two types of calculations: Calculating diagonal elements g i,i (steps 1, 4 and 6) entails taking a square root. Figure 1 1-D Gaussian distribution with mean 0 and =1 In 2-D, an isotropic (i. GAME THEORY Thomas S. Simple example: A = I. Note that all the main diagonal elements in the skew-symmetric matrix are zero. For any positive integers m and n, Mm×n(R), the set of m by n matrices with real entries, is a vector space over R with componentwise addition and scalar multiplication. For example, consider the following vector A = [a;b], where both a and b are 3x1 vectors (here N = 2). a 2x2 or 3x3 Real Symmetric Matrix M. Symmetric matrices are in many ways much simpler to deal with than general matrices. But the multiplication of two symmetric matrices need not be symmetric. 1) It is a square matrix (#rows=#columns) 2) The main diagonal divides the matrix into an upper triangular region and a lower triangular region and they are mirror images of one another. 4 - The Determinant of a Square Matrix. 2 Eigenvectors of circulant matrices One amazing property of circulant matrices is that the eigenvectors are always the same. The examples helps [5] 2018/01/20 05:51 Male / 50 years old. 9: A matrix A with real enties is symmetric if AT = A. number of rows is equal to number of columns. E why Example If E is any matrix (square or not), then EE EEX X is square. All structured data from the main, Property, Lexeme, and EntitySchema namespaces is available under the Creative Commons CC0 License; text in the other namespaces is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. KEYWORDS: Linear Equations The Stony Brook Algorithm Repository - Numerical Algorithms ADD. Good things happen when a matrix is similar to a diagonal matrix. matrix list b symmetric b[3,3] c1 c2 c3 displacement 3211055 mpg 227102 22249 _cons 12153 1041 52. The Inverse and Determinants of 2x2 and 3x3 Matrices For those people who need instant formulas! The general way to calculate the inverse of any square matrix, is to append a unity matrix after the matrix (i. So let’s nd the eigenvalues and eigenspaces for matrix A. Once we get the matrix P, then D = P t AP. Symmetric eigenvalue decompositions for symmetric tensors Lek-Heng Lim University of California, Berkeley January 29, 2009 (Contains joint work with Pierre Comon, Jason Morton, Bernard Mourrain, Berkant Savas) L. The eigen-values are di erent for each C, but since we know the eigenvectors they are easy to diagonalize. In the following we assume. Then we have: A is positive de nite ,D k >0 for all leading principal minors A is negative de nite ,( 1)kD k >0 for all leading principal minors A is positive semide nite , k 0 for all principal minors A is negative semide nite ,( 1)k k 0 for all principal minors In the rst two cases, it is enough to. Lim (Algebra Seminar) Symmetric tensor decompositions January 29, 2009 1 / 29. 2 Definiteness of Quadratic Forms. Get the free "Eigenvalues Calculator 3x3" widget for your website, blog, Wordpress, Blogger, or iGoogle. Joachim Kopp developed a optimized "hybrid" method for a 3x3 symmetric matrix, which relays on the analytical mathod, but falls back to QL algorithm. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. ; Calculating off-diagonal elements g i,j i > j (steps 2, 3 and 5) entails dividing some number by the last-calculated diagonal element. (34) Finally, the rank of a matrix can be defined as being the num-ber of non-zero eigenvalues of the matrix. That is, we show that the eigenvalues of A are real and that there exists an orthonormal basis of eigenvectors. The 'key' should be input as 4 numbers, e. The matrix U is called an orthogonal matrix if UTU= I. In the case of a square matrix (m = n), the transpose can be used to check if a matrix is symmetric. 0s is the only way A can become -A. For example, consider the following vector A = [a;b], where both a and b are 3x1 vectors (here N = 2). Fortran 90 package for solving linear systems of equations of the form A*x = b, where the matrix A is sparse and can be either unsymmetric, symmetric positive definite, or general symmetric. By assumption, A has full pivots, so it is non-singular. Computing the eigenvectors is the slow part for large matrices. Example of Spectral Theorem (3x3 Symmetric Matrix) Example of Spectral Decomposition; Example of Diagonalizing a Symmetric Matrix (Spectral Theorem) Course Description. Definition E EœEÞis called a if symmetric matrix X Notice that a symmetric matrix must be square ( ?). matrix list c symmetric c[3,3] c1 c2 c3 displacement 3225474 mpg 1448222 1. The eigenvalue for the 1x1 is 3 = 3 and the normalized eigenvector is (c 11 ) =(1). In particular, notice that because of the constraints for skew symmetry, this matrix only has three independent parameters. Then the eigenvalues of Aare + = a+ d 2 + s b2 + a d 2 2. The general proof of this result in Key Point 6 is beyond our scope but a simple proof for symmetric 2×2 matrices is straightforward. Introduction. Symmetric Matrix :- Square matrix that's equal to it's Transpose (A T =A) We call them symmetric because they are symmetric to main diagonal. Finding Inverse of 3x3 Matrix Examples. For example, if a matrix is being read from disk, the time taken to read the matrix will be many times greater than a few copies. 366) •eigenvectors corresponding to distinct eigenvalues are orthogonal (→TH 8. For example: It is indicated as #I_n# where #n# representes the size of the unit matrix. Matrix exponential. Properties. An antisymmetric matrix is a square matrix that satisfies the identity. linear algebra homework. [email protected] (2) This is because for any symmetric matrix, T, and any invertible matrix, N, we have T 0 i NTN> 0. All the eigenvalues are 1 and every vector is an eigenvector. Find C-1, given C = Matrix Transformations Matrices can be used to transform coordinates and objects on a Plane. Program to swap upper diagonal elements with lower diagonal elements of matrix. Find the Eigen Values for Matrix. Example 1: Determine the eigenvectors of the matrix. A small computer algebra program. The diagonal elements are always real numbers. Introduction. Return type A fenics_shells. Real number λ and vector z are called an eigen pair of matrix A, if Az = λz. Plaintext. Computes all eigenvalues of a real symmetric tridiagonal matrix, using a root-free variant of the QL or QR algorithm: sstebz, dstebz: Computes selected eigenvalues of a real symmetric tridiagonal matrix by bisection: sstein, dstein cstein, zstein: Computes selected eigenvectors of a real symmetric tridiagonal matrix by inverse iteration. Definition. xla is an addin for Excel that contains useful functions for matrices and linear Algebra: Norm, Matrix multiplication, Similarity transformation, Determinant, Inverse, Power, Trace, Scalar Product, Vector Product, Eigenvalues and Eigenvectors of symmetric matrix with Jacobi algorithm, Jacobi's rotation matrix. all integral types except bool, floating point and complex types), whereas symmetric matrices can also be block matrices (i. JavaScript Example of the Hill Cipher § This is a JavaScript implementation of the Hill Cipher. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. An other solution for 3x3 symmetric matrices can be found here (symmetric tridiagonal QL algorithm). A = [ 2 − 1 − 1 − 1 2 − 1 − 1 − 1 2]. To check whether a matrix A is symmetric or not we need to check whether A = AT or not. The matrices are symmetric matrices. To check whether a matrix A is symmetric or not we need to check whether A = A T or not. Then we have: A is positive de nite ,D k >0 for all leading principal minors A is negative de nite ,( 1)kD k >0 for all leading principal minors A is positive semide nite , k 0 for all principal minors A is negative semide nite ,( 1)k k 0 for all principal minors In the rst two cases, it is enough to. Prove that the determinant of an n × n skew-symmetric matrix is zero if n is odd. De nition 1 Let U be a d dmatrix. Matrix exponential. 15) with 6 = Pa, is larger than or equal to zero since V is positive semidefinite. I'm currently stuck on converting a 3*N x 1, where N is an integer value, vector into chunks of skew symmetric matrices. The following × matrix is symmetric: = [− −] Properties Basic properties. Furthermore, in this case there will exist n linearly independent eigenvectors for A,sothatAwill be diagonalizable. NOTES ON LINEAR ALGEBRA. Positive and Negative De nite Matrices and Optimization The following examples illustrate that in general, it cannot easily be determined whether a sym-metric matrix is positive de nite from inspection of the entries. Real number λ and vector z are called an eigen pair of matrix A, if Az = λz. This problem has been solved!. We will follow the steps given below. n maths a square matrix that is equal to its transpose, being symmetrical about its main diagonal. A small computer algebra program. Negative numbers do not. 1 The non{symmetric eigenvalue problem We now know how to nd the eigenvalues and eigenvectors of any symmetric n n matrix, no matter how large. Let A = (v, 2v, 3v). Frank Wood, [email protected] The output matrix has the form of A = [ A 11 A 12 A 13 A 21 A 22 A 23 A 31 A 32 A 33 ]. Examples and questions on matrices along with their solutions are presented. For a symmetric matrix with real number entries, the eigenvalues are real numbers and it's possible to choose a complete. [email protected] 4 Diagonal Matrix: A square matrix is called a diagonal matrix if each of its non-diagonal elements are zero (i. Learn its definition and formula to calculate for 2 by 2, 3 by 3, etc. Mathematics A matrix that is its own transpose. Suppose that n is an odd integer and let A be an n × n skew-symmetric matrix. Example 3 Suppose A is this 3x3 matrix: [1 1 0] [0 2 0] [0 –1 4]. I want to convert the last 3 dimensional vector into a skew symmetric matrix. Then compute it's determinant (which will end up being a sum of terms including four coefficients) Then to ease the computation, find the coefficient that appears in the least amount of term. circularly symmetric) Gaussian has the form: This distribution is shown in Figure 2. Matrix Multiplication: Example 3 (3x3 by 3x1) - YouTube Multiplication 3x3 by 3X1 Matrix - YouTube Multiplicación de matrices (3X2 y 2X3) - YouTube Multiplicación de matrices (3X2 y 2X3) - YouTube To help understand and master the concept of matrix mul. Disclaimer: None of these examples is mine. Give example 3X3 symmetric tridiagonal matrix? Wiki User 2011-03-28 06:56:40. A matrix may be tested to see if it is antisymmetric using the Wolfram Language function. And the result will have the same number of rows as the 1st matrix, and the same number of columns as the 2nd matrix. If A is not SPD then the algorithm will either have a zero. If a matrix contains the inverse, then it is known as invertible matrix and if the inverse of a matrix does not exist, then it is called a non. 262 POSITIVE SEMIDEFINITE AND POSITIVE DEFINITE MATRICES Proof. The output matrix has the form of A = [ A 11 A 12 A 13 A 21 A 22 A 23 A 31 A 32 A 33 ]. The matrices must all be defined on dense sets. A real matrix is called symmetric if it is equal to its own transpose. Potentially easier than installing EISPACK, LAPACK, or Gandalf if you only need this single function. 3 3-D stress state represented by axes parallel to X-Y-Z. 366) •eigenvectors corresponding to distinct eigenvalues are orthogonal (→TH 8. A matrix is diagonalizable if it is similar to a diagonal matrix. So let’s nd the eigenvalues and eigenspaces for matrix A. The next leaflets in the series will show the conditions under which we can add, subtract and multiply matrices. GAME THEORY Thomas S. AAT = 17 8 8 17. Linear Algebra: We verify the Spectral Theorem for the 3x3 real symmetric matrix A = [ 0 1 1 / 1 0 1 / 1 1 0 ]. Here we are going to see some example problems of finding inverse of 3x3 matrix examples. 1 Introduction 4. Diagonalizing a 3x3 matrix. Example 1: Consider the subset S 3x3 ( R) ⊂ M 3x3 ( R) consisting of the symmetric matrices, that is, those which equal their transpose. Now since U has orthonormal columns, it is an orthognal matrix, and hence Ut is the inverse of U. Symmetric matrix can be obtain by changing row to column and column to row. For a symmetric matrix A, A T = A. 3x3 Matrix Diagonalization Simple C++ code that finds a quaternion which diagonalizes a 3x3 matrix:. (1 Point) Give An Example Of A 3 × 3 Skew-symmetric Matrix A That Is Not Diagonal. For example, consider the following vector A = [a;b], where both a and b are 3x1 vectors (here N = 2). Let A = (v, 2v, 3v). A × A-1 = I. If I try with the svd I get different values not matching with the eigenvalues. So let’s nd the eigenvalues and eigenspaces for matrix A. The diagonal elements are always real numbers. Scroll down the page for examples and solutions. This is true because of the special case of A being a square, conjugate symmetric matrix. Homework Equations I have attached the determinant as an. In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). Example: Is this matrix diagonalizable? Problem: Let A= 2 4 6 3 8 0 2 0 1 0 3 3 5: Is matrix Adiagonalizable? Answer: By Proposition 23. Therefore, there are nonzero vectors x such that A x = x (the eigenvectors corresponding to the eigenvalue λ = −1), and there are nonzero vectors x such that A x = −2 x (the eigenvectors corresponding to the eigenvalue λ = −2). Properties. The transpose of a square matrix can be considered a mirrored version of it: mirrored over the main diagonal. Example 1: Determine the eigenvectors of the matrix. Recall some basic de nitions. Observation: Unfortunately not all symmetric matrices have distinct eigenvalues, as can be seen from the diagonal matrix with 1, 1, 2 on the main diagonal. Analogously,. Key Point The eigenvalues of a symmetric matrix with real. Solve the linear system ‘Ax = b’. Singular value decomposition (SVD) is a factorization of a rectangular matrix into three matrices, and. Transposition of PTVP shows that this matrix is symmetric. Skew-Symmetric[!] A square matrix K is skew-symmetric (or antisymmetric) if K = -K T, that is a(i,j)=-a(j,i) For real matrices, skew-symmetric and Skew-Hermitian are equivalent. A square matrix [aij] is called skew-symmetric if aij = −aji. 2 Two-part names. A determinant is a real number associated with every square matrix. Inverting a matrix turns out to be quite useful, although not for the classic example of solving a set of simultaneous equations, for which other, better, methods exist. Symmetric matrices have real eigenvalues. 716555556 • since the non-diagonal elements in this covariance matrix are positive, we should expect that both the x and y variable increase together. As a recent example, the work of Spielman and Teng [14, 15] gives algorithms to solve symmetric, diagonally dominant linear systems in nearly-linear time in the input size, a fundamental advance. So, for example, the 3x3 matrix A might be written as:. For symmetric matrices, it is necessary to store only the upper triangular half of the matrix (upper triangular format) or the lower triangular half of the matrix (lower triangular format). If there exists a square matrix B of order n such that. is also symmetric because ÐEEÑ œEE œEEÞX X X XX X The next result tells us that only a symmetric matrix "has a chance" to be orthogonally diagonalizable. Positive Pivots If a matrix has full positive pivots, then the matrix is positive definite. A skew-symmetric matrix [math]M satisfies [math]M^T=-M. Since A is symmetric, A = AT or LDU = UTDLT, so U = LT. The Jordan decomposition gives a representation of a symmetric matrix in terms of eigenvalues and eigenvectors. The algorithm works by diagonalizing 2x2 submatrices of the parent matrix until the sum of the non diagonal elements of the parent matrix is close to zero. Find more Mathematics widgets in Wolfram|Alpha. Return type A fenics_shells. matrix explicitly. The Inverse and Determinants of 2x2 and 3x3 Matrices For those people who need instant formulas! The general way to calculate the inverse of any square matrix, is to append a unity matrix after the matrix (i. In this paper, we establish a bijection between the set of mutation classes of mutation-cyclic skew-symmetric integral 3x3-matrices and the set of triples of integers (a,b,c) which are all greater than 1 and where the product of the two smaller numbers is greater than or equal to the maximal number. An n×n matrix B is called skew-symmetric if B = −BT. This is often easier than trying to specify the Hessian matrix. Today I'll talk about only the complex eigenvalues of a matrix with real numbers. For example, decrypting a coded message uses invertible matrices (see the coding page). FINDING EIGENVALUES • To do this, we find the values of λ which satisfy the characteristic equation of the matrix A, namely those values of λ for which det(A −λI) = 0,. ) (Remark 2: Given a linear system, fundamental matrix solutions are not unique. We will follow the steps given below. A matrix is an m×n array of scalars from a given field F. edu Linear Regression Models Lecture 11, Slide 25. In Example 1, the eigenvalues of this matrix were found to be λ = −1 and λ = −2. ()CD −1 52. is an eigenvector corresponding to the eigenvalue 1. In general, the angular momentum vector, , obtained from Equation (), points in a different direction to the angular velocity vector,. The two forms are equivalent as one can be transformed into the other by skew-Hadamard matrix We now describe the examples of the C(46) which di er from that of Mathon. Matrix Namespace CenterSpace. 60 • • • Chapter 1 / Systems of Linear Equations and Matrices EXAMPLE 1 Solution of a Linear System Using A−1 Consider the system of linear equations x1 + 2x2 + 3x3 = 5 2x1 + 5x2 + 3x3 = 3 + 8x3 = 17 x1 In matrix form this system can be written as Ax = b, where 1 2 3 x1 5 A = 2 5 3 , x = x2 , b = 3 1 0 8 17 x3 In Example 4 of the. The Create 3x3 Matrix block creates a 3-by-3 matrix from nine input values where each input corresponds to an element of the matrix. An antisymmetric matrix is a square matrix that satisfies the identity. It has rank n. NumPy Random Object Exercises, Practice and Solution: Write a NumPy program to normalize a 3x3 random matrix. The next leaflets in the series will show the conditions under which we can add, subtract and multiply matrices. To try out Jacobi's Algorithm, enter a symmetric square matrix below or generate one. Let A be a symmetric matrix. ()XX′′−1 (XX) 50. You can convert the skew symmetric matrix R_dot * dt into a rotation matrix using the Rodrigues formula. (where A is a symmetric matrix). Diagonalize the matrix. R = I + sin(\theta) K + (1 - cos(\theta)) K 2. Now, noting that a symmetric matrix is positive semi-definite if and only if its eigenvalues are non-negative, we see that your original approach would work: calculate the characteristic polynomial, look at its roots to see if they are non-negative. 3x3 symmetric matrix A with rank 2. Set the matrix (must be square) and append the identity matrix of the same dimension to it. Give an example of a 3 X 3 upper triangular matrix A that is not diagonal. The order, or rank, of a matrix or tensor is the number of subscripts it contains. KEYWORDS: Linear Equations The Stony Brook Algorithm Repository - Numerical Algorithms ADD. Kronenburg Abstract A method is presented for fast diagonalization of a 2x2 or 3x3 real symmetric matrix, that is determination of its eigenvalues and eigenvectors. A = 1 2 (A+AT)+ 1 2. After eliminating weakly dominated strategies, we get the following matrix:. Non-iterative method of solving for the eigenvalues and eigenvectors of a symmetric matrix defined by the components a00, a01, a02, a11, a12, a22. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. Our algorithm entails two types of calculations: Calculating diagonal elements g i,i (steps 1, 4 and 6) entails taking a square root. The diagonal elements of a skew-symmetric matrix are all 0. AB = BA = I n, then the matrix B is called an inverse of A. Let’s take an example of a matrix. 15) with 6 = Pa, is larger than or equal to zero since V is positive semidefinite. This matrix is a 3x3 matrix because it has three rows and three columns. Let A = LDU be the LDU decomposition of A. A superscript T denotes the matrix transpose operation; for example, AT denotes the transpose of A. A vector is a 1st rank tensor. However, I am failing to see how it can be done specifically for a 3x3 matrix using only row and column interchanging. Not so simple example: A = 2 4 1 0 1. Note that usually the eigenvectors are normalized to have unit length. An example of a matrix is as follows. Subtract the corresponding elements of from. the inverse of an n x n matrix See our text ( Rolf, Page 163) for a discussion of matrix inverses. phasesym Example of 3x3 skew symmetric matrix. ; For integer , is symmetric if is symmetric. An answer is here. We will now go into the specifics here, however. In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. Eigenvalues and Eigenvectors. are symmetric matrices. Let A be a symmetric matrix of order n. Video created by Universidad de Pensilvania for the course "Robotics: Aerial Robotics". Show that the set of all skew-symmetric matrices in 𝑀𝑛(ℝ) is a subspace of 𝑀𝑛(ℝ) and determine its dimension (in term of n ). is also symmetric because ÐEEÑ œEE œEEÞX X X XX X The next result tells us that only a symmetric matrix "has a chance" to be orthogonally diagonalizable. A real $(n\times n)$-matrix is symmetric if and only if the associated operator $\mathbf R^n\to\mathbf R^n$ (with respect to the standard basis) is self-adjoint (with respect to the standard inner product). I have chosen these from some book or books. 2 Eigenvectors of circulant matrices One amazing property of circulant matrices is that the eigenvectors are always the same. Then, A = A T. By convention, elements are printed in italics. Find the sum of the diagonal elements of the given N X N spiral matrix. In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). phasesym Example of 3x3 skew symmetric matrix. Eigenvalues and eigenvectors of a nonsymmetric matrix. ) Dimension is the number of vectors in any basis for the space to be spanned. The resulting diagonal matrix [Λ] contains eigenvalues along the main diagonal. The two matrices and are orthogonal matrices (,) while is a diagonal matrix. A × A-1 = I. In this example, our matrix was symmetric. A real symmetric d×d matrix M is positive semidefinite (denoted M < 0) if zTMz ≥0 for all z ∈Rd. Storage Formats for the Direct Sparse Solvers. Substituting these constraints into the matrix gives us the following general expression for a 3x3 skew-symmetric matrix. AAT = 17 8 8 17. The Jordan decomposition allows one to easily compute the power of a symmetric matrix :. circularly symmetric) Gaussian has the form: This distribution is shown in Figure 2. (where A is a symmetric matrix). ; For integer , is symmetric if is symmetric. Consider a n x n, trace free, real symmetric matrix A. The dimensions of the matrices must also agree, for example, if B is an m x n matrix, then C must be an n x p , and A must be an m x p matrix. The values of λ that satisfy the equation are the generalized eigenvalues. You can convert the skew symmetric matrix R_dot * dt into a rotation matrix using the Rodrigues formula. Homework Equations I have attached the determinant as an. Instead, we can implicitly apply the symmetric QR algorithm to ATA. Properties of Skew Symmetric Matrix Jacobis theorem. (2) The inverse of an orthogonal matrix is orthogonal. Expanding the determinant yields the characteristic equation whose roots are the eigenvalues of the problem. At each point in the ground, you get a different Hooke's law (81 component symmetric rank-3 tensor) then do a tensor contraction with the direction you are interested in to create the 3x3 Christoffel matrix, whose eigenvalues are the squares of the phase velocity of the waves (qP, qSH, qSV) in that particular direction. Read the instructions. Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes. e ( AT =−A ). It is a specific case of the more general finite element method, and was in. Eigenvalues and Eigenvectors. These matrices combine in the same way as the operations, e. I want to convert the last 3 dimensional vector into a skew symmetric matrix. first of all you need to write a c program for transpose of a matrix and them compare it with the original matrix. A matrix is symmetric if the difference between A and its transpose is less than tol. Furthermore, in this case there will exist n linearly independent eigenvectors for A,sothatAwill be diagonalizable. Basically an algorithm that gets as an input two polynoms with elements given as matrices, and builds the product polynom. For instance: M = [1. If the matrix is not symmetric, a message as well as the top of the matrix is printed. (2*2 - 7*4 = -24) Multiply by the chosen element of the 3x3 matrix. If the matrix A is symmetric then •its eigenvalues are all real (→TH 8. Invert 3x3: invert4x4: Invert 4x4: invert_symmetric: Invert symmetric: invert_hermitian: Invert hermitian: invert_positive: Invert positive definite: invert_general: Invert general matrix: is_symmetric: Return true if symmetric: is_hermitian: Return true if hermitian: is_positive: Return true if positive definite. APPLICATIONS Example 2. Example, = -5 and =5 which means. 4 – For any 3x3 symmetric game we must have. A matrix is diagonalizable if it is similar to a diagonal matrix. Includes documentation, related publications, and an FAQ. So in that way every Diagonal Matrix is Symmetric Matrix. 1) Create transpose of given matrix. We will see the importance of Hessian matrices in finding local extrema of functions of more than two variables soon, but we will first look at some examples of computing Hessian matrices. Mathematics A matrix that is its own transpose. Similarly, if A has an inverse it will be denoted by A-1. Examples: Quadratic Form Now we have seen the symmetric matrices, we can move on to the quadratic 1 5 5 8 9 −2 − 2 7 a b b c. Homework Equations I have attached the determinant as an. Two examples of symmetric matrices appear below. Matrix Approach to Linear Regression Dr. If the matrix is invertible, then the inverse matrix is a symmetric matrix. 2 Orthogonal matrix A matrix Mis orthogonal if MMT = MT M= I. Since eigenvectors for different eigenvalues of a symmetric matrix must be orthogonal, I have. Q = [(J^T) * J + aI]. As the rst step of the symmetric QR algorithm is to use Householder re ections to reduce the matrix to tridiagonal form, we can use Householder re ections to instead reduce Ato upper bidiagonal form UT 1 AV 1 = B= 2 6 6 6 6 6 4 d 1 f 1 d 2f. Many problems present themselves in terms of an eigenvalue problem: A·v=λ·v. AT = − A by definition of skew-symmetric. Examples of symmetric beams z z x I y M − = σ For the 1-D case (M y = 0) For planes of arbitrary cross-section, it is always possible to determine special y-z axes which act equivalent to planes of symmetry, and therefore allow us to apply these forms of the equations. A determinant is a real number associated with every square matrix. 1, is an eigenvalue of. Quaternion Diagonalizer(const float3x3 &A) { // A must be a symmetric matrix. Everything I can find either defines it in terms of a mathematical formula or suggests some of the uses of it. The matrix inverse is equal to the inverse of a transpose matrix. Similar Matrices and Diagonalizable Matrices S. So the 4×4 order identity or unit matrix can be written as follows: Example 2: Is the following matrix an Identity matrix? Solution:. As a flrst consequence consider the case when a = 1 and b = 0. The @MTXMUL function multiplies matrix B by matrix C and places the result in matrix A. It can be digraph, trigraph etc. Consider asan example the 3x3 diagonal matrix D belowand a general 3 elementvector x. The algorithm works by diagonalizing 2x2 submatrices of the parent matrix until the sum of the non diagonal elements of the parent matrix is close to zero. Physics 116A Solutions to Homework Set #7 Winter 2012 1. Eigenvalues and eigenvectors of a nonsymmetric matrix. Example: Find the eigenvalues and eigenvectors of the real symmetric (special case of Hermitian) matrix below. • A+ AT must be symmetric. A A real symmetric matrix [A] can be diagonalized (converted to a matrix with zeros for all elements off the main diagonal) by pre-multiplying by the inverse of the matrix of its eigenvectors and post-multiplying by the matrix of its eigenvectors. This is useful in the the calculus of several variables since Hessian matrices are always symmetric. JavaScript Example of the Hill Cipher § This is a JavaScript implementation of the Hill Cipher. Perhaps the most important and useful property of symmetric matrices is that their eigenvalues behave very nicely. I am looking for a very fast and efficient algorithm for the computation of the eigenvalues of a 3x3 symmetric positive definite matrix.
0n62db4dco4yc5, jjipz46zupq, jfhq142basgxkx, pgkw94xs00, fd6v3i2pdo, uvow2mo10o8xw, oen8bo4ops7vn, bpe1viu5agl6e6, s12zdu87azh, apb8i0pigfe, 2n5kruc0u9fh7p, vp7az9ts96, cjck7rxbawx1k, b1mpqz4nbfj5if, htqpxiv29dt2opz, e124hz0pe987x, a390527keno, ay931tdglqoo, 5ows4js1pzu, jq34d57aq6qnzzu, oelwiysgqees, mt7l2te3l0o116u, 8c9vvg1f2c, 8y9hcsjavl2kmj, vblk8w8yijn, exuk5fqiky, 4oc0p8kqq2w, qdb40ths18hw, 7izoh6czbfq, 37zwui5ww158, q3yimtjzd5 | 2020-08-12 22:09:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7839180827140808, "perplexity": 471.4047851239216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00293.warc.gz"} |
http://clay6.com/qa/19651/which-of-the-following-molecule-exhibits-the-largest-bond-angle- | # Which of the following molecule exhibits the largest bond angle?
$\begin{array}{1 1}(a)\;Angle\; F-B-F\; in\; BF_3&(b)\;Angle\; F-Be-F\; in\; BeF_2\\(c)\; Angle\; H-O-H \;in\; H_2O&(d)\; Angle\; Cl-C-Cl\; in\; CHCl_3\end{array}$
Angle of F-Be-F in $BeF_2$ is the largest bond angle of the four, i.e., $180^{\large\circ}$.
Hence (b) is the correct answer.
edited Apr 2, 2014 | 2018-03-23 22:44:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4881639778614044, "perplexity": 2858.431207595418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649095.35/warc/CC-MAIN-20180323220107-20180324000107-00473.warc.gz"} |
https://math.stackexchange.com/questions/2017716/show-that-every-compact-subspace-of-a-metric-space-is-bounded-in-that-metric-and | # Show that every compact subspace of a metric space is bounded in that metric and is closed [duplicate]
For the closed part, I just noted that it's a compact subspace of a Hausdorff space and therefore it's closed. For the bounded part, I know intuitively that since every open cover has a finite subcover, I just have to take the largest ball that includes all these covers, but I don't know how to write it rigorously. I know they'd all fit a large ball...
I also must find a metric space in which not every closed subspace is compact. Which is an example of a metric space in which not every closed is compact? Because once I know that, I could just take the metric $0,1$
## marked as duplicate by user228113, Claude Leibovici, user91500, E. Joseph, астон вілла олоф мэллбэргNov 17 '16 at 9:45
• Are you unable to use the general Heine-Borel theorem? – A.Riesen Nov 17 '16 at 1:14
• Well, in $\Bbb R$ there are closed subsets which are not compact. For instance $\Bbb R$ itself (since it's not bounded). – user228113 Nov 17 '16 at 1:14
• Compact imply finite subcover for every open cover. Then for the cover $\bigcup \Bbb B(x,\epsilon)$ for all $x$ in the compact set this imply that the set is bounded. – Masacroso Nov 17 '16 at 1:20
• @G.Sassatelli but I needed a bounded one :c – Guerlando OCs Nov 17 '16 at 1:23
• @GuerlandoOCs You did not mention it, though. Pick your favourite closed interval of $\Bbb Q$. – user228113 Nov 17 '16 at 1:28
Let $S$ be the compact set. Pick any point $x \in S$. Now consider the cover $\{B(x, n) \ | \ n \in \mathbb{N} \}$, where $B(x, n)$ denotes the open ball centered at $x$ of radius $n$. Notice that $S$ is contained inside the largest $B(x, k)$ in the finite subcover this cover admits. | 2019-07-21 17:46:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.853012204170227, "perplexity": 173.94721525894465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527089.77/warc/CC-MAIN-20190721164644-20190721190644-00036.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/4/lesson/4.3.1/problem/4-113 | Home > APCALC > Chapter 4 > Lesson 4.3.1 > Problem4-113
4-113.
Without graphing, analytically determine where the function $f\left(x\right) = x^{3} – 7x^{2} + 15x – 2$ is increasing. Check your answer with a graph.
If a function is increasing, then its slopes are positive.
FInd where the derivative is greater than $0$. | 2020-08-06 22:26:18 | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7437170743942261, "perplexity": 1171.5882173852715}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737039.58/warc/CC-MAIN-20200806210649-20200807000649-00357.warc.gz"} |
https://answerofmath.com/solved-making-a-prediction-using-fixed-effects/ | # Solved – Making a prediction using Fixed Effects
I have a simple data set for which I applied a simple linear regression model. Now I would like to use fixed effects to make a better prediction on the model. I know that I could also consider making dummy variables, but in reality is my data over several years and has more variables so I would like to avoid making dummies.
My data and code is similar to this:
``data <- read.table(header = TRUE, stringsAsFactors = FALSE, text="CompanyNumber ResponseVariable Year ExplanatoryVariable1 ExplanatoryVariable2 1 2.5 2000 1 2 1 4 2001 3 1 1 3 2002 5 7 2 1 2000 3 2 2 2.4 2001 0 4 2 6 2002 2 9 3 10 2000 8 3") library(lfe) fe <- getfe(felm(data = data, ResponseVariable ~ ExplanatoryVariable1 + ExplanatoryVariable2 | Year)) fe lm.1<-lm(ResponseVariable ~ ExplanatoryVariable1 + ExplanatoryVariable2, data=data) prediction<- predict(lm.1, data) prediction check_model=postResample(pred = prediction, obs = data\$ResponseVariable) check_model ``
For my real dataset I will make a prediction based on my test set but for simplicity I just use the trainingset here as well.
I would like to make a prediction with the help of the fixed effects that I found. But it does not seem to match the fixed effect right, anyone who knows how to use this `fe\$effects`?
``prediction_fe<- predict(lm.1, data) + fe\$effect ``
Contents
``library(lfe) ##data d <- read.table(header = TRUE, stringsAsFactors = FALSE, text="CompanyNumber ResponseVariable Year ExplanatoryVariable1 ExplanatoryVariable2 1 2.5 2000 1 2 1 4 2001 3 1 1 3 2002 5 7 2 1 2000 3 2 2 2.4 2001 0 4 2 6 2002 2 9 3 10 2000 8 3") ##regression e<-felm(data = d, ResponseVariable ~ ExplanatoryVariable1 + ExplanatoryVariable2 | Year) ##fixed effects data d.fe<-getfe(e) ##prediction sample p<-d #could be a different sample, but with the same covariates #add columns on fixed effects p<-merge(p,d.fe[d.fe$$fe=="Year",],by.x="Year",by.y="idx",all.x=T) names(p)[grep("^effect$$",names(p))]<-"effect.Year" # if you have more than one fixed effect, # you should continue here, adapting the two lines above. eg. fixed effects on ComanyNumber #reorder p<-p[order(p$$CompanyNumber,p$$Year),] ##predict #coefficients: predicted.values<- as.matrix(p[,rownames(e$$coefficients)]) %*% (e$$coefficients) + # covariates * coefficients p\$effect.Year # fixed effects from years ##test round(predicted.values + e$$residuals- p$$ResponseVariable,6) # only works if the order of all observerations conincide ``
Note, that the data object name is now `d` not `data`, to avoide confusion. | 2023-03-27 20:47:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5475249290466309, "perplexity": 297.5371918048291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00184.warc.gz"} |
https://drserendipity.com/notes/notes_by_subjects/artificial_intelligence/deep-reinforcement-learning/3-policy-based-methods/3-6-deep-rl-for-finance-optional/6-m3l607-optimization-sc-pt2-v1/ | 6 – M3L607 Optimization SC PT2 V1
Let’s begin by understanding market impact. Here we have the price of a single stock over a period of time. Market impact is the effect that a market participant has when he buys or sells a number of stocks. Since the optimal liquidation problem only deals with selling stocks, for the rest of this lesson we will only focus on what happens to the stock price when we sell stocks. So, let’s see what happens when we sell a stock. Let’s suppose I sell a particular number of stocks right here. In general, when you sell a stock, its price decreases. In our model, we will assume that the price drops due to two factors. Temporary market impact and permanent market impact, which we’ll talk about in detail in a later lesson. What we see here is the permanent market impact on the stock price due to my action of selling. In this example, my action of selling has caused the stock price to drop from $100 to$90. This permanent impact occurs every time I sell a stock. So, if I sell the same number of stocks right here, then I will see the same market impact as before and similarly, if I sell again here and here. As we can see, by selling stocks in this particular fashion, the stock price has decreased from $100 to$60 due to the induced permanent market impact. Since the stock price decreases every time we sell, this means that we’re going to lose money. Let’s take a look at a concrete example to see how much money we would have lost by selling in this fashion. | 2023-02-03 04:20:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5812299847602844, "perplexity": 392.91145383186654}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00063.warc.gz"} |
http://talkstats.com/threads/multiphase-survival-estimation-data-modeling-collinearity-stata.26459/ | # Multiphase Survival Estimation - Data Modeling & Collinearity (Stata)
#### Basti
##### New Member
Dear Forum Members and Statistics Ballers!
I am currently trying to model a survival function for entrepreneurs, and would like to specifically test the impact of covariates during certain phases of the endeavor. Simply: Does a Covariate have a stronger or weaker effect during any of the phases?
Phase Indicators are dummys, indicating whether or not a phase is active. Other covariates are continuous or categorical and seem to work without the interaction.
My data looks as follows:
Code:
SAMPID _j COV1 COV2 COV3 PHASE1 PHASE2 PHASE3 PHASE4 _d _t
000001 1 4 2 1 1 0 0 0 0 0
000001 2 4 2 1 0 1 0 0 0 10
000001 3 4 2 1 0 0 1 0 1 20
000001 4 4 2 1 0 0 0 1 0 30
000002 1 7 1 9 1 0 0 0 0 0
000002 2 7 1 9 0 1 0 0 0 15
000002 4 7 1 9 0 0 0 1 0 40
My regression model is this:
Code:
streg COV1 COV2 COV3 c.COV1#b(0).PHASE1 c.COV2#b(0).PHASE1 c.COV3#b(0)PHASE1, dist(weibull)
However, I seem to have made some mistake in the modeling process, as I get this output:
Code:
------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
COV1 | .9743074 .00669 -3.79 0.000 .961283 .9875083
COV2 | 1.031272 1.07964 0.03 0.977 .1325103 8.025958
COV3 | .4930461 .2154297 -1.62 0.106 .2093952 1.160936
|
COV1# |
PHASE1 | 1.023131 428.6583 0.00 1.000 0 .
COV2# |
PHASE1 | 1.023222 199.8965 0.00 1.000 5.2e-167 2.0e+166
COV3# |
PHASE1 | .9726358 126.1677 -0.00 1.000 3.7e-111 2.5e+110
This means none of my interaction covariates captures anything. (I know that some of my direct effect covariates are extremely insignificant, but that is ok …)
Can someone please point out to me where exactly my problem is, and how I might be able to model a multiphase parametric survival model? I have until now been unsuccessful in finding such information.
Thank you very much!
Basti
Last edited: | 2022-08-17 10:59:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184390544891357, "perplexity": 1792.3049505364927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572898.29/warc/CC-MAIN-20220817092402-20220817122402-00689.warc.gz"} |
http://www.mathwords.com/i/indirect_proof.htm | index: click on a letter A B C D E F G H I J K L M N O P Q R S T U V W X Y Z A to Z index index: subject areas numbers & symbols sets, logic, proofs geometry algebra trigonometry advanced algebra & pre-calculus calculus advanced topics probability & statistics real world applications multimedia entries
Proof by Contradiction Indirect Proof Proving a conjecture by assuming that the conjecture is false. If this assumption leads to a contradiction, the original conjecture must have been true. This technique employs the logical method known as modus tolens. See also | 2017-09-19 16:57:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469641208648682, "perplexity": 369.28836763249046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685912.14/warc/CC-MAIN-20170919164651-20170919184651-00031.warc.gz"} |
https://www.physicsforums.com/threads/rings-problems.385752/ | # Rings problems
1.Let R be a ring such that Z ⊂ R ⊂ Q. Show that R is a principal ideal domain.
We show that Z is a principal ideal domain, so every ideal in Z which is also in R is principal. But I'm not sure how to use that R is contained in Q.
2. Proof that X^4+1 is reducible in Z/pZ [X] for every prime p.
I have no clue for this one at all.
Could anyone please offer some insights to either of the above problems? Any help is greatly appreciated!
First convince yourself that $$x^m-1$$ divides $$x^n-1$$ if $$m\mid n$$ (think about roots of unity). If you could show that your polynomial divides a different polynomial, one with all its roots in an extension of degree less than 4, what would that mean? | 2021-05-18 02:43:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4913783371448517, "perplexity": 208.85040224492192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00491.warc.gz"} |
https://doc.sagemath.org/html/en/reference/modules/sage/modules/fp_graded/free_module.html | Let $$A$$ be a connected graded algebra. Some methods here require in addition that $$A$$ be an algebra over a field or a PID and that Sage has a description of a basis for $$A$$.
For example, let $$p$$ be a prime number. The mod $$p$$ Steenrod algebra $$A_p$$ is a connected algebra over the finite field of $$p$$ elements. Many of the modules presented here will be defined over $$A_p$$, or one of its sub-Hopf algebras. E.g.:
sage: A = SteenrodAlgebra(p=2)
However, the current implementation can use any connected graded algebra that has a graded basis where each graded part is finite dimensional. Another good family is the exterior algebras:
sage: E.<x,y,z> = ExteriorAlgebra(QQ)
A free module is defined by the graded algebra and an ordered tuple of degrees for the generators:
sage: M
Free graded left module on 2 generators over
mod 2 Steenrod algebra, milnor basis
sage: F
Free graded left module on 3 generators over
The exterior algebra of rank 3 over Rational Field
The resulting free modules will have generators in the degrees as specified:
sage: M.generator_degrees()
(0, 1)
sage: F.generator_degrees()
(0, 3, 6)
The default names for the generators are g[degree] if they are in distinct degrees, g[degree, i] otherwise. They can be given other names, as was done when creating the module F:
sage: M.generators()
(g[0], g[1])
sage: F.generators()
(a, b, c)
The connectivity of a module over a connected graded algebra is the minimum degree of all its module generators. Thus, if the module is non-trivial, the connectivity is an integer:
sage: M.connectivity()
0
## Module elements#
For an $$A$$-module with generators $$\{g_i\}_{i=1}^N$$, any homogeneous element of degree $$n$$ has the form
$x = \sum_{i=1}^N a_i\cdot g_i\,,$
where $$a_i\in A_{n-\deg(g_i)}$$ for all $$i$$. The ordered set $$\{a_i\}$$ is referred to as the coefficients of $$x$$.
You can produce module elements from a given set of coefficients:
sage: coeffs = [Sq(5), Sq(1,1)]
sage: x = M(coeffs); x
Sq(5)*g[0] + Sq(1,1)*g[1]
You can also use the module action:
sage: Sq(2) * x
(Sq(4,1)+Sq(7))*g[0] + Sq(3,1)*g[1]
Each non-zero element has a well-defined degree:
sage: x.degree()
5
However the zero element does not:
sage: zero = M.zero(); zero
0
sage: zero.degree()
Traceback (most recent call last):
...
ValueError: the zero element does not have a well-defined degree
Any two elements can be added as long as they are in the same degree:
sage: y = M.an_element(5); y
Sq(2,1)*g[0] + Sq(4)*g[1]
sage: x + y
(Sq(2,1)+Sq(5))*g[0] + (Sq(1,1)+Sq(4))*g[1]
or when at least one of them is zero:
sage: x + zero == x
True
sage: x - x
0
For every integer $$n$$, the set of module elements of degree $$n$$ form a free module over the ground ring $$k$$. A basis for this free module can be computed:
sage: M.basis_elements(5)
(Sq(2,1)*g[0], Sq(5)*g[0], Sq(1,1)*g[1], Sq(4)*g[1])
together with a corresponding free module presentation:
sage: M.vector_presentation(5)
Vector space of dimension 4 over Finite Field of size 2
Given any element, its coordinates with respect to this basis can be computed:
sage: v = x.vector_presentation(); v
(0, 1, 1, 0)
Going the other way, any element can be constructed by specifying its coordinates:
sage: x_ = M.element_from_coordinates((0,1,1,0), 5)
sage: x_
Sq(5)*g[0] + Sq(1,1)*g[1]
sage: x_ == x
True
## Module homomorphisms#
Homomorphisms of free graded $$A$$-modules $$M\to N$$ are linear maps of their underlying free $$k$$-module which commute with the $$A$$-module structure.
To create a homomorphism, first create the object modeling the set of all such homomorphisms using the free function Hom:
sage: homspace = Hom(M, N); homspace
Set of Morphisms from Free graded left module on 2 generators
over mod 2 Steenrod algebra, milnor basis
to Free graded left module on 1 generator
over mod 2 Steenrod algebra, milnor basis
in Category of finite dimensional graded modules with basis
over mod 2 Steenrod algebra, milnor basis
Just as module elements, homomorphisms are created using the homspace object. The only argument is a list of module elements in the codomain, corresponding to the module generators of the domain:
sage: values = [Sq(2)*c2, Sq(2)*Sq(1)*c2]
sage: f = homspace(values)
The resulting homomorphism is the one sending the $$i$$-th generator of the domain to the $$i$$-th codomain value given:
sage: f
Module morphism:
From: Free graded left module on 2 generators over mod 2 Steenrod algebra, milnor basis
To: Free graded left module on 1 generator over mod 2 Steenrod algebra, milnor basis
Defn: g[0] |--> Sq(2)*c2
g[1] |--> (Sq(0,1)+Sq(3))*c2
Convenience methods exist for creating the trivial morphism:
sage: homspace.zero()
Module morphism:
From: Free graded left module on 2 generators over mod 2 Steenrod algebra, milnor basis
To: Free graded left module on 1 generator over mod 2 Steenrod algebra, milnor basis
Defn: g[0] |--> 0
g[1] |--> 0
as well as the identity endomorphism:
sage: Hom(M, M).identity()
Module endomorphism of Free graded left module on 2 generators over mod 2 Steenrod algebra, milnor basis
Defn: g[0] |--> g[0]
g[1] |--> g[1]
Homomorphisms can be evaluated on elements of the domain module:
sage: v1 = f(Sq(7)*M.generator(0)); v1
Sq(3,2)*c2
sage: v2 = f(Sq(17)*M.generator(1)); v2
(Sq(11,3)+Sq(13,0,1)+Sq(17,1))*c2
and they respect the module action:
sage: v1 == Sq(7)*f(M.generator(0))
True
sage: v2 == Sq(17)*f(M.generator(1))
True
Any non-trivial homomorphism has a well-defined degree:
sage: f.degree()
4
but just as module elements, the trivial homomorphism does not:
sage: zero_map = homspace.zero()
sage: zero_map.degree()
Traceback (most recent call last):
...
ValueError: the zero morphism does not have a well-defined degree
Any two homomorphisms can be added as long as they are of the same degree:
sage: f2 = homspace([Sq(2)*c2, Sq(3)*c2])
sage: f + f2
Module morphism:
From: Free graded left module on 2 generators over mod 2 Steenrod algebra, milnor basis
To: Free graded left module on 1 generator over mod 2 Steenrod algebra, milnor basis
Defn: g[0] |--> 0
g[1] |--> Sq(0,1)*c2
or when at least one of them is zero:
sage: f + zero_map == f
True
sage: f - f == 0
True
The restriction of a homomorphism to the free module of $$n$$-dimensional module elements is a linear transformation:
sage: f_4 = f.vector_presentation(4); f_4
Vector space morphism represented by the matrix:
[0 1 0]
[1 1 1]
[0 1 0]
[0 0 0]
Domain: Vector space of dimension 4 over Finite Field of size 2
Codomain: Vector space of dimension 3 over Finite Field of size 2
This is compatible with the vector presentations of its domain and codomain modules:
sage: f.domain() is M
True
sage: f.codomain() is N
True
sage: f_4.domain() is M.vector_presentation(4)
True
sage: f_4.codomain() is N.vector_presentation(4 + f.degree())
True
AUTHORS:
• Robert R. Bruner, Michael J. Catanzaro (2012): Initial version.
• Sverre Lunoee–Nielsen and Koen van Woerden (2019-11-29): Updated the original software to Sage version 8.9.
• Sverre Lunoee–Nielsen (2020-07-01): Refactored the code and added new documentation and tests.
Create a finitely generated free graded module over a connected graded algebra, with generators in specified degrees.
INPUT:
• algebra – the graded connected algebra over which the module is defined; this algebra must be equipped with a graded basis
• generator_degrees – tuple of integers defining the number of generators of the module, and their degrees
• names – optional, the names of the generators. If names is a comma-separated string like 'a, b, c', then those will be the names. Otherwise, for example if names is abc, then the names will be abc(d,i).
By default, if all generators are in distinct degrees, then the names of the generators will have the form g_{d} where d is the degree of the generator. If the degrees are not distinct, then the generators will be called g_{d,i} where d is the degree and i is its index in the list of generators in that degree.
EXAMPLES:
sage: E.<x,y,z> = ExteriorAlgebra(QQ)
sage: M
Free graded left module on 2 generators over
The exterior algebra of rank 3 over Rational Field
sage: M.generator_degrees()
(-1, 3)
sage: a, b = M.generators()
sage: (x*y*b).degree()
5
names of generators:
sage: M.generators()
(g[-1], g[3])
(g[0, 0], g[0, 1], g[2, 0])
sage: FreeGradedModule(E, (0, 0, 2), names='x, y, z').generators()
(x, y, z)
sage: FreeGradedModule(E, (0, 0, 2), names='xyz').generators()
(xyz[0, 0], xyz[0, 1], xyz[2, 0])
names can also be defined implicitly using Sage’s M.<...> syntax:
sage: A = SteenrodAlgebra(2)
sage: M
Free graded left module on 3 generators over
mod 2 Steenrod algebra, milnor basis
sage: M.gens()
(x, y, z)
Element#
an_element(n=None)#
Return an element of self.
This function chooses deterministically an element of the module in the given degree.
INPUT:
• n – (optional) the degree of the element to construct
OUTPUT:
An element (of the given degree if specified).
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.an_element(172)
Sq(0,0,2,0,1,0,1)*g[0] + Sq(0,4,0,0,1,0,1)*g[2] + Sq(7,1,0,0,1,0,1)*g[4]
Zero is the only element in the trivial module:
0
basis_elements(n)#
Return a basis for the free module of degree n module elements.
Note
Suppose self is a module over the graded algebra $$A$$ with base ring $$R$$. This returns a basis as a free module over $$R$$, not a basis as a free module over $$A$$.
INPUT:
• n – an integer
OUTPUT:
A sequence of homogeneous module elements of degree n, which is a basis for the free module of all degree n module elements.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.<m0, m2, m4> = A.free_graded_module((0,2,4))
sage: M.basis_elements(8)
(Sq(1,0,1)*m0,
Sq(2,2)*m0,
Sq(5,1)*m0,
Sq(8)*m0,
Sq(0,2)*m2,
Sq(3,1)*m2,
Sq(6)*m2,
Sq(1,1)*m4,
Sq(4)*m4)
change_ring(algebra)#
Change the base ring of self.
INPUT:
• algebra – a connected graded algebra
OUTPUT:
The free graded module over algebra defined with the same number of generators of the same degrees as self.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: A2 = SteenrodAlgebra(2, profile=(3,2,1))
sage: N = M.change_ring(A2); N
Free graded left module on 2 generators over sub-Hopf algebra of
mod 2 Steenrod algebra, milnor basis, profile function [3, 2, 1]
Changing back yields the original module:
sage: N.change_ring(A) is M
True
connectivity()#
The connectivity of self.
OUTPUT:
An integer equal to the minimal degree of all the generators, if this module is non-trivial. Otherwise, $$+\infty$$.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.connectivity()
-2
element_from_coordinates(coordinates, n)#
The module element of degree n having the given coordinates with respect to the basis of module elements given by basis_elements().
INPUT:
• coordinates – a sequence of elements of the ground ring
• n – an integer
OUTPUT:
A module element of degree n.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: x = M.element_from_coordinates((0,1,0,1), 5); x
Sq(5)*g[0] + Sq(4)*g[1]
sage: basis = M.basis_elements(5)
sage: y = 0*basis[0] + 1*basis[1] + 0*basis[2] + 1*basis[3]
sage: x == y
True
sage: M.element_from_coordinates((0,0,0,0), 5)
0
gen(index)#
Return the module generator with the given index.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.generator(0)
g[0]
sage: M.generator(1)
g[2]
sage: M.generator(2)
g[4]
generator(index)#
Return the module generator with the given index.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.generator(0)
g[0]
sage: M.generator(1)
g[2]
sage: M.generator(2)
g[4]
generator_degrees()#
The degrees of the module generators.
OUTPUT:
A tuple containing the degrees of the generators for this module, in the order that the generators were given when self was constructed.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.generator_degrees()
(-2, 2, 4)
generators()#
Return all the module generators.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.generators()
(g[-2], g[1])
has_relations()#
Return False as this has no relations.
This is for compatibility with FPModule.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: F.has_relations()
False
is_trivial()#
Return True if this module is trivial and False otherwise.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
False
True
minimal_presentation(top_dim=None, verbose=False)#
Return a minimal presentation of self.
OUTPUT:
The identity morphism as self is free.
EXAMPLES:
sage: A2 = SteenrodAlgebra(2)
sage: M.minimal_presentation().is_identity()
True
relations()#
Return the relations of self, which is ().
This is for compatibility with FPModule.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: F.relations()
()
resolution(k, top_dim=None, verbose=False)#
Return a free resolution of self of length k.
Since self is free, the initial map in the resolution will be the identity, and the rest of the maps will be zero.
INPUT:
OUTPUT:
A list of homomorphisms $$[1_M, 0, 0, \ldots, 0]$$ consisting of the identity map on this module followed by zero maps. Other than this module, the other modules in the resolution will be zero.
EXAMPLES:
sage: E.<x,y,z> = ExteriorAlgebra(QQ)
sage: M.resolution(0)
[Module endomorphism of Free graded left module on 2 generators over The exterior algebra of rank 3 over Rational Field
Defn: g[1] |--> g[1]
g[2] |--> g[2]]
sage: M.resolution(1)
[Module endomorphism of Free graded left module on 2 generators over The exterior algebra of rank 3 over Rational Field
Defn: g[1] |--> g[1]
g[2] |--> g[2],
Module morphism:
From: Free graded left module on 0 generators over The exterior algebra of rank 3 over Rational Field
To: Free graded left module on 2 generators over The exterior algebra of rank 3 over Rational Field]
sage: M.resolution(4)
[Module endomorphism of Free graded left module on 2 generators over The exterior algebra of rank 3 over Rational Field
Defn: g[1] |--> g[1]
g[2] |--> g[2],
Module morphism:
From: Free graded left module on 0 generators over The exterior algebra of rank 3 over Rational Field
To: Free graded left module on 2 generators over The exterior algebra of rank 3 over Rational Field,
Module endomorphism of Free graded left module on 0 generators over The exterior algebra of rank 3 over Rational Field,
Module endomorphism of Free graded left module on 0 generators over The exterior algebra of rank 3 over Rational Field,
Module endomorphism of Free graded left module on 0 generators over The exterior algebra of rank 3 over Rational Field]
suspension(t)#
Suspend self by the given degree t.
INPUT:
• t – an integer
OUTPUT:
A module which is isomorphic to this module by a shift of degrees by the integer t.
EXAMPLES:
sage: A = SteenrodAlgebra(2)
sage: M.suspension(4).generator_degrees()
(4, 6, 8)
sage: M.suspension(-4).generator_degrees()
(-4, -2, 0)
vector_presentation(n)#
Return a free module over the ground ring of the module algebra isomorphic to the degree n elements of self.
Let $$\mathcal{k}$$ be the ground ring of the algebra over this module is defined, and let $$M_n$$ be the free module of module elements of degree n.
The return value of this function is the free module $$\mathcal{k}^{r}$$ where $$r = dim(M_n)$$.
The isomorphism between $$k^{r}$$ and $$M_n$$ is given by the bijection taking the standard basis element $$e_i$$ to the $$i$$-th element of the array returned by basis_elements().
INPUT:
• n – an integer degree
OUTPUT:
A free module over the ground ring of the algebra over which self is defined, isomorphic to the free module of module elements of degree n.
EXAMPLES:
sage: A1 = SteenrodAlgebra(2, profile=[2,1]) | 2022-12-05 17:05:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5619652271270752, "perplexity": 1897.8902660461092}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00069.warc.gz"} |
https://deepgraph.readthedocs.io/en/latest/generated/deepgraph.deepgraph.DeepGraph.plot_map_generator.html | # deepgraph.deepgraph.DeepGraph.plot_map_generator¶
DeepGraph.plot_map_generator(lon, lat, by, edges=False, C=None, C_split_0=None, kwds_basemap=None, kwds_scatter=None, kwds_quiver=None, kwds_quiver_0=None, passable_ax=False)
Plot nodes and corresponding edges by groups, on basemaps.
Create a generator of scatter plots of the nodes in v, split in groups by v.groupby(by), on a mpl_toolkits.basemap.Basemap instance. If edges is set True, also create a quiver plot of each group’s corresponding edges.
The coordinates of the scatter plots are determined by the node’s longitudes and latitudes (in degrees): v[lon] and v[lat], where lon and lat are column names of v (the arrow’s coordinates are determined automatically).
In order to map colors to the arrows, either C or C_split_0 can be be passed, an array of the same length as e. Passing C creates a single quiver plot (qu). Passing C_split_0 creates two separate quiver plots, one for all edges where C_split_0 == 0 (qu_0), and one for all other edges (qu). By default, the arrows of qu_0 have no head, indicating “undirected” edges. This can be useful, for instance, when C_split_0 represents an array of temporal distances.
When mapping colors to arrows by setting C (or C_split_0), clim is automatically set to the min and max values of the entire array. In case one wants clim to be set to min and max values for each group’s colors, one may explicitly pass clim = None to kwds_quiver.
The same behaviour occurs when passing a sequence of g.n Numbers as colors c to kwds_scatter. In that case, vmin and vmax are automatically set to c.min() and c.max() of all nodes. Explicitly setting vmin and vmax to None, the min and max values of the groups’ color arrays are used.
In order to control the parameters of the basemap, scatter, quiver and/or quiver_0 plots, one may pass keyword arguments by setting kwds_basemap, kwds_scatter, kwds_quiver and/or kwds_quiver_0.
If passable_ax is True, create a generator of functions. Each function takes a matplotlib axes object (and/or a Basemap object) as input, and returns a scatter/quiver plot.
Parameters: lon (int or str) – A column name of v. The corresponding values must be longitudes in degrees. lat (int or str) – A column name of v. The corresponding values must be latitudes in degrees. by (array_like) – Column name(s) of v, determining the groups to create plots of. edges (bool, optional (default=True)) – Whether to create a quiver plot (2-D field of arrows) of the edges between the nodes. C (array_like, optional (default=None)) – An optional array used to map colors to the arrows. Must have the same length es e. Has no effect if C_split_0 is passed as an argument. C_split_0 (array_like, optional (default=None)) – An optional array used to map colors to the arrows. Must have the same length es e. If this parameter is passed, C has no effect, and two separate quiver plots are created (qu and qu_0). kwds_basemap (dict, optional (default=None)) – kwargs passed to basemap. kwds_scatter (dict, optional (default=None)) – kwargs to be passed to scatter. kwds_quiver (dict, optional (default=None)) – kwargs to be passed to quiver (qu). kwds_quiver_0 (dict, optional (default=None)) – kwargs to be passed to quiver (qu_0). Only has an effect if C_split_0 has been set. passable_ax (bool, optional (default=False)) – If True, return a generator of functions. Each function takes a matplotlib axes object (and/or a Basemap object) as input, and returns a dict of matplotlib objects. obj – If C_split_0 has been passed, return a generator of dicts of matplotlib objects with the following keys: [‘fig’, ‘ax’, ‘m’, ‘pc’, ‘qu’, ‘qu_0’, ‘group’]. Otherwise, return a generator of dicts with keys: [‘fig’, ‘ax’, ‘m’, ‘pc’, ‘qu’, ‘group’]. If passable_ax is True, return a generator of functions. Each function takes a matplotlib axes object (and/or a Basemap object) as input, and returns a dict as described above. generator
Notes
When passing C_split_0, the color of the arrows in qu_0 can be set by passing the keyword argument color to kwds_quiver_0. The color of the arrows in qu, however, are determined by C_split_0.
The default drawing order is set to: 1. quiver_0 (zorder=1) 2. quiver (zorder=2) 3. scatter (zorder=3) This order can be changed by setting the zorder in kwds_quiver_0, kwds_quiver and/or kwds_scatter. See also http://matplotlib.org/examples/pylab_examples/zorder_demo.html | 2020-08-05 06:59:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23806504905223846, "perplexity": 7088.335906437009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00372.warc.gz"} |
https://www.lil-help.com/questions/107420/merced-by-danielle-ofri-and-the-case-history-by-dannie-abse-1186-words | Merced" by Danielle Ofri and the "Case History" by Dannie Abse [1186 Words]
# Merced" by Danielle Ofri and the "Case History" by Dannie Abse [1186 Words]
A
0 points
A+ Solution
Class- assignment and case, analyze the essay between "Merced" by Danielle Ofri and the case history by Dannie Abse. Please answer questions by question according to the following below:
(Focus on the task; Close textual analysis is the key)
1. Theme: Choose two works which are "MERCED" by Danielle Ofri and the "Case history" by Dannie Abse for extended analysis. Write down what moves you about each piece. What do you think the key themes are in each text? What do you think each author is saying?
2. Structure: What methods does each author use to achieve his or her ends? Are there any symbols or special images used to help convey the complexity of the meaning? (Continue with the same two texts that you used in question 1)
3. What are the most impressive lines in each text? Why have you singled these lines out as being particularly important?
4. Consider Jack Coulehan's poem, "I'm Gonna Slap Those Doctors" ( in additional materials) -- Which Ofri says she read aloud to a patient at Bellevue. What is the patient's perspective of the doctors in this poem? Why do you think Ofri chose to read this poem to her patient?
5. What are the uses of poetry and storytelling in patient care?
6. What does Ofri learn about herself in "Merced" ( as a physician, as a writer as a sentient human being)?
Task 2. From the two texts between "Merced" by Danielle Ofri and the "Case history" by Dannie Abse. We will working on compare works, nothing similar themes and differing or similar authorial perspectives. For the first question: Choose point of comparison among the works and focus on a prevailing theme.
For the second question: Refer to another work read this term as a point of comparison with at least one of the works on doctors' and nurses' perspectives. Think about the heeling uses of poetry and of language itself.
1. Whether or not you are a health care provider, you have all doubtless had some kind of experience with the medical establishment, Choose one experience, personality, situation, encounter to study and analyze. Your choice can come from your own personal life or from a patient you work with professionally.
2. Identify who this patient is in relation to you (self, family member, patient of your, etc.) and what the nature of his/her illness is or has been. Provide essential background on the nature of this case: type of diagnosis, mitigating circumstances and contexts; duration of illness; medical professionals involved in treatment; setting in which treatment occurs; patient responses; outlook for recovery; and your own assessment of the course of the illness and the nature of the treatment. Note: more to come online. For now, just choose what and who your case will be.
Merced" by
a+tutor
A
0 points
#### Oh Snap! This Answer is Locked
Thumbnail of first page
Excerpt from file: Name: Tutor: Course: Date: Merced"byDanielleOfriandthe"CaseHistory"byDannieAbse Question1 MercedbyDanielleOfri,presentsrealandapplicablesituationthatengulfmedical practitioners.Danielle,aninterninmedicaltrainingisenthusiasticabouthertimeinBellevue
Filename: literature-merced-by-danielle-ofri-and-the-case-history-by-dannie-abse-21.doc
Filesize: < 2 MB
Print Length: 6 Pages/Slides
Words: NA
Surround your text in *italics* or **bold**, to write a math equation use, for example, $x^2+2x+1=0$ or $$\beta^2-1=0$$
Use LaTeX to type formulas and markdown to format text. See example. | 2018-10-18 05:15:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25401803851127625, "perplexity": 6466.181386531205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511703.70/warc/CC-MAIN-20181018042951-20181018064451-00218.warc.gz"} |
https://www.physicsforums.com/threads/help-with-integration-1-involving-integration-by-parts-etc.295357/ | Homework Help: Help with integration (1) involving integration by parts etc
1. Feb 25, 2009
NCyellow
1. The problem statement, all variables and given/known data
Solve for indefinite integral of
(7x^3)/sqr(4+x^2)(dx)
2. Relevant equations
I just can't seem to find the right solution.
3. The attempt at a solution
First of all, we can just factor the 7 out of the integral for now since it is only a constant.
the inverse square root of (4+x^2) looks like arctan(x/2).
So I set 1/(4+x^2) up as dv, and so V would equal arctan(x/2). U is then x^3, and du is 3x^2.
2. Feb 25, 2009
lanedance
how about looking at trig substitution - in particular what trig identity could simplfy the denominator
3. Feb 25, 2009
Dick
Substitution again. Try u=x^2+4. You have a left over x^2 in the numerator. But x^2=u-4. Try the easy stuff before you resort to the hard stuff.
4. Feb 25, 2009
NCyellow
I did it, and ended up with the integral of (u^2-4u) over square root of u, all multiplied by the constant 7/2. After a lengthy algebra session, I ended up with a huge answer, that wasn't correct... What did I do wrong?
5. Feb 25, 2009
Dick
I ended up with basically (u-4)*du/sqrt(u) forgetting the constants. What did you do? I think you have an extra u in the numerator which doesn't belong there.
6. Feb 25, 2009
NCyellow
Ah, there we go. I forgot to take out an x for du. Thanks. | 2018-07-16 09:52:21 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8300837278366089, "perplexity": 1471.1299603546156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589237.16/warc/CC-MAIN-20180716080356-20180716100356-00183.warc.gz"} |
http://maisonkayser.co.uk/grey-garden-qqmcp/a3b626-latex-in-python-string | three-dimensional plots, saving these to vector graphic formats like PDF PyLaTeX is a Python library for creating and compiling LaTeX files. \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \left[ \frac{\partial \mathbf{f}}{\partial x_1} This can be done with Thus, we keep just a bit of whitespace at the top and to the Now think of LaTeX as La/TeX. The pylatexenc.latexencode module provides a function unicode_to_latex () which converts a unicode string into LaTeX text and escape sequences. The heigth depends on the content of the figure, but the golden mean may Python String Methods. F-strings provide a concise and convenient way to embed python expressions inside string literals for formatting. plot (not the text labels) to a rasterized image. I am looking into a solution for this. Requires adding a usepackage{longtable} to your LaTeX preamble. Search String Methods . up vote-3 down vote favorite. In this reference page, you will find all the methods that a string object can call. With the help of DataFrame.to_latex() method, We can get the dataframe in the form of latex document which we can open as a separate file by using DataFrame.to_latex() method. That's why we cover it in great detail in this tutorial. This method retains the newline ‘\n’ in the generated string. edited Dec 5 at 16:46. meW. PyLaTeX is a Python library for creating and compiling latex documents. Variable Use: Strings can be assigned to variable say string1 and string2 which can called when using the print statement. To create an f-string, prefix the string with the letter “ f ”. 0 & 2 & 4 The typeset LaTeX document will have whitespace on either side of the figure, so we do not need to include this in the figure. It is an easy-to-learn general purpose programming language, with strong capabilities, including in state-of-the-art topics such latex strings: from IPython.display import Latex, display, display_latexfrom sympy.interactive import printingprinting.init_printing(use_latex='png')display(Latex('$\alpha^2 + \eta$')) This just displays the string repr: If I change display to display_latex it will just return None and not display anything. At the same time, they leave much of the possible power of the Python-LaTeX combination un-tapped. It should recognize accented characters and most math symbols. \begin{bmatrix} (Convert to string with str(sym) or to latex with sympy.latex(sym) Raises: ValueError: If cannot simplify float """ try: ratio = rationalize(flt) res = sympy.simplify(ratio) except ValueError: ratio = rationalize(flt/np.pi) res = sympy.simplify(ratio) * sympy.pi return res # fin For example, you can use the join() method to concatenate two strings. Combining LATEX with Python Uwe Ziegenhagen Abstract Even older than Java, Python has achieved a lot of popularity in recent years. with LaTeX. If you wish to include pseudocode or algorithms, you may find Algorithms and Pseudocodeuseful also. Python string method find() determines if string str occurs in string, or in a substring of string if starting index beg and ending index end are given.. Syntax str.find(str, beg=0, end=len(string)) Parameters. PyLaTeX has two quite different usages: generating full pdfs and generating LaTeX snippets. The paid amount are: [4000, 4000, 4500, 4500, 5000] Second way: Using string string.format method. Multiline string: I'm learning Python. Make sure what you are trying to do is possible in a LaTeX document, that your LaTeX syntax is valid and that you are using raw strings if necessary to avoid unintended escape sequences. [La]TeX), so this means that the figure width should be 3.40390 inches. this method: Another way to set the legend fonts after the legend has been drawn is Python.sty, SageTeX, and SympyTeX illustrate the potential of a close Python-LaTeX integration. The string has the f prefix and uses {} to evaluate variables. PyLaTeX is a Python library for creating and compiling latex documents. in String Anführungszeichen. 0.76 : Printing made better, allows outputs, added print_out Recent versions matplotlib break the However, the backslash also remains in the result. Some of the invalid raw strings are: As an example of a library built on template strings for i18n, see the Here are the modified files to make use of figure.figsize property can be used to set the default figure size. Jupyter notebook recognizes LaTeX code written in markdown cells and renders the symbols in the browser using the MathJax JavaScript library. The format() method formats the specified value(s) and insert them inside the string's placeholder. Python is known for its simple syntax. Beitrag So Aug 31, 2003 00:15. The basic string formatting can be done by using the ‘%’ operator. Python already offers many ways to substitute strings [/formatting-strings-with-python/], including the recently introduced f-Strings [/string-formatting-with-python-3s-f-strings/]. There are python libraries that interface to LaTeX interpreters and Ghostscript to generate images. Python is quite a powerful language when it comes to its data science capabilities. fonts). solve string latex formulas using python3. The goal of this library is to be easy but is also to provide an extensible interface between Python and latex. I refer to TechBeamers.com tutorials. Long string with newlines: I'm learning Python. Comments #1 Ken Starks, October 24, 2008 at 1:29 p.m. Python String Formatting Rule of Thumb (Image: Click to Tweet) This flowchart is based on the rule of thumb that I apply when I’m writing Python: Python String Formatting Rule of Thumb: If your format strings are user-supplied, use Template Strings (#4) to avoid security issues. Make sure LaTeX, dvipng and ghostscript are each working and on your PATH. Python f-string expressions. The goal of this library is to be an easy, but extensible interface between Python and LaTeX. For example, the code $\int_a^b f(x) = F(b) - F(a)$ renders inline as $\int_a^b f(x) dx = F(b) - F(a)$. This is a good option if you have Interaktiver Modus Um Python im interaktiven Modus zu starten, ruft man den Interpreter in einer Shell … To do this, simply use plain text for the labels and then replace them This method retains the newline ‘\n’ in the generated string. for stunning zooms). PythonTeX: Fast Access to Python from within LaTeX ... the way in which \pyconverts its argument to a valid LaTeX string can be specified by the user. Python string length | len() Python | Pandas DataFrame.to_latex() method Last Updated: 17-09-2019. When a backslash is followed by a quote in a raw string, it’s escaped. The goal of this library is to be easy but is also to provide an extensible interface between Python and latex. Announcements Here you can see a respective diagram in newer browsers. # Get this from LaTeX using \showthe\columnwidth, 2017-07-13 (last modified), 2006-05-11 (created). Posted on July 7th, 2016, by tom in Uncategorized. Enclose LaTeX code in dollar signs $...$ to display math inline. Enclose LaTeX code in double dollar signs $$...$$to display expressions in a centered paragraph. the so-called "mixed-mode rendering" capability in recent versions of with an explicit call to the axes() function. I will try to discuss this here further in the near future. 3 Calling python passing it argument to process and getting the Latex back. \end{bmatrix} Python String Methods Previous Next Python has a set of built-in methods that you can use on strings. Here is the outline of the LaTeX file used to include the figure figure. Aufgrund der verhältnismäÿig hohen Lesbarkeit und einiger hilfreicher Mathematik-Module wird Python auch im akademischen und schuli-schen Bereich häu g verwendet. display the axis labels, so we need to specify large margins. It is the most popular site for Python programmers. It is the most popular site for Python programmers. problems with the text.usetex method (for example, if the The Python Template class is intended to be used for string substitution or string interpolation. When a backslash is followed by a quote in a raw string, it’s escaped. If the formatted structures include objects which are not fundamental Python types, the representation may not be loadable. A couple of switches allow you to alter how this function behaves. Below we give a partial list of commonly used mathematical symbols. Revision 5e2833af. Basic example¶. A documentation string (docstring) is a string that describes a module, function, class, or method definition. document, you can have LaTeX generate and substitute the text for your Because of this feature, we can’t create a raw string of single backslash. Python 3.x: kleine Griechische Buchstaben codiert sind, von 945 bis 969 ja,alpha ist chr(945) ist omega chr(969) so geben Sie einfach . Ein String, oder Zeichenkette, kann man als eine Sequenz von einzelnen Zeichen sehen. The docstring is a special attribute of the object ( object.__doc__ ) and, for consistency, is surrounded by triple double quotes, i.e. That is the unicode character U+212B. High level languages like Python and R are great partly because entire workflows can be done within them; from data ingestion, to cleaning, to analysis, to producing plots and regression tables. Heidelberg 1 Play with it on mybinder.org ! Python f-strings are available since Python 3.6. Python string method replace() returns a copy of the string in which the occurrences of old have been replaced with new, optionally restricting the number of replacements to max.. Syntax. It's a viable alternative to other to the built-in string substitution options when it comes to creating complex string-based templates. For example, the code $\int_a^b f(x) = F(b) - F(a)$ renders inline as ∫abf(x)dx=F(b)−F(a). Make sure LaTeX, dvipng and ghostscript are each working and on your PATH. 675 12. psfrag functionality (see for example this The idea behind f-strings is to make string interpolation simpler. It is the most popular site for Python programmers. We got you covered. Unfortunately, string modulo "%" is still available in Python3 and what is even worse, it is still widely used. Pick your favourite math course and write the notes from your last class in LaTeX. Square brackets can be used to access elements of the string. We can … Some features of pylatex are: We can access all the features of LaTeX in python using this module; We can make documents with fewer lines of code We can put expressions between the {} brackets. str.replace(old, new[, max]) Parameters. Python inside LaTeX (and Sage too) All of the above posts contains examples. The placeholder is defined using curly brackets: {}. Add a section, a subsection and some text to the document. Read more about the placeholders in the Placeholder section below. Jedes einzelne Zeichen eines Strings, kann über einen Index angesprochen werden. It is fairly reminiscent of LaTeX in it’s inline notation: Simplest f-string from official Python docs. Note: All string methods returns new values. The examples below using SymPy illustrate this approach. above, the figure width is set to the \columnwidth.). Using the package listings you can add non-formatted text as you would do with \begin{verbatim} but its main aim is to include the source code of any programming language within your document. Wish you could get Å in your string? graph using the psfrag package. The string has the f prefix and uses {} to evaluate variables. Enclose LaTeX code in double dollar signs $$...$$ to display expressions in a centered paragraph. Once this is determined, the Im folgenden Beispiel sehen wir, wie der obige im Bild dargestellt String in Python definiert wird und wie wir auf ihn zugreifen können: $python formatting_string.py Peter is 23 years old Peter is 23 years old Peter is 23 years old We have the same output. With these smaller plot sizes, the default margins are not enough to In python, a String is a sequence of characters, and each character in it has an index number associated with it. Because of this feature, we can’t create a raw string of single backslash. See the LaTeX WikiBook for more information (especially the section on mathematics). This can save typing when several conversions or formatting operations are needed. Besides, Python also has other ways for string formatting. Otherwise, use Literal String Interpolation/f-Strings (#3) if you’re on Python 3.6+, and “New Style” str.format (#2) if you’re not. escape bool, optional. Python Raw String and Quotes. In the future, a latex installation may be the only external dependency. Wenn du dir nicht sicher bist, in welchem der anderen Foren du die Frage stellen sollst, dann bist du hier im Forum für allgemeine Fragen sicher richtig. I refer to TechBeamers.com tutorials. sample_str = "Sample String" Each character in this string has a sequence number, and it starts with 0 i.e. Some features of pylatex are: We can access all the features of LaTeX in python using this module python string math . Including Python code in LaTeX papers is very simple and convenient with the “listings” package. Python Programmierforen. The module array_to_latex converts a NumPy/SciPy array or Pandas Numerical DataFrame to a LaTeX array or table using Python 3.x style formatting of the result. 2012-06-12 New on CTAN: Embed Python in LaTeX; more Introduction Python Templates are used to substitute data into strings. Introduction to Python Print Table. \vdots & \ddots & \vdots \\ f-strings are how you should use print statements in Python. This is a longer explanation, which may include math with latex syntax . One solution is to selectively convert parts of the Most other symbols can be inferred from these examples. directly with very good results (if you are careful about choosing be used to make a pleasing figure. Run your script with verbose mode enabled: python example.py --verbose-helpful (or --verbose-debug-annoying) and inspect the output. Use \left and \right to enclose an arbitrary expression in brackets: The derivative$f'(a)$of the function$f(x)$at the point$x=a$is the limit, $$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$, A function$f(x)$is continuous at a point$x=a$if, $$\lim_{x \to a^-} f(x) = f(a) = \lim_{x \to a^+} f(x)$$, $$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k! 3 Beiträge • Seite 1 von 1. worst_case. When the file is processed by LaTeX, look at the output. Simply install using pip: pip install pylatex And then install a relevant LaTeX processor and other dependencies. The goal of this library is being an easy, but extensible interface between Python and LaTeX. Since the figure will not be scaled down, we may explicitly set the font However, the backslash also remains in the result. MichaelMcNeilForbes, To ensure that your graphics use exactly the same fonts as your A primary use case for template strings is for internationalization (i18n) since in that context, the simpler syntax and functionality makes it easier to translate than other built-in string formatting facilities in Python. Python inside LaTeX (and Sage too) All of the above posts contains examples. Markdown file extension is .md Moreover, Printing tables within python is quite a challenge sometimes, as the trivial options provide you the output in an unreadable format. also allow LaTeX code to interact with Python. -- MichaelMcNeilForbes, Section author: MichaelMcNeilForbes, Unknown[100], GaelVaroquaux, Unknown[101], EmmanuelleGouillart, Unknown[102], LaurentPerrinet, Unknown[8], Christian Gagnon. It is fairly reminiscent of LaTeX in it’s inline notation: Simplest f-string from official Python docs. LaTeX is a typesetting language for producing scientific documents. the graphic is included, it will not be resized, and the fonts etc. Example: str1 = 'Wel' print(str1,'come') Output: Wel come Example: str1 = 'Welcome' str2 = 'Python' print(str1, str2) Output: Welcome Python String Concatenation: sizes. © Copyright 2015, Various authors Thanks for sharing this Kjell. discussion. However, Python does not have a character data type, a single character is simply a string with a length of 1. \begin{bmatrix} Navier-Stokes Equation for Incompressible Flow. 3 & 0 & 1 \\ Example #1 : In this example … The format() method formats the specified value(s) and insert them inside the string's placeholder. However, when you’re learning Python for the first time or when you’ve come to Python with a solid background in another programming language, you may run into some things that Python doesn’t allow. 1 & 2 & 1 \\ Python ist eine Programmiersprache, die einfach zu erlernen ist und ein schnelles Entwi-ckeln von Programmen ermöglicht. encoding str, optional. You should be capable of understanding it, when you encounter it in some Python code. Here is an example as it should probably work: however, {{{contourf}}} currently does not support the {{{rasterized}}} option (which is silently ignored). jobname is any string that can be used to create a valid filename; Examples: In all the following examples, no files are left on the filesystem, unless requested with the keep_pdf_file and keep_log_file parameters to the create_pdf method. Matplotlib, tikz, pgf, latex, python, dynamic table or plothttp://mirror.hmc.edu/ctan/macros/latex/contrib/pythontex/pythontex.pdf appropriate fonts cannot be found.). The command \lstinputlisting[language=Octave]{BitXorMatrix.m} imports the code from the file BitXorMatrix.m, the additional parameter in between brackets enables language highlighting for the Octave programming language.If you need to import only part of the file you can specify two comma-separated parameters inside the brackets. Allgemeine Fragen. A string representing the encoding to use in the output file, defaults to ‘utf-8’. \cdots \frac{\partial \mathbf{f}}{\partial x_n} \right] = With Templates, we gain a heavily customizable interface for string substitution (or string interpolation). diesen Text ausgeben. Description. ["Cookbook/Matplotlib/UsingTex"] guidelines. Multiline string: I'm learning Python. either side of the figure, so we do not need to include this in the The pylatexenc.latexencode module provides a function unicode_to_latex() which converts a unicode string into LaTeX text and escape sequences. Morgen, ich möchte z.B. PyTeX, or Py/TeX is you prefer, is to be a front end to TeX, written in Python. (In the example I have been running PyX and PGF/TikZ in parallel as I build up my skills in each, and this means I can more easily use them together. right so that the labels do not extend beyond the bounding box, and add will I refer to TechBeamers.com tutorials. For example: renders as f′(a)=limx→af(x)−f(a)x−a See the LaTeX WikiBook for more information (especially the section on mathematics). matplotlib. Matplotlib has a simple pure python LaTeX interpreter that uses system fonts to render the output. An obvious solution is to greyscale convert your figure, but for readibility, adding dashes is often better.. This maybe implemented for an example of a SigmoidalFunctions with, Note: This section is obsolete. Like many other popular programming languages, strings in Python are arrays of bytes representing unicode characters. Python Raw String and Quotes. Python f-string expressions. Examples: Ubuntu sudo apt-get install texlive-pictures texlive-science texlive-latex-extra latexmk Documentation . (x-a)^n$$, Write LaTeX code to display Stokes' Theorem, $$\int_{\partial \Omega} \omega = \int_{\Omega} d \omega$$, Write LaTeX code to display the adjoint property of the tensor product, $$\mathrm{Hom}(U \otimes V,W) \cong \mathrm{Hom}(U, \mathrm{Hom}(V,W))$$, Write LaTeX code to display the definition of the Laplace transform, $$\mathscr{L} { f(t) } = F(s) = \int_0^{\infty} f(t) e^{-st} dt$$, Write LaTeX code to display the inverse matrix formula, $$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$, Write LaTeX code to display the infinite product formula, $$\sin x = x \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{\pi^2 n^2} \right)$$. Since Python 2.6 has been introduced, the string method format should be used instead of this old-style formatting. Also, a raw string can’t have an odd number of backslashes at the end. When set to False prevents from escaping latex special characters in column names. Python String capitalize() Converts first character to Capital Letter. Python String format() Method String Methods. two column format. ‘La’ is written in TeX's macro language. above produces the following output (Note: LaTeX will pause after the Making a publication quality plot with Python (and latex) 31 Jan 2018. In this tutorial, we will look at ‘%’ operator and string.format() function for string formatting in Python in this tutorial. Code #1 : For example: $$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x-a}$$. Python String Formatting Rule of Thumb: If your format strings are user-supplied, use Template Strings (#4) to avoid security issues. We can get that to print in Python, but we have to create it in a unicode string, and print the string properly encoded. The result: Employee Name is Mike. Thanks for sharing this Kjell. Python ist eine interpretierte Sprache, wodurch sich bei der Programmentwicklung erheblich Zeit sparen lässt, da Kompilieren und Linken nicht nötig sind. Installation. Latex to render mathematical and scientific writing. $$, Write LaTeX code to display the angle sum identity,$$\cos(\alpha \pm \beta) = \cos \alpha \cos \beta \mp \sin \alpha \sin \beta$$, Write LaTeX code to display the indefinite integral,$$\int \frac{1}{1 + x^2} \, dx = \arctan x + C$$, Write LaTeX code to display the Navier-Stokes Equation for Incompressible Flow,$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla) \mathbf{u} - \nu \nabla^2 \mathbf{u} = - \nabla w + \mathbf{g}$$, Write LaTeX code to display Green's Theorem,$$\oint_C (L dx + M dy) = \iint_D \left( \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \right) dx \, dy$$, Write LaTeX code to display the Prime Number Theorem,$$\lim_{x \to \infty} \frac{\pi(x)}{ \frac{x}{\log(x)}} = 1$$, Write LaTeX code to display the general formula for Taylor series,$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} Documentation of the package is part of the (awesome) LaTeX wikibook… First, include the package in your document: \documentclass{article} \usepackage{listings} \begin{document} \end{document} And then insert … There are 1 inch = 72.27pt (in versucht es laufen zu lassen. They do not change the original string. \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} By default, the value will be read from the pandas config module. I was poking at the non-latex stuff, but took a look to the issue. Some other plot elements do however. First Way: Using ‘%’ for string formatting. This section describes a technique following the A couple of switches allow you to alter how this function behaves. Create Latex file like this It is the most popular site for Python programmers. The format() method returns the formatted string. Python f-strings are available since Python 3.6. Python.sty requires that all Python code be executed every time the document is compiled. In this example, we make a small function inside pycode which takes one argument which is string that represents the integrand to integrate and second argument which is the integration variable. The option escapeinside= {A}{B} will define delimiters for escaping into LaTeX code, i.e. The class works using regular expressions and provides a user-friendly and powerful interface. This example shows basic document generation functionality. all the code between the string "A" and "B" will be parsed as LaTeX over the current listings style. : Documentation of the package is part of the (awesome) LaTeX wikibook… First, include the package in your document: \documentclass{article} \usepackage{listings} \begin{document} \end{document} And then insert code directly in the document: Some of the invalid raw strings are: Example. Let us try it out. $$share | improve this question. Enclose LaTeX code in dollar signs ... to display math inline. only have one axis. be exactly as you set them rather than scaled (and possibly distored). fill_document (doc) [source] ¶. Perl and Python have their own interfaces to Tk, allowing them also to use Tk when building GUI programs. The string itself can be formatted in much the same way that you would with str.format(). print(txt.format(price = 49)) Try it Yourself » Definition and Usage. ‘La’ is a front end to Don Knuth's typesetting program TeX.$$, $$For example, we have a string variable sample_str that contains a string i.e. Template strings provide simpler string substitutions as described in PEP 292. Long string with newlines: I'm learning Python. We do this Markdown. print u '\u212B'.encode('utf-8') Å We use u'' to indicate a unicode string. It is payback time and repr("%s" % b"a") semi-intuitively returns '"b\'a\'"' in Python 3(.3) (and b"%s" % b"a" throws TypeError: unsupported operand type(s) for %: 'bytes' and 'bytes').This is the result of Python 3’s strict distinction between text (sequence of unicode code points) and bytes (sequence of raw bytes). It is also friendly language. -- It should recognize accented characters and most math symbols. Python is very Popular Language. Python code in LaTeX. See the LaTeX WikiBook (Mathematics) and the Detexify App to find any symbol you can think of! ), The first step is to determine the size of the figure: this way, when or EPS can result in unacceptably large files (though with the ability$$. It can be used on some websites like Stack Overflow or to write documentations (essentially on GitHub). In the event that things dont work¶ Try rm -r ~/.matplotlib/*cache. Strings are Arrays. $$,$$ The example We introduce a very small part of the language for writing mathematical notation. }$$, The Jacobian matrix of the function \mathbf{f}(x_1, \dots, x_n) is,$$ Syntax : DataFrame.to_latex() Return : Return the dataframe as a latex document. more space to the bottom for the x label: Here is the python file that generates the plots. Der Interpreter kann interaktiv genutzt werden, so dass man einfach mit 3. However, a LaTeX environment is a very heavy dependence that is unlikely to be installed by the users. The pprint module provides a capability to “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter. A string is a sequence of characters enclosed in quotation marks. I would like to solve this string using Python 3.x "$x^{2} + x^{1} - 1$" Is it possible? I have been running PyX and PGF/TikZ in parallel as I build up my skills in each, and this means I can more easily use them together. \showthe command, press enter to continue): Thus, the figure will be 246.0pt wide.$ python formatting_string.py Peter is 23 years old Peter is 23 years old Peter is 23 years old We have the same output. For instance, to import the code from the line 2 to the line … \begin{matrix} a & b \\ c & d \end{matrix} (example for REVTeX4 for publication is APS physics journals with a In this example, we Combining LATEX with Python Uwe Ziegenhagen August 9, 2019 Dante e.V. This page describes several ways to produce publication quality graphics … This can be done in LaTeX by explicitly setting the width of the figure Also, a raw string can’t have an odd number of backslashes at the end. Description. \begin{pmatrix} a & b \\ c & d \end{pmatrix} Comments #1 Ken Starks, October 24, 2008 at 1:29 p.m. Dies funktioniert in den meisten Fällen nicht, und wir erhalten viele Fehlermeldungen. The second and more usable way of formatting strings in Python is the str.format function which is part of the string class. to use, for example: If you have very complex images such as high-resolution contour plots or Including Python code in LaTeX papers is very simple and convenient with the “listings” package. I refer to TechBeamers.com tutorials. Following is the syntax for replace() method −. \mathbf{J} = \frac{d \mathbf{f}}{d \mathbf{x}} = using the psfrag package. It’s a very simple language that allows you to write HTML in a shortened way. Insert the price inside the placeholder, the price should be in fixed point, two-decimal format: txt = "For only {price:.2f} dollars!" f-strings are how you should use print statements in Python. Wie wichtig dieser Unterschied ist, sieht man, wenn man sich ein beliebiges Python2-Programm nimmt und dieses unter Python3 laufen lässt bzw. \end{bmatrix} \Showthe\Columnwidth, 2017-07-13 ( last modified ), 2006-05-11 ( created ) man einfach mit 3 may used... Produce publication quality graphics with LaTeX syntax 2019 Dante e.V to substitute into! … Python is the most popular site for Python programmers Template class is intended to be but... Be scaled down, we can … Perl and Python have their own to. To access elements of the plot ( not the text labels ) to a rasterized image is... Have a uniform documentation quality graphics with LaTeX syntax ( essentially on GitHub ) then install a LaTeX. Trivial options provide you the output prevents from escaping LaTeX special characters in column names string formatting be., 2019 Dante e.V being an easy, but extensible interface between Python LaTeX. Wir erhalten viele Fehlermeldungen methods that a string representing the encoding to Tk! Set the font sizes und schuli-schen Bereich häu g verwendet in markdown and! Knuth 's typesetting program TeX this reference page, you need to provide an interface! Method − to display math inline example above, the string itself can be used to the... This reference page, you can use on strings psfrag functionality ( for! The “ listings ” package Python auch im akademischen und schuli-schen Bereich häu g verwendet the labels then. On your PATH ’ is written in markdown cells and renders the in... In recent versions of matplotlib are: [ 4000, 4000, 4500 5000. F-Strings [ /string-formatting-with-python-3s-f-strings/ ] are each working and on your PATH very heavy dependence that is to! About the placeholders in the event that things dont work¶ Try rm -r ~/.matplotlib/ cache!, allowing them also to provide an extensible interface between Python and LaTeX ) 31 Jan.. 9, 2019 Dante e.V listings ” package, wenn man sich ein beliebiges Python2-Programm nimmt und dieses unter laufen... Be capable of understanding it, when you encounter it in some Python code LaTeX..., dvipng and ghostscript are each working and on your PATH the f prefix and {! ( last modified ), 2006-05-11 ( created ) non-latex stuff, latex in python string interface! Save typing when several conversions or formatting operations are needed unicode characters simple and convenient with the “ listings package. 2006-05-11 ( created ) has an index number associated with it an extensible interface between Python and LaTeX that Python. Psfrag package da Kompilieren und Linken nicht nötig sind of built-in methods a! Eine Sequenz von einzelnen Zeichen sehen the users modified ), 2006-05-11 ( created ) i Try... Sich bei der Programmentwicklung erheblich Zeit sparen lässt, da Kompilieren und nicht... The code between the { } to evaluate variables of LaTeX in it ’ s escaped TeX, written markdown. A concise and convenient with the “ listings ” package placeholder section below, 4000, 4500,,. Ways to produce publication quality plot with Python ( and possibly distored.. So dass man einfach mit 3 a raw string, oder Zeichenkette, kann über einen index werden! Has a sequence of characters, and each character in it ’ s inline notation: Simplest from. Variable say string1 and string2 which can called when using the MathJax JavaScript library TeX, written in TeX macro! A documentation string ( docstring ) is a Python library for creating and compiling LaTeX files utf-8.... Latex papers is very simple language that allows you to alter how this behaves. This method retains the newline ‘ \n ’ in the event that things dont work¶ Try rm ~/.matplotlib/! \Showthe\Columnwidth, 2017-07-13 ( last modified ), 2006-05-11 ( created ), 4500, 5000 ] way... Allows outputs, added print_out the result und einiger hilfreicher Mathematik-Module wird Python auch im akademischen und Bereich! Latex snippets can called when using the MathJax JavaScript library genutzt werden, so dass einfach. ) function is Mike introduce a very simple and convenient with the letter “ f ” how! Can use on strings, is to be installed by the users, allowing them to! The representation may not be scaled down, we may explicitly set the font sizes expressions and provides user-friendly. Not the text labels ) to a rasterized image like Stack Overflow or to write HTML in a raw can. Pylatex and then replace them using the print statement the dataframe as a LaTeX document own. Include math with LaTeX GUI programs golden mean may be used for string formatting when set False! Heavy dependence that is unlikely to be installed by the users string formatting above posts contains examples 's! Latex processor and other dependencies over the current listings style should be capable of it! 'S latex in python string we cover it in great detail in this tutorial of characters, and SympyTeX the. Run your script with verbose mode enabled: Python example.py -- verbose-helpful ( or -- verbose-debug-annoying ) insert..., sieht man, wenn man sich ein beliebiges Python2-Programm nimmt und dieses Python3! When it comes to its data science capabilities that contains a string a... You would with str.format ( ) method to concatenate two strings indicate unicode... Modified ), 2006-05-11 ( created ) understanding it, when you encounter it great! And more usable way of formatting strings in Python it is the str.format function which is part the. Is fairly reminiscent of LaTeX in it ’ s escaped lässt bzw we cover it in some Python code passing... ‘ utf-8 ’ LaTeX code in LaTeX papers is very simple and convenient with the so-called mixed-mode! Mean may be used for string formatting '' will be exactly as you them! Substitution or string interpolation ) werden, so dass man einfach mit 3 a respective diagram newer! Strings provide simpler string substitutions as described in PEP 292 embed Python expressions inside string literals for formatting to strings. Over the current listings style, including the recently introduced f-strings [ /string-formatting-with-python-3s-f-strings/.! And LaTeX ) 31 Jan 2018 read more about the placeholders in the section. Is the most popular site for Python programmers that contains a string sample_str! Also allow LaTeX code in dollar signs $...$ to display expressions in a paragraph... Is unlikely to be a front end to TeX, written in Python arrays... Above posts contains examples associated with it Return the dataframe as a document! Be assigned to variable say string1 and string2 which can called when using the JavaScript. Arrays of bytes representing unicode characters, defaults to ‘ latex in python string ’ Sage too ) all the! Docstring ) is a string that describes a technique following the [ Cookbook/Matplotlib/UsingTex '' ] guidelines need provide... I 'm learning Python apt-get install texlive-pictures texlive-science texlive-latex-extra latexmk documentation two strings other dependencies allows outputs, added the... Can be used on some websites like Stack Overflow or to write documentations ( on! Oder Zeichenkette, kann man als eine Sequenz von einzelnen Zeichen sehen this with an explicit call to ! We do this, simply use plain text for the labels and then replace them the. Function, class, or Py/TeX is you prefer, is to be a front end to,! Python auch im akademischen und schuli-schen Bereich häu g verwendet ist, sieht man, man. Is a very heavy dependence that is unlikely to be easy but is also to provide an interface... This is determined, the value will be read from the pandas config module the example above, the also... 'S why we cover it in some Python code in double dollar signs \$. ) ` function to create an f-string, prefix the string method format should be used instead of library., it ’ s inline notation: Simplest f-string from official Python docs assigned to variable string1. Unlikely to be installed by the users produce publication quality graphics with LaTeX.! Single backslash string interpolation ) all of the plot ( not the text labels ) to a image! Brackets can be assigned to variable say string1 and latex in python string which can called when using the print statement s very! Of characters, and each character in this example, you need to provide optional subsection this! Two quite different usages: generating full pdfs and generating LaTeX snippets --... This from LaTeX using \showthe\columnwidth, 2017-07-13 ( last modified ), 2006-05-11 ( created ) ( price = )! July 7th, 2016, by tom in Uncategorized syntax for replace ( ):! For replace ( ) Converts first character to Capital letter goal of this library is to be easy but also..., ruft man den Interpreter in einer Shell … Python is quite a challenge,! I will Try to discuss this Here further in the generated string so dass man mit. This method retains the newline ‘ \n ’ in the example above, value... And it starts with 0 i.e about the placeholders in the output prefer, is to be easy is! ” package funktioniert in den meisten Fällen nicht, und wir erhalten viele Fehlermeldungen methods... Because of this library is being an easy, but extensible interface between and!: strings can be inferred from these examples letter “ f ” since Python 3.6 created ) other the. T have an odd number of backslashes at the non-latex stuff, but extensible interface between and! String string.format method pick your favourite math course and write the notes from your last class in LaTeX papers very. This make sure LaTeX, dvipng and ghostscript are each working and on your PATH inferred these. Capital letter work¶ Try rm -r ~/.matplotlib/ * cache character to Capital letter 2019 Dante e.V über einen angesprochen! Viable alternative to other to the built-in string substitution options when it comes to creating string-based... | 2021-06-15 22:19:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6688247919082642, "perplexity": 5205.892419316324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00255.warc.gz"} |
https://www.coursehero.com/file/5902081/20072prbset6/ | # 20072prbset6 - METU Department of Economics Econ 202...
• Notes
• MajorFreedomPuppy9918
• 4
This preview shows page 1 - 2 out of 4 pages.
METU Department of Economics Econ 202 Macroeconomic Theory Instructors: Ebru Voyvoda and Şirin SaraçoğluTeaching Assistant: Gizem Koşar2007-2008 Spring Semester Problem Set 6 (OB Chapters 7 and 8) The questions marked with (*) are to be solved as homework due 18.04.2008 Question 1:Suppose a hypothetical economy has the following Phillips curve: ()etttnd uuwhere unis the natural level of unemployment, πtis the inflation rate at t, and πetis the expected inflation rate at t. a)Describe briefly what sign you expect on parameter d. Explain the economic significance of the sign. b)Assume d = -0.4, un = 0.08, and πet = 0.05 for all t. Graph the short run and long run Phillips curves. c)Now assume that πet= πt-1, so that inflation in the current period, t, depends on unemployment and the inflation of the last period, t-1. What is the non accelerating inflation rate of unemployment (NAIRU)? Explain the intuition behind the NAIRU. d)If the government of this hypothetical economy wants to reduce inflation by 4%, how much does the Phillips curve suggest unemployment will change? Calculate the number of point-years excess unemployment. With this goal, can the Central Bank affect the number of point-years of excess unemployment? Discuss. Do you think the Central Bank can choose the distribution of excess unemployment over time? Question 2(*):Assume that the following is true about the economy:C = 70 + 0.1(Y T) I = 40 − 200i + 0.1Y G = 100 T = 100 Md=\$Y(0.4 − i) Ms= 80 Assume the following wage setting relation: W = Pe(z − 30u) where z = 14013is a parameter that represents the workers bargaining power and u is the unemployment rate. The following is the price setting relation P = (1+ μ)W where μ = 0.3 is the markup.The economy production function is: Y = N The labor force is | 2021-09-27 08:21:32 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211247324943542, "perplexity": 5146.3179247788985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00201.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/boxes-moved-conveyor-belt-filled-tothe-packing-station-110-m-away-belt-initially-stationar-q439207 | Boxes are moved on a conveyor belt from where they are filled tothe packing station 11.0 m away. The belt is initially stationaryand must finish with zero speed. The most rapid transit isaccomplished if the belt accelerates for half the distance, thendecelerates for the final half of the trip. If the coefficient ofstatic friction between a box and the belt is 0.60, what is theminimum transit time for each box?
### Get this answer with Chegg Study
Practice with similar questions
Q:
Boxes are moved on a conveyor belt from where they are filled to the packing station 11.0 m away. The belt is initially stationary and must finish with zero speed. The most rapid transit is accomplished if the belt accelerates for half the distance, then decelerates for the final half of the trip. If the coefficient of static friction between a box and the belt is 0.54 what is the minimum transit time for each box? | 2016-07-01 21:43:20 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8637122511863708, "perplexity": 924.9215088919449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00112-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://scipost.org/submissions/2110.03644v2/ | Computing associators of endomorphism fusion categories
Submission summary
Authors (as Contributors): Jacob C Bridgeman · Ramona Wolf
Submission information
Code repository: https://github.com/JCBridgeman/FSymbolsFromModules
Date submitted: 2021-10-19 16:46
Submitted by: Bridgeman, Jacob C
Submitted to: SciPost Physics
Ontological classification
Specialties:
• Mathematical Physics
• Quantum Physics
Approaches: Theoretical, Computational
Abstract
Many applications of fusion categories, particularly in physics, require the associators or $F$-symbols to be known explicitly. Finding these matrices typically involves solving vast systems of coupled polynomial equations in large numbers of variables. In this work, we present an algorithm that allows associator data for some category with unknown associator to be computed from a Morita equivalent category with known data. Given a module category over the latter, we utilize the representation theory of a module tube category, built from the known data, to compute this unknown associator data. When the input category is unitary, we discuss how to ensure the obtained data is also unitary. We provide several worked examples to illustrate this algorithm. In addition, we include several Mathematica files showing how the algorithm can be used to compute the data for the Haagerup category $\mathcal{H}_1$, whose data was previously unknown.
Current status:
Has been resubmitted
Submission & Refereeing History
Resubmission scipost_202204_00039v1 on 25 April 2022
Submission 2110.03644v2 on 19 October 2021
Reports on this Submission
Anonymous Report 2 on 2022-4-17 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2110.03644v2, delivered 2022-04-17, doi: 10.21468/SciPost.Report.4936
Report
Dear editor and authors,
First of all, my apologies for the long delay in my report. The delay does not reflect lack of interest in this manuscript.
The authors present a way to calculate the F-symbols associated with a particular fusion category. The starting point are the F-symbols of a different, but related fusion category, which are assumed to be known. The relation between the two fusion categories is that they are 'Morita equivalent'. In short, the idea is to construct the so-called module category over the fusion category with known F-symbols, and considering the module tube category, which allows for the calculation of the F-symbols associated with the fusion category one started with.
My general impression of the paper is that it is well written, and accessible for people with knowledge about fusion- and modular tensor categories, even if they are not too familiar with module- and module tube categories, which lies at the heart of the construction. In this respect, the explicitly worked out examples are really helpful and perhaps essential.
In the introduction, the authors point out the complexity of obtaining the F-symbols for a given set of fusion rules, which lies in the complexity of the (typically enormously overdetermined) consistency conditions, the pentagon equations. The large amount of gauge symmetry that needs to be fixed adds to the problem, in particular in the presence of fusion multiplicities. The algorithm the authors present is really helpful (if available), because the equations that need to be solved are typically much simpler. Apart from the simpler examples, the authors also apply the algorithm, to find the F-symbols of H_1 (coming from the Haagerup subfactor), which were previously unknown.
In section II, the authors provide the necessary background information, starting with some details on fusion categories, C-module categories, module tube categories and Morita equivalence.
In section III, the authors use the notions reviewed in section II, to calculate the data for the module category, in particular the decomposition of the tensor products of pairs of irreducible representations, which can be used in the end to calculate the sought after F-symbols.
Section IV contains a relatively simple example that is worked out in detail, to show the workings of the algorithm, while section V contains the more complicated case of H_1 I alluded to above.
In my opinion, this is an interesting, technical but well written paper on the subject of fusion categories. In my opinion, this paper should be published in scipost. Below, I provide a list of comments, questions, recommendations, etc., which I urge the authors to consider.
Requested changes
Introduction
1.
It's presumably good to point out that the construction presented in the paper gives a particular solution for the F-symbols associated with a set of fusion rules, but typically not all solutions (this is the usual situation, so no criticism to the paper).
Some related questions. If one has several monoidially inequivalent sets of F-symbols for the Morita equivalent fusion category, are the resulting sets of F-symbols also monoidially inequivalent?
If one happens to know all monoidially inequivalent sets of F-symbols for the Morita equivalent fusion category, does one obtain all monoidially inequivalent sets of F-symbols for the fusion category of interest?
Section II
2.
I am confused about the discussion of the Frobenius-Perron dimension (around eq. (7)). The Frobenius-Perron dimension of a is given by the largest eigenvalue of the fusion matrix N_a. As such, one has d_a >= 1, and satisfies 7b. However, it can not always be written as in the last equation of eq. (6). In non-unitary fusion categories, the F-symbol appearing in that equation can have absolute value larger than one, in contradiction with d_a >= 1. So in some way, it looks like that with d_a, the authors mean the quantum dimension. However, in non-unitary fusion categories, there will be negative quantum dimension, so d_a can not be the quantum dimension either.
3.
A somewhat related issue to question 2. Just under eqns (7), it is stated that if there is a basis, such that all F-matrices are unitary, the fusion category is (or is called) unitary. This is strictly speaking not true (though for perhaps a rather trivial reason). Even if all F-matrices are unitary, it is sometimes possible to choose a pivotal structure, such that some of the quantum dimensions are negative. One would not call such a fusion category unitary.
4. For the C-module category, I am wondering if the same issues arise as under 2. and 3. for the fusion category.
Section III
5. The notation in eq. (30) is not entirely clear/defined. Are the C^{\alpha}_P simply coefficients that need to be found?
Section IV
6. The first 1/4 in eq. (47a) should be 1/2.
Section V.
8. It is stated that it s considered likely that a CFT associated with the Haagerup subfactor exists. I am wondering what data of this conjectured (?) CFT is known. It could be interesting to use the F-symbols obtained for H_1, and solve the hexagon equations (which is typically much easier than solving the pentagon equations). In this way one could obtain information about the scaling dimensions of the primary fields (or compare, if this information is available).
7. Here, the authors deal with the Haagerup fusion categories. They provide a visualisation, which is of course useful. It is however not stated what it is the authors are plotting. Without this information, it is hard to learn something from the figure.
Generic question
8. In the paper, the authors work out three examples in detail (Sec. IV, App. B, C). Do these three examples cover all possible complications that can arise when executing the algorithm? In either case, it would be good to point this out somewhere.
• validity: high
• significance: good
• originality: high
• clarity: good
• formatting: excellent
• grammar: excellent
Author: Jacob C Bridgeman on 2022-04-25 [id 2415]
(in reply to Report 2 on 2022-04-17)
Category:
remark
We thank the referee for their close reading of our manuscript. We address each of the requested changes below. Additionally, we provide a pdf with changed marked.
Introduction:
1.
We are a little confused by this comment. We indicate several times that we compute the $F$-symbols of the Morita equivalent category $C_M^*$, which are uniquely determined (up to tensor equivalence/gauge+permutation)/
Section II:
2.
We have rearranged this section, introducing a heading 'Unitary case' to clarify. Since we only need to restrict the gauge in this way in the unitary case, we have moved the discussion of FP dimensions to that section.
3.
We have rearranged the sentence to recognize that the unitarity of $F$ is a consequence of unitarity of the category.
4.
We don't think this should be an issue, but we've added a footnote indicating that the pivotal structure should be the one compatible with unitarity.
Section III:
5.
We have clarified this.
Section IV:
6.
We have corrected this.
Section V:
1. The data for the conjectured CFT was worked out by Evans and Gannon in https://arxiv.org/abs/1006.1326. They showed that the central charge of the CFT is a multiple of 8, and construct some character vectors for the corresponding vertex operator algebra. However, this data corresponds to the CFT associated with the quantum double of the Haagerup fusion categories. The problem with the fusion categories themselves is that the respective hexagon equations do not have a solution, so none of them admits a braiding.
7.
We have clarified what the figure shows.
Generic question:
8.
We've added an appropriate comment just above Section V
Report 1 by Ana Ros Camacho on 2022-2-24 (Invited Report)
• Cite as: Ana Ros Camacho, Report on arXiv:2110.03644v2, delivered 2022-02-24, doi: 10.21468/SciPost.Report.4509
Strengths
1-The paper provides a new computational pathway to describing the associators of the whole categorical Morita equivalence class through just one representative of it.
2-The examples provided to show the power of this algorithm, in particular the Haagerup fusion categories, are really nice and exciting!
3- The paper ticks all the boxes for the acceptance criteria: it is well-written and has a clear structure. It includes examples of the algorithm described which are interesting. It has a detailed abstract, and a good summary of achievements. Citations included are of top quality. Files and code are available and nice to read.
Weaknesses
1-Through the manuscript the authors use (indecomposable) module categories extensively. These are actually tricky to describe in general, even when skeletizing, which makes me hesitate about the actual use of the algorithm described beyond some known cases. Still, the examples described and in particular the one about the Haagerup fusion categories are exciting ones.
2-The paper lacks a bit of perspective for future work - some comments on this would benefit the manuscript quite a bit.
3- The manuscript needs some (minor) clarifications here and there, and also about notation. E.g.:
-- the simples of the skeletal fusion category in Definition 1 are denoted as $a_1$, $a_2$, etc but this notation is not used again later (similarly with the simples of the module category).
-- Eq 5a: the bullet notation at the vertex has not been introduced at this point; also a word about what $M_{ab}^c$ is at this point would help.
-- Right before Definition 3 the notion of indecomposable module category is mentioned but this hasn't been introduced earlier (and this is quite a crucial concept for the construction described, so maybe worth a sentence!).
-- Speaking of which, I would include some citation on the tube algebra, in particular when claiming that when a module category is indecomposable the tube algebra is semisimple since it's an important point.
-- At the beginning of the unitary subsection at section II, it is unclear to me what the * notation means (since it is used earlier at Eq 18).
-- The F introduced right before Eq 38, is it the same F or is it anything new?
4-I would like to ask the authors for a justification of the software used in the algorithm described. Mathematica is notoriously known for e.g. omitting certain solutions when solving equations, which may lead to wrong statements. Why this software is a good one, and why it is not interfering with the results obtained?
5-While the strategy performed and the algorithm are interesting I believe the paper lacks a bit of mathematical depth, which is not necessarily a bad thing for this journal.
Report
I recommend this article for publication at SciPost Physics. Well done folks!
Requested changes
Please could you address points 2, 3 and 4 from the "Weaknesses" part.
• validity: top
• significance: high
• originality: high
• clarity: top
• formatting: perfect
• grammar: excellent
Author: Jacob C Bridgeman on 2022-04-25 [id 2414]
(in reply to Report 1 by Ana Ros Camacho on 2022-02-24)
Category: | 2023-02-05 09:53:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6885614991188049, "perplexity": 871.1390019537599}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00760.warc.gz"} |
http://starwerk.net/page-2/?post=why-are-galaxies-disks | This is where you’ll find our thoughts about recent (and not) astronomy, earth science and other science news, our answers to questions we get from people a lot, plus whatever else strikes our fancy. Sometimes we get into a few of the mathematical or scientific details, but never too deeply.
In response to a question from the public…
#### Why Are Galaxies Disks?
Not all galaxies are disks, but many are. The reasons have to do with their formation history and a few basic physical laws.
The short answer as to why disk galaxies have the shape they have is angular momentum. That phrase by itself is not very illuminating. Even if you know what angular momentum is, you still might not understand why it causes some galaxies to take on the shape of a disk, while others do not. The important difference between the two types of galaxies is the history of their formation, but before we get to that we will look at how disk galaxies form.
Galaxies started to form early in the history of the universe. They formed as the result of the collapse of enormous clouds of gas, mostly hydrogen and helium. These were attracted by the gravity of dark matter concentrations that had emerged from the initial density field of the Big Bang. As these clouds collapsed, material that had originally been distributed quite widely began to concentrate into relatively small volumes. Many collisions occurred between different parcels within the collapsing gas clouds, and these collisions set the galaxies onto the path of become disk galaxies like spirals.
##### The Law of Conservation of Momentum
To understand why collisions in gas are important, you must realize that when gas clouds collide they dissipate enormous amounts of energy. During a collision the gas compresses and heats. The hot gas begins to emit radiation, as x-rays, visible light or infrared, depending on the violence of the collision. As a result, the energy of the collision, originally the kinetic energy of the bulk motion of the gas clouds, is radiated away. This is a consequence of the law of conservation of energy, which states that energy cannot be created or destroyed, only shifted around in form. To summarize, in the process of these collisions the gas initially heats up. It then radiates away vast amounts of energy and cools again, often below its pre-collision temperature.
What’s more, the motion of the gas is greatly altered, perhaps even arrested, as a result of the collision. Due to conservation of linear momentum, colliding gas clouds that are initially moving at great speeds before they collide can be brought to a near standstill afterward. We can make this less abstract by considering an specific example.
We could imagine two clouds of gas of the same mass that move toward each other with exactly the same speed. However, a more familiar analogy might serve us better at the outset. So to begin with, picture two locomotives moving toward each other along a single set of railroad tracks. When they collide, they will abruptly stop. A few locomotive parts will fly in all directions, while most of each locomotive will be crumpled and stopped at the point of the collision. Most of the initial energy of their motion is spent compressing the locomotives, but some of it is carried away by the flying parts and the loud crashing sounds emitted. Colliding clouds of gas in space do something quite similar.
The clouds stop, just as the colliding locomotives do. They also become greatly compressed and heat up as a result. A great deal of the kinetic energy that was stored in their motion through space, their kinetic energy, is converted into heat as they compress. This heat is then radiated away via electromagnetic radiation, i.e. visible light, x-rays, infrared etc. Eventually all the energy is radiated away, leaving the clouds cold and much denser than before the collision.
Now imagine that one of the clouds (or locomotives) is slightly heavier than the other, or perhaps it is moving slightly faster (or both). In that case, the collision will still happen in essentially the way it’s described above, but the clouds (or locomotives) will not be completely stopped afterward. They will be combined into a single mass moving along in the direction of the faster/heavier cloud - or locomotive. This is because the original momentum of the system, which is the sum of the momenta of each object, is conserved. Or put another way, the total momentum before the collision is the same as the total momentum after the collision.
We can put this onto a mathematical footing as follows. If we use the fact that momentum (usually denoted by p… Don’t ask me why, I don’t know) is the product of an object’s mass and velocity, then we define momentum as below.
$$\vec{p}=m\vec{v}$$
The small arrows on the p and the v indicate that they are vectors, objects that have both a size (or amount of) and a direction. We need to use a vector because moving 5 mph to the left, for example, is not the same as moving 5 mph to the right, or forward or backward. The little arrows remind us about this fact. They tell us one other important thing as well. The momentum has a direction that is the same as the direction as the motion. This makes sense intuitively, but it is made mathematically clear because both sides of the equation are the same vector - that is what an equation means - and so both have the same direction.
Note that the mass $$m$$ does not have an arrow. That is because mass is not a vector; it has no direction. Mass is what is called a scalar in the vernacular of physics. It is just a plain number (with units, of course, in this case kilograms). Do not confuse mass with weight. Weight is the force that gravity exerts, and force definitely has a direction, so it is a vector. If you want to get a better understanding of vectors, have a look at the blog post about vectors on this site.
The total momentum of the system comprised of the two clouds in the collision is the sum of their individual momenta. We can write that as follows.
$$\vec{P}=\vec{p}_1+\vec{p}_2$$
The upper case denotes to total momentum in the system, and the subscripts are labels used to refer to the individual clouds. We could have used $$\vec{p_a}$$ or $$\vec{p_b}$$ instead if we wanted to. We just need something to keep them straight in our mind. If the concept of vectors is not familiar to you, have a look at my vector tutorial to get the general idea of what they are and how they work. Many objects in physics are represented by vectors, so knowing even a little bit about them is immensely useful.
The equation above is generally true, but we can simplify things if we imagine that the two clouds move toward each other along a line in one dimension, call it the $$x$$-axis of some coordinate system. We do not lose any generality by doing this because we are free to choose any orientation we like for a set of coordinates that we use to describe this system. Since we are dealing with one-dimensional motion we can write the total momentum as below.
$$P = p_1 -p_2$$
We have here assumed that cloud 1 moves in the positive $$x$$ direction (generally defined as to the right) and cloud 2 moves in the negative $$x$$ direction (to the left under the usual conventions). Now we can substitute using the definition of momentum.
$$MV = m_1 v_1 -m_2 v_2$$
We again use upper case to denote the values after the collision. If we assume that the loss of mass in the collision is negligible, then we know that the total mass is the same before and after, and thus $$M = m_1 + m_2$$. So we simplify further.
$$(m_1+m_2)V = m_1 v_1 -m_2 v_2$$
Now we can write the velocity of the system after the collision by dividing both sides by the total mass, $$m_1+m_2$$.
$$V = \frac{m_1 v_1 -m_2 v_2}{m_1+m_2}$$
From this we can see that when the initial momentum ($$m_1v_1$$) of cloud 1 is larger than the momentum ($$m_2v_2$$) of cloud 2, then the final velocity of the merged cloud is positive, or in other words, it is in the initial direction of cloud 1. However, when the initial momentum of cloud 2 is larger, then the final velocity of the cloud is negative, in the initial direction of cloud 2. In the event that the two momenta are identical, the clouds stop dead, just as in the very first example we imagined. Take careful consideration that the final result does not depend on which of the clouds is initially moving faster, it depends upon which has the larger momentum initially. That momentum can consist of a small cloud moving relatively fast, or a larger cloud moving relatively slow. It is the product of the mass and velocity that tell us the momentum. In any case, the total momentum of the system is the same before and after the collision because it is conserved.
So what does all of this have to do with the shapes of galaxies? Just this. There will be numerous little cloudlets in a protogalactic cloud, and they will undergo collisions like that described above. In some cases the final velocity of a cloud-cloud collision will result in a “positive” final velocity. In others the final velocity will be “negative,” where positive and negative just means that the combined cloud that is formed by the collision can move in the initial direction of either of the two colliding cloudlets; each collision has its own specific $$x$$-axis orientation aligned along the direction from which the two clouds approach one another. The point is that after each collision the final combined cloud is moving along in the direction of one of the original clouds. The collisions will happen over and over again, forming larger cloudlets out of the original smaller ones. Eventually, everything will have settled down into one giant cloud of gas.
As an aside, on much smaller scales, some of these collisions can result in the formation of stars. That is another topic entirely, and much more detailed physics is involved.
So after everything is settled down and all the smaller cloudlets have merged into a single big cloud, the big cloud will likely be moving through space with a velocity determined by the resultant velocity of all the collisions that formed it. This is just the velocity corresponding to the total momentum of the system. In other words, it is the momentum we would get if we summed up all the cloud momenta from all the little cloudlets in the original protogalactic cloud before any collisions happened at all! So if we have $$n$$ cloudlets in the protogalactic gas cloud, then the total momentum of the cloud can be written as the sum below, and its velocity through space can be found by dividing this momentum by the total mass of the cloud.
$$\vec{P}=\vec{p}_1+\vec{p}_2+\vec{p}_3+\vec{p}_4+...+\vec{p}_{n-1}+\vec{p}_n$$
$$\vec{V}= \frac{\vec{p}_1+\vec{p}_2+\vec{p}_3+\vec{p}_4+...+\vec{p}_{n-1}+\vec{p}_n}{ m_1+m_2+m_3+m_4+…+m_{n-1}+m_n}$$
While the momentum of each individual cloudlet can be transferred to and shared with other cloudlets, the total momentum of the cloud as a whole, as described in the sum above, will remain constant. This is true no matter how many collisions and mergers of cloudlets happen. In particular, it is true after all the cloudlets have settled down into a single big gas cloud, whatever form it might take.
Linear momentum explains why a galaxy might be moving through space in some direction after it forms, but it does not explain its shape. For that we need a related physical law, called conservation of angular momentum. Since like linear momentum, angular momentum is conserved, whatever angular momentum the cloud had before its collapse is what it will have afterward. While momentum is related to motion through space, angular momentum is related to an object’s rotational motion, or its tendency to spin about some axis. Angular momentum is also discussed in my post about how orbits work.
##### The Law of Conservation of Angular Momentum
Imagine again the protogalactic cloud before it begins to collapse. It will have a large number of smaller cloudlets within it, each moving in some direction. What’s more, each of these cloudlets will have angular momentum that is related to its momentum. Unlike linear momentum, angular momentum must be measured from a particular reference point, and its value is different for different points in space. This is somewhat analogous to the way that linear momentum depends on the motion of the reference frame in which it is defined, because velocity depends upon that frame. If you don’t understand what that means, read up on Galilean relativity, or on special relativity. The takeaway point here is that the angular momentum of each cloudlet is related to both its momentum and to its position within the larger cloud.
To get a handle on all of this it will be helpful to know the definition of angular momentum. It is generally denoted by $$\vec{L}$$, and it is a vector because it has a direction. I have no idea why L is used instead of some other letter.
$$\vec{L}=\vec{r}\times\vec{p}$$
There is a little bit more to this equation than appears at first. The momentum is there, as promised, and the $$\vec{r}$$ is the distance from some reference point to the moving object. The point used is arbitrary, but you will get different values of the angular momentum for different reference points. This is fine though. You just have to be sure to use the same reference point for all your computations. Then your conclusions will be valid.
If you assumed that the multiplication sign means that we multiply the two vectors, you are correct. But how do you multiply two vectors? We’ll get to that. But first, a picture will be useful to understand how these three vectors are related to one another.
For this schematic, we are measuring the angular momentum of a particle with mass m and momentum $$\vec{p}$$ that is a distance r from some point in space denoted by O. The distance vector $$\vec{r}$$ has a length r (called its magnitude) and runs from the point O to the center of the moving particle. We would say that the tail of $$\vec{r}$$ is at O, and its head is at the particle. Likewise, the momentum is represented by a vector (arrow) with its tail located at the particle and its head pointing off in the direction the object is moving. The length of the vector $$\vec{p}$$ (its magnitude) is $$mv$$. It is not a physical length or distance in space, and so the length relative to $$\vec{r}$$ in this diagram has no meaning. It does have meaning when compared to other momentum vectors, just as the length of $$\vec{r}$$ is meaningful when compared to other distance vectors.
To understand how we get the angular momentum from this diagram it is helpful to rearrange things somewhat, as below.
In the second diagram, we slide the r vector parallel to itself until its tail is coincident with the tail of the momentum. This is a valid thing to do because vectors remain the same if we move them around in space as long as we don’t change their length or their direction. Moving them around this way is called parallel transport, and we can parallel transport vectors around all we like, they still remain the same vectors.
After we parallel transport $$\vec{r}$$ we note that it makes an angle $$\theta$$ with the momentum vector. The angle will be useful below. But first, line up the fingers of your right hand with $$\vec{r}$$ such that your palm faces roughly in the direction that $$\vec{p}$$ is pointing. It does not have to point exactly in that direction, just in the same general direction. With this orientation, the thumb of your right hand will be pointing out of the screen. That is the direction of the angular momentum vector, $$\vec{L}$$. This somewhat complicated procedure is called the right-hand-rule. Notice that if you mess up and use your left hand, your thumb will be pointing into the page, opposite of the correct answer. It is important to keep this in mind. Let’s do another example.
Now we have flipped the momentum vector, $$\vec{p}$$, so that it points in the opposite direction from the case above; note how the angle $$\theta$$ has changed in this new orientation. Use your right hand to again determine the direction of the angular momentum. You should find that it now points into the screen, opposite from your previous result. In general, if you reverse either $$\vec{p}$$ or $$\vec{r}$$ you will also reverse the direction of $$\vec{L}$$. This is another important idea to keep in mind. Also, you must remember to always place your fingers along the direction of $$\vec{r}$$ with their tips pointing in the direction the vector points, with your palm facing off in the direction of $$\vec{p}$$. Never place your fingers along $$\vec{p}$$ with your palm facing $$\vec{r}$$. If you do, you will get the opposite answer from the correct one. Give it a try. You’ll see.
Now we have only to understand how to get the amount of angular momentum, or the magnitude of the vector $$\vec{L}$$. This is given by the equation below.
$$L= r p \sin \theta$$
The sine of an angle varies between $$0$$ and $$1$$, with $$\sin 0\,=\, 0$$ and $$\sin 90°\, =\, 1$$. This means that when $$\vec{p}$$ and $$\vec{r}$$ are parallel to one another, the angular momentum of the object is zero. On the other hand, when $$\vec{p}$$ and $$\vec{r}$$ are at a right angle, the angular momentum is maximum. At other angles the angular momentum has some intermediate value. Also, the sine (and its related function, the cosine) is a function that repeats itself. So the sine of $$180^{\circ}$$ is again zero, and the sine of $$270^{\circ}$$ (or $$-90^{\circ}$$) is -1. When the angle is $$360^{\circ}$$ we are back at zero and the function repeats.
To get an intuitive feel for angular momentum, we can do a little thought experiment. Imagine holding a board in its middle - something you have probably done before many times. If the board is 4 meters long (about twelve feet), you could imagine holding it so that it balances on your shoulder. Imagine further that the board is a standard 2x4, or in other words that it has a cross section that is a 2 inch by 4 inch rectangle (about 5 cm by 10 cm). If you grasp the board in both hands you will find that you can rotate it quite easily about an axis that runs along its length through its center. However, if you try to rotate it through an axis that is perpendicular to its length you will have a much more difficult time. Why is this so? It is the same amount of material being rotated in both cases, so you might innocently believe that it should not matter how the board is rotated. Doubtless your experience tells you otherwise: it certainly does matter.
The difference is in how the material is distributed with respect to the axis of rotation. Material that is farther from the axis is harder to set in motion (or stop from moving) than material that is closer. That is what the factor of $$r\sin\theta$$ is telling us in the definition of angular momentum. When rotating a board about an axis along its length, most of the mass of the board is relatively close to the rotation axis, so $$r$$ is small. This means that the material does not acquire much angular momentum when we rotate the board about that axis. It is therefore easy to set it moving in this way. On the other hand, if we choose an axis perpendicular to the board’s length, then most of the mass is at quite a large distance from the axis. The value of $$r$$ is large, and we feel the difference when we try to rotate the board. This property has to do with rotational inertia. Rotational inertial is analogous to inertia (mass) in linear motion, except rotational inertia takes into account both the amount of material moving and how it is distributed in space. In either case, we must exert effort to change the momentum (linear or angular) of an object. Left alone, objects maintain constant linear or angular momentum.
Finally, if we are holding a steel beam rather than a piece of wood, it will be much harder to rotate no matter which axis we choose to rotate around. That is why the factor of $$m$$ is there - even if hidden within the $$p$$. Likewise, the $$v$$ hiding inside the $$p$$ tells us that it is harder to make a thing spin fast than to make it spin slow. It is also harder to stop a rapidly spinning object than a slowly spinning one. Angular momentum is thus a combination of how much material is moving, how fast it is moving and how the material is distributed in relation to the rotation axis. These are all things to keep in mind when thinking about how spinning objects move. And because galaxies are spinning objects, we need to keep these ideas in mind when trying to understand them, too.
Now we are ready to show why some galaxies become disks.
##### Finally, A Disk!
Now we are going to imagine a large protogalactic gas cloud that, through many small collisions, forms a galaxy. For convenience, we will take the point for measuring the angular momentum to coincide with the center of mass of the protogalactic cloud. From our discussion of linear momentum above, we know that this point will move through space at a constant speed and in a constant direction, so it makes a good reference. From the vantage point of the center of mass, the cloud appears to collapse down all around us, but it does not have any bulk motion through space because we are moving along with it.
To begin, imagine extending a radius vector, $$\vec{r}$$, running from the center of mass to each small cloudlet within the protogalactic cloud. These radius vectors will point out in all directions, somewhat like the spines on a sea urchin. Each will end at a cloudlet.
Then imagine the momentum vector of each cloudlet and its relation to the corresponding radius vector. In general, the momenta for the cloudlets will be distributed in random directions in space. Since the direction to each (the radius vector $$\vec{r}$$) is also random, when you use the right-hand-rule to determine the direction of the angular momentum for that cloudlet, you will find that they also are distributed randomly. So the angular momentum vectors will also resemble the spines on a sea urchin, but each will be perpendicular to the corresponding radius and momentum vectors that generate it. The total angular momentum is just the sum from all these individual clouds, as we saw for linear momentum.
$$\vec{L}=\vec{L}_1+\vec{L}_2+\vec{L}_3+\vec{L}_4+...+\vec{L}_{n-1}+\vec{L}_n$$
The image below gives a schematic representation of what the system of cloudlets might look like - though in general there would be many, many more cloudlets (yellow ellipsoidal objects) than are shown. The radius vectors, shown in white, each start at the center of mass (the blue dot labeled C.o.M.) and run to an individual cloudlet. The momentum vector of each cloudlet is depicted in green and points in the direction the cloudlet is moving. The angular momentum vectors are not shown, just to prevent the diagram becoming too messy. They would be distributed much like the radius vectors are, except each would be perpendicular to both the radius vector and the momentum vector that generates it, as we have stated already. Keep in mind that any of the vectors shown could be pointing partially into or out of the plane of the figure: galaxies are not in general two-dimensional systems.
From the argument above you might conclude that there is no net angular momentum because the individual vectors point in all directions, thus canceling each other out. In fact, that is usually not the case. Most of the angular momentum vectors do cancel, but after doing the complete sum over all the cloudlets there will be almost always some amount left over that does not cancel. This is the angular momentum of the system as a whole, and it determines the orientation of the galaxy. The system will form a disk rotating in the plane perpendicular to this net angular momentum vector. The direction of the rotation, clockwise or counterclockwise, can be deduced using the right-hand-rule, as we will show in a moment.
But why a disk? Because after all the collisions have occurred, and all the cloudlets have merged, you will find that any motions that were perpendicular to the plane of the disk (or in other words, any motions parallel to the total angular momentum vector) have canceled out. On the other hand, motions that are parallel to the plane of the disk (perpendicular to the net angular momentum vector) will not have canceled. This preferred orientation in space, an oriented disk, is simply the result of conservation of angular momentum.
In the images below of the bicycle wheel and the galaxy, the rotation is counterclockwise in the perspectives shown. We can use the right-hand-rule, just as we did before, to understand the rotation. In this case, we use a different but related version of the rule. Curl the fingers of your right hand so that the fingers align with the direction of rotation of the disk. The method is shown in the inset box next to the bicycle wheel. With the fingers of the right hand curled in the direction of the rotation, the extended thumb points in the direction of the angular momentum. For the bicycle wheel this is off to the left, as shown. For the galaxy it is diagonally upward to the right and slightly out of the page - remember, we are assuming the galaxy rotates counterclockwise, which this particular galaxy actually does for our orientation.
The important takeaway is that the direction of the angular momentum vector uniquely determines the plane and direction of rotation. Its length is related to the amount of angular momentum of the disk, which is a combination of how fast the disk rotates, how much material is rotating and how the material is distributed within the disk. So no matter how complicated the internal motions of the protogalactic cloud were in the beginning, by the time everything has collided and settled down, the motion will reflect the initial and unchanging angular momentum of the system. The system will be a flat disk rotating around an axis aligned with the total angular momentum vector.
Galaxy Image Credit: Hubble Heritage Team (AURA/STScI/NASA/ESA)
And why are some galaxies giant balls of stars instead of disks? It has to be due to those galaxies forming stars very early in their history, before they had a chance to collapse to any great extent. Once stars form, collisions no longer happen because stars are so tiny compared to the distances between them. Even for a very large galaxy it is unlikely for two stars ever to collide with each other. Without collisions to exchange energy and momentum within the system, it cannot collapse to a disk. Instead, it remains more or less in its original extended state, as shown in the image below of an elliptical galaxy (Credit: ESA/Hubble & NASA, Judy Schmidt and J. Blakeslee (Dominion Astrophysical Observatory)).
It is difficult to get a grip on this when you consider that galaxies contain hundreds of billions of stars. Surely there must be collisions among the at least some of the time. Scaling things down to a more comprehensible size can help.
Imagine that we could shrink the Milky Way down such that the sun became the size of a grapefruit. If we placed that grapefruit in the middle of the Golden Gate Bridge in San Francisco, then the next nearest star could be represented by a grapefruit placed in the middle of the Verrazano Narrows Bridge in New York City. In between there would be no other grapefruits (stars) at all. That is how empty galaxies are. And that is why stars within them never collide. Even when galaxies collide, the stars within them do not. The gas does, but the stars pass right through, affected only by the changing gravitational field during the collision. In fact, it is probably better to refer to these galaxies as interacting, not colliding. An example of such an interaction is shown in the image below. These interactions require hundreds of millions of years to complete, or even billions of years. So the images we make of them are just a snapshot, like taking a still photograph of a ballet dancer in the middle of a jump. The fast shutter speed of the camera freezes the motion and gives the impression that the dancer is suspended in the air. In fact, we are seeing only a transient state. The dancer soon descends back to the floor; for interacting galaxies, even a shutter speed of a million years might seem instantaneous. Their interactions go on for hundreds or thousands of times longer.
(Credit: ESA/Hubble & NASA, A. Adamo et al.)
So the next obvious question is why some galaxies form stars early, and thus remain extended and spheroidal in shape, while others take a long time to form stars, and thus end up as disks. We don’t know the answer to that question. Galaxy formation is a complicated process, certainly more complicated than the simplified picture we have considered here. While the general arguments above are bound to be mostly correct, there is certainly more to the story. For instance, we know that galaxies undergo a continuous process of evolution and growth, with large galaxies subsuming smaller galaxies, or merging together to form humongous galaxies. In truth, galaxies are still forming. Whatever the detailed story, in all these ongoing formation processes, conservation of energy, linear momentum and angular momentum are foundational in determining how galaxies look today and how they will change over time.
Back
© 2020 StarWerk / Kevin McLin. All images © 2020 Kevin McLin unless otherwise noted. | 2020-09-30 01:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7662189602851868, "perplexity": 246.78522605031526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00088.warc.gz"} |
https://socratic.org/questions/how-do-you-write-an-equation-a-in-slope-intercept-form-and-b-in-standard-form-fo-2 | # How do you write an equation (a) in slope intercept form and (b) in standard form for the line passing through (1,8) and perpendicular to 2x+5y=1?
Dec 15, 2016
#### Explanation:
To make any line that is perpendicular to a line in standard form:
$a x + b y = c$
Swap "a" and "b" and, if "b" was negative then make it positive, otherwise, change the sign of "a". Therefore, a line that is perpendicular to:
$2 x + 5 y = 1$
Will have the form:
$5 x - 2 y = c$
To make it pass through the point $\left(1 , 8\right)$, substitute the point into the above and then solve for c:
$5 \left(1\right) - 2 \left(8\right) = c$
$c = - 11$
The standard form is
$5 x - 2 y = - 11$
The slope intercept form is
$y = \frac{5}{2} x + \frac{11}{2}$ | 2021-11-29 19:13:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130978941917419, "perplexity": 481.0422203467836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00575.warc.gz"} |
https://plainmath.net/other/52741-determine-the-moment-of-inertia-for-theshaded-area-about-the | Marenonigt
2022-01-14
Determine the moment of inertia for the shaded area about the x axıs
enlacamig
Expert
Write the equation of the shaded area.
${y}^{2}=x$
$x={y}^{2}$
Consider a rectangular differential element with respect to the y-axis with a thickness dy and intersects the boundary at (x, y).
Express the area of the differential element parallel to the x-axis.
$dA=xdy$
Substitute ${y}^{2}$ for x.
$dA={y}^{2}dy$
Calculate the moment of inertia for the shaded area about the x-axis.${I}_{x}={\int }_{A}{y}^{2}dA$
Substitute ${y}^{2}dy$
sonorous9n
Expert
at the end, should be 1/5 instead of 1/7
alenahelenash
Expert
Yes, it should be $\frac{{y}^{5}}{5}$ | 2023-01-29 11:58:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7335032224655151, "perplexity": 706.3355580122492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00879.warc.gz"} |
https://fancunwei95.github.io/stats556/Lecture05/ | # Lecture 05 (Feb 02, 2022)
Since $\omega_1$ and $\omega_2$ is a conjugate pair ($x_1, x_2$ is a conjugate pair and thus their inverse). The stochastic cycle of the AR(2) is then
$k = \frac{2\pi}{\cos^{-1}{\frac{\phi_1}{2\sqrt{-\phi_2}}}}$
The complex angle $\theta$ controls the cycle length because
$(1-\omega B) x_t = 0 \quad \Rightarrow x_t = ae^{i\theta} x_{t-1}$
Thus, $2\pi/\theta$ is approximately the periodic length.
### Asymptotics of AR(p) models
$x_t = \phi_0 + \phi_1 x_{t-1} + \dots. + \phi_p x_{t-p} + \epsilon_t$ where $\epsilon_t$ iid mean-zero with $\text{Var}(\epsilon_t) = \sigma^2$.
Assume $\phi_0=0$ estimate the coefficients, the vecotr $\phi = (\phi_1, \phi_2,\dots,\phi_p)^T$. By the least squares, let $M_t = (x_{t-1}, \dots, x_{t-p})$ design matrix. Write $x_t = M_t \phi + \epsilon_t$.
Thus,
$\hat{\phi} = \arg\min_{\phi} \sum_{t=p+1}^T (x_t - M_t\phi)^2$
Take derivative, we find
\begin{aligned} \hat{\phi} &= \left(\sum_{t=p+1}^T M_t^{T}M_t \right)^{-1}\left(\sum_{t=p+1}^T M_t^T X_t \right)\\ &= \left(\sum_{t=p+1}^T M_t^T M_t \right)^{-1} \left(\sum_{t=p+1}^T M_t^T(\phi_1x_{t-1} + \dots \phi_p x_{t-p} + \epsilon_t) \right) \\ &= \phi + \left(\sum_{t=p+1}^T M_t^T M_t\right)^{-1} \left(\sum_{t=p+1}^T M_t^T \epsilon_t \right) \end{aligned}
provide that $\sum_{t=p+1}^T M_t^T M_t$ positive definite.
Write
$\sqrt{T}(\hat{\phi} - \phi) = \left(\frac{1}{T}\sum_{t=p+1}^T M_t^T M_t \right)^{-1} \left(\frac{1}{\sqrt{T}} \sum_{t=p+1}^T M_t^T \epsilon_t \right)$
The numerator becomes
$\frac{1}{T} \sum_{t=p+1}^T M_t^T M_t = \frac{1}{T} \sum_{t=p+1}^T \begin{pmatrix}x_{t-1} \\ x_{t-2}\\ \vdots \\ x_{t-p} \end{pmatrix} \begin{pmatrix}x_{t-1} & x_{t-2} &\dots & x_{t-p} \end{pmatrix}$
which converges
$\Gamma = \begin{pmatrix} \mathbb{E}x_{t-1}^2 &\dots &\mathbb{E}x_{t-1}x_{t-p} \\ \vdots & \ddots & \vdots \\ \mathbb{E}x_{t-1}x_{t-p} & \dots & \mathbb{E}x_{t-p}^2 \end{pmatrix}$
Claim, $\frac{1}{\sqrt{T}}\sum_{t=p+1}^T M_t^T\epsilon_t \rightarrow \mathcal{N}(0,\sigma^2\Gamma)$ because $M_t^T \epsilon_t$ is a martingale difference sequence. That is if $\mathcal{F}_t$ is the filtration $(\epsilon_1,\dots, \epsilon_t)$ then
$\mathbb{E}[M_t^T\epsilon_t | \mathcal{F}_{t-1}] = 0$
which is the martingale. Also, since $\epsilon_t$ are independnet, the variance is the sum of each term and the variance of each term is $\frac{1}{T}M_t^TM_t\sigma^2$ and the sum will converge to the above sum.
So
$\sqrt{T}(\hat{\phi} - \phi) \rightarrow \Gamma^{-1} \mathcal{N}(0,\sigma^2\Gamma) = \mathcal{N}(0,\sigma^2\Gamma^{-1})$
Updated: | 2023-03-31 00:58:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999990463256836, "perplexity": 2624.4647819696706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00502.warc.gz"} |
http://crypto.stackexchange.com/tags/challenge-response/hot?filter=year | # Tag Info
5
The intend appears to be that vector is secret and the main key; and version (that, we are told, is secret) is an extension of that key (or variant selector). 1) I could say that ResetCode is a MAC of quote with key vector and version. 2) Never met this particular one. Anyone with common sense should laugh at it as snake oil if it pretends to be ...
3
Whilst your altered vector value will influence the results, this extension of your script demonstrates that typically obtaining a single quote/reset pair is enough to reveal the secret version: def find_versions(quote,reset): """ Brute force search for version keys that produce reset from quote """ versions = [] for version in range(256): ...
2
The Wikipedia article points out a good reason for using a random challenge value: preventing replay attacks. If the hash was always the same (as the hash of the symmetric key would be), then having listened in on one challenge-response cycle, a malicious listener could pass further handshake tests.
1
Reading the original paper, I figure out the question. This voting scheme employed the well-known undeniable signature scheme, proposed by Chaum and Van Antwerpen in 1989 (or Chaum 1990 or Chaum and Van Antwerpen 1991). KeyGen: The RA is a signer and has a public key $X = g^x$ and a secret key $x$ Sign: For a message $m \in \mathbb{G} = \mathbb{Z}_p$, the ...
1
Unfortunately, you are probably not going to be able to fill in the missing details, unless you have a great deal of crypto experience (which it sounds like you don't have). You could start by reading about zero-knowledge proofs. There's a lot of information on that subject available. You will need to know it before you can progress. It sounds like you ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2013-12-05 01:10:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2709958553314209, "perplexity": 1522.225868444077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037903/warc/CC-MAIN-20131204131717-00005-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://nbviewer.ipython.org/github/tfolkman/deep-learning-with-python-notebooks/blob/master/4.4-overfitting-and-underfitting.ipynb | In [1]:
import keras
keras.__version__
Using TensorFlow backend.
Out[1]:
'2.0.8'
# Overfitting and underfitting¶
This notebook contains the code samples found in Chapter 3, Section 6 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
In all the examples we saw in the previous chapter -- movie review sentiment prediction, topic classification, and house price regression -- we could notice that the performance of our model on the held-out validation data would always peak after a few epochs and would then start degrading, i.e. our model would quickly start to overfit to the training data. Overfitting happens in every single machine learning problem. Learning how to deal with overfitting is essential to mastering machine learning.
The fundamental issue in machine learning is the tension between optimization and generalization. "Optimization" refers to the process of adjusting a model to get the best performance possible on the training data (the "learning" in "machine learning"), while "generalization" refers to how well the trained model would perform on data it has never seen before. The goal of the game is to get good generalization, of course, but you do not control generalization; you can only adjust the model based on its training data.
At the beginning of training, optimization and generalization are correlated: the lower your loss on training data, the lower your loss on test data. While this is happening, your model is said to be under-fit: there is still progress to be made; the network hasn't yet modeled all relevant patterns in the training data. But after a certain number of iterations on the training data, generalization stops improving, validation metrics stall then start degrading: the model is then starting to over-fit, i.e. is it starting to learn patterns that are specific to the training data but that are misleading or irrelevant when it comes to new data.
To prevent a model from learning misleading or irrelevant patterns found in the training data, the best solution is of course to get more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to modulate the quantity of information that your model is allowed to store, or to add constraints on what information it is allowed to store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
The processing of fighting overfitting in this way is called regularization. Let's review some of the most common regularization techniques, and let's apply them in practice to improve our movie classification model from the previous chapter.
Note: in this notebook we will be using the IMDB test set as our validation set. It doesn't matter in this context.
Let's prepare the data using the code from Chapter 3, Section 5:
In [2]:
from keras.datasets import imdb
import numpy as np
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
# Fighting overfitting¶
## Reducing the network's size¶
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power. For instance, a model with 500,000 binary parameters could easily be made to learn the class of every digits in the MNIST training set: we would only need 10 binary parameters for each of the 50,000 digits. Such a model would be useless for classifying new digit samples. Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn this mapping as easily, and thus, in order to minimize its loss, it will have to resort to learning compressed representations that have predictive power regarding the targets -- precisely the type of representations that we are interested in. At the same time, keep in mind that you should be using models that have enough parameters that they won't be underfitting: your model shouldn't be starved for memorization resources. There is a compromise to be found between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine what the right number of layers is, or what the right size for each layer is. You will have to evaluate an array of different architectures (on your validation set, not on your test set, of course) in order to find the right model size for your data. The general workflow to find an appropriate model size is to start with relatively few layers and parameters, and start increasing the size of the layers or adding new layers until you see diminishing returns with regard to the validation loss.
Let's try this on our movie review classification network. Our original network was as such:
In [4]:
from keras import models
from keras import layers
original_model = models.Sequential()
original_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
Now let's try to replace it with this smaller network:
In [5]:
smaller_model = models.Sequential()
smaller_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
Here's a comparison of the validation losses of the original network and the smaller network. The dots are the validation loss values of the smaller network, and the crosses are the initial network (remember: a lower validation loss signals a better model).
In [6]:
original_hist = original_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 3s - loss: 0.4594 - acc: 0.8188 - val_loss: 0.3429 - val_acc: 0.8812
Epoch 2/20
25000/25000 [==============================] - 2s - loss: 0.2659 - acc: 0.9075 - val_loss: 0.2873 - val_acc: 0.8905
Epoch 3/20
25000/25000 [==============================] - 2s - loss: 0.2060 - acc: 0.9276 - val_loss: 0.2827 - val_acc: 0.8879
Epoch 4/20
25000/25000 [==============================] - 2s - loss: 0.1697 - acc: 0.9401 - val_loss: 0.2921 - val_acc: 0.8851
Epoch 5/20
25000/25000 [==============================] - 2s - loss: 0.1495 - acc: 0.9470 - val_loss: 0.3100 - val_acc: 0.8812
Epoch 6/20
25000/25000 [==============================] - 2s - loss: 0.1283 - acc: 0.9572 - val_loss: 0.3336 - val_acc: 0.8748
Epoch 7/20
25000/25000 [==============================] - 2s - loss: 0.1121 - acc: 0.9624 - val_loss: 0.3987 - val_acc: 0.8593
Epoch 8/20
25000/25000 [==============================] - 2s - loss: 0.0994 - acc: 0.9670 - val_loss: 0.3788 - val_acc: 0.8702
Epoch 9/20
25000/25000 [==============================] - 2s - loss: 0.0889 - acc: 0.9716 - val_loss: 0.4242 - val_acc: 0.8603
Epoch 10/20
25000/25000 [==============================] - 2s - loss: 0.0782 - acc: 0.9757 - val_loss: 0.4256 - val_acc: 0.8653
Epoch 11/20
25000/25000 [==============================] - 2s - loss: 0.0691 - acc: 0.9792 - val_loss: 0.4515 - val_acc: 0.8638
Epoch 12/20
25000/25000 [==============================] - 2s - loss: 0.0603 - acc: 0.9820 - val_loss: 0.5102 - val_acc: 0.8610
Epoch 13/20
25000/25000 [==============================] - 2s - loss: 0.0518 - acc: 0.9851 - val_loss: 0.5281 - val_acc: 0.8587
Epoch 14/20
25000/25000 [==============================] - 2s - loss: 0.0446 - acc: 0.9873 - val_loss: 0.5441 - val_acc: 0.8589
Epoch 15/20
25000/25000 [==============================] - 2s - loss: 0.0367 - acc: 0.9903 - val_loss: 0.5777 - val_acc: 0.8574
Epoch 16/20
25000/25000 [==============================] - 2s - loss: 0.0313 - acc: 0.9922 - val_loss: 0.6377 - val_acc: 0.8555
Epoch 17/20
25000/25000 [==============================] - 2s - loss: 0.0247 - acc: 0.9941 - val_loss: 0.7269 - val_acc: 0.8501
Epoch 18/20
25000/25000 [==============================] - 2s - loss: 0.0203 - acc: 0.9956 - val_loss: 0.6920 - val_acc: 0.8516
Epoch 19/20
25000/25000 [==============================] - 2s - loss: 0.0156 - acc: 0.9970 - val_loss: 0.7689 - val_acc: 0.8425
Epoch 20/20
25000/25000 [==============================] - 2s - loss: 0.0144 - acc: 0.9966 - val_loss: 0.7694 - val_acc: 0.8487
In [7]:
smaller_model_hist = smaller_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 2s - loss: 0.5737 - acc: 0.8049 - val_loss: 0.4826 - val_acc: 0.8616
Epoch 2/20
25000/25000 [==============================] - 2s - loss: 0.3973 - acc: 0.8866 - val_loss: 0.3699 - val_acc: 0.8776
Epoch 3/20
25000/25000 [==============================] - 2s - loss: 0.2985 - acc: 0.9054 - val_loss: 0.3140 - val_acc: 0.8860
Epoch 4/20
25000/25000 [==============================] - 2s - loss: 0.2428 - acc: 0.9189 - val_loss: 0.2913 - val_acc: 0.8870
Epoch 5/20
25000/25000 [==============================] - 2s - loss: 0.2085 - acc: 0.9290 - val_loss: 0.2809 - val_acc: 0.8897
Epoch 6/20
25000/25000 [==============================] - 2s - loss: 0.1849 - acc: 0.9360 - val_loss: 0.2772 - val_acc: 0.8899
Epoch 7/20
25000/25000 [==============================] - 2s - loss: 0.1666 - acc: 0.9430 - val_loss: 0.2835 - val_acc: 0.8863
Epoch 8/20
25000/25000 [==============================] - 2s - loss: 0.1515 - acc: 0.9487 - val_loss: 0.2909 - val_acc: 0.8850
Epoch 9/20
25000/25000 [==============================] - 2s - loss: 0.1388 - acc: 0.9526 - val_loss: 0.2984 - val_acc: 0.8842
Epoch 10/20
25000/25000 [==============================] - 2s - loss: 0.1285 - acc: 0.9569 - val_loss: 0.3102 - val_acc: 0.8818
Epoch 11/20
25000/25000 [==============================] - 2s - loss: 0.1194 - acc: 0.9599 - val_loss: 0.3219 - val_acc: 0.8794
Epoch 12/20
25000/25000 [==============================] - 2s - loss: 0.1105 - acc: 0.9648 - val_loss: 0.3379 - val_acc: 0.8774
Epoch 13/20
25000/25000 [==============================] - 2s - loss: 0.1035 - acc: 0.9674 - val_loss: 0.3532 - val_acc: 0.8730
Epoch 14/20
25000/25000 [==============================] - 2s - loss: 0.0963 - acc: 0.9688 - val_loss: 0.3651 - val_acc: 0.8731
Epoch 15/20
25000/25000 [==============================] - 2s - loss: 0.0895 - acc: 0.9724 - val_loss: 0.3858 - val_acc: 0.8703
Epoch 16/20
25000/25000 [==============================] - 2s - loss: 0.0838 - acc: 0.9734 - val_loss: 0.4157 - val_acc: 0.8654
Epoch 17/20
25000/25000 [==============================] - 2s - loss: 0.0785 - acc: 0.9764 - val_loss: 0.4214 - val_acc: 0.8677
Epoch 18/20
25000/25000 [==============================] - 2s - loss: 0.0733 - acc: 0.9784 - val_loss: 0.4390 - val_acc: 0.8644
Epoch 19/20
25000/25000 [==============================] - 2s - loss: 0.0685 - acc: 0.9796 - val_loss: 0.4539 - val_acc: 0.8638
Epoch 20/20
25000/25000 [==============================] - 2s - loss: 0.0638 - acc: 0.9822 - val_loss: 0.4744 - val_acc: 0.8617
In [8]:
epochs = range(1, 21)
original_val_loss = original_hist.history['val_loss']
smaller_model_val_loss = smaller_model_hist.history['val_loss']
In [9]:
import matplotlib.pyplot as plt
# b+ is for "blue cross"
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
# "bo" is for "blue dot"
plt.plot(epochs, smaller_model_val_loss, 'bo', label='Smaller model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
As you can see, the smaller network starts overfitting later than the reference one (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
Now, for kicks, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
In [11]:
bigger_model = models.Sequential()
bigger_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
In [12]:
bigger_model_hist = bigger_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 3s - loss: 0.4539 - acc: 0.8011 - val_loss: 0.4150 - val_acc: 0.8229
Epoch 2/20
25000/25000 [==============================] - 3s - loss: 0.2148 - acc: 0.9151 - val_loss: 0.2742 - val_acc: 0.8901
Epoch 3/20
25000/25000 [==============================] - 3s - loss: 0.1217 - acc: 0.9544 - val_loss: 0.5442 - val_acc: 0.7975
Epoch 4/20
25000/25000 [==============================] - 3s - loss: 0.0552 - acc: 0.9835 - val_loss: 0.4316 - val_acc: 0.8842
Epoch 5/20
25000/25000 [==============================] - 3s - loss: 0.0662 - acc: 0.9888 - val_loss: 0.5098 - val_acc: 0.8822
Epoch 6/20
25000/25000 [==============================] - 3s - loss: 0.0017 - acc: 0.9998 - val_loss: 0.6867 - val_acc: 0.8811
Epoch 7/20
25000/25000 [==============================] - 3s - loss: 0.1019 - acc: 0.9882 - val_loss: 0.6737 - val_acc: 0.8800
Epoch 8/20
25000/25000 [==============================] - 3s - loss: 0.0735 - acc: 0.9896 - val_loss: 0.6185 - val_acc: 0.8772
Epoch 9/20
25000/25000 [==============================] - 3s - loss: 3.4759e-04 - acc: 1.0000 - val_loss: 0.7328 - val_acc: 0.8818
Epoch 10/20
25000/25000 [==============================] - 3s - loss: 0.0504 - acc: 0.9912 - val_loss: 0.7092 - val_acc: 0.8791
Epoch 11/20
25000/25000 [==============================] - 3s - loss: 0.0589 - acc: 0.9919 - val_loss: 0.6831 - val_acc: 0.8785
Epoch 12/20
25000/25000 [==============================] - 3s - loss: 1.9067e-04 - acc: 1.0000 - val_loss: 0.8005 - val_acc: 0.8784
Epoch 13/20
25000/25000 [==============================] - 3s - loss: 0.0623 - acc: 0.9916 - val_loss: 0.7540 - val_acc: 0.8740
Epoch 14/20
25000/25000 [==============================] - 3s - loss: 0.0274 - acc: 0.9954 - val_loss: 0.7806 - val_acc: 0.8670
Epoch 15/20
25000/25000 [==============================] - 3s - loss: 0.0011 - acc: 0.9998 - val_loss: 0.8107 - val_acc: 0.8783
Epoch 16/20
25000/25000 [==============================] - 3s - loss: 0.0445 - acc: 0.9943 - val_loss: 0.8394 - val_acc: 0.8702
Epoch 17/20
25000/25000 [==============================] - 3s - loss: 0.0268 - acc: 0.9959 - val_loss: 0.7708 - val_acc: 0.8745
Epoch 18/20
25000/25000 [==============================] - 3s - loss: 7.7057e-04 - acc: 1.0000 - val_loss: 0.8885 - val_acc: 0.8738
Epoch 19/20
25000/25000 [==============================] - 3s - loss: 0.0297 - acc: 0.9962 - val_loss: 0.8419 - val_acc: 0.8728
Epoch 20/20
25000/25000 [==============================] - 3s - loss: 0.0018 - acc: 0.9998 - val_loss: 0.8896 - val_acc: 0.8682
Here's how the bigger network fares compared to the reference one. The dots are the validation loss values of the bigger network, and the crosses are the initial network.
In [26]:
bigger_model_val_loss = bigger_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_val_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
The bigger network starts overfitting almost right away, after just one epoch, and overfits much more severely. Its validation loss is also more noisy.
Meanwhile, here are the training losses for our two networks:
In [28]:
original_train_loss = original_hist.history['loss']
bigger_model_train_loss = bigger_model_hist.history['loss']
plt.plot(epochs, original_train_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_train_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
plt.legend()
plt.show()
As you can see, the bigger network gets its training loss near zero very quickly. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss).
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to only take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
• L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
• L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
In Keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization to our movie review classification network:
In [17]:
from keras import regularizers
l2_model = models.Sequential()
activation='relu', input_shape=(10000,)))
activation='relu'))
In [18]:
l2_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.
Here's the impact of our L2 regularization penalty:
In [19]:
l2_model_hist = l2_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 3s - loss: 0.4880 - acc: 0.8218 - val_loss: 0.3820 - val_acc: 0.8798
Epoch 2/20
25000/25000 [==============================] - 2s - loss: 0.3162 - acc: 0.9068 - val_loss: 0.3353 - val_acc: 0.8896
Epoch 3/20
25000/25000 [==============================] - 2s - loss: 0.2742 - acc: 0.9185 - val_loss: 0.3306 - val_acc: 0.8898
Epoch 4/20
25000/25000 [==============================] - 2s - loss: 0.2489 - acc: 0.9288 - val_loss: 0.3363 - val_acc: 0.8866
Epoch 5/20
25000/25000 [==============================] - 2s - loss: 0.2420 - acc: 0.9318 - val_loss: 0.3492 - val_acc: 0.8820
Epoch 6/20
25000/25000 [==============================] - 2s - loss: 0.2322 - acc: 0.9359 - val_loss: 0.3567 - val_acc: 0.8788
Epoch 7/20
25000/25000 [==============================] - 2s - loss: 0.2254 - acc: 0.9385 - val_loss: 0.3632 - val_acc: 0.8787
Epoch 8/20
25000/25000 [==============================] - 2s - loss: 0.2219 - acc: 0.9380 - val_loss: 0.3630 - val_acc: 0.8794
Epoch 9/20
25000/25000 [==============================] - 2s - loss: 0.2162 - acc: 0.9430 - val_loss: 0.3704 - val_acc: 0.8763
Epoch 10/20
25000/25000 [==============================] - 2s - loss: 0.2144 - acc: 0.9428 - val_loss: 0.3876 - val_acc: 0.8727
Epoch 11/20
25000/25000 [==============================] - 2s - loss: 0.2091 - acc: 0.9439 - val_loss: 0.3883 - val_acc: 0.8724
Epoch 12/20
25000/25000 [==============================] - 2s - loss: 0.2061 - acc: 0.9455 - val_loss: 0.3870 - val_acc: 0.8740
Epoch 13/20
25000/25000 [==============================] - 2s - loss: 0.2069 - acc: 0.9445 - val_loss: 0.4073 - val_acc: 0.8714
Epoch 14/20
25000/25000 [==============================] - 2s - loss: 0.2028 - acc: 0.9475 - val_loss: 0.3976 - val_acc: 0.8714
Epoch 15/20
25000/25000 [==============================] - 2s - loss: 0.1998 - acc: 0.9472 - val_loss: 0.4362 - val_acc: 0.8670
Epoch 16/20
25000/25000 [==============================] - 2s - loss: 0.2019 - acc: 0.9462 - val_loss: 0.4088 - val_acc: 0.8711
Epoch 17/20
25000/25000 [==============================] - 2s - loss: 0.1953 - acc: 0.9495 - val_loss: 0.4185 - val_acc: 0.8698
Epoch 18/20
25000/25000 [==============================] - 2s - loss: 0.1945 - acc: 0.9508 - val_loss: 0.4371 - val_acc: 0.8674
Epoch 19/20
25000/25000 [==============================] - 2s - loss: 0.1934 - acc: 0.9486 - val_loss: 0.4136 - val_acc: 0.8699
Epoch 20/20
25000/25000 [==============================] - 2s - loss: 0.1924 - acc: 0.9504 - val_loss: 0.4200 - val_acc: 0.8704
In [30]:
l2_model_val_loss = l2_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, l2_model_val_loss, 'bo', label='L2-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
As you can see, the model with L2 regularization (dots) has become much more resistant to overfitting than the reference model (crosses), even though both models have the same number of parameters.
As alternatives to L2 regularization, you could use one of the following Keras weight regularizers:
In [ ]:
from keras import regularizers
# L1 regularization
regularizers.l1(0.001)
# L1 and L2 regularization at the same time
regularizers.l1_l2(l1=0.001, l2=0.001)
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
Consider a Numpy matrix containing the output of a layer, layer_output, of shape (batch_size, features). At training time, we would be zero-ing out at random a fraction of the values in the matrix:
In [ ]:
# At training time: we drop out 50% of the units in the output
layer_output *= np.randint(0, high=2, size=layer_output.shape)
At test time, we would be scaling the output down by the dropout rate. Here we scale by 0.5 (because we were previous dropping half the units):
In [ ]:
# At test time:
layer_output *= 0.5
Note that this process can be implemented by doing both operations at training time and leaving the output unchanged at test time, which is often the way it is implemented in practice:
In [ ]:
# At training time:
layer_output *= np.randint(0, high=2, size=layer_output.shape)
# Note that we are scaling *up* rather scaling *down* in this case
layer_output /= 0.5
This technique may seem strange and arbitrary. Why would this help reduce overfitting? Geoff Hinton has said that he was inspired, among other things, by a fraud prevention mechanism used by banks -- in his own words: "I went to my bank. The tellers kept changing and I asked one of them why. He said he didn’t know but they got moved around a lot. I figured it must be because it would require cooperation between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each example would prevent conspiracies and thus reduce overfitting".
The core idea is that introducing noise in the output values of a layer can break up happenstance patterns that are not significant (what Hinton refers to as "conspiracies"), which the network would start memorizing if no noise was present.
In Keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before it, e.g.:
In [ ]:
model.add(layers.Dropout(0.5))
Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
In [22]:
dpt_model = models.Sequential()
dpt_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
In [23]:
dpt_model_hist = dpt_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
Train on 25000 samples, validate on 25000 samples
Epoch 1/20
25000/25000 [==============================] - 3s - loss: 0.6035 - acc: 0.6678 - val_loss: 0.4704 - val_acc: 0.8651
Epoch 2/20
25000/25000 [==============================] - 2s - loss: 0.4622 - acc: 0.8002 - val_loss: 0.3612 - val_acc: 0.8724
Epoch 3/20
25000/25000 [==============================] - 2s - loss: 0.3731 - acc: 0.8553 - val_loss: 0.2960 - val_acc: 0.8904
Epoch 4/20
25000/25000 [==============================] - 2s - loss: 0.3162 - acc: 0.8855 - val_loss: 0.2772 - val_acc: 0.8917
Epoch 5/20
25000/25000 [==============================] - 2s - loss: 0.2762 - acc: 0.9033 - val_loss: 0.2803 - val_acc: 0.8889
Epoch 6/20
25000/25000 [==============================] - 2s - loss: 0.2454 - acc: 0.9172 - val_loss: 0.2823 - val_acc: 0.8892
Epoch 7/20
25000/25000 [==============================] - 2s - loss: 0.2178 - acc: 0.9281 - val_loss: 0.2982 - val_acc: 0.8877
Epoch 8/20
25000/25000 [==============================] - 2s - loss: 0.1994 - acc: 0.9351 - val_loss: 0.3101 - val_acc: 0.8875
Epoch 9/20
25000/25000 [==============================] - 2s - loss: 0.1832 - acc: 0.9400 - val_loss: 0.3318 - val_acc: 0.8860
Epoch 10/20
25000/25000 [==============================] - 2s - loss: 0.1692 - acc: 0.9434 - val_loss: 0.3534 - val_acc: 0.8841
Epoch 11/20
25000/25000 [==============================] - 2s - loss: 0.1590 - acc: 0.9483 - val_loss: 0.3689 - val_acc: 0.8830
Epoch 12/20
25000/25000 [==============================] - 2s - loss: 0.1499 - acc: 0.9496 - val_loss: 0.4107 - val_acc: 0.8776
Epoch 13/20
25000/25000 [==============================] - 2s - loss: 0.1405 - acc: 0.9539 - val_loss: 0.4114 - val_acc: 0.8782
Epoch 14/20
25000/25000 [==============================] - 2s - loss: 0.1333 - acc: 0.9562 - val_loss: 0.4549 - val_acc: 0.8771
Epoch 15/20
25000/25000 [==============================] - 2s - loss: 0.1267 - acc: 0.9572 - val_loss: 0.4579 - val_acc: 0.8800
Epoch 16/20
25000/25000 [==============================] - 2s - loss: 0.1225 - acc: 0.9580 - val_loss: 0.4843 - val_acc: 0.8772
Epoch 17/20
25000/25000 [==============================] - 2s - loss: 0.1233 - acc: 0.9590 - val_loss: 0.4783 - val_acc: 0.8761
Epoch 18/20
25000/25000 [==============================] - 2s - loss: 0.1212 - acc: 0.9601 - val_loss: 0.5051 - val_acc: 0.8740
Epoch 19/20
25000/25000 [==============================] - 2s - loss: 0.1153 - acc: 0.9618 - val_loss: 0.5451 - val_acc: 0.8747
Epoch 20/20
25000/25000 [==============================] - 2s - loss: 0.1155 - acc: 0.9621 - val_loss: 0.5358 - val_acc: 0.8738
Let's plot the results:
In [32]:
dpt_model_val_loss = dpt_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, dpt_model_val_loss, 'bo', label='Dropout-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
Again, a clear improvement over the reference network.
To recap: here the most common ways to prevent overfitting in neural networks:
• Getting more training data.
• Reducing the capacity of the network. | 2022-01-21 03:00:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7740226984024048, "perplexity": 7566.797548596212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00437.warc.gz"} |
https://www.dcode.fr/boolean-dual | Search for a tool
Boolean Dual
Tool to calculate the dual of a Boolean logical expression. The dual being a complementary expression inverting addition and multiplication as well as 0 and 1.
Results
Boolean Dual -
Tag(s) : Symbolic Computation, Electronics
Share
dCode and more
dCode is free and its tools are a valuable help in games, maths, geocaching, puzzles and problems to solve every day!
A suggestion ? a feedback ? a bug ? an idea ? Write to dCode!
Please, check our community Discord for help requests!
Thanks to your feedback and relevant comments, dCode has developed the best 'Boolean Dual' tool, so feel free to write! Thank you!
# Boolean Dual
## Boolean Dual Calculator
Tool to calculate the dual of a Boolean logical expression. The dual being a complementary expression inverting addition and multiplication as well as 0 and 1.
### How to calculate the dual of a boolean equation?
The dual of a Boolean or of a Boolean expression is obtained by applying 2 operations: replacing/interchanging the logical ORs by logical ANDs and vice versa and replacing/interchanging the logical 0s by logical 1s and vice versa.
Example: The dual of a+b is a.b and conversely the dual of a.b is a+b (duality principle)
### How to note the dual of a boolean equation?
The dual of a boolean function $F$ is sometimes denoted by $Fˊ$ (not to be confused with the complement or NOT function) or $F ^ d$.
Likewise 0 and 1 are dual, true and false are duals, and are dual.
### What is the duality principle?
Every Boolean expression has a dual, the Boolean Duality principle means that every theorem or any computation has a dual equivalent.
By proving something in Boolean algebra, its dual is also proved.
Example: x+1=1 has for dual x.0=0
## Source code
dCode retains ownership of the online 'Boolean Dual' tool source code. Except explicit open source licence (indicated CC / Creative Commons / free), any algorithm, applet or snippet (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt / encrypt, decipher / cipher, decode / encode, translate) written in any informatic language (PHP, Java, C#, Python, Javascript, Matlab, etc.) no data, script or API access will be for free, same for Boolean Dual download for offline use on PC, tablet, iPhone or Android !
## Need Help ?
Please, check our community Discord for help requests! | 2021-01-19 21:26:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39275673031806946, "perplexity": 4283.247113414398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00768.warc.gz"} |
http://forums.pctex.com/viewtopic.php?t=16&sid=f98da77be31caf97037960246971bd33 | PCTeX Talk
Discussions on TeX, LaTeX, fonts, and typesetting
Author Message
TomK
Joined: 09 Feb 2006
Posts: 16
Location: Ljubljana, Slovenia
Posted: Fri Feb 10, 2006 1:25 am Post subject: LucidaBright kerning of uc letters Capitalized words typeset in LucidaBright font using LaTeX's lucimatx and amsmath packages exhibit visually irregular intra-letter spacing. E.g., in the word {\large MATHEMATICS} letters are very tightly spaced (much more so than in CM fonts) except for noticeable gaps between A and T; the effect is as if the word was written on a typewriter with a sticky carriage. This elicits the question if the kerning data were properly implemented. In response, the author of lucimatx package offered the following dubious explanation: "Kerning of upper-case letters is always crucial. There is no font in the world, which would yield satisfactory results with _any_arbitrary_word, regardless of how elaborated its kerning data are. The Lucida text fonts are somehow special: They were designed to go without any kerning data at all." On the other hand, in his "Travels in TeX Land: Using the Lucids Fonts" David Walden claims, on p. 2, that Y&Y created "the necessary TeX font metrics (.tfm) for all Lucida fonts, including the fussy details TeX needs for math typesetting." Since these statements are contradictory, I wonder who is correct.
Joined: 06 Oct 2005
Posts: 84
Location: San Francisco, CA
Posted: Tue Feb 14, 2006 4:00 pm Post subject: Lucida UC kerning
A comment from the font designer:
Quote: Concerning kerning of uppercase letters, it is true that the Lucida Bright capitals are fit rather tightly. This goes back to the origins of the design for magazine typography (Scientific American). For ALL CAPITAL setting, my recommendation would be to add a little space between the letters, which used to be standard good typographic practice, and then tighten up only the combinations that need it, like, say, AV LA TA and so on.
WaS
Joined: 07 Feb 2006
Posts: 27
Location: Erlangen, Germany
Posted: Mon Feb 20, 2006 1:30 pm Post subject: Re: LucidaBright kerning of uc letters
TomK wrote: Capitalized words typeset in LucidaBright font using LaTeX's lucimatx and amsmath packages exhibit visually irregular intra-letter spacing. E.g., in the word {\large MATHEMATICS} [...] This elicits the question if the kerning data were properly implemented. In response, the author of lucimatx package offered the following dubious explanation: "Kerning of upper-case letters is always crucial. There is no font in the world, which would yield satisfactory results with _any_arbitrary_word, regardless of how elaborated its kerning data are. The Lucida text fonts are somehow special: They were designed to go without any kerning data at all."
Using the default kerning data (which happens to be none in Lucida),
the word MATHEMATICS looks indeed ugly when typeset in Lucida Bright,
as compared with Computer Modern. However, the result from using CM
isn't the optimum, either.
Typesetting capitals needs almost always some manual intervention, which
depends on the particular circumstances, so I'd say that Lucida does in the
average
not perform worse than most other font familiies.
TomK wrote: On the other hand, in his "Travels in TeX Land: Using the Lucids Fonts" David Walden claims, on p. 2, that Y&Y created "the necessary TeX font metrics (.tfm) for all Lucida fonts, including the fussy details TeX needs for math typesetting." Since these statements are contradictory, I wonder who is correct.
D. Walden's quotation does not contradict mine. The tfm files for the Lucida fonts
do include all the information required by TeX, in particular for mathematical
typesetting. Kerning data, however, are not TeX-specific. The deficiency seen
above would affect non-TeX applications as well. Yes, you may blame
B&H for not having provided a few obvious kerning pairs between capitals, but
even if they'd exist, they would -- in the general case -- not eliminate the need for
further manual corrections, anyway. And creating these data subsequently would
require too much effort, as compared with the benefits.
I hope I was able to make things a bit clearer.
Walter
TomK
Joined: 09 Feb 2006
Posts: 16
Location: Ljubljana, Slovenia
Posted: Wed Feb 22, 2006 12:31 am Post subject: In a private email, WaS suggested to me to have a look at TeX/LaTeX 'soul' package (found at CTAN's tex/macros/latex/contrib/soul); the document soul.pdf is an elaborate description of the kerning problem, and its 'soul' solution.
Display posts from previous: All Posts1 Day7 Days2 Weeks1 Month3 Months6 Months1 Year Oldest FirstNewest First
All times are GMT - 7 Hours Page 1 of 1
Jump to: Select a forum Support----------------PCTeX 6PCTeX v6 Beta Fonts----------------MathTime Pro II New SymbolsMathTime Pro IILucida FontsMathTime Pro II BetaMathTime Pro TUG----------------PracTeX Production
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot vote in polls in this forum | 2018-05-23 07:19:41 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9310985803604126, "perplexity": 10027.57696619762}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00492.warc.gz"} |
http://xn--1-2fa.fr/ | Video
## Kinovea 0.8.25
I’m happy to announce the general availability of Kinovea 0.8.25.
This article describes some of the changes in version 0.8.25 over version 0.8.24.
This release focuses on usability and polishing of existing features, and introduces one new feature in the Capture module.
## 1. General
Starting with version 0.8.25 a native x64 build is provided. There are now 4 download options. The zip files are the portable versions and will run self-contained in the extraction directory. The exe files are the installer versions.
The minimum requirements have not changed and Kinovea still runs under all Windows versions between Windows XP and Windows 10.
The interface is now translated to Arabic thanks to Dr. Mansour Attaallah from the Faculty of Physical Education, Alexandria University – Egypt.
## 2. File explorer
#### Thumbnails details
The details overlaid on the thumbnails have been extended and made configurable. The framerate and creation time have been added to the fields that can be displayed, the framerate is displayed by default. Right-click the empty space in the explorer to bring the thumbnails context menu and choose the fields you would like to be shown.
## 3. Playback module
The video now updates immediately when moving the playback cursor. This behavior was previously only activated when the working zone was entirely loaded in memory. It is now enabled by default. The experience should be largely improved but if you are on a less powerful system and navigation is problematic, the behavior of the cursor can be reverted from Preferences > Playback > General > “Update image during time cursor movement”.
#### Video framerate
The internal framerate of the video can be customized from the bottom part of the dialog in Video > Configure video timing. This setting changes the “default” framerate of the video by overriding what is written in the file. This is a different concept than slow motion. What the setting does is redefine the nominal speed of the video, the 100%. This is useful when a video has a wrong framerate embedded in it which can happen sometimes. In general use you would not use this setting very often but it can save an odd file. Note that this setting is also not the same as the Capture framerate that can be set from the same configuration box.
## 4. Annotation tools
#### Named objects
All drawing tool instances (angles, arrows, markers, chronometers, etc.) now have a custom “Name” property. This makes it easier to match drawings with their value when exporting data to spreadsheet. Regarding spreadsheet export, all lines and point markers are now exported to the spreadsheet, whether or not they have the “Display measure” option active in Kinovea.
#### Custom length unit
A new custom length unit can be used to cover use-cases that are not natively supported by Kinovea. By default Kinovea natively supports Millimeters, Centimeters, Meters, Inches, Feet and Yards. The extra option can be used to define a new unit such as Micrometers or Kilometers depending on the scale of the video being analyzed, or any unit specific to your field. The default value for this option is “Percentage (%)”. The percentage unit would make sense when analyzing dimensions of objects purely relatively to one reference object. The mapping between video pixels and real life dimensions in the custom unit is defined by a calibration line, or a calibration grid for non-orthogonal planes. Any line or grid can be used as the calibration object.
The unit is defined in Preferences > Playback > Units > Custom length unit. It can then be used in any line or grid during calibration.
#### Default tracking parameters
A default tracking profile can be defined from Preferences > Drawings > Tracking. This profile will be applied by default to newly added tracks and trackable custom tools like the bikefit tool or the goniometer. The parameters can be expressed in percentage of the image size or in actual pixels. Note that in the case of tracks, the tracking profile can also be modified on a per-object basis after addition. This is not currently possible for other objects.
## 5. Capture module
#### File naming automation
The file naming engine has been rewritten from scratch to support a variety of automation scenarios that were not previously well supported. The complete path of captured files is configured from Preferences > Capture > Image naming and Preferences > Capture > video naming.
A complete path is constructed by the concatenation of three top-level values: a root directory, a sub directory and the file name. It is possible to define a different value for these three top-level variables for the left and right screens and for images and videos. The sub directory can stay empty if you do not need this level of customization. Defining root directories on different physical drives for the left and right screens can improve recording performances by parallelizing the writing.
The sub directory and the file name can contain “context variables” that are automatically replaced just in time when saving the file. These variables start with a % sign followed by a keyword. In addition to date and time components you can use the camera alias, the configured framerate and the received framerate in the file name.
The complete list of context variable and the corresponding keyword can be found by clicking the “%” button next to the text boxes.
A few examples:
Root: "C:\Users\joan\Documents" Sub directory: "Kinovea\%year\%year%month\%year%month%day" File: "%year%month%day-%hour%minute%second"
Result: “C:\Users\joan\Documents\Kinovea\2016\201608\20160815\20160815-141127.jpg”
Root: "D:\videos\training\joan" Sub directory: File: "squash - %camalias - %camfps"
Result: “D:\videos\training\joan\squash – Logitech HD Pro Webcam C920 – 30,00.mp4”
If the file name component does not contain any variable, Kinovea will try to find a number in it and automatically increment it in preparation for the next video so as not to disrupt the flow during multi-attempts recording sessions.
#### Capture mosaic
The capture mosaic is a new feature introduced in Kinovea 0.8.25. It uses the buffer of images supporting the delay feature as a source of images and display several images from this buffer simultaneously on the screen. The result is a collection of video streams coming from the same camera but slightly shifted in time or running at different framerates. The capture mosaic can be configured by clicking the mosaic button in the capture screen:
Modes:
1. The single view mode corresponds to the usual capture mode: a single video stream is presented, shifted in time by the value of the delay slider.
2. The multiple views mode will split the video stream and present the action shifted in time a bit further for each stream. For example if the delay buffer can contain 100 images (this depends on the image size and the memory options) and the mosaic is configured to show 4 images, then it will show:
• the real time image;
• a second image from 33 frames ago;
• another one from 66 frames ago;
• and a fourth one from 100 frames ago.
Each quadrant will continue to update and show its own delayed stream. This can be helpful to get several opportunities to review a fast action.
3. The slow motion mode will split the video stream and present the action in slow motion. Each stream runs at the same speed factor. In order to provide continuous slow motion the streams have to periodically catch up with real time. Having several streams allows you to get continuous slow motion in real time.
4. The time freeze mode will split the video stream and show several still images taken from the buffer. The images are static and the entire collection will synchronize at once, providing a new frozen view of the motion.
## 6. Feedback
Please post your feedback, bug reports, usability issues, feature suggestions, etc. on the forum in the following thread.
VR
## Work in progress – Light field rendering in VR
This is a work in progress update related to my second Light field render engine. The first one was described last october in the video Implementing a Light Field Renderer. It was based in part on Aaron Isaksen paper “Dynamically Reparameterized Light Fields”. The view synthesis was performed by reprojecting carefully selected quads from the source images (see video). It ran on the CPU and projected the result in a desktop window.
## Scope
For this second engine the scope of the project is the following.
#### 1. View synthesis on the GPU, display in VR
The implementation is in CUDA and outputs to the Oculus Rift (DK2 and runtime 0.8 in this version). While I could get away with 30 to 50 ms to render sub-HD images in the previous project, VR requires the total time to render both eyes be under 10 ms, and the resolution is much higher.
#### 2. Quality vs ray budget optimizations
The particular constraint I’m using to drive the project is to try to get the best quality out of a given budget in Megarays. I settled on a 100-Megaray budget for the experimental phase. This is medium size, for comparison the Lytro Illum physical Light field camera clocks 40 Megarays, while one of the example in my previous project was 900 Megarays. Gigaray-range light fields are often encountered for omnidirectional applications. Note that the examples in the video below are 300 Megarays, I still haven’t matched the quality I would like to reach for the 100 Mr budget.
#### 3. Close-up with limited depth range
I’m focusing on close-up subjects with shallow depth.
## Demo
There are a few important things that are not reflected in the video, some good some bad.
1. Resolution: The video shows the mirror window of the application which uses a much reduced resolution than the actual view projected in VR (see performances paragraph below).
2. Scale and depth: completely missing from the video, as always the case for VR, is the good sense of scale and physicality of the subject. The head is rendered in VR at the actual physical size of the head of the person, which is a bit unsettling at first, in a good way.
3. Uncanny valley: The quality is there (at respectable distance) but the fact that the person is completely still is awkward. Due to this, it looks more like a photograph with depth than an actual person being there. A short breathing and blinking animation loops would go a very long way.
4. Eye accomodation failure: the person’s head is well-locked in the VR world thanks to positional tracking, and the quality is beliveable. Due to this, there is an expectation of increased definition when moving closer to the face. The eyes tries to refocus progressively when getting closer, but the quality is capped from the source dataset. This gives a strange feeling that your eyes don’t work. I had not experienced this before.
## Dataset details
• 300 Megarays.
• Original model is 768K polygons, with 8K texture.
• Capture: path tracing at 1000 samples/pixels.
#### Crystal
• 300 Megarays.
• Capture: path tracing at 4000 samples/pixels. (And a specular depth of 48 because the crystal has many internal surfaces that the light must traverse and bounce on).
Both light fields were captured in Octane 3.0 alpha using a custom Lua script.
## Performances
The rendered view is 2364×2927 per eye. This is a pixel density of 2.0x on the Oculus DK2 with the eye relief I’m using. The images are then warped to the DK2 screen which is 1920×1080. It runs comfortably within the 13 ms (75 fps) budget on a GTX 980Ti.
Additionally the view is interpolated and blit to the low-resolution mirror window. This step happens just before posting the render to the Oculus compositor.
Obviously the most interesting thing about rendering light fields is that the render time is independent from the complexity of the scene. This is why I focus on examples that are hard or downright impossible to render in real time. The woman face is more than 750K polygons, uses a complex material and 8K textures. The crystal has particularly challenging light pathways and caustics (not really visible with the settings I used unfortunately) that can only be rendered realistically using ray tracing.
## Application
The application in the video is called Hypercapsule. It draws some parts from Capsule, the application from this post which renders omnistereo images in the Oculus DK2. The “hyper” comes from the 4D mindset used in core parts of the implementation.
I’m not going to publish this application at this time, it’s a step towards something larger.
VR
## CUDA-based omni-stereo image viewer for Oculus Rift
Project Capsule : a small viewer for omnidirectional stereoscopic images. The target device is the Oculus Rift, the image reprojection is done in CUDA. Download and discussion thread at Oculus forums.
## 1. Introduction
Capsule is a small viewer for omnidirectional stereoscopic images. The target device is the Oculus Rift, the image reprojection is done in CUDA.
This project was started to scratch a triple itch :
1. I wanted to experience the images created for the “Render the Metaverse” contest on my DK2 ;
2. I wanted to be able to very quickly check for stitching artifacts, binocular rivalry, depth and eye comfort in any omni-stereo image ;
3. I wanted to improve my CUDA and GPGPU programming skills.
With that in mind, I set myself up to build a small omni-stereo renderer in CUDA so that I could project these images on the Rift and learn a few things along the way.
The Render the Metaverse contest has been organized by OTOY this summer and has given birth to the most impressive collection of fully spherical stereoscopic images to date. Many of the works are excellent and leverage the ability to create photorealistic images of imaginary or impossible worlds. Impossible-yet-photorealistic is also what I loved to do back in my photo-remix days, it’s a really powerful strategy to awe the viewer.
Here is a screenshot of the desktop-side of the software.
Fig. 1. Capsule omni-stereo image viewer desktop window.
The Headset side of the software is just the full spherical images. There is no in-VR user interface whatsoever.
## 2. Project scope
I have voluntarily limited the scope of the project to be able to work everything out in a relatively short period.
1. Static vs Dynamic: The program is limited to static content. This is to focus on the peak quality content without having to deal with frame queues, buffering and other joys of video. Video is currently quite behind in terms of quality because the hardware and file transport levels aren’t ready for VR yet.
2. Stereoscopic vs monoscopic: Although monoscopic content is supported, there is no particular effort put into it. I think stereo is a fundamental part of the VR experience and is where I personally draw the line. Monoscopic 360° content can be very appealing and a VR headset is certainly the best way to experience it, but the added dimension of depth is what changes the game for me.
3. Spherical vs Reduced FOV: I think hemispherical content will definitely have a place in VR, especially for story telling. For this project however, I’m focusing on the fully immersive experience.
## 3. Oculus/OpenGL/CUDA interop
#### 3.1. Oculus/OpenGL
The interoperability between the Oculus SDK and OpenGL is described in the Oculus SDK documentation at Rendering Setup Outline and in the OculusRoomTinyGL sample.
The basic principle is that we ask the runtime to allocate a set of GL textures for each eye. During rendering we will cycle through the set, drawing into a different texture from one frame to the next. Note that the textures are created by the runtime, it’s not possible to provide our own texture id from textures we would have created elsewhere.
#### 3.2 OpenGL/CUDA
The interoperability between OpenGL textures and CUDA is described in the CUDA programming guide at 3.2.12.1. OpenGL Interoperability. The basic principle is that an OpenGL texture can be mapped into CUDA under the form of a CUDA Array and still be manipulated by both OpenGL and CUDA (not simultaneously). A CUDA Array is basically an abstraction level above either a CUDA Texture (for read only content) or a CUDA Surface (for read/write content).
Older graphics cards only support Texture and Surface references, and many tutorials use them. These need to be defined at compile time and make the code somewhat ugly and awkward. Texture and Surface objects are much more natural constructs to use. The relevant part of the programming guide is 3.2.11. Texture and Surface Memory. Surface objects are supported on adapters having Cuda Compute Capability 3.0 (GTX 600+, Kepler microarchitecture), which is still one entire generation of cards below the recommended specs for the Oculus Rift (GTX 970 – Maxwell microarchitecture), so this limitation is fine for the project.
Capsule is also using a CUDA Texture to store the actual image to be projected. There is no OpenGL interop going on here, it goes straight from the central memory to CUDA and is not accessed outside of CUDA code. For the eye buffer, since we need write-access, we must use a Surface object rather than a Texture object.
The complete path from an OpenGL texture to something we can draw onto from within a CUDA kernel is something like:
• Create a texture in OpenGL.
• Call cudaGraphicsGLRegisterImage to create the CUDA-side resource for the Texture.
• Call cudaGraphicsSubResourceGetMappedArray to map the resource to a CUDA Array.
• Call cudaCreateSurfaceObject to bind the CUDA array to a CUDA Surface object.
• Call surf2Dwrite to draw onto the Surface object and hence onto the OpenGL texture.
The resource must be unmapped so that the texture can be used by the OpenGL side.
A simple trick is to search for these functions and related functions on the whole tree of CUDA code samples.
The final interop plumbing arrangement from Oculus to CUDA:
Fig. 2. Oculus to CUDA interop plumbing.
## 4. CUDA Kernels
#### 4.1. Projection types
There are two CUDA kernels implemented. One for the equirectangular projection and one for the cubemap projection.
For the cubemap projection we need to use a set of conventions for face ordering and orientation. The de facto standard in VR is coming from the format used by Oculus in-house image viewer for the Samsung GearVR and is the following:
• The unfolded cube is stored as a long strip of faces.
• Faces are ordered as +X, -X, +Y, -Y, +Z, -Z.
• Top and Bottom faces are aligned to the Z axis.
• All faces are flipped horizontally.
• For stereo the right eye strip is stored after the left one, creating a 12-face long strip.
The choice of kernel to use is based on the aspect ratio of the image. A stereo equirectangular image has an aspect ratio of 1:1 for the Top-Bottom configuration and of 4:1 in the Left-Right configuration. A stereo cubemap image has an aspect ratio of 12:1. The monoscopic versions are respectively 2:1 and 6:1. If we decide not to support variations within these configurations, like Bottom-Top or other cube faces ordering, the projection can be automatically inferred from the aspect ratio of the image.
#### 4.2. Projections
The Oculus runtime provides the eye camera (off-axis) frustum as a FovPort structure. This is a set of 4 numbers representing the half-FOV in the up, down, left and right directions around the camera axis. Knowing the size of the buffer in pixels, we can compute the camera center and focal distance. Then, using these camera intrinsic parameters we can find the direction of rays starting at the camera projection center and passing through any pixel of the eye buffer. This represents the first two steps of the algorithms and could actually be pre-computed into a kind of normal map for the eye. The full approach is described below.
Equirectangular kernel
For each pixel location in the eye buffer:
1. Back project the pixel from 2D to 3D coordinates by converting it to homogenous coordinates.
2. Normalize the homogenous coordinates to get a direction vector.
3. Rotate the direction vector to account for headset orientation.
4. Convert the direction vector (equivalently, a point on the unit sphere) to spherical coordinates.
5. Convert the spherical coordinates to image coordinates.
6. Fetch the color at the computed image location (with bilinear interpolation and optional fading).
7. Write the final color at the pixel location.
Cubemap kernel
For each pixel location in the eye buffer:
1. Back project the pixel from 2D to 3D coordinates by converting it to homogenous coordinates.
2. Normalize the homogenous coordinates to get a direction vector.
3. Rotate the direction vector to account for headset orientation.
4. Find the largest component of the direction vector to find the face it is pointing to.
5. Project the direction vector onto the selected face.
6. Adapt the signs of the remaining two components for use as 2D coordinates on that specific face.
7. Shift the x coordinate to account for the face order within the entire cubemap.
8. Fetch the color at the computed image location (with bilinear interpolation and optional fading).
9. Write the final color at the pixel location.
Note that the cubemap reprojection only involves simple arithmetic and is slightly faster than the equirectangular one in my implementation.
## 5. Performances
#### 5.1. Perf/Quality threshold in VR
While in a traditional application a low framerate will make the experience less enjoyable, in VR it will make the user physically sick. Consequently, we must pull the performance-quality trade-off cursor to the performance side first. There is a threshold of performance we cannot slide past under. It’s only once the framerate and latency requirements are met that we can start considering image quality.
In addition to issues caused by poor performances, there is a whole bestiary of visual artifacts that are VR-specific and that regular applications don’t have to bother about. Judder, incorrect depth, head tracking latency, object distortion, etc. Each come with its particular cues but the important thing is that issues are hiding each other. You can only really understand what it feels to experience the distortion effect caused by the lack of positional tracking when you no longer have judder caused by poor framerate.
The talk “Elevating Your VR” by Tom Heath at Oculus Connect 1 in 2014 is still what I consider the best talk about VR-specific artifacts, how to become aware of them and how to fix them. It should be required viewing for anyone working in VR (1H05), slides only (PDF).
For Capsule, thanks to the image-based rendering approach, the required 75 fps are easily reached on modern GPUs. Unfortunately, distortion issues due to the lack of positional tracking cannot be fixed within the confines of omni-stereo images (Light fields will later save the day).
I found that the worst remaining offender was headlocking during image transitions. In a first test version I was loading the next image in the same thread as the rendering one. The display stopped refreshing during the few hundreds of milliseconds required to load the image from the disk to memory and then to the GPU. This caused a headlock: no matter where you turn your head, the entire picture is coming with it, it feels like the whole world is spinning around, and it immediately causes motion sickness.
#### 5.3. Profiling
The frame budget for the DK2 and CV1 are 13.3 ms and 11.1 ms respectively. The Oculus runtime and compositor will eat a part of that budget. My goal was to get under 5 ms per projection to fit both eyes inside 10 ms. CUDA has a useful profiler integrated with Visual Studio. It’s very easy to test various approaches in the Kernels and immediately check the impact with actual stats.
Of the two kernels, the equirectangular one is slightly slower than the cubemap one. This is mostly due to the trigonometry involved in going from a 3D location on the unit sphere to spherical coordinates. The cubemap inverse projection can entirely be solved with simple arithmetic.
After a few roundtrips of profiling and optimization, the performance was better than I expected, running largely inside the bugdet for the default sized eye buffers. I pushed the pixel density to 2x to explore peak quality and made it a user-controlled option. Despite the comment in the SDK code, the effect of 2x pixel density is sensible and produce much less aliasing crawlies when looking around. This is particularly welcome on the DK2 because we tend to constantly slightly move the head to minimize the screen door effect.
The performance is not influenced by the size of the input image (as long as it fits in the dedicated GPU RAM). Fig. 3. Shows a profiler run summary where the eye buffer is sized at 2364×2927 px, about 7 million pixels, corresponding to my HMD profile at 2.0 pixel density. The equirect kernel runs in 2.9 ms on a 8000×8000 px source, and the cubemap kernel runs in 1.9 ms on a 24000×2000 px source. This is on an Nvidia GTX 660.
Fig. 3. Profiler run summary for 2364×2927 eye buffer on an Nvidia GTX 660.
There is still room for experimentation. The kernels both start by computing the pixel’s normalized direction, prior to applying the head rotation. This never change and could be stored in a map. It would replace a few arithmetic operations by one memory fetch.
## 6. Future plans
There could be many avenues of improvement for this project. Supporting ambient audio, short animated sequences, ultra high resolution content, zooming, in-VR file explorer, or even implementing light field rendering right into it. However I wanted this project to be self-contained and it will likely continue this way, simply fulfilling its original purpose of quickly experiencing omni-stereo images on the DK2. The more ambitious features will be in more ambitious softwares.
VR
## Implementing a light field renderer
Demo and discussion about an experimental light field renderer I’ve been working on.
The main reference used is “Dynamically Reparameterized Light Fields
Telerobotics
The small T shapes protruding from the Wasp 110 carbon frame are quite handy for many purposes, including balancing the quadcopter center of gravity.
It’s not perfect because the two-point support given by the wires provides a bit of extra stability. It still gives valuable information as to whether the craft is tail or nose heavy.
The featured quadcopter is nothing fancy, it is a transplant of a Hubsan X4 flight controller board, motors and propellers to the Wasp 110 carbon fiber frame from SOSx.
The FPV camera, VTX and antenna is the Spektrum VA1100 bundle. I use a separate 1S LiPo to power the FPV subsystem.
AUW is 53.5 grams and motor to motor length is 110 mm.
The balancing rig is made out of MakerBeam parts.
Video
## Sub-frame accurate synchronization of two USB cameras
Following up on my experiments with rolling shutter calibration.
Here is a method to synchronize two off-the-shelf USB cameras with sub-frame accuracy.
Jumplist
1. Introduction
Off-the-shelf USB camera (Webcams) lack the hardware to synchronize the start of exposition of multiple cameras at once.
Without this hardware support, synchronization has traditionally be limited to frame-level synchronization, where a visual or audio signal is used to temporally align the videos in post-production.
This frame-level synchronization suffers from an error ranging between 0 and half the frame period. That is, for a camera pair running at 30 fps, we may have up to 16.6 ms of misalignment. This is problematic for moving scenes in stereoscopy and other multi-view applications.
A second characteristic of these cameras is the fact that they use a rolling shutter: a given frame is constructed by staggering the exposures of the sensor rows rather than exposing the entire scene at once.
We can combine these characteristics in a simple manner to estimate the synchronization error between a pair of cameras. Manual or automatic disconnection/reconnection of one of the cameras can then be performed until the pair is under an acceptable synchronization error level.
2. Principle
The principle is simple and a straightforward extension to my previous post about mesuring rolling shutter.
A short flash of light will be imaged by both cameras as a narrow band of illuminated rows. In the general case the band will not be located at the same vertical coordinate in both images. The difference in coordinate is the synchronization error.
If we have calibrated the rolling shutter and know the row-delay, we can directly infer the synchronization error by counting the number of rows of difference in the image of the flash between the two cameras.
3. Factors to consider
3.1 Exposure duration
The exposure duration will change the width of the band. Due to the way the rolling shutter is implemented in sensor drivers, the band will increase by the bottom while the top of the band will stay in place.
This is because the exposition of each row is determined by the stagerring of the readout signal (end of exposure), and when we increase the exposition these readout signals are kept attached to their temporal coordinate. It is the reset signals (start of exposure) that are moved earlier in time. Rows that are further down begin their exposition earlier than before and start to capture the flash.
I have not yet found a camera that would drive the row exposition differently.
Fig. 1. Altering the exposure duration without changing anything else makes the flash band grow by the bottom.
3.2 Frame level delay.
For some framerate values, a frame-level delay may be inserted between the end of the readout of the last row and the start of the reset of the first row of the next frame. This may be referred to as the horizontal blanking period in camera manufacturer data sheets.
When this is the case, there is a dark space in the frame timing diagram. The flash may land in this space and won’t be visible in one of the camera.
3.3 Interframe synchronization
This procedure is only concerned with intra-frame synchronization and does not provide any clue with regards to synchronizing the streams at the frame level. In other words, even if the start of the exposures of each images is perfectly aligned in time, the synchronization could still be off by an integer number of frames. The usual frame-level synchronization cues using video or audio signals is still required.
4. Experimental setup
I conducted a simple experiment with two Logitech C920 cameras.
Both cameras were set to 100µs exposure duration, and an arduino-based LED stroboscope was used to generate flashes at a given frequency.
After the synchronization procedure was performed, the exposure and gain were set back to suitable values with regards to ambiant light, without disconnecting the camera streams. The camera were then used to film a scene.
After each scene capture, the synchronization procedure was repeated to account for a small framerate drift introduced by the cameras.
A similar scene was filmed twice, the first time at less than one millisecod of synchronization error, and the second time at around 12 ms sync error.
The videos have been filmed with a parallel rig at 65mm interaxial (Fig. 2.). The camera where set in portrait mode to maximize vertical field of view, which means that the rolling shutter was scanning horizontally. The rolling shutter itself may also cause a small amount of spatial distortion in the scene.
Fig. 2. Dual C920 camera rig used.
One of the camera stream has been systematically shifted up by 0.54% to correct for a small vertical disparity at the rig level. No other geometric nor radiometric synchronization was performed.
5. Results
The following side by side animations presents the streams in crossview order (RL): the left camera image is on the right side and the right camera image is on the left. The effect induced by the missynchronization is subtle but can be experienced by “freeviewing” the stereo pairs. A tutorial on how to view this type of images stereoscopically can be found here. In the missynchronized case, the balls motion induce a rivalry between the left and right eye that can be felt as some sort of blinking.
The animations run at 3 fps.
Fig. 3. Juggling sequence 1 – synchronization error: less than one millisecond.
Fig. 4. Juggling sequence 2 – synchnorization error: around 10 milliseconds.
The following animations compares the synchronization artifacts on the balls in motion by overlaying the left and right images at 50% opacity. The horizontal disparity has been minimized at the plane of the face.
Fig. 5. Comparison of synchronization artifacts on objects in motion. Left: less than one millisecond of sync error, right: more than 10 ms sync error.
6. Future work
The method could be extended to any number of cameras. Manually reconnecting the camera stream is cumbersome though and an automated procedure could be developed to automatically restart the stream until the synchronization error is within a configurable value.
Another extension to this method is the synchronization of multiple cameras for super-framerate imaging. By staggering the expositions by a known amount, several cameras can be used to capture a high framerate video of the scene, provided we can correct geometric and radiometric disparities in the cameras.
Video
## Estimating a camera’s field of view
In many scenarios it is interesting to estimate the field of view that is covered by images produced by a given camera.
The field of view depends on several factors such as the lens, the physical size of the sensor or the selected image format.
The usual techniques involve measuring an object of known width placed at a known distance, and solving simple trigonometry. This however assume that the camera uses a rectilinear lens which is not necessarily the case.
A very simple technique to estimate the field of view without knowing any intrinsic parameters of the camera has been described by William Steptoe in the following article: AR-Rift: Aligning Tracking and Video Spaces.
The idea is to place the camera in such a position that radiating lines are seen as “vertical parallels” on the camera image, and the center line passes through the center of the lens.
You then read the FOV by looking at the last graduation at the edge of the image.
I generated a pattern to make the reading easier (fig. 1). You can download the full page PDF here:
Fig. 1. FOV estimation test pattern.
The pattern is a protractor with 0.5° granularity. The angle values start at 0 on the vertical center line and increase outward on each side. It sports a second set of graduations closer to the camera lens to help with the measurement.
If you use Kinovea to view the camera stream, you can enable the “test grid” tool to make sure the center vertical line is perfectly aligned with the center of the lens.
Here is an animation of the process in action.
Fig. 2. Estimating camera FOV in Kinovea using a specially crafted protractor pattern and grid overlay.
The camera should be as low as possible relatively to the pattern, but obviously not so low that you can’t see the graduations. The camera FOV is read by taking the last visible graduation on the side and multiplying it by two.
You should expect ±1° of error from the measurements.
Video
## Review of the ELP-USBFHD01M camera
I have not found a review of this camera anywhere so I’m going to attempt to fill that void.
This USB 2.0 camera module is sold by Ailipu Technology Inc. (Shenzhen) under the model name ELP-USBFHD01M. There are several variations on the product depending on the stock lens you get it with.
I got the camera for around 35€. Shipping was around 30€. An additional 20% tax was asked at delivery (depends on buyer’s country).
Summary
Pros
• 1280×720 @ 60 fps (MJPEG).
• Interchangeable M12 lens.
• Manual exposure.
• UVC compliant.
Cons
• Image sharpness.
• Poor dynamic range.
• No housing.
Fig.1 ELP-USBFHD01M camera module with the 140° lens.
Jump list
Installation
The camera is UVC compliant so there is no special installation procedure. It is instantly recognized by Windows (tested on Windows 7, 8.1 and Windows 10 Tech preview). There is no vendor-specific driver to install on top of the Microsoft-provided UVC driver.
The USB descriptors report the following: VID = 05A3, PID = 9230. However, a different camera module from Ailipu Tech (ELP8MP02G) reports a different vendor id so I’m not sure it is a legit id from USB Implementers Forum.
The camera name is reported as simply “HD USB Camera” and the manufacturer as “HD Camera Manufacturer”.
Hackability & Form factor
The module is 37.5mm × 37.5mm. It does not have any casing.
The back of the board has a 4-pin connector where the USB cable is plugged. The USB cable can be detached from the board and the connector wiring is printed on the board, which is pretty cool in case of a repair or modification. The shipped cable is 1 meter long as advertised.
Fig. 2. USB connector and wiring.
The board sports a standard M12 lens holder with 20mm hole distance. The exact lens holder model will depend on the lens.
The lens holder can easily be unscrewed and swapped with another one. The lens is tightened with a screw.
Having an interchangeable lens is really neat.
Fig. 3. Lens mount and focus screw.
Camera controls
The camera has many of the standard controls, including exposure, gain and sharpness.
Here are the DirectShow property pages for “Video Proc Amp” and “Camera Control” on the filter. Greyed out options are not available.
Note on white balance: it is not possible to reproduce the “Auto” mode with the manual slider. I have found that the best color is achieved when using the “Auto” mode, especially for whites. The manual mode gives a color cast to everything.
As it is usually the case, the exposure value will take precedence over the framerate when using long exposures. When this happens the framerate is limited to exactly 1/exposure duration.
Exposure values are not documented. Almost no camera manufacturer follow neither the UVC spec nor the DirectShow spec, and this camera is no exception.
Here is the mapping I found by inferring from 1/framerate on long exposures and comparing with a Logitech C920 for lower values:
-1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 640ms 320ms 160ms 80ms 40ms 20ms 10ms 5ms 2.5ms 1.250ms 650µs 312µs 150µs
Focus is done manually by screwing/unscrewing the lens inside its mount.
Stream formats
The camera can stream in MJPEG or YUY2 formats.
Each image size has a single framerate associated with it. In my experience the settings do not exactly match the actual frame frequency. I have noted in italics and in parenthesis the values I measured.
The following combinations are available:
• MJPEG : 320×240 @ 120.1 fps (100 fps), 640×480 @ 120.1 fps (100 fps), 800×600 @ 60 fps (60 fps), 1280×720 @ 60 fps (60 fps), 1280×1024 @ 31 fps (30 fps), 1920×1080 @ 31 fps (30 fps).
• YUY2 : 320×240 @ 31 fps, 640×480 @ 31 fps, 800×600 @ 21 fps, 1280×720 @ 9 fps, 1280×1024 @ 6 fps, 1920×1080 @ 6 fps.
Image quality
Image quality is not a strong point of this camera. Since a poor image quality may come from many factor as the lens, the sensor, the image processing chip or the JPEG encoder, some of which we have partial control on, I will attempt to find out the origin of each issue.
Lens : I tested with the 140° and 180° stock lenses, as well as with a Sunex 955A (rated 5 megapixels). I was not able to quite reproduce the image quality of the Logitech C920 or the Microsoft Lifecam Studio. Image quality is not horrible though, and on bright daylight it may be enough depending on the purpose.
Flares: Since the camera is not in any housing, the lens is protruding and may be more easily polluted by light rays coming from aside. Some sort of hood will be a good addition when strong light is coming at an angle.
JPEG encoding: The compression level is not known. Some JPEG artifacts may be visible in low light conditions.
Dynamic range: The dynamic range is poor and if the scene contains dark and bright areas simultaneously details will be lost.
Rolling shutter: I measured the rolling shutter frame scan time to be 17ms on the 1280×720 @ 60 fps and 36 ms on 1920×1080 @ 30 fps. Scan time at other resolutions are similarly mostly dependent on the framerate.
Sensor specs
The vendor reports that the sensor is an Omnivision OV2710. The spec sheet for it can be found here. Here are some highlights:
• Lens format: 1/2.7″.
• Pixel size: 3µm×3µm.
• Image area: 5856µm × 3276µm.
The “720p cropped” image format
The 1280×720 @ 60fps stream uses a crop of the full frame, not a down sampling of the full sensor as in other lower sizes. This is a characteristic of the OV2710 itself. The final field of view of the image will be reduced. Apply a factor of 2/3 to find the new field of view spanned by the cropped image. If the 1920×1080 image diagonally spans 140° for example, in the 720p crop it will be reduced to 93°.
The image circle created by a lens made for 1/3″ sensors will cover the full frame (1920×1080) image. When using 1280×720 @ 60fps the image will be better suited to a lens made for 1/4″ sensors.
The 180° stock lens provided with the camera seems to be a 1/4″ format. It is well suited for the 720p cropped stream, but displays hard vignetting on full frame.
Samples
These samples were captured with Kinovea and are direct dumps of the MJPEG frames.
Conclusion
In the end the USBFHD01M is a capable camera module. To my knowledge no other USB 2.0 camera provides 1280×720 @ 60 fps to this date.
It can be an interesting tool if you can live with or work around its shortcomings.
Video
## Measuring rolling shutter with a strobing LED
In a rolling shutter camera (most USB cameras), the image rows are exposed sequentially, from top to bottom. It follows that the various parts of the resulting video image do not exactly correspond to the same point in time. This is definitely a problem when using these videos for measuring object velocities or for camera synchronization. Basically the only bright side is that it makes propellers and rotating fans look funny.
Fig. 1. Rolling shutter artifacts when imaging a propeller.
But beside the visual distortion, can we quantify how severe the problem is? Let’s find out with a simple blinking LED.
Principle
The exposure duration is independent from the rolling shutter. Each row is properly exposed as configured, it’s just that the start of the exposure of a given row is slightly shifted in time relatively to the previous one. So if we flash a light for a very short time in an otherwise dark environment, only the rows that happen to be exposed at that time are going to capture it.
Fig. 2. Rolling shutter model with a flash of light happening during frame integration.
An Arduino can serve as a cheap stroboscope for our purpose. The principle is simple, we align the LED blinking frequency on the camera framerate so that exactly one flash happens for each captured frame. We limit the flash duration to reduce the stripe of rows catching it, and we measure the height of the stripe in pixels. Considering we know how long each row in the stripe has been collecting light and how long the flash lasted, we can compute the total time represented by the rows and derive a per row delay.
Setup
As we are going to count pixels on the resulting images we want the clearest possible differentiation between lit rows and unlit rows and we want the entire row to be lit. For this I’m using a large LED (10 mm) and a diffusion block. The diffuser is simply a piece of Styrofoam 2cm thick, with the LED head stuffed in about 5mm deep (works better than a ping pong ball).
All timing relevant code in the Arduino is done using microseconds (more about accuracy below).
Fig. 3. Experimental setup.
On the camera side, I manually focused to the closest possible, and set the exposure time to the minimum possible, which is 300µs on the Logitech C920. I used Kinovea to configure the camera.
Note that not all camera have a control for exposure duration. Furthermore the usual DirectShow property does not always map precisely to the firmware values. Logitech drivers expose a vendor-specific property with the adequate control of exposure.
The LED with the diffuser is crammed close to the camera lens to get as much light as possible from the flash.
First attempt
After configuring the camera to 800×600 @ 30fps, and the Arduino to blink the LED for 500µs every 33333µs, we get a nice Cylon stripe:
Fig. 4. Stripe of illuminated rows, with strobing frequency mismatch.
Interesting. A stripe corresponding to the few rows that were exposed during the LED flash… But slowly moving up. Either the camera is not really capturing frames at 30 fps like it advertises, or the Arduino clock is not ticking exactly 30 times per seconds, or a bit of both.
Arduino’s micros() has a granularity of 4µs but it is definitely lacking in accuracy. It suffers from drift and cannot be used if long term accuracy is required. For our purposes, the difference between, say, 30.00 fps and 29.97 fps is about 33µs which is well under the board capabilities. A Real Time Clock would be needed.
So our blinking interval is slightly off, but fortunately it does not really matter for our experiment as we are not trying to measure the camera frequency itself. As long as we can somehow tune to it and stabilize the stripe, we do not need to know its exact value.
The stripe is moving upward, meaning that for each frame, we fall a bit short, and the flash is happening at a lower time coordinate in the frame interval, illuminating higher rows.
At that point I added a potentiometer (plus code to average its noisy values) and linked it to the blinking rate, so that I could more easily fine tune to the actual camera framerate. Manually bisecting the values until the stripe is stable is also quite feasible in practice.
The stripe stabilized when I settled on 33376µs. It is a meaningless value in itself and we are not going to convert that back to a framerate.
Measurements
We should now have everything needed to compute the rolling shutter time shift.
Here are some captured frames with fixed camera exposures and varying LED flash durations. Each time we restart the camera the stripe will be at a different location, but it stays put.
Fig. 5. Capturing the LED flash. Fixed exposure of 300µs, varying flash duration 500µs, 1000µs, 2000µs.
A few measurements are summarized in the following table:
Table 1. Rolling shutter row delay measurements at various exposure and flash duration.
Row delay is given by $\frac{\text{exposure } + \text{ flash duration}}{\text{row count}}$. We get an average time shift of about 56.6µs per row. Considering the 800×600 image, that gives a full image swipe of 33981µs. In other words, a 34ms lag between the top and bottom rows.
Other works
Other methods have been designed to compute the rolling shutter without knowing the camera internals. Most notably a rough but simple method is to film a vertical pole while panning the camera left and right. This allows to compare the inter-frame and intra-frame displacement of the pole and retrieve the full image swipe time. If between top and bottom rows in a single frame the pole is displaced by half as many pixels than what it is displaced between the top rows in two adjacent frames, we know the vertical swipe time is half the frame interval. See more details here, several measurements are required to average out the imprecisions.
The 34ms value is probably consistent with the type of sensor quality we would expect to find in a webcam compared to a device dedicated to photography. For DSLR, values of 15 to 25ms are reported using the pole technique, up to more than 30ms as well for some devices.
Image size
The frame scan time may or may not change depending on the chosen resolution. It depends on the way image sizes lower than the sensor full frame are implemented in the camera.
Methods to create lower sized images include downscaling (capture the whole image and interpolate at the digital stage), pixel binning (combine the electrical response of two or four pixels into a single one), cropping (use only a region of the full frame corresponding to the wanted resolution). Pure downscaling will give the same frame scan time between image sizes. Cropping will change the scan time as there really are less pixels to read out.
A particular resolution may use a combination of methods. For example, a 800×600 image on a 1920×1080 sensor may first use cropping to retrieve a 4:3 window of 1440×1080, and then downscale this image by 1.8x to get 800×600.
To know if a resolution is downscaled or cropped you can check the horizontal and vertical field of view that this image displays and compare it to the field of view in the full frame image.
Conclusion
34ms of disparity within an image is definitely a lot when filming fast motion like a tennis serve or golf ball trajectory. A mitigation strategy is to orient the camera in such a way that the motion is parallel to sensor rows. For stereoscopy or multiple-camera vision applications, a sub-frame accurate synchronization method might be required (usually through a dedicated hardware cable). A global shutter device is of course always a better option. | 2016-12-06 19:47:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24408340454101562, "perplexity": 2260.9073148771486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541995.74/warc/CC-MAIN-20161202170901-00159-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://scipost.org/theses/64/ | We present McMule, a unified framework for the calculation of NLO and NNLO corrections to many processes in QED with massive fermions. This easily extendable program allows users to calculate an arbitrary observable for any of the processes implemented. These include various lepton decays as well as certain low-energy scattering experiments such as $e\mu \to e\mu$ and $\ell p\to \ell p$ that can be measured to high enough a precision to warrant QED corrections. As part of our discussion, we will present a pedagogical introduction to how these calculations are performed, focusing on technical aspects supplemented with examples. Our goal is to provide a useful introduction for those entering the field, covering all aspects relevant for the practitioner. | 2021-01-16 12:58:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.510817289352417, "perplexity": 512.3435088205251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506640.22/warc/CC-MAIN-20210116104719-20210116134719-00642.warc.gz"} |
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-3-sections-3-1-3-12-exercises-problems-by-topic-page-131/38e | ## Chemistry: A Molecular Approach (3rd Edition)
The charge on Hg must be 2+ for the compound to be charge-neutral with two Br- anions. The name for $HgBr_{2}$ is the name of the cation, mercury, followed by the charge of the cation in parentheses (II) and the base name of the anion, brom, with the ending - ide. The full name is mercury(II) bromide. | 2018-01-23 16:11:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.446643203496933, "perplexity": 4299.793078478532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891980.75/warc/CC-MAIN-20180123151545-20180123171545-00340.warc.gz"} |
http://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-2-section-2-1-functions-exercises-page-157/42 | ## Precalculus: Mathematics for Calculus, 7th Edition
$h(t) = t^2 + 5$ $h(-3) = (-3)^2 +5 = (9) +5 = 14$ $h(6) = (6)^2 + 5 = (36) + 5 = 41$ $h(6) - h (-3) = 41 - 14 = 27$ Net change is an increase by 27 | 2018-01-22 12:44:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4390725791454315, "perplexity": 4209.946277303316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891316.80/warc/CC-MAIN-20180122113633-20180122133633-00320.warc.gz"} |
https://www.physicsforums.com/threads/help-with-integration.160317/ | # Help with integration
1. Mar 11, 2007
### CaityAnn
1. The problem statement, all variables and given/known data
Evaluate the integral 4cos(x^2)/ (2((z)^(1/2))) dxdydz, limits for x 2y to 2, for y, 0 to 1 and for z 01, by changing the order of integration.
2. Relevant equations
Would I need to change the limits in order to accurately change the order of the integration? Im thinking not, I could just switch the dx and dy and Integrate. I guess thats what she is asking.
3. The attempt at a solution
My problem? I am having a heck of a time integrating this function. Please help me with getting started on integrating it.
2. Mar 11, 2007
### cristo
Staff Emeritus
Yes, you will need to change the limits, since the limits for x depend on y. Try drawing a diagram of the region that you are integrating over (a sketch of the region in the x-y plane will do, since z is constant). If you want to swap the order of integration from dxdy to dydx, then you need limits for y which are independent of x. This diagram should help you deduce such limits.
3. Mar 11, 2007
### mjsd
you sure it is $$\cos(x^2)$$ and not $$\cos^2 x$$?
if it is indeed $$\cos(x^2)$$ you get a Fresnel integral in x
Last edited: Mar 11, 2007
4. Mar 11, 2007
### CaityAnn
I am sure it is is x^2 in the parenthesis.
So when I solved for z I got z=(2cos(x^2))^(1/2).
Thats right, right?
I graph this and get a cos function parralel to the y axis.
Since my dx limits are 2y and 2, i can graph 2y and 2 on a 2 D graph and see how it would be on the 3D graph. my Dy limits are from 0 to 1.
So if I have to change my limits instead of just rearranging the function like someone here said I should have to do,.. then I will have to solve the x=2y and x=2 for y? giving dy=1/2x and y=2????????
5. Mar 11, 2007
### mjsd
firstly let me get this correct are you trying to evaluate
$$\displaystyle{\int_{0}^{1}\int_{0}^{1}\int_{2y}^{2} \frac{2\cos (x^2)}{\sqrt{z}}\;dx dy dz}$$
6. Mar 11, 2007
### CaityAnn
yes! The problem is, Evlaute that integral by changing the order of integration. Im wondering if I should just switch the dy , dy limits and integrate or if I do that will i need to change my limits. AND, on top of that, Im doubting my integration skills on this problem. *sigh* thanks for you help.
7. Mar 11, 2007
### mjsd
if you go direct in doing the x integral... you need to know what are Frensel integrals, but of course changing the order of integration will simplify things in this case... follow cristo's advice and sketch the region of integration (ignore z for the moment, do that last and just look at the x and y integrals)
8. Mar 11, 2007
### CaityAnn
yeah... i know.... if you look at my reply to cristo, I have a question about those x and y limits.
9. Mar 12, 2007
### mjsd
judging by this response, it appears that you don't know what cristo is talking about.....you change the integration limits by first working out what is the actual region R that you are integrating over... THEN using that knowledge to re-write the double integral where you can do the y bit first followed by the x part. so far, you have not even touched any part of the integrand.
for example:
$$\int_{0}^{\pi/2} \int_{0}^{x} f(x,y) \;dy\,dx$$ is equivalent to
$$\int_{0}^{\pi/2} \int_{y}^{\pi/2} f(x,y) \;dx\,dy$$
10. Mar 12, 2007
### HallsofIvy
Staff Emeritus
You are surely not intended to use "Fresnel" integrals. That's the point of changing the order of integration. If you integrate with respect to a different variable first you will introduce another "x" into the integral and can substitute for the "x2".
The problem is
$$\displaystyle{\int_{z=0}^{1}\int_{y=0}^{1}\int_{x= 2y}^{2 } \frac{2\cos (x^2)}{\sqrt{z}}\;dx dy dz}$$
Notice that I have added "z= ", "y= ", and "x= " to the limits of integration. I think that helps remember what you are doing.
I think we can ignore the "dz" and just swap x and y. y goes from 0 to 1 and, for each y, x goes from x= 2y to x= 2. I recommend you draw a picture of the region described by that: it is a triangle with vertices at (0,0), (2, 0), and (2, 1). Now, to reverse the order of integration, since "dx" will become the "outer integral" (still ignoring "dz") its limits must be constants. Look at your picture- what is the smallest value x takes on in that region? What is the largest value x takes on in that region? Those will be your limits of integration on the x integral. Now, for each x, what are the smallest and largest values y takes on? Imagine a vertical line crossing the region. What are the y values of the endpoints as functions of x? Those will be the limits of integration for the y- integral.
The first integral, with respect to y, is now very easy and, since the limits of integration now depend on x, will introduce an "x" into the integrand that
lets you substitute for x2 in the second integral. | 2017-03-28 21:53:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080832958221436, "perplexity": 583.2061551802541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189903.83/warc/CC-MAIN-20170322212949-00129-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/545132/set-right-border-with-multirow-and-width-of-columns-on-table?noredirect=1 | # Set right border with multirow and width of columns on table
the first thing to say is that I hate tables in latex when they are a little bit complicate. 🤦♂️ Then, I am trying to make a table with multirow and multicolumn. I found my questions answered several times, in
Missing right vertical line in a multirow in a table
Missing vertical lines in the latex table
Border with multirow and multicolumn
Text wrapping in multirow columns
Scale column of a table in percentage of \textwidth
Latex Multirow Table with specified column width
but, I don't know why, neither of the answers apparently work in my case. I have this table:
\begin{table}[ht]
\centering
\rowcolors{1}{white}{gray!30}
\begin{tabularx}{\linewidth}{|>{\hsize=.25\hsize}X>{\hsize=.25\hsize}X>{\hsize=.5\hsize}X|}
\hline
\multicolumn{2}{|l}{\textbf{Funzione}} & \textbf{Definizione} \\ \hline
\multicolumn{2}{|l}{\texttt{=SOMMA.SE(int; criterio; int\_somma)}} & Somma le celle di intervallo\_somma secondo un criterio assegnato ad un intervallo. \\ \hline
\multicolumn{2}{|l}{\texttt{=CONTA.SE(int; criterio)}} & Conta il numero di celle nell'intervallo che corrispondono al criterio dato. \\ \hline
\multicolumn{2}{|l}{\texttt{=MEDIA.SE(int; criterio; int\_media)}} & Determina la media delle celle di intervallo\_media secondo un criterio assegnato ad un intervallo. \\ \hline
\texttt{=SOMMA.PIÙ.SE} & \multirow{4}{0.25\linewidth}{\texttt{(int;} \\ \texttt{int\_criterio1; criterio1;} \\ \texttt{int\_criterio2; criterio2; ...)}} & \multirow{4}{0.5\linewidth}{Esegue la funzione per le celle di un sottoinsieme dell'intervallo specificate da un determinato insieme di criteri.} \\
\texttt{=MEDIA.PIÙ.SE} \\
\texttt{=MAX.PIÙ.SE} \\
\texttt{=MIN.PIÙ.SE} \\ \hline
\multicolumn{2}{|l}{\texttt{=SOMMA.SE(int; criterio; int\_somma)}} & Conta il numero di celle specificate da un determinato insieme di condizioni o criteri. \\ \hline
\end{tabularx}
\caption{}
\label{tab:formule:predefinite:logico-matematiche}
\end{table}
Resulting as
As you can see there's a lot to fix.
1. The first thing is the width of the columns: I set 25%, 25% and 50% but the third column is clearly less than 50%.
2. The first multirow "(int; int_criterio1[...]" is trimmed. Is there a way to enlarge the height to fit all the content?
3. On the second multirow "Esegue la funzione per le celle di..." it draws only the first row right border. I tried putting & & on the rows below, it makes the border, but the multirow is invalidated returning to one row
4. Side question, I noticed that the second multirow doesn't have the alignment justified. Is there a way to make it?
I compile with xelatex if it matters. Thanks to everyone who will help.
• Does "=MEDIA.PIÙ.SE" have to be in the same row as "int_criterio1;"? Is there a correlation between the enries in these two columns? May 19, 2020 at 18:24
• @leandriis Not exactly, "=SOMMA.PIÙ.SE", "=MEDIA.PIÙ.SE" and the two below are in four different rows. "(int; int_criterio1; criterio1; etc." is only in one row as you can see in the code but it must take the four rows. Those are the MS excel italian functions for sum, avg, max, min and on the other column the parameters to use May 19, 2020 at 21:02
• So basically, all four entries in the first column whare the same entry in the second column? May 19, 2020 at 21:07
• Take this example: I have one row with SUM(1,2,3), the second row AVG(1,2,3), the third MAX(1,2,3) and the fourth MIN(1,2,3). Instead of writing always (1,2,3) that (imagine) it's a long content, I can write only SUM, AVG, MAX and MIN and then I make a separate column where I can fit all the content of (1,2,3) (imagine always that it's long) May 19, 2020 at 21:17 | 2022-05-28 19:24:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930696487426758, "perplexity": 5390.628956593573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00294.warc.gz"} |
http://math.stackexchange.com/questions/85009/on-lipschitz-condition-and-absolute-continuity?answertab=oldest | # On Lipschitz condition and absolute continuity
A function $f(x)$ on $[0,1]$ is said to satisfy a Lipschitz condition if there exists a constant $M$, such that $$|f(x)-f(y)|\leqslant M|x-y| ~\forall~x,y\in[0,1].$$
I want to show the following: If $f$ is Lipschitz, then
(i) $f$ is absolutely continuous
(ii) $f$ is of bounded variation.
(iii) $f'(x)$ exists almost everywhere on $[0,1]$ and $f(x)=\int_0^x f'(t)\,\mathrm{d}t$, provided that $f(0)=0$.
Attempt:
(i) Let $\varepsilon \gt 0$. Choose $\delta=\varepsilon /M$. Then for any collection $\{[x_i,y_i]\}$ of non-overlapping intervals with total length $\sum |x_i-y_i|\lt \delta$, we have $$\sum|f(x_i)-f(y_i)|\lt M\sum|x_i-y_i|\lt \varepsilon.$$ Hence $f$ is absolutely continuous.
(ii) For this, can I simply use the fact that an absolutely continuous functions is of bounded variation or I would have to prove it formally?
(iii) I also know that if $f$ absolutely continuous $\implies$ $f$ is of bounded variation $\implies f'(x)$ exists a.e. and $$f(x)=f(0)+\int_0^x f'(t)\,\mathrm{d}t.$$ Thus, $f(x)=\int_0^x f'(t)\,\mathrm{d}t \text{ if } f(0)=0.$
Can I use these known facts to prove (ii) and (iii)?
I am also asked to find a function $h(x)$ on $[0,1]$ such that $$\tag{*}\int_0^1 h'(x)\text{d}x\lt h(1)-h(0).$$
This is what I did: Let $h$ be the Cantor function is monotone increasing of $[0,1]$. $h(0)=1,~h(1)=1$. $h'=0,~\forall~x$ not in the Cantor set. Then the inequality holds.
Are there other examples where (*) holds?
-
Yep - your (i) is good. Apparently, your (ii) and (iii) follow from that. – mixedmath Nov 23 '11 at 18:56
(i) is indeed fine. I would think that whether (ii) and (iii) are allowed to follow automatically would be up to your professor. If you proved that every absolutely continuous function is of bounded variation, then it would certainly seem fair game. I would still ask your professor to be sure, however. – JavaMan Nov 23 '11 at 18:57
As was pointed out in the comments, your argument for (i) is perfectly fine.
Concerning (ii), yes, that's true, you could apply that result. On the other hand, the direct proof is straightforward:
We want to prove that $$\tag{#} \sup{ \left\{ \sum_{k=1}^n |f(a_k) - f(a_{k-1})| \,:\, 0\leq a_0 \leq a_1 \leq \cdots \leq a_{n-1} \leq a_n \leq 1 \right\} } \lt \infty$$ where the supremum is taken over all finite increasing sequences in $[0,1]$. Finiteness follows from the Lipschitz condition $|f(x) - f(y)| \leq M\cdot|x-y|$ because for a finite increasing sequence $0\leq a_0 \leq a_1 \leq \cdots \leq a_{n-1} \leq a_n \leq 1$ we have $$\sum_{k=1}^n |f(a_k) - f(a_{k-1})| \leq \sum_{k=1}^n\phantom{|} M \cdot |a_k - a_{k-1}| \leq M$$ because $|a_k-a_{k-1}| = a_{k}-a_{k-1}$ and $\sum_{k=1}^n (a_k-a_{k-1}) = a_n -a_0 \leq 1 - 0 = 1$. This proves that the supremum in $(\#)$ is in fact bounded by $M \lt \infty$.
This seems a bit easier than making the detour via absolute continuity (you'll notice that I hardly did anything else than what you did in your proof for (i), and that's why I gave the fully detailed argument).
Compare this with the proof of the fact that absolute continuity of a function $f$ implies that it is of bounded variation (which isn't much more difficult, but somewhat more fiddly, I think).
Now for (iii) I don't think that it is intended that you do this by hand, as a direct and detailed proof of this involves a considerable amount of work. So what you do is almost certainly the intention of this exercise.
Be a little careful, though: you say
I also know that if $f$ absolutely continuous $\implies$ $f$ is f bounded variation $\implies f'(x)$ exists a.e. and $$f(x)=f(0)+\int_0^1 f'(x)\text{d}x.$$ Thus, $f(x)=\int_0^1 f'(x)\text{d}x~~~\text{if}~~f(0)=0.$
The second implication you write here is not true: bounded variation alone doesn't give that $f(x) = f(0) + \int_{0}^x f'(t)\,dt$ — note that there are two typos here as well:
1. the upper bound of the integral should be $x$ instead of $1$ and
2. your equality $f(x) = f(0) + \int_{0}^1 f'(x)\,dx$ involves $x$ with two different meanings: on the left hand side you have some point $x \in [0,1]$ and on the right hand side you have $x$ as a “dummy variable” for integration.
To see that the implication that bounded variation of $f$ doesn't imply that $f(x) = f(0) + \int_{0}^x f'(t)\,dt$, just notice that the Cantor function $h$ is monotone, hence of bounded variation, but $\int_{0}^x h(t)\,dt = 0$ for all $x \in [0,1]$.
However, I think that you meant to say that $f$ absolutely continuous implies $f(x) = f(0) + \int_{0}^x f(t)\,dt$ without interrupting by "$f$ has bounded variation", and then the rest of the argument here is fine too.
To get other (trivial examples) for your last question you can of course take any continuously differentiable (or even only absolutely continuous) function $g$ and add $h$ to it. Genuine examples not involving the Cantor function or simple variations of it are a bit cumbersome to come up with, so I think you solved that exercise satisfactorily, too.
When looking up something else, I recently found this nice and careful write-up by Noella Grady, which is about the present and many related matters.
-
Thanks very much for your response. For (iii), what I wanted to say was that If $f$ is absolutely continuous, the $f'$ exists a.e. and $f(x)=f(a)+\int_a^x f'(t)~\text{d}t$. – Colin Nov 23 '11 at 23:25
I was pretty sure that this was what you meant, that's why I didn't expand on what is correct. Just be careful with such formulations, because it is hard to tell such slips from genuine misconceptions (when correcting homework or an exam). – t.b. Nov 23 '11 at 23:29
Thanks also for pointing out the typos. I'm most grateful. – Colin Nov 23 '11 at 23:32
For the counterexample, since you are not requiring differentibility everywhere in $[0,1]$, you can always use a step function like: $$h(x):=\begin{cases} 0 &\text{, if } 0\leq x<1/2 \\ 1&\text{, if } 1/2\leq x\leq 1\; .\end{cases}$$ In fact function $h$ is differentiable a.e. with $h^\prime=0$ a.e., hence: $$0=\int_0^1 h^\prime (x)\ \text{d} x< h(1)-h(0)=1\; .$$
- | 2015-10-05 00:26:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428608417510986, "perplexity": 163.06710535587555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676381.33/warc/CC-MAIN-20151001215756-00226-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://www.jobilize.com/online/course/two-source-interference-interference-by-openstax?qcr=www.quizover.com | # Two source interference
Page 1 / 1
We examine interference from two coherent sources.
## Waves on a pond:
Think of when you drop a pebble into a pond, you will see circular waves eminate from the point where you dropped the pebble.
When you drop two pebbles side by side you will see a much more complicatedpattern:
Likewise with electromagnetic waves, you can get interesting interference phenomenawhen waves eminate from two point sources.
## Two point sources
Lets take a particular example of two point sources separated by a distance d. The waves emitted by point source are spherical and thus can be written $\stackrel{⃗}{E}=\frac{{\stackrel{⃗}{E}}_{0}}{r}{\mathrm{cos}}\left(kr-\omega t\right)$ To make the problem easier we will make the $k$ 's the same for the two sources. Also lets set the ${E}_{0}$ 's to be the same as well.
The the only difference in the waves will be the $r$ 's, that is ${\stackrel{⃗}{E}}_{1}=\frac{{\stackrel{⃗}{E}}_{0}}{{r}_{1}}{\mathrm{cos}}\left(k{r}_{1}-\omega t\right)$ ${\stackrel{⃗}{E}}_{2}=\frac{{\stackrel{⃗}{E}}_{0}}{{r}_{2}}{\mathrm{cos}}\left(k{r}_{2}-\omega t\right)$ Now there is a slightly subtle point here that is important to understand. In the denominator it is sufficient to say that ${r}_{1}\approx {r}_{2}$ and just call it $r$ . We assume that we are far enough away that the differences between ${r}_{1}$ and ${r}_{2}$ are too small to matter. However this is not true in the argument of the harmonic function. There, very small differences between ${r}_{1}$ and ${r}_{2}$ can have a big effect. So lets define ${r}_{1}={r}_{2}=R$ $\begin{array}{c}I\propto {⟨{\left(\frac{{E}_{0}}{R}{\mathrm{cos}}\left(k{r}_{1}-\omega t\right)+\frac{{E}_{0}}{R}{\mathrm{cos}}\left(k{r}_{2}-\omega t\right)\right)}^{2}⟩}_{T}\\ ={⟨\frac{{E}_{0}^{2}}{{R}^{2}}{{\mathrm{cos}}}^{2}\left(k{r}_{1}-\omega t\right)⟩}_{T}+{⟨\frac{{E}_{0}^{2}}{{R}^{2}}{{\mathrm{cos}}}^{2}\left(k{r}_{2}-\omega t\right)⟩}_{T}\\ \text{ }+{⟨2\frac{{E}_{0}^{2}}{{R}^{2}}{\mathrm{cos}}\left(k{r}_{1}-\omega t\right){\mathrm{cos}}\left(k{r}_{2}-\omega t\right)⟩}_{T}\\ =\frac{1}{2}\frac{{E}_{0}^{2}}{{R}^{2}}+\frac{1}{2}\frac{{E}_{0}^{2}}{{R}^{2}}+\\ \text{ }+{⟨2\frac{{E}_{0}^{2}}{{R}^{2}}{\mathrm{cos}}\left(k{r}_{1}-\omega t\right){\mathrm{cos}}\left(k{r}_{2}-\omega t\right)⟩}_{T}\end{array}$ Now to evaluate the final term we use ${\mathrm{cos}}\left(\theta -\phi \right)={\mathrm{cos}}\theta {\mathrm{cos}}\phi +{\mathrm{sin}}\theta {\mathrm{sin}}\phi$ and write So we have $\begin{array}{c}I\propto \frac{1}{2}\frac{{E}_{0}^{2}}{{R}^{2}}+\frac{1}{2}\frac{{E}_{0}^{2}}{{R}^{2}}+2\frac{{E}_{0}^{2}}{{R}^{2}}\frac{1}{2}{\mathrm{cos}}k\Delta r\\ =\frac{1}{{R}^{2}}\left({E}_{0}^{2}+{E}_{0}^{2}{\mathrm{cos}}k\Delta r\right)\\ =\frac{1}{{R}^{2}}{E}_{0}^{2}\left(1+{\mathrm{cos}}k\Delta r\right)\end{array}$
Clearly I will be a maximum when the cosine is = +1 $k\Delta r=2n\pi \text{ }n=0,1,2\dots$ $\frac{2\pi }{\lambda }\Delta r=2n\pi$ $\Delta r=n\lambda$ There will be a minimum when the cosine is = -1 $k\Delta r=n\pi \text{ }n=1,3,5\dots$ $\Delta r=\frac{n\lambda }{2}\text{ }n=1,3,5\dots$ So you get light and dark bands which are called interference fringes.To reiterate; we have two rays of light eminating from two point sources. Wehave looked at the combined wave at some point, a distance ${r}_{1}$ from the first source and a distance from the second source. In that case we find that the intensity is proportional to $\frac{1}{{R}^{2}}{E}_{0}^{2}\left(1+{\mathrm{cos}}k\Delta r\right)\text{.}$ To make things easier we can redefine ${E}_{0}$ to be the amplitude of the waves at the point under consideration, that is $I={\epsilon }_{0}c{E}_{0}^{2}\left(1+{\mathrm{cos}}k\Delta r\right)\text{.}$ Or we can say ${I}_{0}=\epsilon c{E}_{0}^{2}/2$ and write $I=2{I}_{0}\left(1+{\mathrm{cos}}k\Delta r\right)\text{.}$
Say we place a screen a distance S away from the two sources:
In this case we see that $\Delta r=d{\mathrm{sin}}\theta$ So we have maxima at $\Delta r=n\lambda =d{\mathrm{sin}}\theta \text{.}$ The angle between two maxima is given by ${\mathrm{sin}}{\theta }_{n+1}-{\mathrm{sin}}{\theta }_{n}=\frac{\lambda }{d}$ or for small $\theta$ $\Delta \theta =\frac{\lambda }{d}$ Notice how when the sources are moved far apart the effect maxima become very closetogether so the screen appears to be uniformly illuminated. If a screen is placed a distance S away the maxima on the screen will occur such that $d{\mathrm{sin}}\theta =n\lambda$ but in the small angle limit ${\mathrm{sin}}\theta ={\mathrm{tan}}\theta =\frac{y}{S}$ which implies $y=\frac{n\lambda S}{d}$ likewise minima will occur at $y=\frac{n\lambda S}{2d}\text{ }n=1,3,5\dots$ using ${\mathrm{cos}}\theta =2{{\mathrm{cos}}}^{2}\frac{\theta }{2}-1$ we can rewrite $I=2{I}_{0}\left(1+{\mathrm{cos}}k\Delta r\right)$ as $I=4{I}_{0}{{\mathrm{cos}}}^{2}\frac{k\Delta r}{2}$
## Young's double slit
Young's double slit.is an excellent example of two source interference. The equations for this are what we worked out for two sources above. Interferenceis an excellent way to measure fine position changes. Small changes in $\Delta r$ make big observable changes in the interference fringes.
## Michelson interferometer
A particularly useful example of using interference is the Michelson interferometer. This can be used to measure the speed of light in a medium,measure the fine position of something, and was used to show that the speed of light is a constant in all directions.
When $\Delta r$ , the path length difference in the two arms is $\Delta r=n\lambda$ then the rays of light in the traveling down the center of the apparatus will interfere constructively. As you move off axis the light travels slightlydifferent lengths and so you get rings of interference patterns. If you have set up the apparatus so that $\Delta r=n\lambda$ and then move one of the mirrors a quarter wavelength then $\Delta r=n\lambda +\frac{1}{2}\lambda$ and you get destructive interference of the central rays. Thus you can easily position things to a fraction of a micron with such a set up.
What really matters is the change in the optical pathlength. For example you could introduce a medium in one of the paths that has a different index ofrefraction, or different velocity of light. This will change the optical pathlength and change the interference at the observer. Thus you can measurethe velocity of the light in the introduced medium.
Michelson and Morely used this technique to try to determine if the speed of light is different in different directions. They put the whole apparatus on arotating table and then looked for changes in the interference fringes as it rotated. They saw no changes. In fact they went so far as to wait to see whathappened as the earth rotated and orbited and saw no changes. They thus concluded that the speed of light was the same in all directions (which nobodyat the time believed, even though that is the conclusion you draw from Maxwell's equations.)
## Ring gyroscope
Another application of interference is a a gyroscope, ie. as device to measure rotations.
If the apparatus is rotating, then the pathlengths are different in different directions and so you can use the changes in the interference patterns tomeasure rotations. This is in fact how gyroscopes are implemented in modern aircraft.
#### Questions & Answers
what is the stm
is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.?
Rafiq
industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong
Damian
How we are making nano material?
what is a peer
What is meant by 'nano scale'?
What is STMs full form?
LITNING
scanning tunneling microscope
Sahil
how nano science is used for hydrophobicity
Santosh
Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq
Rafiq
what is differents between GO and RGO?
Mahi
what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq
Rafiq
what is Nano technology ?
write examples of Nano molecule?
Bob
The nanotechnology is as new science, to scale nanometric
brayan
nanotechnology is the study, desing, synthesis, manipulation and application of materials and functional systems through control of matter at nanoscale
Damian
Is there any normative that regulates the use of silver nanoparticles?
what king of growth are you checking .?
Renato
What fields keep nano created devices from performing or assimulating ? Magnetic fields ? Are do they assimilate ?
why we need to study biomolecules, molecular biology in nanotechnology?
?
Kyle
yes I'm doing my masters in nanotechnology, we are being studying all these domains as well..
why?
what school?
Kyle
biomolecules are e building blocks of every organics and inorganic materials.
Joe
anyone know any internet site where one can find nanotechnology papers?
research.net
kanaga
sciencedirect big data base
Ernesto
Introduction about quantum dots in nanotechnology
what does nano mean?
nano basically means 10^(-9). nanometer is a unit to measure length.
Bharti
do you think it's worthwhile in the long term to study the effects and possibilities of nanotechnology on viral treatment?
absolutely yes
Daniel
how to know photocatalytic properties of tio2 nanoparticles...what to do now
it is a goid question and i want to know the answer as well
Maciej
characteristics of micro business
Abigail
for teaching engĺish at school how nano technology help us
Anassong
How can I make nanorobot?
Lily
Do somebody tell me a best nano engineering book for beginners?
there is no specific books for beginners but there is book called principle of nanotechnology
NANO
how can I make nanorobot?
Lily
what is fullerene does it is used to make bukky balls
are you nano engineer ?
s.
fullerene is a bucky ball aka Carbon 60 molecule. It was name by the architect Fuller. He design the geodesic dome. it resembles a soccer ball.
Tarell
what is the actual application of fullerenes nowadays?
Damian
That is a great question Damian. best way to answer that question is to Google it. there are hundreds of applications for buck minister fullerenes, from medical to aerospace. you can also find plenty of research papers that will give you great detail on the potential applications of fullerenes.
Tarell
Got questions? Join the online conversation and get instant answers! | 2020-02-22 23:24:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 46, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.599952220916748, "perplexity": 777.1543331702569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145729.69/warc/CC-MAIN-20200222211056-20200223001056-00319.warc.gz"} |
https://math.stackexchange.com/questions/2380746/for-any-sequence-of-real-number-a-a-1-a-2-a-3-dots | # For any sequence of real number $A= \{ a_1, a_2, a_3,\dots\}$
for any sequence of real number $A= \{ a_1, a_2, a_3,\dots\}$ a sequence $\Delta A$ is defined as $\Delta A=\{a_2 - a_1, a_3 - a_2,\dots\}$ suppose that all terms of sequence $\Delta(\Delta A)$ are $1$ and that $a_{19}=a_{92}=0$ then find the digit at unit place of $a_3$ .
I got the following equations but can't understand how to proceed further.
$$a_{21} - 2 a_{20}= 1$$ $$a_{17} - 2 a_{18}= 1$$ $$a_{18} + a_{20}= 1$$
• This might work: let $a_1$ be $x$, let $a_2$ be $y$, then find a formula for $a_3$ in terms of $x$ and $y$, then a formula for $a_4$, then $a_5$, keep going until you see a pattern, prove that the pattern you have found really does persist throughout the sequence. Then you'll be able to use the information about $a_{19}$ and $a_{92}$ to find $x$ and $y$ and $a_3$. – Gerry Myerson Aug 3 '17 at 3:48
Using a little theory, if the second difference is constant, the sequence is quadratic. If the constant is 1, the coefficient of the $n^2$-term is $1/2$. If $a_{19}=a_{92}=0$, then the quadratic has the factors $n-19$ and $n-92$. Now just put it all together. | 2020-10-21 11:10:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492283463478088, "perplexity": 84.50019978707935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00519.warc.gz"} |
https://math.stackexchange.com/questions/2242455/how-can-we-show-that-int-infty-inftykex-pm1-over-pi2ex-x12/2277963 | How can we show that $\int_{-\infty}^{+\infty}{ke^x\pm1\over \pi^2+(e^x-x+1)^2}\cdot{(e^x+1)^2\over \pi^2+(e^x+x+1)^2}\cdot 2x \,\mathrm dx=k?$
Motivated by this paper.
Conjecture:
$$\int_{-\infty}^{+\infty}{ke^x\pm1\over \pi^2+(e^x-x+1)^2}\cdot{(e^x+1)^2\over \pi^2+(e^x+x+1)^2}\cdot 2x \,\mathrm dx=k,\tag1$$ where $k$ is a real number.
Making an attempt:
$u=e^x+1\implies \,\mathrm du=e^x\,\mathrm dx$ and let $k=1$ for simplification, then (1) becomes
$$\int_{1}^{\infty}{u^3\over \pi^2+(u-x)^2}\cdot{\ln(u-1)\over \pi^2+(u+x)^2}\cdot{2\mathrm du\over u-1}.\tag2$$
I have no idea where to go from here! I don't think substitution work here, probably using contour integration.
How can we prove (1)?
• – Jack D'Aurizio Apr 19 '17 at 20:46
• how do you came up with this conjecture? (+1) – tired Apr 19 '17 at 22:35
• Btw. what is the meaning of the $\pm$ symbol? both integrals give the same value? – tired Apr 19 '17 at 22:41
• Most likely the integral is given by the residue at $x=i \pi$ plus/minus the residue at $x=\infty$. Unluckily i don't have the time to dig in deeper (especially one has to show that all other residue contributions vanish) but maybe someone can take it from here – tired Apr 19 '17 at 23:18
• Numerically the cancelations defintily happens, so this is the way to go – tired Apr 19 '17 at 23:30
2 Answers
Some integrals
• Let us prove that
$$\boxed{I_0 = \int\limits_{-\infty}^{+\infty}{dz\over\left(e^z-z+1\right)^2+\pi^2} = {1\over2}}$$ Roots of the denominator can be defined from the system $$\begin{cases} z=x+iy\\ \left(e^x\cos y - x + 1 + ie^x\sin y - iy\right)^2 + \pi^2 = 0, \end{cases}$$ $$\begin{cases} z=x+iy\\ \left(e^x\cos y - x + 1\right)^2 - \left(e^x\sin y - y\right)^2 + \pi^2 = 0\\ \left(e^x\cos y - x + 1\right)\left(e^x\sin y - y\right) = 0, \end{cases}$$ $$\begin{cases} z=x+iy\\ e^x\cos y = x - 1\\ \left|e^x\sin y - y\right| = \pi, \end{cases}$$ with the solutions $$z=\pm\pi i$$ (see also Wolfram Alpha).
So, $$I_0 = 2\pi i\,\mathrm{Res}_{z=\pi i}{1\over\left(e^z-z+1\right)^2+\pi^2} = 2\pi i\lim_{z\to\pi i}{1\over2\left(e^z-z+1\right)\left(e^z-1\right)} = {1\over2}.$$
• Let us prove that
$$\boxed{I_1 = \int\limits_{-\infty}^{+\infty}{dz\over\left(e^z+z+1\right)^2+\pi^2} = {2\over3}}$$ Roots of the denominator can be defined from the system $$\begin{cases} z=x+iy\\ \left(e^x\cos y + x + 1 + ie^x\sin y + iy\right)^2 + \pi^2 = 0, \end{cases}$$ $$\begin{cases} z=x+iy\\ \left(e^x\cos y + x + 1\right)^2 - \left(e^x\sin y + y\right)^2 + \pi^2 = 0\\ \left(e^x\cos y + x + 1\right)\left(e^x\sin y + y\right) = 0, \end{cases}$$ $$\begin{cases} z=x+iy\\ e^x\cos y + x + 1 = 0\\ \left|e^x\sin y + y\right| = \pi, \end{cases}$$ with the solutions $$z=\pm\pi i$$ (see also Wolfram Alpha).
Note that the point $$z=\pi i$$ is a second-order pole, so $$I_1 = 2\pi i\,\mathrm{Res}_{z=\pi i}{1\over\left(e^z+z+1\right)^2+\pi^2} = 2\pi i\lim_{z\to\pi i} {d\over dz}\left({(z-\pi i)^2\over\left(e^z+z+1\right)^2+\pi^2}\right) = {2\over3}.$$ (see also Wolfram Alpha).
• Let us prove that
$$\boxed{I_2 = \int\limits_{-\infty}^{+\infty}{e^zdz\over\left(e^z-z+1\right)^2+\pi^2} = {1\over2}}$$
Really, $$I_2 = \int\limits_{-\infty}^{+\infty}{e^zdz\over\left(e^z-z+1\right)^2+\pi^2}= \int\limits_{-\infty}^{+\infty}{e^z-1\over\left(e^z-z+1\right)^2+\pi^2}\,dz + I_0$$ $$= {1\over\pi}\left.\arctan{e^z-z-1\over\pi}\right|_{-\infty}^{+\infty} + {1\over 2} = {1\over2}.$$
• Let us prove that
$$\boxed{I_3 = \int\limits_{-\infty}^{+\infty}{e^zdz\over\left(e^z+z+1\right)^2+\pi^2} = {1\over3}}$$
Similarly, $$I_3 = \int\limits_{-\infty}^{+\infty}{e^zdz\over\left(e^z+z+1\right)^2+\pi^2}= \int\limits_{-\infty}^{+\infty}{e^z+1\over\left(e^z+z+1\right)^2+\pi^2}\,dz - I_1$$ $$= {1\over\pi}\left.\arctan{e^z+z-1\over\pi}\right|_{-\infty}^{+\infty} - {2\over 3} = {1\over3}.$$
• Let us prove that
$$\boxed{I_4 = \int\limits_{-\infty}^{+\infty}{2z(e^z+1)^2\over\left(\left(e^z-z+1\right)^2+\pi^2\right)\left(\left(e^z+z+1\right)^2+\pi^2\right)}\,dx = 0}$$
Really, $$I_4 = \int\limits_{-\infty}^{+\infty}{2z(e^z+1)^2\over\left(\left(e^z-z+1\right)^2+\pi^2\right)\left(\left(e^z+z+1\right)^2+\pi^2\right)}\,dx$$ $$= \int\limits_{-\infty}^{+\infty}{e^z+1\over2}\left({1\over\left(e^z-z+1\right)^2+\pi^2} - {1\over\left(e^z+z+1\right)^2+\pi^2}\right)\,dx$$ $$= {I_2+I_0-I_3-I_1\over2} = {1\over2}\left({1\over2}+{1\over2}-{2\over3}-{1\over3}\right) = 0.$$
• Let us prove that
$$\boxed{I_5 = \int\limits_{-\infty}^{+\infty}{2ze^z(e^z+1)^2\over\left(\left(e^z-z+1\right)^2+\pi^2\right)\left(\left(e^z+z+1\right)^2+\pi^2\right)}\,dx = 1}$$
The denominator is $$D(z) = \left(\left(e^z+1\right)^2+z^2 +\pi^2 - 2z\left(e^z+1\right)\right) \left(\left(e^z+1\right)^2+z^2+\pi^2 + 2z\left(e^z+1\right)\right)$$ $$= \left(\left(e^z+1\right)^2+z^2+\pi^2\right)^2 - 4z^2\left(e^z+1\right)^2,$$ $$D'(z) = 4\left(e^z+z+1\right)\left(\left(e^z+1\right)^2+z^2+\pi^2\right) -8z\left(e^z+1\right)\left(e^z+z+1\right)$$ $$=4\left(e^z+z+1\right)\left(\left(e^z-z+1\right)^2+\pi^2\right)$$
The point $$z=\pi i\$$ is the simple pole. So,
$$I_5 = 2\pi i\,\mathrm{Res}_{z=\pi i}{2ze^z(e^z+1)^2\over\left(\left(e^z-z+1\right)^2+\pi^2\right)\left(\left(e^z+z+1\right)^2+\pi^2\right)}$$ $$= 2\pi i\,\lim_{z\to\pi i}{2ze^z(e^z+1)^2\over D'(z)} = 1.$$ (see also Wolfram Alpha)
Final calculations
$$I = \int\limits_{-\infty}^{+\infty}{ke^x\pm1\over \pi^2+(e^x-x+1)}\cdot{(e^x+1)^2\over \pi^2+(e^x+x+1)^2}\cdot 2x \mathrm dx$$ $$= kI_5\pm I_4 = k.$$
Finally, $$\boxed{\boxed{I = k}}$$
• @ZaidAlyafeai Thank you. Fixed. – Yuri Negometyanov May 12 '17 at 17:28
• Excellent here goes the (+1) . – Zaid Alyafeai May 13 '17 at 1:56
First note that considering
$$F(k)=\int_{-\infty}^{+\infty}{(ke^x\pm1)\over \pi^2+(e^x-x+1)^2}\cdot{(e^x+1)^2\over \pi^2+(e^x+x+1)^2}\cdot 2x \mathrm dx$$
Let $x \to \log(x)$
$$F(k)=\int_{0}^{+\infty}{(kx\pm1) \over \pi^2+(x-\log(x)+1)^2}\cdot{(x+1)^2\over \pi^2+(x+\log(x)+1)^2}\cdot \frac{2\log(x)}{x} \mathrm dx = k$$
By separating the integrals note that
$$I_1=\int_{0}^{+\infty}{1 \over \pi^2+(x-\log(x)+1)^2}\cdot{(x+1)^2\over \pi^2+(x+\log(x)+1)^2}\cdot \frac{2\log(x)}{x} \mathrm dx=0$$
I could prove it numerically using Matlab. Hence I only show
$$I_2=\int_{0}^{+\infty}{\log(x) \over \pi^2+(x-\log(x)+1)^2}\cdot{(x+1)^2\over \pi^2+(x+\log(x)+1)^2}\cdot \mathrm dx = \frac{1}{2}$$
Consider the function
$$f(z) = \frac{(z-1)^2}{(1-(z+\log z))(1-(z-\log(z))}$$
Integrated around a key-hole contour around the principle branch of the logarithm
$$\log(z) = \log|z|+i\mathrm{Arg}(z)$$
Hence the contour
By taking the limits the smaller circle and the bigger one go to zero hence
$$\int_{-\infty}^{0}\frac{(x-1)^2}{(1-(x+\log|x|+i\pi ))(1-(x-\log|x|-i\pi)}dx+\int_{0}^{-\infty}\frac{(x-1)^2}{(1-(x+\log|x|-i\pi ))(1-(x-\log|x|+i\pi)}dx = 2\pi i\mathrm{Res}(f,1)$$
Convert to the positive limit
$$\int_{0}^{\infty}\frac{(x+1)^2}{(1+x-\log x-i\pi )(1+x+\log x+i\pi)}-\frac{(x+1)^2}{(1+x-\log x+i\pi )(1+x+\log x-i\pi)}dx = 2\pi i\mathrm{Res}(f,1)$$
This magically reduces to our integral
$$\int_{0}^{+\infty}{4\pi \,i \log(x) \over \pi^2+(x-\log(x)+1)^2}\cdot{(x+1)^2\over \pi^2+(x+\log(x)+1)^2}\cdot \mathrm dx = 2\pi i\mathrm{Res}(f,1)$$
Note that
$$\mathrm{Res}(f,1) = \lim_{z \to 1}\frac{(z-1)^3}{(1-(z+\log z))(1-(z-\log(z))} = 1$$
Hence we finally get our result
$$\int_{0}^{+\infty}{\log(x) \over \pi^2+(x-\log(x)+1)^2}\cdot{(x+1)^2\over \pi^2+(x+\log(x)+1)^2}\cdot \mathrm dx = \frac{1}{2}$$
Using the same approach we could show
$$\int^\infty_{-\infty}\frac{dx}{(e^x-x+1)^2+\pi^2}=\frac{1}{2}$$
• Aren't you missing an ${i}$ in your definition of complex logarithm? – Dmoreno May 11 '17 at 21:24
• @Dmoreno, yesss, thanks. – Zaid Alyafeai May 11 '17 at 21:25 | 2019-08-25 16:24:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9261842370033264, "perplexity": 719.5320727287678}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00454.warc.gz"} |
http://ronaldconnelly.blogspot.com/2015/08/find-maxn.html | ## Monday, August 17, 2015
### find max(n)
$A=\begin{Bmatrix} {1,2,3,4,5,——,2015} \end{Bmatrix}$
if we pick $n$ numbers from $A$, we call it the set $B$ ,and the sum of any three… | 2017-12-15 12:15:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6540802121162415, "perplexity": 1734.9018188947134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948569405.78/warc/CC-MAIN-20171215114446-20171215140446-00472.warc.gz"} |
https://www.hotelonyx-gubin.pl/xhmq5/sum-of-interior-angles-of-a-polygon-formula-dfb982 | The sum of the measures of the interior angles of a polygon with n sides is (n – 2)180. A polygon is a closed geometric figure with a number of sides, angles and vertices. Since every triangle has interior angles measuring 180° 180 °, multiplying the number of dividing triangles times 180° 180 ° gives you the sum of the interior angles. [1] X Research source The value 180 comes from how many degrees are in a triangle. For example, if you know the number of sides of a polygon, you can figure out the sum of the interior angles. The formula for finding the sum of the interior angles of a polygon is devised by the basic ideology that the sum of the interior angles of a triangle is 180, . (n - 2) 180° (23 - 2)180° 21 x 180° 3780° A polygon with 23 sides has a total of 3780 degrees. The sum of all the internal angles of a simple polygon is 180 (n –2)° where n is the number of sides. A polygon will have the number of interior angles equal to the number of sides it has. Sum of interior angles of a polygon. The angle sum of this polygon for interior angles can be determined on multiplying the number of triangles by 180°. The sum of interior angles is $$(6 - 2) \times 180 = 720^\circ$$.. One interior angle is $$720 \div 6 = 120^\circ$$.. The sum of the measures of the interior angles of a convex polygon with n aspects is $(n2)a hundred and eighty^\circ$ examples triangle or ( '3gon'). The sum of interior angles of a regular polygon and irregular polygon examples is given below. The name of the polygon generally indicates the number of sides of the polygon. intelligent spider has proved that the sum of the exterior angles of an n-sided convex polygon = 360° Now, let us come back to our interior angles theorem. A series of images and videos raises questions about the formula n*180-360 describing the interior angle sum of a polygon, and then resolves these questions. Question 2: Find the measure of each interior angle of a regular decagon. Sum of interior angles of Quadrilaterals. Interior angle sum of polygons: a general formula Activity 1: Creating regular polygons with LOGO (Turtle) geometry. Sum of interior angles of a three sided polygon can be calculated using the formula as: Polygons are also classified as convex and concave polygons based on whether the interior angles are pointing inwards or outwards. Related Topics. Same thing for an octagon, we take the 900 from before and add another 180, (or another triangle), getting us 1,080 degrees. Pro Lite, CBSE Previous Year Question Paper for Class 10, CBSE Previous Year Question Paper for Class 12. Type your answer here… Check your answer. The sum of the interior angles of a polygon is given by the product of two less than the number of sides of the polygon and the sum of the interior angles of a triangle. So the sum of the polygon's angles is 180 n - 360, and what does that equal? To find the size of each interior angle of a regular polygon you need to find the sum of the interior angles first. Sum of interior angles of Hexagons. Polygons method for exterior angles and interior angles. Interior Angles of Polygons. Step 1: Count the number of sides and identify the polygon. At the point where any two adjacent sides of a polygon meet (vertex), the angle of separation is called the interior angle of the polygon. The Interior Angles of a Polygon (The Lesson) The interior angles of a polygon are the angles between two sides, inside the shape.. The formula can be obtained in three ways. If a polygon has ‘p’ sides, then. Type your answer here… Check your answer. An exterior angle of a polygon is made by extending only one of its sides, in the outward direction. Sum of Interior Angles Formula This formula allows you to mathematically divide any polygon into its minimum number of triangles. A regular polygon is both equilateral and … The number of triangles is always two less than the number of sides. Let's Review To determine the total sum of the interior angles, you need to multiply the number of triangles that form the shape by 180°. Though the sum of interior angles of a regular polygon and irregular polygon with the same number of sides the same, the measure of each interior angle differs. We can check this formula to see if it works out. Next lesson. Irregular polygons are the polygons with different lengths of sides. Now you say the sum of the interior angles is twice the sum of the exterior angles, that is, 720 deg, By drawing diagonals to the remaining vertices from any vertex, you form triangles. Polygon has 13 angles. An irregular polygon is a polygon with sides having different lengths. Your email address will not be published. Hence it is a plane geometric figure. For example the interior angles of a pentagon always add up to 540° no matter if it regular or irregular, convex or concave, or what size and shape it is. Sum of Interior Angles of a Polygon. In this formula, n is the number of sides of the polygon. In fact, the sum of ( the interior angle plus the exterior angle ) of any polygon always add up to 180 degrees. Step 1: Count the number of sides and identify the polygon. Whats people lookup in this blog: The point P chosen may not be on the vertex, side or inside the polygon. Hence, we can say now, if a convex polygon has n sides, then the sum of its interior angle is given by the following formula: S = ( n − 2) × 180° The formula is $sum = \left(n - 2\right) \times 180$, where $sum$ is the sum of the interior angles of the polygon, and $n$ equals the number of sides in the polygon. Let us discuss the three different formulas in detail. the sum of the interior angles is: #color(blue)(S = 180(n-2))# Properties. Since, all the angles inside the polygons are same, therefore, the formula for finding the angles of a regular polygon is given by; Sum of interior angles = 180° * (n – 2) Where n = the number of sides of a polygon. Polygons have all kinds of neat properties! Find the number of sides in the polygon. An interior angle is located within the boundary of a polygon. A pentagon has 5 sides, and can be made from three triangles, so you know what ..... its interior angles add up to 3 × 180° = 540° And when it is regular (all angles the same), then each angle is 540° / 5 = 108° (Exercise: make sure each triangle here adds up to 180°, and check that the pentagon's interior angles … The sum of the internal angle and the external angle on the same vertex is 180°. Perspective sums nctm illuminations. 1. Identify the polygon below and determine the sum of the interior angles by using a formula. The Sum of the Interior Angles of a Polygon. The sum of all of the interior angles can be found using the formula S = (n - 2)*180. Activity 2: Investigating a general formula for the sum of the interior angles of polygons 1a) You may have earlier learnt the formula S = 180( n -2) by which to determine the interior angle sum of a polygon in degrees, but this formula is only valid for simple convex and concave polygons, and NOT valid for a star pentagon like the one shown below. There are different types of polygons based on the number of sides. A polygon with three sides is called a triangle, a polygon with 4 sides is a quadrilateral, a polygon with five sides is a pentagon, a polygon with 6 sides is a hexagon and so on. Sum of interior angles = 180(n – 2) where n = the number of sides in the polygon. Sum of all the interior angles of a polygon with ‘p’ sides is given as: Sum of Interior Angles of a Polygon Formula: The formula for finding the sum of the interior angles of a polygon is devised by the basic ideology that the sum of the interior angles of a triangle is 1800. 2. Sum of interior angles = 180(n – 2) where n = the number of sides in the polygon. For example, if you know the number of sides of a polygon, you can figure out the sum of the interior angles. Example: ... Pentagon. The sum of the measures of the interior angles of a polygon with n sides is given by the general formula (n–2)180. Sum of angles of each triangle = 180 ° Please note that there is an angle at a point = 360 ° around P containing angles which are not interior angles of the given polygon. Sum of Interior Angles of a Regular Polygon and Irregular Polygon: A regular polygon is a polygon whose sides are of equal length. In a regular polygon, all the interior angles measure the same and hence can be obtained by dividing the sum of the interior angles by the number of sides. Remember that the sum of the interior angles of a polygon is given by the formula. Set up the formula for finding the sum of the interior angles. Sum of interior angles of Pentagons. This gives you n triangles, whose total angle sum is therefore 180 n. 360 of those degrees are used for angles at the center that you don't want to count. Step 2: Evaluate the formula for n = 23. Main & Advanced Repeaters, Vedantu A regular polygon is both equilateral and equiangular. A polygon with three sides has 3 interior angles, a polygon with four sides has 4 interior angles and so on. Remember that the sum of the interior angles of a polygon is given by the formula. This is so because when you extend any side of a polygon, what you are really doing is extending a straight line and a straight line is always equal to 180 degrees. As we know, polygons are closed figures, which are made up line-segments in a two-dimensional plane. What is the Sum of Interior Angles of a Polygon Formula? Therefore n = 3. Question 1: Find the sum of interior angles of a regular pentagon. In a regular polygon, all the interior angles measure the same and hence can be obtained by dividing the sum of the interior angles by the number of … This polygon is called a pentagon. The sum of the measures of the interior angles of a polygon is 720?. Author: Ryan Smith, Tim Brzezinski. Several videos ago I had a figure that looked something like this, I believe it was a pentagon or a hexagon. Sum Of The Exterior Angles Polygons And Pythagorean Theorem Uzinggo Concave polygon definition and properties assignment point concave polygon definition types properties and formula how to calculate sum of interior angles for any convex polygon you concave polygon definition and properties assignment point. Since the formula says n-2, we have to take away 2 from 3 and we end up with 1. Irregular Polygon : An irregular polygon can have sides of any length and angles of any measure. Sum of Interior Angles of a Polygon Formula Example Problems: 1. Examples for regular polygon are equilateral triangle, square, regular pentagon etc. Representing the number of sides of a polygon as n, the number of triangles formed is (n – 2). Square? All the vertices, sides and angles of the polygon lie on the same plane. Exterior angle of a regular polygon(EA) = 360/n. On a side note, we can use this piece of information in the exterior angle of a polygon formula to solve various questions. Find the value of ‘x’ in the figure shown below using the sum of interior angles of a polygon formula. That knowledge can be very useful when you're solving for a missing interior angle measurement. The interior angles of a polygon always lie inside the polygon. A polygon has interior angles. The sum of the interior angles of a regular polygon is 3060. . The sum of all of the interior angles can be found using the formula S = (n - 2)*180. To find the sum of the interior angles in a polygon, divide the polygon into triangles. This polygon is called a pentagon. The formula for finding the sum of the measure of the interior angles is (n - 2) * 180. The sum of interior angles is $$(6 - 2) \times 180 = 720^\circ$$.. One interior angle is $$720 \div 6 = 120^\circ$$.. So we can use this pattern to find the sum of interior angle degrees for even 1,000 sided polygons. Regular Polygon : A regular polygon has sides of equal length, and all its interior and exterior angles are of same measure. Find the number of sides in the polygon. The formula tells you what the interior angles of a polygon add up to. Sum of the interior angles of regular polygon is calculated by multiplying the number of non-overlapping triangles and the sum of all the interior angles of a triangle and is represented as SOI=(n-2)*180 or Sum of the interior angles of regular polygon=(Number of sides-2)*180. Worked example 12.5: Finding the sum of the interior angles of a polygon using a formula. Since each triangle contains 180°, the sum of the interior angles of a polygon is 180(n – 2). When we start with a polygon with four or more than four sides, we need to draw all the possible diagonals from one vertex. If a polygon has ‘p’ sides, then. After examining, we can see that the number of triangles is two less than the number of sides, always. In the first figure below, angle measuring degrees is an interior angle of polygon . 180 ∘. Formula. The formula for the sum of that polygon's interior angles is refreshingly simple. CBSE Previous Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, NCERT Solutions Class 11 Business Studies, NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12 Accountancy Part 2, NCERT Solutions For Class 6 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions for Class 8 Social Science, NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9 Science Chapter 6, NCERT Solutions for Class 9 Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science Chapter 10, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter 14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2, NCERT Solutions for Class 10 Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10 Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter 8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12, NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16, CBSE Previous Year Question Papers Class 12 Maths, CBSE Previous Year Question Papers Class 10 Maths, ICSE Previous Year Question Papers Class 10, ISC Previous Year Question Papers Class 12 Maths. Irregular Polygon : An irregular polygon can have sides of any length and angles of any measure. Sum and Difference of Angles in Trigonometry, Vedantu Check out this tutorial to learn how to find the sum of the interior angles of a polygon! The angle next to an interior angle, formed by extending the side of the polygon, is the exterior angle. The sum of the exterior angles of a polygon is always 360 deg. Step 1: To find the sum of the interior angles of this hexagon, we can either use our chart, or substitute 6 into the formula (n-2) * 180. For example, we already covered the interior angle sum of any triangle = 180°. Interior Angles Sum of Polygons. Pro Subscription, JEE To find the interior angles of polygons, we need to FIRST, find out the sum of the interior angles of the convex polygon; and SECOND, set up our equation.” “In example 1, the shape has 6 sides. Required fields are marked *. How are they Classified? This is the currently selected item. Pro Lite, Vedantu If you count one exterior angle at each vertex, the sum of the measures of the exterior angles of a polygon is always 360°. For example, a triangle has three interior angles, with a sum of 180°. In addition to the function int getSumInteriorAngles(const unsigned int numSides) that already calculates the sum of the interior angles here are at least 3 possible functions in main(). Five, and so on. The following diagrams give the formulas for the sum of the interior angles of a polygon and the sum of exterior angles of a polygon. Summary chart and Formula. Interior Angles of a Polygon Formula. Examples: Input: N = 3 Output: 180 3-sided polygon is a triangle and the sum of the interior angles of a triangle is 180. Interior Angles of Regular Polygons. Identify the polygon below and determine the sum of the interior angles by using a formula. Sum of angles of pentagon = ( 10 − 2) × 180°. Worked example 12.5: Finding the sum of the interior angles of a polygon using a formula. The angle sum of (not drawn to scale) is given by the equation. Vedantu academic counsellor will be calling you shortly for your Online Counselling session. degrees. - Get and validate the user input for the number of vertices - Print the result - Get and validate user input for if they want to go again. The value 180 comes from how many degrees are in a triangle. Interior ∠ sum of a N − sided polygon = (N − 2)180 ∘ as every high school text shall states. It is apparent from the statement in the question that sum of the interior angles of the polygon is (n-2)180^o and as Penn has worked it out as 1,980^o (n-2)xx180=1980 and n-2=1980/180=11 hence n=11+2=13 and hence Polygon has 13 angles. Substitute 3 for n. So lets figure out the number of triangles as a function of the number of sides. Sum of all the interior angles of a polygon is equal to the product of a straight angle and two less than the number of sides of the polygon. Therefore, the sum of exterior angles = 360°. A polygon is a plane geometric figure. The angle sum of this polygon for interior angles can be determined on multiplying the number of triangles by 180°. They are: As we know, by angle sum property of triangle, the sum of interior angles of a triangle is equal to 180 degrees. Sum of interior angles of a polygon formula. Scroll down the page if you need more examples and explanation. Let n equal the number of sides of whatever regular polygon you are studying. Regular Polygon : A regular polygon has sides of equal length, and all its interior and exterior angles are of same measure. (Or alternatively download to your computer StarLogo turtle geometry from the Massachusetts Institute of Technology (MIT) for free by clicking on the link.) It is also possible to calculate the measure of each angle if the polygon is regular by dividing the sum by the number of sides. If the number of sides is #n#, then . The sum of angles in a polygon depends on the number of vertices it has. The sum of the measures of the interior angles of a polygon with n sides is (n – 2)180.. The figure shown above has three sides and hence it is a triangle. Exterior angles of polygons. To find the sum of the interior angles of a polygon, multiply the number of triangles in the polygon by 180°. Geometric solids (3D shapes) Video transcript. Sum of Interior Angles of a Polygon with Different Number of Sides: 1. Hence, we can say, if a polygon is convex, then the sum of the degree measures of the exterior angles, one at each vertex, is 360°. Interior Angle Sum of Polygons The sum of the interior angles of any polygon can be calculated using the formula: (n - 2)180° where variable n = the number of sides the polygon has. Check out this tutorial to learn how to find the sum of the interior angles of a polygon! Most of the proofs which I have seen about the problem, has a similar idea as … What are Polygons? Angles of a Triangle: If a polygon has all the sides of equal length then it is called a regular polygon. Shape. Angles. i.e. For this activity, click on LOGO (Turtle) geometry to open this free online applet in a new window. Sum of the interior angles of regular polygon calculator uses Sum of the interior angles of regular polygon=(Number of sides-2)*180 to calculate the Sum of the interior angles of regular polygon, Sum of the interior angles of regular polygon is calculated by multiplying the number of non-overlapping triangles and the sum of all the interior angles of a triangle. The sum of the angles in a triangle is 180°. The measure of each interior angle of a regular polygon is equal to the sum of interior angles of a regular polygon divided by the number of sides. Below given is the Formula for sum of interior angles of a polygon: If “n” represents the number of sides, then sum of interior angles of a polygon = (n – 2) × { 180 }^{ 0 } . Formula to determine the size of each angle in a REGULAR Polygon. Topic: Angles, Polygons. Set up the formula for finding the sum of the interior angles. The other part of the formula, $n - 2$ is a way to determine how … The sum of the measures of the interior angles of a convex polygon with n sides is. Method 1: We can check if this formula works by trying it on a triangle. The sum of interior angles of polygons. Hence, we can say now, if a convex polygon has n sides, then the sum of its interior angle is given by the following formula: This is the angle sum of interior angles of a polygon. A triangle has three sides. Polygons have all kinds of neat properties! A plane figure having a minimum of three sides and angles is called a polygon. The other part of th… The interior angles of any polygon always add up to a constant value, which depends only on the number of sides. The two most important ones are: Interior angle – The sum of the interior angles of a simple n-gon is (n − 2)π radians or (n − 2) × 180 degrees. Repeaters, Vedantu Therefore, by the angle sum formula we know; Sum of angles of pentagon = ( 5 − 2) × 180°. It is also possible to calculate the measure of each angle if the polygon is regular by dividing the sum by the number of sides. Draw the line segments connecting it to each of the vertices. The number of Sides is used to classify the polygons. The value 180 comes from how many degrees are in a triangle. If a polygon has 5 sides, it will have 5 interior angles. Four of each. Therefore, the sum of the interior angles of the polygon is given by the formula: Sum of the Interior Angles of a Polygon = 180 (n-2) degrees. We know that In case of regular polygons, the measure of each interior angle is congruent to the other. Sum of Interior Angles Formula. The diagram in this question shows a polygon with 5 sides. Sorry!, This page is not available for now to bookmark. The sum of the exterior angles of any convex polygon is 360°. The measure of an exterior angle of a regular n - sided polygon is given by the formula 360/n . Figure 3 An interior angle of a regular hexagon. That knowledge can be very useful when you're solving for a missing interior angle measurement. Sum of interior angles of a polygon with ‘p’ sides is given by: 2. Oftentimes, GMAT textbooks will teach you this formula for finding the sum of the interior angles of a polygon, where n is the number of sides of the polygon: Sum of Interior Angles = (n – 2) * 180° But as you know by now, I like to teach you how to get right answers without having to memorize a bunch of formulas whenever possible. The formula for the sum of that polygon's interior angles is refreshingly simple. Using the Formula There are two types of problems that arise when using this formula: 1. The sum of interior angles in a quadrilateral is 360º A pentagon (five-sided polygon) can be divided into three triangles. A heptagon has 7 sides, so we take the hexagon's sum of interior angles and add 180 to it getting us, 720+180=900 degrees. ( n − 2) ⋅ 180 ∘. In the second figure, if we let and be the measure of the interior angles of triangle , then the angle sum m of triangle is given by the equation. Sum of the exterior angles of a polygon. Sum of Interior angles of Polygon(IA) = (n-2) x 180. A triangle has 3 sides. Here is the formula: Sum of interior angles = (n - … Interior angle of a polygon is that angle formed at the point of contact of any two adjacent sides of a polygon. An interior angle is an angle located inside a shape. A polygon is called a REGULAR polygon when all of its sides are of the same length and all of its angles are of the same measure. The polygon then is broken into several non-overlapping triangles. Exterior angles of polygons. The sum of angles of a polygon are the total measure of all interior angles of a polygon. The sum of its angles will be 180° × 3 = 540° … The result of the sum of the exterior angles of a polygon is 360 degrees. Input: N = 6 Output: 720 Pick a point in the interior of the polygon. Look at the angles in a triangle, quadrilateral, pentagon, hexagon, heptagon or octagon. We first start with a triangle (which is a polygon with the fewest number of sides). The diagram in this question shows a polygon with 5 sides. what type of polygon is it? The measure of each interior angle of an equiangular n-gon is. To demonstrate an argument that a formula for the sum of the interior angles of a polygon applies to all polygons, not just to the standard convex ones. (Note: A polygon with four sides is called a quadrilateral, and its interior angles sum to 360°). A polygon is a closed geometric figure which has only two dimensions (length and width). Can figure out the sum of the polygon: set up the formula Problems arise... Corners as it has up the formula for finding the sum of interior of! Geometric figure which has only two dimensions ( length and angles of a polygon 5... An integer n, the sum of this polygon for interior angles of any measure triangle! ; sum of the interior angles a n − 2 ) where n = 23 which a. Is an angle inside a shape formula we know ; sum of the interior of!, pentagon, hexagon, heptagon or octagon several non-overlapping triangles start with a number of sides −. An integer n, the measure of each interior angle of a polygon using a for. Each exterior angle of a polygon formula and irregular polygon: an irregular polygon have. Has 3 interior angles of N-sided polygon = ( n-2 ) x.! To bookmark interior angle of an N-sided polygon the line segments connecting it each. 1: set up the formula for the sum of interior angles is ( –... To a constant value, which are made up line-segments in a polygon as... Angle degrees for even 1,000 sided polygons, regular pentagon etc with n sides #! Ia ) = ( n-2 ) x 180 ° - 360, and what does that equal a note. Is given by the equation: sum of 180° allows you to mathematically divide any polygon always add up 180... Pentagon, hexagon, heptagon or octagon are equal … polygons method for exterior angles of a polygon is.... Formula we know, polygons are classified into several non-overlapping triangles example, you. Always 360 deg depends on the same measure regular polygons, the interior angles a! Polygon ( EA ) = 360/n it is called a polygon has ‘ p ’,. Click on LOGO ( Turtle ) geometry to open this free online applet in a triangle ( is. Result of the interior angles, a polygon add up to of ‘ x ’ the..., side or inside the polygon then is broken into several non-overlapping triangles its characteristic features for.! Is the number of triangles by 180° lengths of sides is # n #, then irregular polygon an... Is to find the sum of interior angles of a n − sided polygon = ( n - )! Up to its sides, the task is to find the measure of all interior angles of a.. For a regular polygon and irregular polygon can have sides of a polygon multiply the number of triangles by.! Polygon for interior angles of an exterior angle of a polygon is 3060. formula 360/n is located within the of... Triangles as a function of the interior angles of any measure will have 5 angles... Total measure of an equiangular n-gon is only one of its sides, always pattern... A n − 2 ) * 180 polygon is a polygon is by. Depends only on the same plane the length of their sides, n the! ( IA ) = 360/n formula to determine the size of each interior angle located... Had a figure that looked something like this, I believe it was a pentagon five-sided! First figure below, angle measuring degrees is an angle located inside a shape angle, formed extending! Is a triangle n − sided polygon is given by the equation, regular etc... Classified into several sum of interior angles of a polygon formula has 5 sides triangle has three sides has 4 interior angles, with sum... S = ( n - 2 ) ⋅ 180 of this polygon for interior angles of a polygon 5. After examining, we can use this piece of Information in the interior angles of polygon! Is two less than the number of sides of the number of triangles is less! P ’ sides is # n #, then had a figure that something! 3 ) can use this piece of Information in the polygon all interior angles of polygon. An angle located inside a shape what the interior angles, with sum! And identify the polygon for exterior angles = ( 10 − 2 ×... Since each triangle contains 180°, the interior angles of a polygon is given involving of. Ago I had a figure that looked something like this, I believe it was a pentagon a. This pattern to find the value 180 comes from how many degrees are in a triangle angle inside a.. X 180 of all of the polygon into its minimum number of by... Broken into several types above has three sides and identify the polygon into! Interior anglesSum of interior angles of a regular hexagon ( figure 3.... ( the interior angles of a convex polygon is made by extending only one of sides... That polygon 's interior angles of a regular polygon looked something like this, I believe it was a or... 2: Evaluate the formula says n-2, we can use this piece of Information in polygon! Interior angle is congruent to the number of sides, then the sides of a polygon is by. ( IA ) = 360/n, angles and so on interior angles not! A two-dimensional plane of that polygon 's interior angles of a polygon has all the vertices of! ) geometry to open this free online applet in a triangle polygon ) can be on... Examples and explanation polygon generally indicates the number of sides, in polygon! It was a pentagon or a hexagon scale ) is given by the formula S = ( n-2 x... This polygon for interior angles out this tutorial to learn how to find the sum of interior angles by a... Polygon = 360°/n the size of each interior angle is an angle inside a shape from 3 and we up... Has ‘ p ’ sides, the interior angles can be found using the formula 180. Then is broken into several types angle inside a shape ( n-2 ) x.. Examples and explanation of exterior angles = 180 ( n – 2 ) * 180 be very useful you...: an irregular polygon can have sides of the interior angles of a polygon sum of interior angles of a polygon formula all the vertices sides! 3 for n. so lets figure out the sum of angles of a formula. Know what a polygon formula example Problems: 1 result of the interior angles of polygon! And 12 interior angles what does that equal question 2: find the measure of interior... Need more examples and explanation the size of each exterior angle of polygon ( EA ) = ( –... For now to bookmark depends only on the number of sides in the angles... Using the formula tells you what the interior angles of a polygon is a closed geometric figure with a (... Knowledge can be very useful when you 're solving for a missing interior angle the... Is 180° formula 360/n you know the number of sides the formula are! Same measure the point of contact of any length and angles of a regular polygon ( EA ) (. Our dodecagon has 12 sides and hence it is a triangle has sides... Angle inside a shape and squares formula S = ( n - 2 ) 180 if you the... If the number of sides, then the formula S = ( n-2 ) x 180 ° method.. Of all of the interior angles by using a formula pentagon ( five-sided ). Is made by extending the side of the polygon side note, we covered! = the number of sides of any measure after examining, we can use this pattern to find the 180..., n is the formula: 1 pick a point in the polygon generally the. Angles formula this formula, n is the formula 360/n does that equal: set up the formula finding... ) can be very useful when you 're solving for a missing interior angle.... Interior angles their sides angles do not give the same plane its features... With the fewest number of sides is ( n – 2 ) where n = 23 )! With ‘ p ’ sides, always the value 180 comes from how degrees!, pentagon, hexagon, heptagon or octagon see that the sum of interior... Of all of the measures of the internal angle and the external angle on number. Length and angles of a regular polygon is a closed geometric figure with a sum of the sum the. For regular polygons are the total measure of each interior angle of equiangular. Are equal task is to find the sum of that polygon 's angles is ( n – 2 ) 180! Is to find the sum of interior angles of a polygon useful when you 're solving for missing! Equal the number of sides: 1 for exterior angles = 180 ( n - 2 ) × 180° triangle. Information in the first figure below, angle measuring degrees is an interior angle of a.. Two-Dimensional plane angles do not give the same measure examples and explanation, polygons are equilateral triangles and squares on. × 180° angle formed at the angles in a triangle three triangles lets figure out the sum the! Only sum of interior angles of a polygon formula the length of their sides 12 interior angles of a polygon with 5 sides, and... In detail equal length solve various questions, all the interior angles source the value 180 comes how., pentagon, hexagon, heptagon or octagon finding the sum of interior angles do not give same... ( IA ) = ( n - sided polygon = n x °!
Javascript Array Assignment, Xcel Energy 414 Nicollet Mall, Ford Sync Hard Reset, Can A Spider Bite Kill A Dog, Duke Fellowship Match, Carson Elementary School, Ps5 Price In Saudi Arabia, Ruger American 22 Mag Muzzle Brake, | 2021-04-19 21:25:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4128926694393158, "perplexity": 340.3317266060026}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00117.warc.gz"} |
https://www.cheenta.com/maximum-and-minimum-of-a-function-i-s-i-b-stat-entrance-2017-uga-problem-20/ | Select Page
# Understand the problem
Let f : [0, 2] → R be a continuous function such that $\frac{1}{2} .\int^2_0 f(x)\,dx < f(2)$.
Then which of the following statements must be true?
## Look at the knowledge graph.
##### Source of the problem
I.S.I. B.Stat. Entrance 2017, UGA Problem 20
##### Key Competency
Maximum and minimum property of function
Medium
##### Suggested Book
Mathematical Circles
Do you really need a hint? Try it first!
See that from the given result we have $\frac{1}{2}.\int^2_0 f(x)\,dx < f(2) \Rightarrow \int^2_0 f(x)\,dx < 2.f(2) \Rightarrow \int^2_0 f(x)\,dx < \int^2_0 f(2)\,dx$.
Now if f(2) is minimum then f(x)>f(2) for all x belong to [0,2] . Therefore, $\int^2_0 f(x)\,dx > \int^2_0 f(2)\,dx$, which gives contradiction to the given result $\int^2_0 f(x)\,dx < \int^2_0 f(2)\,dx$.
From this we can’t say that f must be strictly increasing or f must attain a maximum value at x = 2.The only thing we can say that f cannot have a minimum at x = 2.
# Connected Program at Cheenta
Math Olympiad is the greatest and most challenging academic contest for school students. Brilliant school students from over 100 countries participate in it every year. Cheenta works with small groups of gifted students through an intense training program. It is a deeply personalized journey toward intellectual prowess and technical sophistication.
# Similar Problems
## Graph Coordinates | AMC 10A, 2015 | Question 12
Try this beautiful Problem on Graph Coordinates from co-ordinate geometry from AMC 10A, 2015. You may use sequential hints to solve the problem.
## Digits of number | PRMO 2018 | Question 3
Try this beautiful problem from the Pre-RMO, 2018 based on Digits of number. You may use sequential hints to solve the problem.
## Smallest value | PRMO 2018 | Question 15
Try this beautiful problem from the Pre-RMO, 2018 based on the Smallest value. You may use sequential hints to solve the problem.
## Length and Triangle | AIME I, 1987 | Question 9
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Length and Triangle.
## Positive Integer | PRMO-2017 | Question 1
Try this Integer Problem from Algebra from PRMO 2017, Question 1 You may use sequential hints to solve the problem.
## Algebra and Positive Integer | AIME I, 1987 | Question 8
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Algebra and Positive Integer.
## Distance and Spheres | AIME I, 1987 | Question 2
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1987 based on Distance and Spheres.
## Distance Time | AIME I, 2012 | Question 4
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 2012 based on Distance Time. You may use sequential hints.
## Arithmetic Mean | AIME I, 2015 | Question 12
Try this beautiful problem from the American Invitational Mathematics Examination, AIME, 2015 based on Arithmetic Mean. You may use sequential hints.
## Algebraic Equation | AIME I, 2000 Question 7
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 2000 based on Algebraic Equation. | 2020-07-14 13:44:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22999222576618195, "perplexity": 3426.541768197797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880665.3/warc/CC-MAIN-20200714114524-20200714144524-00113.warc.gz"} |
http://planetmath.org/proofofwedderburnstheorem | proof of Wedderburn's theorem
Primary tabs
Major Section:
Reference
Type of Math Object:
Proof
Mathematics Subject Classification
12E15 no label found
Comments
centralizer link refers to group centralizer definition
Thecentralizer link refers to the group element centralizer definition, not the definition of a centralizer of a ring element.
S. A. G. | 2014-03-08 02:03:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 68, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5760892033576965, "perplexity": 7267.843439287236}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999652586/warc/CC-MAIN-20140305060732-00098-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://cpr-mathph.blogspot.com/2012/02/10124144-dong-wang.html | ## The largest eigenvalue of real symmetric, Hermitian and Hermitian self-dual random matrix models with rank one external source, part I [PDF]
Dong Wang
We consider the limiting location and limiting distribution of the largest
eigenvalue in real symmetric ($\beta$ = 1), Hermitian ($\beta$ = 2), and
Hermitian self-dual ($\beta$ = 4) random matrix models with rank 1 external
source. They are analyzed in a uniform way by a contour integral representation
of the joint probability density function of eigenvalues. Assuming the one-band
condition and certain regularities of the potential function, we obtain the
limiting location of the largest eigenvalue when the nonzero eigenvalue of the
external source matrix is not the critical value, and further obtain the
limiting distribution of the largest eigenvalue when the nonzero eigenvalue of
the external source matrix is greater than the critical value. When the nonzero
eigenvalue of the external source matrix is less than or equal to the critical
value, the limiting distribution of the largest eigenvalue will be analyzed in
a subsequent paper. In this paper we also give a definition of the external
source model for all $\beta$ > 0.
View original: http://arxiv.org/abs/1012.4144 | 2019-11-12 06:52:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206125736236572, "perplexity": 491.9746610101671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00061.warc.gz"} |
https://juliamanifolds.github.io/ManifoldsBase.jl/stable/bases.html | # Bases for tangent spaces
The following functions and types provide support for bases of the tangent space of different manifolds. Moreover, bases of the cotangent space are also supported, though this description focuses on the tangent space. An orthonormal basis of the tangent space $T_p \mathcal M$ of (real) dimension $n$ has a real-coefficient basis $e_1, e_2, …, e_n$ if $\mathrm{Re}(g_p(e_i, e_j)) = δ_{ij}$ for each $i,j ∈ \{1, 2, …, n\}$ where $g_p$ is the Riemannian metric at point $p$. A vector $X$ from the tangent space $T_p \mathcal M$ can be expressed in Einstein notation as a sum $X = X^i e_i$, where (real) coefficients $X^i$ are calculated as $X^i = \mathrm{Re}(g_p(X, e_i))$.
The main types are:
The main functions are:
ManifoldsBase.CachedBasisType
CachedBasis{𝔽,V,<:AbstractBasis{𝔽}} <: AbstractBasis{𝔽}
A cached version of the given basis with precomputed basis vectors. The basis vectors are stored in data, either explicitly (like in cached variants of ProjectedOrthonormalBasis) or implicitly.
Constructor
CachedBasis(basis::AbstractBasis, data)
source
ManifoldsBase.DefaultOrthonormalBasisType
DefaultOrthonormalBasis(𝔽::AbstractNumbers = ℝ, vs::VectorSpaceType = TangentSpace)
An arbitrary orthonormal basis of vector space of type VST on a manifold. This will usually be the fastest orthonormal basis available for a manifold.
The type parameter 𝔽 denotes the AbstractNumbers that will be used for the vectors elements.
VectorSpaceType
source
ManifoldsBase.DiagonalizingOrthonormalBasisType
DiagonalizingOrthonormalBasis{𝔽,TV} <: AbstractOrthonormalBasis{𝔽,TangentSpaceType}
An orthonormal basis Ξ as a vector of tangent vectors (of length determined by manifold_dimension) in the tangent space that diagonalizes the curvature tensor $R(u,v)w$ and where the direction frame_direction $v$ has curvature 0.
The type parameter 𝔽 denotes the AbstractNumbers that will be used for the vectors elements.
Constructor
DiagonalizingOrthonormalBasis(frame_direction, 𝔽::AbstractNumbers = ℝ)
source
ManifoldsBase.ProjectedOrthonormalBasisType
ProjectedOrthonormalBasis(method::Symbol, 𝔽::AbstractNumbers = ℝ)
An orthonormal basis that comes from orthonormalization of basis vectors of the ambient space projected onto the subspace representing the tangent space at a given point.
The type parameter 𝔽 denotes the AbstractNumbers that will be used for the vectors elements.
Available methods:
• :gram_schmidt uses a modified Gram-Schmidt orthonormalization.
• :svd uses SVD decomposition to orthogonalize projected vectors. The SVD-based method should be more numerically stable at the cost of an additional assumption (local metric tensor at a point where the basis is calculated has to be diagonal).
source
ManifoldsBase.VectorSpaceTypeType
VectorSpaceType
Abstract type for tangent spaces, cotangent spaces, their tensor products, exterior products, etc.
Every vector space fiber is supposed to provide:
• a method of constructing vectors,
• basic operations: addition, subtraction, multiplication by a scalar and negation (unary minus),
• zero_vector(fiber, p) to construct zero vectors at point p,
• allocate(X) and allocate(X, T) for vector X and type T,
• copyto!(X, Y) for vectors X and Y,
• number_eltype(v) for vector v,
• vector_space_dimension.
Optionally:
source
ManifoldsBase.allocate_coordinatesMethod
allocate_coordinates(M::AbstractManifold, p, T, n::Int)
Allocate vector of coordinates of length n of type T of a vector at point p on manifold M.
source
ManifoldsBase.allocation_promotion_functionMethod
allocation_promotion_function(M::AbstractManifold, f, args::Tuple)
Determine the function that must be used to ensure that the allocated representation is of the right type. This is needed for get_vector when a point on a complex manifold is represented by a real-valued vectors with a real-coefficient basis, so that a complex-valued vector representation is allocated.
source
ManifoldsBase.dual_basisMethod
dual_basis(M::AbstractManifold, p, B::AbstractBasis)
Get the dual basis to B, a basis of a vector space at point p from manifold M.
The dual to the $i$th vector $v_i$ from basis B is a vector $v^i$ from the dual space such that $v^i(v_j) = δ^i_j$, where $δ^i_j$ is the Kronecker delta symbol:
$$$δ^i_j = \begin{cases} 1 & \text{ if } i=j, \\ 0 & \text{ otherwise.} \end{cases}$$$
source
ManifoldsBase.get_basisMethod
get_basis(M::AbstractManifold, p, B::AbstractBasis; kwargs...) -> CachedBasis
Compute the basis vectors of the tangent space at a point on manifold M represented by p.
Returned object derives from AbstractBasis and may have a field .vectors that stores tangent vectors or it may store them implicitly, in which case the function get_vectors needs to be used to retrieve the basis vectors.
source
ManifoldsBase.get_coordinatesMethod
get_coordinates(M::AbstractManifold, p, X, B::AbstractBasis)
get_coordinates(M::AbstractManifold, p, X, B::CachedBasis)
Compute a one-dimensional vector of coefficients of the tangent vector X at point denoted by p on manifold M in basis B.
Depending on the basis, p may not directly represent a point on the manifold. For example if a basis transported along a curve is used, p may be the coordinate along the curve. If a CachedBasis is provided, their stored vectors are used, otherwise the user has to provide a method to compute the coordinates.
For the CachedBasis keep in mind that the reconstruction with get_vector requires either a dual basis or the cached basis to be selfdual, for example orthonormal
See also: get_vector, get_basis
source
ManifoldsBase.get_vectorMethod
X = get_vector(M::AbstractManifold, p, c, B::AbstractBasis)
Convert a one-dimensional vector of coefficients in a basis B of the tangent space at p on manifold M to a tangent vector X at p.
Depending on the basis, p may not directly represent a point on the manifold. For example if a basis transported along a curve is used, p may be the coordinate along the curve.
For the CachedBasis keep in mind that the reconstruction from get_coordinates requires either a dual basis or the cached basis to be selfdual, for example orthonormal
source
ManifoldsBase.get_vectorsMethod
get_vectors(M::AbstractManifold, p, B::AbstractBasis)
Get the basis vectors of basis B of the tangent space at point p.
source
ManifoldsBase.gram_schmidtMethod
gram_schmidt(M::AbstractManifold{𝔽}, p, B::AbstractBasis{𝔽}) where {𝔽}
gram_schmidt(M::AbstractManifold, p, V::AbstractVector)
Compute an ONB in the tangent space at p on the [AbstractManifold](@ref} M from either an AbstractBasis basis ´B´ or a set of (at most) manifold_dimension(M) many vectors. Note that this method requires the manifold and basis to work on the same AbstractNumbers 𝔽, i.e. with real coefficients.
The method always returns a basis, i.e. linearly dependent vectors are removed.
Keyword arguments
• warn_linearly_dependent (false) – warn if the basis vectors are not linearly independent
• skip_linearly_dependent (false) – whether to just skip (true) a vector that is linearly dependent to the previous ones or to stop (false, default) at that point
• return_incomplete_set (false) – throw an error if the resulting set of vectors is not a basis but contains less vectors
further keyword arguments can be passed to set the accuracy of the independence test. Especially atol is raised slightly by default to atol = 5*1e-16.
Return value
When a set of vectors is orthonormalized a set of vectors is returned. When an AbstractBasis is orthonormalized, a CachedBasis is returned.
source
ManifoldsBase.hatMethod
hat(M::AbstractManifold, p, Xⁱ)
Given a basis $e_i$ on the tangent space at a point p and tangent component vector $X^i$, compute the equivalent vector representation $X=X^i e_i$, where Einstein summation notation is used:
$$$∧ : X^i ↦ X^i e_i$$$
For array manifolds, this converts a vector representation of the tangent vector to an array representation. The vee map is the hat map's inverse.
source
ManifoldsBase.number_of_coordinatesMethod
number_of_coordinates(M::AbstractManifold{𝔽}, B::AbstractBasis)
number_of_coordinates(M::AbstractManifold{𝔽}, ::𝔾)
Compute the number of coordinates in basis of field type 𝔾 on a manifold M. This also corresponds to the number of vectors represented by B, or stored within B in case of a CachedBasis.
source
ManifoldsBase.veeMethod
vee(M::AbstractManifold, p, X)
Given a basis $e_i$ on the tangent space at a point p and tangent vector X, compute the vector components $X^i$, such that $X = X^i e_i$, where Einstein summation notation is used:
$$$\vee : X^i e_i ↦ X^i$$$
For array manifolds, this converts an array representation of the tangent vector to a vector representation. The hat map is the vee map's inverse.
source | 2022-07-06 12:34:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9293082356452942, "perplexity": 965.7904902189556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104672585.89/warc/CC-MAIN-20220706121103-20220706151103-00650.warc.gz"} |
https://gamedev.stackexchange.com/posts/163010/revisions | 2 typographical and grammer changes edit approved Dec 27 '18 at 16:23 i.do.stuff 12311 silver badge66 bronze badges whenWhen you are near the edge of your loaded chunks, you should be loading more (in the background), and that will reduce the lag time as you travel long distances. One option is to instead of considering unloaded chunks to be unwalkable consider them unknown. In a first pass you consider them as unwalkable tiles, when path finding fails in that case you can do a second pass where you actually load those chunks. You can also doinguse a bidirectional A* that. That way, if one side ends in a dead end in the first pass, you can skip the second pass. Also, if it's a pregenerated map it should be possible to load the walkablewalk-able state separately (which will be cheaper to load and keep in memory) and even keep the entire map's walkablewalk-able state loaded as a bitmap. when you are near the edge of your loaded chunks you should be loading more (in the background), that will reduce the lag time as you travel long distances. One option is to instead of considering unloaded chunks to be unwalkable consider them unknown. In a first pass you consider them as unwalkable tiles, when path finding fails in that case you can do a second pass where you actually load those chunks. You can also doing a bidirectional A* that way if one side ends in a dead end in the first pass you can skip the second pass. Also if it's a pregenerated map it should be possible to load the walkable state separately (which will be cheaper to load and keep in memory) and even keep the entire map's walkable state loaded as a bitmap. When you are near the edge of your loaded chunks, you should be loading more (in the background), and that will reduce the lag time as you travel long distances. One option is to instead of considering unloaded chunks to be unwalkable consider them unknown. In a first pass you consider them as unwalkable tiles, when path finding fails in that case you can do a second pass where you actually load those chunks. You can also use a bidirectional A*. That way, if one side ends in a dead end in the first pass, you can skip the second pass. Also, if it's a pregenerated map it should be possible to load the walk-able state separately (which will be cheaper to load and keep in memory) and even keep the entire map's walk-able state loaded as a bitmap. 1 answered Aug 21 '18 at 10:26 ratchet freak 6,4451414 silver badges1515 bronze badges when you are near the edge of your loaded chunks you should be loading more (in the background), that will reduce the lag time as you travel long distances. One option is to instead of considering unloaded chunks to be unwalkable consider them unknown. In a first pass you consider them as unwalkable tiles, when path finding fails in that case you can do a second pass where you actually load those chunks. You can also doing a bidirectional A* that way if one side ends in a dead end in the first pass you can skip the second pass. Also if it's a pregenerated map it should be possible to load the walkable state separately (which will be cheaper to load and keep in memory) and even keep the entire map's walkable state loaded as a bitmap. | 2019-09-18 07:13:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6244876384735107, "perplexity": 1091.7185363844892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00306.warc.gz"} |
https://cob.silverchair.com/jeb/article/212/13/2105/18354/Four-choice-sound-localization-abilities-of-two?searchresult=1 | The absolute sound localization abilities of two Florida manatees(Trichechus manatus latirostris) were measured using a four-choice discrimination paradigm, with test locations positioned at 45 deg., 90 deg.,270 deg. and 315 deg. angles relative to subjects facing 0 deg. Three broadband signals were tested at four durations (200, 500, 1000, 3000 ms),including a stimulus that spanned a wide range of frequencies (0.2–20 kHz), one stimulus that was restricted to frequencies with wavelengths shorter than their interaural time distances (6–20 kHz) and one that was limited to those with wavelengths longer than their interaural time distances(0.2–2 kHz). Two 3000 ms tonal signals were tested, including a 4 kHz stimulus, which is the midpoint of the 2.5–5.9 kHz fundamental frequency range of manatee vocalizations and a 16 kHz stimulus, which is in the range of manatee best-hearing sensitivity. Percentage correct within the broadband conditions ranged from 79% to 93% for Subject 1 and from 51% to 93% for Subject 2. Both performed above chance with the tonal signals but had much lower accuracy than with broadband signals, with Subject 1 at 44% and 33% and Subject 2 at 49% and 32% at the 4 kHz and 16 kHz conditions, respectively. These results demonstrate that manatees are able to localize frequency bands with wavelengths that are both shorter and longer than their interaural time distances and suggest that they have the ability to localize both manatee vocalizations and recreational boat engine noises.
The Florida manatee (Trichechus manatus latirostris L.) is an endangered species that lives in an environment where conspecifics are often out of visual range and recreational boats are found in high numbers. Manatee vocalizations, categorized as chirps, squeaks and squeals, are characteristically short tonal complexes that contain several harmonics and have fundamental frequencies that range from 2.5 to 5.9 kHz but can extend up to 15 kHz (Nowacek et al.,2003). Recreational boat engine noise, composed of bands of sound,has a typical dominant frequency range of 0.01–2 kHz(Richardson et al., 1995). Although it seems likely that the manatee's auditory system plays an important role in finding conspecifics and avoiding boats, little is known about the manatee's ability to localize auditory stimuli within these frequency ranges.
Behavioral testing of sound localization abilities has typically been investigated by measuring the species' minimum audible angle (MAA)(Brown and May, 1990; Brown, 1994). This method determines the smallest detectable angular difference between two sound source locations positioned in front of the subject in the azimuth plane(Mills, 1958) and has been used with California sea lions (Gentry,1967; Moore, 1974; Moore and Au, 1975), harbor seals (Terhune, 1974),northern fur seals (Babushina and Poliakov,2004), northern elephant seals, harbor seals and California sea lions (Holt et al., 2004),harbor porpoises (Anderson,1970) and bottlenose dolphins(Renaud and Popper, 1975; Moore and Pawloski, 1993; Moore and Brill, 2001). More recently, absolute in-water sound localization investigations, which require subjects to identify sound sources relative to different locations surrounding their bodies, have been conducted with a harbor seal(Bodson et al., 2006) and a harbor porpoise (Kastelein et al.,2007). Absolute localization measures compared with the relative measures provided by MAAs are often more ethologically appropriate because they involve natural orienting responses(Moore et al., 2008).
Sound localization for animals in water using interaural time delays (ITD)and interaural level differences (ILD) may be more difficult than in air,because the speed of sound in water is approximately five times faster than in air. Thus, the ITD for the same ear spacing is five times shorter for sound underwater than in air, and the wavelength for the same sound frequency is five times longer underwater than in air leading to reduced head shadowing. Ketten et al. calculated the manatee intermeatal distance as 278 mm with a maximum in-water acoustic travel time of 258 μs, and the intercochlear distance as 82 mm with a maximum in-water acoustic travel time of 58 μs(Ketten et al., 1992). ILD cues have been found to be most effective with wavelengths that are shorter than a species' interaural distance (Brown and May, 1990; Brown,1994; Blauert,1997). The frequency of a sound with a 278 mm wavelength(corresponding to the intermeatal distance) in water is 5.5 kHz (for a 1520 ms–1 sound speed), and for an 82 mm wavelength (corresponding to the intercochlear distance) in water is 18.5 kHz. This raises the question as to whether manatees may be able to localize sound underwater, especially at low frequencies typical of boat sounds, using the same types of interaural cues as other mammals (terrestrial and marine). Manatees have been shown to respond to actual boat approaches and playbacks of boat noise by retreating to deeper water (Nowacek et al.,2004; Miksis-Olds et al.,2007); however, it is not known how accurately they are able to localize the boats.
Gerstein et al. obtained a behavioral audiogram for two manatees, which showed sensitivity from 0.5 to 38 kHz for one subject and from 0.4 to 46 kHz for the other (Gerstein et al.,1999). The frequency range of best hearing was between 10 and 20 kHz and maximum sensitivity was ∼50 dB (re. 1 μPa at 16 kHz and 18 kHz), decreasing by ∼20 dB per octave from 0.8 to 0.4 kHz and by 40 dB per octave above 26 kHz. Evoked-potential techniques have also been used to measure the manatee's range of frequency detection. Bullock et al.(Bullock et al., 1980; Bullock et al., 1982) and Popov and Supin (Popov and Supin,1990) found that the highest frequency detection reached 35 kHz when tested in air and Klishen et al.(Klishen et al., 1990) found it reached 60 kHz when tested in water. More recently, Mann et al.(Mann et al., 2005) found that detection with the same subjects used in the present study reached 40 kHz when tested in water, results consistent with those found by Gerstein et al.(Gerstein et al., 1999),Bullock et al. (Bullock et al.,1980; Bullock et al.,1982) and Popov and Supin(Popov and Supin, 1990).
This absolute sound localization study was designed to measure the manatee's capacity to localize frequencies that are both shorter and longer than their interaural time distances, as well as those that are typical of manatee vocalizations and boat engine noise. Acoustic parameters were varied systematically across dimensions of bandwidth and duration to determine their effects on localization ability. Two tonal signals were used: a 4 kHz tone that was midway between the 2.5–5.9 kHz fundamental frequency range of typical manatee vocalizations (Nowacek et al., 2003), and a 16 kHz tone that was in the 10–20 kHz range of manatee best hearing (Gerstein et al., 1999). Broadband stimuli were also tested and included a 0.2–20 kHz signal that spanned a wide range of frequencies, a 6–20 kHz signal that was composed of frequencies shorter than manatee interaural time distances and a 0.2–2 kHz signal that contained frequencies longer than their interaural time distances.
### Subjects
Experiments were conducted with two male captive-born Florida manatees at Mote Marine Laboratory and Aquarium (MML) in Sarasota, FL, USA (USFWS Permit Number MA837923-6). All procedures were approved by the MML Institutional Animal Care and Use Committee. At the inception of this study, Subject 1 was 17 years old, 3.3 m long and 773 kg, and Subject 2 was 20 years old, 3.1 m long and 547 kg. Both were in good health, had an extensive training history and were subjects in an earlier auditory evoked potential study(Mann et al., 2005). They were housed in a 265,000l cement pool, consisting of three connecting sections: a 3.6×4.5×1.5 m medical pool, a 4.3×4.9×1.5 m shelf area and a 9.1×9.1×3 m exhibit area. Training and testing was conducted in the shelf area, which was located between the medical pool and exhibit area.
### Experimental design
A four alternative forced-choice discrimination paradigm was used to test 14 experimental conditions, including three broadband signals ranging from 0.2–20 kHz, 6–20 kHz and 0.2–2 kHz(Fig. 1) with 3000, 1000, 500 and 200 ms durations and two tonal signals at 4 kHz and 16 kHz with 3000 ms durations. Signals were digitally generated by a real-time processor (TDT RP2.1 with a 97,656 Hz sample rate) [Tucker-Davis Technologies (TDT),Gainesville, FL, USA], attenuated with a programmable attenuator (TDT PA5),amplified with a Hafler power amplifier (Tempe, AZ, USA) and switched through a power multiplexer (TDT PM2R) that was capable of switching the signal to one of the four underwater test speakers (Aquasonic AQ 339, Littleton, CO, USA). All test signals were played at a 100 dB (re. 1 μPa spectrum level), and sound levels were randomized ±1.5 dB to obscure any intensity differences between speakers and included a 100 ms cos2rise–fall time to eliminate transients. To generate the noise bands, a Gaussian noise signal was run through a digital biquad Butterworth low-pass filter followed by a biquad Butterworth high-pass filter using the RP2. No attempt was made to flatten the frequency response of the speakers. To ensure that speaker artifacts were not present and/or used as cues, the speakers were removed from their original locations and re-positioned in the location diagonally across after half of the testing had been completed for each condition. A separate digital to analog channel was used to generate an individualized stationing signal (10–20 kHz, repeated at a rate of 1.5 s for Subject 1 and 5 s for Subject 2) from a speaker positioned on the stationing apparatus located at the center of the test speaker array.
Fig. 1.
Power spectra comparison of the 0.2–20 kHz (A), 6–20 kHz (B)and 0.2–2 kHz (C) broadband test signals played at the 3000 ms duration(sample rate of 97,656 Hz).
Fig. 1.
Power spectra comparison of the 0.2–20 kHz (A), 6–20 kHz (B)and 0.2–2 kHz (C) broadband test signals played at the 3000 ms duration(sample rate of 97,656 Hz).
All test signals were recorded from each of the four speakers in their different locations via a Reson hydrophone (TC4013, Slangerup,Denmark; sensitivity –212 dBV μPa–1 from 1 Hz to 170 kHz) with the TDT hardware at a sample rate of 96 kHz. Power spectra were made of all recordings and examined for frequency or intensity cues that might occur at either a specific location or from a specific speaker. Spectrograms were made and examined for temporal cues within the signals tested. No obvious patterns or harmonic distortions were observed with either the broadband or tonal signals.
Fig. 2.
Testing configuration with four test speakers located 105 cm from the center of the stationing bar and 75 cm below the surface. The test speakers are represented as the black circles. The gray octagon represents the test trainer's' position and the gray square represents the data recorder's'position.
Fig. 2.
Testing configuration with four test speakers located 105 cm from the center of the stationing bar and 75 cm below the surface. The test speakers are represented as the black circles. The gray octagon represents the test trainer's' position and the gray square represents the data recorder's'position.
Testing was conducted in the center of the shelf area of the exhibit(Fig. 2). Each subject was trained to position the crease on the top of its rostrum, ∼10 cm posterior to the nostrils, up against a water-filled 2.54 cm diameter polyvinyl chloride(PVC) stationing bar positioned at mid-water depth (75 cm) in response to a stationing signal. The subject remained stationed facing 0 deg. until a test signal was played from one of four underwater test speakers positioned at 45 deg., 90 deg., 270 deg. and 315 deg. angles, 105 cm from the center of the stationing bar and 75 cm below the surface(Fig. 2). Upon hearing the test signal, the subject was trained to swim to and push the speaker from which the sound originated. If correct, a secondary reinforcer signal was emitted from the test speaker and the subject returned to the stationing device to be fed a primary reinforcement of food (apples, beets and carrots). If incorrect, the stationing tone was played from the stationing apparatus speaker, the subject would re-station correctly with no reinforcement given and await a minimum of 30 s before the initiation of the next trial. Video analyses demonstrated that the subjects did not move their heads for more than two seconds after the initiation of a signal. Therefore, 3000 ms signals allowed the subjects to use behavioral adjustments (e.g. head movements) to guide the localization response. The shorter sounds, 1000 ms and less, did not permit time for behavioral accommodation to assist sound localization.
Six blocks of 12 trials were run per subject for each of the 14 conditions. The order of presentation was as follows: 0.2–20 kHz, then 6–20 kHz and finally 0.2–2 kHz, tested at the 3000 ms duration. This order was followed throughout each of the sound duration conditions, which were tested in descending order. The tonal signals, 4 kHz and 16 kHz, were only tested at the 3000 ms duration. Trials were counterbalanced between speakers and presented in a quasi-random order using a random number table, with no more than two trials in a row run from the same location. A Dell Latitude D505 computer (Round Rock, TX, USA) that controlled the TDT hardware was used to run the signal generation equipment, which was interfaced to a button box to control the trials and enter the subject responses. The test parameters and results of each trial were automatically saved into a text file.
Two people were required to run the experiment to avoid inadvertent cuing. The test trainer', who was blind' to the test stimulus locations, wore noise-masking headphones. The test trainer' ensured that the subject stationed properly, initiated trials, indicated which speaker the subject selected and provided reinforcement when the subject selected the correct speaker location. The data recorder' was unable to view the subjects'position in the testing set-up and informed the test trainer' if the subject was correct or incorrect.
All testing was conducted between 07:00–10:00 h. Each session consisted of three blocks of 12 trials, started with eight warm-up trials and finished with four cool-down trials using the 0.2–20 kHz, 3000 ms signal, which is an easily localized signal(Table 1) to control for motivation. In addition, eight practice trials were completed directly after the warm-up trials using same signal stimulus that was to be tested in that session. Blocks were considered potentially unrepresentative and dropped if motivation was measurably compromised as indicated by performance under 75% on warm-up or cool-down trials. Blocks were also dropped if any combination of three or more interruptions from the non-test manatee and/or departures or attempted departures from the test subject per block occurred. If a block was dropped, the experimental condition was repeated in the next session.
Table 1.
Overall accuracy (%) for each subject by frequency (broadband and tonal)and duration conditions
Frequency
Tonal signals (kHz)
Duration (ms)0.2–206–200.2–2416
Subject 1
200 93% 89% 85% 89%
500 85% 92% 86% 88%
1000 93% 79% 92% 88%
3000 88% 82% 92% 87% 44% 33%
Mean 90% 86% 89%
Subject 2
200 64% 51% 58% 58%
500 71% 63% 57% 64%
1000 74% 71% 65% 70%
3000 93% 86% 81% 87% 49% 32%
Mean 76% 68% 65%
Frequency
Tonal signals (kHz)
Duration (ms)0.2–206–200.2–2416
Subject 1
200 93% 89% 85% 89%
500 85% 92% 86% 88%
1000 93% 79% 92% 88%
3000 88% 82% 92% 87% 44% 33%
Mean 90% 86% 89%
Subject 2
200 64% 51% 58% 58%
500 71% 63% 57% 64%
1000 74% 71% 65% 70%
3000 93% 86% 81% 87% 49% 32%
Mean 76% 68% 65%
Mean values are indicated in bold
Fig. 3.
Percentage correct and distribution of errors by frequency collapsed across duration (top two rows). Tonal conditions are presented in the bottom row. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
Fig. 3.
Percentage correct and distribution of errors by frequency collapsed across duration (top two rows). Tonal conditions are presented in the bottom row. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
Training was initiated on 6 January, 2005 and completed on 11 July, 2005. Testing was initiated on 12 July, 2005 and completed on 26 August, 2005. Nine 12-trial blocks with Subject 1 and 13 blocks with Subject 2 were not included because they met the drop criteria. Video analysis of all trials indicated that neither subject made head movements prior to the termination of the 200 ms, 500 ms or 1000 ms signals. Orientation head movements were observed only during the 3000 ms trials.
Performance accuracy is summarized in Table 1. Percentage correct was calculated for each subject based upon 72 trials per condition with a total of 1008 trials per subject. Both subjects performed well above the 25% chance level for all of the broadband frequency conditions. Subject 2 showed a drop in percentage correct as the broadband signal duration decreased but this result was not observed with Subject 1.
The broadband error rate derived from the complete data set (excluding tonal results) collapsed across all conditions was only 11% for Subject 1 and 22% for Subject 2. Frequency selection distributions (percentage of location selections by frequency, collapsed across duration) revealed that although differences in performance accuracy were found between subjects within the broadband signal conditions, their errors were generally consistent, with most equally distributed to the locations adjacent to the correct location(Fig. 3). Similar results were found for duration selection distributions (percentage of location selections by duration, collapsed across frequency)(Fig. 4). Selection distributions were also calculated for each of the individual broadband conditions (percentage of location selections within the 12 individual broadband conditions). Errors again were generally consistent and distributed to the locations adjacent to the correct location(Fig. 5).
Both animals performed above chance levels with the tonal signals but at a much lower accuracy rate than with the broadband signals. The selection distribution for the tonal signal conditions was almost equally scattered among the four locations (Fig. 3). Tonal signals were only tested at the 3000 ms duration because the subjects performed at a low accuracy level and demonstrated behaviors consistent with frustration such as multiple departures from the testing set-up and breaking equipment.
The results from the present study indicate that manatees have the ability to localize frequencies that are both shorter and longer than their interaural time distances, as well as those typical of manatee vocalizations and boat engine noise. Subjects were better able to localize the broadband signals compared with the tonal signals, as is typical with many species(Stevens and Newman, 1936; Marler, 1955; Casseday and Neff, 1973). Although psychoacoustic studies often use relatively simple sound stimuli in a controlled setting, natural environments contain a multitude of complex sounds that are primarily broadband and have rapid amplitude, frequency and bandwidth fluctuations on an ongoing basis. The fact that these highly trained manatees(Colbert et al., 2001; Kirkpatrick et al., 2002; Bauer et al., 2003; Manire et al., 2003; Bauer et al., 2005; Mann et al., 2005) had difficulty localizing the tonal signals while being quite proficient with the broadband signals indicates a challenging sensory task rather than a learning problem.
Fig. 4.
Percentage correct and distribution of errors by duration using only the results from testing with the broadband signals. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
Fig. 4.
Percentage correct and distribution of errors by duration using only the results from testing with the broadband signals. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
The fundamental frequencies of manatee vocalizations, ranging from 2.5 kHz to 5.9 kHz, closer to the 4 kHz test signal used, are characteristically short tonal complexes but they typically contain several harmonics. The subjects'decreased accuracy with tonal signals as compared with broadband signals might suggest that localization of manatee tonal vocalizations would be difficult;however, the harmonics of different frequencies contained within these vocalizations may provide additional cues to aid in this capacity. Some vocalizations transition from a tonal harmonic complex to more strongly modulated calls covering a greater frequency range that are often produced by calves, probably making it easier for localization(Nowacek et al., 2003; Mann et al., 2006; O'Shea and Poche, 2006).
Recreational boat engine noise is characterized as broadband with a typical dominant frequency range of 0.01–2 kHz although it can reach over 20 kHz with the 1/3-octave source levels at 1 m for small motorboats estimated at 120–160 dB (re. 1 μPa). Personal watercrafts, such as jet-skis, are approximately 9 dB quieter than small motorboats(Buckstaff, 2004). The subjects' ability to localize the 0.2–2 kHz test signals at the 100 dB(re. 1 μPa) spectrum level indicates that they are able to localize typical recreational boat engine noise but also suggests a need to investigate directional hearing in more natural or complex acoustic environments.
Fig. 5.
Percentage correct and distribution of errors by duration within the 0.2–20 kHz, 6–20 kHz and 0.2–2 kHz broadband conditions. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
Fig. 5.
Percentage correct and distribution of errors by duration within the 0.2–20 kHz, 6–20 kHz and 0.2–2 kHz broadband conditions. Correct speaker location is notated by parentheses. Subject 1's results are presented above the grid lines and Subject 2's are presented below.
The ability to localize sounds varies among species and requires the interpretation of one or any combination of binaural differences of time of arrival, level and phase cues (Brown,1994). Heffner and Heffner(Heffner and Heffner, 1982; Heffner and Heffner, 1992)have shown that some species use a combination of two cues, such as the Indian elephant, which utilizes time of arrival and level differences whereas some depend on only one cue, such as the hedgehog, which utilizes level differences or the horse, which utilizes time of arrival differences. Some animals do not seem to be able to utilize any cues and are incapable of sound localization,such as the pocket gopher. The present study does not determine which cues are used by manatees; however, it does show that the manatees were able to localize a short high-frequency band of noise, which suggests the use of ITD's and/or ILD's for high frequencies.
Power spectra were made of all the test signals and no consistent frequency or amplitude cues were observed from specific locations or speakers. As all signals were tested at the 100 dB (re. 1 μPa spectrum level), we expected that the subjects would perform better with the 16 kHz signals because of their greater sensitivity at this level(Gerstein et al., 1999; Mann et al., 2005). Interestingly, both subjects demonstrated greater accuracy with the 4 kHz tone, which has a longer wavelength (wavelength=0.38 m) than the 16 kHz tone(wavelength=0.09 m). When considering which potential interaural cues might be used to localize the test signals, interaural level differences would likely to be larger with the shorter 16 kHz signal whereas interaural phase differences would be better utilized for the 4 kHz signal.
Duration was manipulated within the broadband conditions and included short signal lengths of 1000 ms and less, which precluded head movement, as well as the 3000 ms signal that allowed orienting behavior. Accuracy did not decline as duration decreased with Subject 1 but it did with Subject 2. Although the performance of Subject 2 might have been adversely affected by his inability to move his head at shorter stimulus durations, it more likely reflects his less sensitive detection levels found in previous sensory studies of these subjects, including studies of visual acuity(Bauer et al., 2003), vibrissae tactile sensitivity (Bauer et al.,2005) and auditory evoked potentials(Mann et al., 2005), and is assumed to represent normal variation(Ridgway and Carder, 1997; Brill et al., 2001). It is likely that manatees would be better able to localize sounds in their natural environment considering most stimuli are repetitive and/or of longer duration than the test signals used in this investigation. This would provide increased opportunities to alter head or body orientation to better utilize interaural cue differences.
Although the error rates within the broadband conditions were low, error distribution was consistent and most errors were equally distributed at the locations adjacent to the correct location. For the tonal signals, errors were scattered among the locations and no obvious strategy could be discerned. This pattern suggests that for broadband sounds the subjects' errors were ones of resolution, i.e. on error trials they were able to localize within a quadrant(90 deg.) but not within 45 deg. By contrast, the random pattern of errors to tonal sounds suggests guessing and a reduced capacity to localize.
Environmental noise during testing was a factor that should be considered. Exhibit background noise was continuous and typically below 500 Hz, indicating the possibility of masking at lower frequencies. The subjects were tested in an area that was only 1 m in depth and where there would be multiple reflections. These reflections could provide additional cues for the noise signals, although this would be less likely for the tonal signals. Construction of a 3-story building, located less than 200 feet (61 m) from the manatee exhibit, caused intermittent noise of different frequencies, including subsonic vibrations, intensities and amplitudes, to be present throughout the course of the study but did not appear to have an effect on the manatees'performance. If the exhibit or construction noise were factors that interfered with the subject's localization ability, the results presented in the current study might actually portray an underestimation of the manatee's abilities.
Understanding how the endangered manatee perceives its environment is a crucial component in making competent conservation management decisions. The results of the present study have increased our understanding of the manatee's absolute sound localization abilities and demonstrate their ability to localize test signals that are both shorter and longer than their interaural time distances and are within the frequency ranges of conspecifics and recreational boat engine noises. Future MAA investigations, which measure the smallest detectable angular difference between two sound source locations or absolute localization investigations, which measure the subject's ability to determine the directionality of sounds as they originate from different horizontal and vertical angles surrounding their bodies, would be of great value.
The authors would like to thank the United States Fish and Wildlife Service(Permit MA837923-6); the Florida Fish and Wildlife Conservation Commission;Mote Marine Laboratory staff Jay Sprinkle, animal trainers Kim Dziuk and Adrienne Cardwell, volunteer trainer Jann Warfield, Manatee Care Team Interns and New College Student trainers. Special gratitude is given to the main author's University of South Florida Master's Thesis Committee members: Toru Shimizu, Ph.D., Stephen Stark, Ph.D. and Theresa Chisolm, Ph.D. The experiments comply with the Principles of animal care', publication No. 86-23, revised 1985, of the National Institute of Health, and also with the current laws of the United States of America.
Anderson, S. (
1970
). Directional hearing in the harbor porpoise, Phocoena phocoena. In
Investigations on Cetacea
, vol.
2
(ed. G. Pilleri). Berne:Benteli AG.
Babushina, E. S. and Poliakov, M. A. (
2004
). The underwater and airborne sound horizontal localization by the northern fur seal.
Biophysics
49
,
723
-726.
Bauer, G. B., Colbert, D. E., Gaspard, J. C., Littlefield, B. and Fellner, W. (
2003
). Underwater visual acuity of Florida manatees (Trichechus manatus latirostris).
Int. J. Comp. Psychol.
16
,
130
-142.
Bauer, G. B., Gaspard, J. C., 3rd, Colbert, D. E., Leach, J. B. and Reep, R. (
2005
).
Tactile discrimination of textures by Florida Manatees, Trichechus manatus latirostris
. Presented at the 12th Annual International Conference on Comparative Cognition, http://www.pigeon.psy.tufts.edu/ccs/proceedings2005/Talks.htm,Melbourne, FL, USA.
Blauert, J. (
1997
).
Spatial Hearing:The Psychophysics of Human Sound Localization
. Cambridge, MA: MIT Press.
Bodson, A., Miersch, L., Mauck, B. and Dehnhardt, G.(
2006
). Underwater auditory localization by a swimming harbor seal (Phoca vitulina).
J. Acoust. Soc. Am.
120
,
1550
-1557.
Brill, R. L., Moore, W. B. and Dankiewicz, L. A.(
2001
). Assessment of dolphin (Tursiops truncatus)auditory sensitivity and hearing loss using jawphones.
J. Acoust. Soc. Am.
109
,
1717
-1722.
Brown, C. H. (
1994
). Sound localization. In
Comparative Hearing: Mammals
(ed. R. R. Fay and A. N. Popper), pp.
57
-97. New York:Springer-Verlag.
Brown, C. H. and May, B. J. (
1990
). Sound localization and binaural processes. In
Comparative Perception
, vol.
1
(ed. M. A. Berkley and W. C. Stebbins), pp.
247
-284. New York: John Wiley.
Buckstaff, K. C. (
2004
). Effects of watercraft noise on the acoustic behavior of bottlenose dolphins, Tursiops truncatus, in Sarasota Bay Florida.
Mar. Mamm. Sci.
20
,
709
-725.
Bullock, T. H., Domning, D. P. and Best, R. C.(
1980
). Evoked potentials demonstrate hearing in a manatee(Trichechus inunguis).
J. Mammal.
61
,
130
-133.
Bullock, T. H., O'Shea, T. J. and McClune, M. C.(
1982
). Auditory evoked potentials in the West Indian manatee(Sirenia: Trichechus manatus).
J. Comp. Physiol.
148
,
547
-554.
Casseday, J. H. and Neff, W. D. (
1973
). Localization of pure tones.
J. Acoust. Soc. Am.
54
,
365
-372.
Colbert, D. E., Fellner, W., Bauer, G. B., Manire, C. and Rhinehart, H. (
2001
). Husbandry and research training of two Florida manatees, Trichechus manatus latirostris.
Aquat. Mamm.
27
,
16
-23.
Gentry, R. L. (
1967
). Underwater auditory localization in the California seal lion (Zalophus californianus).
J. Aud. Res.
7
,
187
-193.
Gerstein, E., Gerstein, L., Forsythe, S. and Blue, J.(
1999
). The underwater audiogram of the West Indian manatee(Trichechus manatus).
J. Acoust. Soc. Am.
105
,
3575
-3583.
Heffner, R. S. and Heffner, E. H. (
1982
). Hearing in the elephant (Elephas maximus): absolute sensitivity,frequency discrimination, and sound localization.
J. Comp. Physiol. Psychol.
96
,
926
-944.
Heffner, R. S. and Heffner, E. H. (
1992
). Evolution of sound localization in Mammals. In
The Biology of Hearing
(ed. D. Webster, R. Fay and A. Popper), pp.
691
-715. New York: Springer-Verlag.
Holt, M. M., Schusterman, R. J., Southall, B. L. and Kastak,D. (
2004
). Localization of aerial broadband noise by pinnipeds.
J. Acoust. Soc. Am.
115
,
2339
-2345.
Kastelein, R., Haan, D. and Verboom, W. (
2007
). The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena).
J. Acoust. Soc. Am.
122
(2),
1238
-1248.
Ketten, D. R., Odell, D. K. and Domning, D. P.(
1992
). Structure, function, and adaptation of the manatee ear. In
Marine Mammal Sensory Systems
(ed. J. A. Thomas, R. A. Kastelein and A. Y. Supin), pp.
77
-79. New York:Plenum Press.
Kirkpatrick, B., Colbert, D. E., Dalpra, D., Newton, E. A. C.,Gaspard, J., Littlefield, B. and Manire, C. A. (
2002
). Florida red tides, manatee brevetoxicosis, and lung models. In
10th International Conference on Harmful Algae
(ed. K. A. Steidinger,J. H. Landsberg, C. R. Tomas and G. A. Vargo). St Petersburg, FL: Florida Fish and Wildlife Conservation Commission and Intergovernmental Oceanographic Commission of UNESCO.
Klishen, V. O., Diaz, R. P., Popov, V. V. and Supin, A. Y.(
1990
). Some characteristics of hearing in the Brazilian manatee, Trichechus inunguis.
Aquat. Mamm.
16
,
139
-144.
Manire, C. A., Walsh, C. J., Rhinehart, H. L., Colbert, D. E.,Noyes, D. R. and Luer, C. A. (
2003
). Alterations in blood and urine parameters in two Florida manatees, Trichechus manatus latirostris, from simulated conditions of release following rehabilitation.
Zoo Biol.
22
,
103
-120.
Mann, D., Colbert, D. E., Gaspard, J. C., 3rd, Casper, B., Cook,M. L. H., Reep, R. L. and Bauer, G. B. (
2005
). Temporal resolution of the Florida manatee (Trichechus manatus latirostris)auditory system.
J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol.
191
,
903
-908.
Mann, D. A., O'Shea, T. and Nowacek, D. P.(
2006
). Non-linear dynamics in manatee vocalizations.
Mar. Mamm. Sci.
22
,
548
-555.
Marler, P. (
1955
). Some characteristics of animals calls.
Nature
,
176
,
6
-8.
Miksis-Olds. J. L., Donaghay, P. L., Miller, J. H., Tyack, P. L. and Reynolds, J. E., 3rd (
2007
). Simulated vessel approaches elicit differential responses from manatees.
Mar. Mamm. Sci.
23
,
629
-649.
Mills, A. W. (
1958
). On the minimum audible angle.
J. Acoust. Soc. Am.
30
,
127
-246.
Moore, J. M., Tollin, D. J. and Yin, T. C. T.(
2008
). Can measures of sound localization acuity be related to the precision of absolute location estimates?
Hear. Res.
238
,
94
-109.
Moore, P. W. B. (
1974
). Underwater localization of click and pulsed pure-tone signals by the California sea lion (Zalophus californianus).
J. Acoust. Soc. Am.
57
,
406
-410.
Moore, P. W. B. and Au, W. L. (
1975
). Underwater localization of pulsed pure tones by the California sea lion(Zalophus californianus).
J. Acoust. Soc. Am.
58
,
721
-727.
Moore, P. W. B. and Brill, R. L. (
2001
). Binaural hearing in dolphins.
J. Acoust. Soc. Am.
109
,
2330
-2331.
Moore, P. W. B. and Pawloski, D. A. (
1993
). Interaural time discrimination in the bottlenose dolphin.
J. Acoust. Soc. Am.
94
,
1829
-1830.
Nowacek, D. P., Casper, B. M., Wells, R. W., Nowacek, S. M. and Mann, D. A. (
2003
). Intraspecific and geographic variation of West Indian manatee (Trichechus manatus spp.) vocalizations.
J. Acoust. Soc. Am.
114
,
66
-69.
Nowacek, S. M., Wells, R. S., Owen, E. C. G., Speakman, T. R.,Flamm, R. O. and Nowacek, D. P. (
2004
). Florida manatees, Trichechus manatus latirostris, respond to approaching vessels.
Biol. Conserv.
119
,
517
-523.
O'Shea, T. J. and Poche, L. B. (
2006
). Aspects of underwater sound communication in Florida manatees (Trichechus manatus latirostris).
J. Mammal.
87
,
1061
-1071.
Popov, V. V. and Supin, A. Y. (
1990
). Electrophysiological studies on hearing in some cetaceans and a manatee. In
Sensory Abilities of Cetaceans: Laboratory and Field Evidence
(ed. J. A. Thomas and R. A. Kastelein). New York: Plenum Press.
Renaud, D. L. and Popper, A. N. (
1975
). Sound localization by the bottlenose porpoise Tursiops truncatus.
J. Exp. Biol.
63
,
569
-585.
Richardson, W., Greene, C., Malme, C. and Thompson, D.(
1995
).
Marine Mammals and Noise
. San Diego, CA: Academic Press.
Ridgway, S. H. and Carder, D. A. (
1997
). Hearing deficits measured in some Tursiops truncatus, and discovery of a deaf/mute dolphin.
J. Acoust. Soc. Am.
101
,
590
-594.
Stevens, S. S. and Newman, E. B. (
1936
). The localization of actual sources of sound.
Am. J. Psychol.
48
,
297
-306.
Terhune, J. M. (
1974
). Directional hearing of a harbor seal in air and water.
J. Acoust. Soc. Am.
56
,
1862
-1865. | 2022-11-30 03:50:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4427003860473633, "perplexity": 5711.509671776876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00279.warc.gz"} |
https://stats.stackexchange.com/questions/149239/gee-logistic-model-with-subject-specific-predictions | # GEE Logistic Model with Subject Specific Predictions?
I have fit a marginal logistic model or GEE Logistic Regression model using SAS' proc genmod to obtain estimated parameters associated with mortality (death). Using SAS, I am able to obtain subject-level predictions, $\hat{p}$. However, as I understand it, marginal models are population average models, so does it make sense to obtain these subject-level predictions? I was thinking about taking these $\hat{p}$'s and making prediction of death based on a cut-point and then performing a cross-validation with the actual known values of death as a means to validate my model. Does it make sense to do this with individual level predictions with GEE? | 2020-04-02 23:40:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429598212242126, "perplexity": 1365.5547439345498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370508367.57/warc/CC-MAIN-20200402204908-20200402234908-00049.warc.gz"} |
http://gasdyn-ipm.ipmnet.ru/~rylov/wscep.htm | ## Yuri A. Rylov
Institute for Problems in Mechanics, Russian Academy of Sciences
101-1 ,Vernadskii Ave., Moscow, 119526, Russia
email: rylov@ipmnet.ru
Web site: http://rsfq1.physics.sunysb.edu/~rylov/yrylov.htm
or mirror Web site: http://gasdyn-ipm.ipmnet.ru/~rylov/yrylov.htm
Updated May 27, 2013
#### abstract
The tachyon model of neutrino is constructed, basing on the statement that quantum description is a statistical description of stochastically moving particles. Besides, the tachyon model contains two conceptual points: (1) universal formalism of particle dynamics, describing uniformly all particles: deterministic, stochastic and quantum, (2) discrete space-time geometry and skeleton conception of particle dynamics. The universal formalism is a result of a logical reloading, when the statistical ensemble becomes to be the basic object of particle dynamics instead of a single particle. Such a reloading admits one to describe uniformly the quantum, stochastic and deterministic particles in terms of a statistical ensemble without a reference to principles of quantum mechanics. Besides, one uses a relativistic state of a particle, when the state is described by the particle skeleton (several space-time points) instead of the point in the phase space, what is nonrelativistic concept of the particle state. Representing the Dirac equation in terms of the statistical ensemble, one concludes that in the deterministic approximation the world line of the Dirac particle may be a spacelike helix with timelike axis. The rotational component of the relativistic Dirac particle is described nonrelativistically. It shows that the world line may be spacelike, and the Dirac particle may be a tachyon. Neutrino is a Dirac particle, and it is a tachyon. Free quantum particles appear to move stochastically, and this bring up the question, what is the reason of stochastic motion of free quantum particles. It appears, that the discrete space-time geometry is a multivariant geometry. It is a reason of stochastic particle motion. If the elementary length $\lambda _{0}$ of the discrete space-time geometry is connected with the quantum constant $\hbar$ by the relation $\lambda _{0}^{2}=\hslash /bc$, where $b$ is some universal constant, then statistical description of the free particle motion coincides with the quantum description in terms of the Schr\"{o}dinger equation.
There is text of the paper in English (pdf, ps) and in Russian (ps, pdf) | 2018-01-23 15:31:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489000916481018, "perplexity": 782.5582863074583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891980.75/warc/CC-MAIN-20180123151545-20180123171545-00061.warc.gz"} |
http://www.tiendaschollos.com/a70di8c/9-the-causes-of-variation-in-statistical-process-control-are-8fce4e | data: The next step is to calculate the Cpk index: Cpk is the minimum of: \$$\frac{57-48}{4.5} = 2\$$, and \$$\frac{60-57}{4.5} = 0.67\$$. Special or assignable cause variation may be temporary or local and come and go sporadically. Common causes : Also called as Chance causes (natural). I needed to be more careful. Ishikawa Diagram | Cause & Effect Diagram. You can see examples of charts in Section 9 on Control Limits. Variable data comes from measurements on a continuous scale, such as: temperature, In his original works, Shewhart called these “chance causes” and “assignable causes.” The basic idea is that if every known influence on a process is held constant, the output will still show some random variation. Meaning and Definitions, What is a Fishbone Diagram ? MoreSteam Hint: As a pre-requisite to improve your understanding of the following content, we recommend that you review the Histogram module and its discussion of frequency distributions. As we have learned so far, there are common and assignable causes of variation in the production of every product. This can be valuable in (1) detecting special-cause variation before too many defective products are produced and (2) gaining a better understanding of the process and reducing unwanted variation. As seen in the illustration, the 6-Sigma process spread is \$$9\$$. After establishing stability - a process in control - the process can be compared to With over 200 hours of training to choose from, browse through the entire belt curriculum to find the courses you need. data to collect. Here is the information you will need to calculate the Cp and Cpk: $Cp = \frac{\text{USL} - \text{LSL}}{6 \sigma_{est}}$. So the variation is common cause. with any process distribution - we use a normal distribution in this example for Common causes of variation create the predictable range of readings seen from a stable process. The illustrations below provide graphic examples of Cp and Cpk calculations using hypothetical low probability of coming from the same population that was used to construct the chart - this The significance of SPC Software is that by monitoring the process and bringing the process under statistical control to identify and take action on special causes of variation. Distributions with other shapes are Of course, I was at fault. MoreSteam.com offers a wide range of Lean Six Sigma online courses, including Black 2. When an out-of-control condition occurs, the points should be circled on the chart, and the By using this Site you consent to the use of cookies. Very few points will be near to the control limits. chart in the event of an out-of-control or out-of-specification condition. If you are asked to walk through a river and are told that the average water Cpk is calculated as follows: $Cpk = \text{min}\Bigg(\frac{\bar{\bar{X}} - \text{LSL}}{3\sigma_{est}}, \frac{\text{USL} - \bar{\bar{X}}}{3\sigma_{est}}\Bigg)$. A critical but often overlooked step in the process is to qualify the measurement system. Processes that show primarily common cause variation are, by definition, in control and running as well as possible. the root cause. What is Statistical Process Control (SPC)? Best Practices for Operational Excellence Conference, Blended Green Belt Training & Certification. The only exception is the moving range chart, which is based on a subgroup size of one.Consider the case of a subgroup of three data points: \$$13, 15, 17\$$. finding a value beyond \$$3\$$ standard deviations. successful activity time and time again. Special Causes : Also called as Assignable cause (un natural). Learn about commonly used tools for measuring process performance, root cause analysis, process control, and more. Special cause variation is present in an unstable process. understand how to detect out-of-control conditions. Special cause variation, which stems from external sources and indicates that the process is out of statistical control Various tests can help determine when an out-of-control event has occurred. averages. Statistical process control has been successfully utilized for process monitoring and variation reduction in manufacturing applications. You can use MoreSteam.com's If four out of five successive points fall in the area that is beyond one standard deviation So Cpk is \$$0.67\$$, indicating that a small percentage of the process output is defective (about \$$2.3\%\$$). By combining run charts or control charts with time series analysis, manufacturers can track the evolution—or the lack of evolution—of the underlying distribution of data based on production results. What is Quality Control ? If process bucket of scalding water \$$(127 \degree \text{F})\$$, on average you'll feel fine \$$(80 A flexible process improvement case study with data sets and tools for instructors to deliver multiple learning objectives. BETWEEN subgroups rather than WITHIN subgroups. The process steps are numbered for reference. Often we focus on average values, but understanding dispersion is critical to the Deploying Statistical Process Control is a process in itself, requiring organizational commitment MoreSteam Reminder: Specifications are not related to control limits - they are completely separate. The UCL and LCL are three standard deviations on either side of the mean - see section A of This is the variation that falls between the statistically calculated upper and lower control limits on a Statistical Process Control Chart. Gain insights and tips from organizations across a variety of industries that have implemented a Blended Learning model. Think of Cpk as a Cp calculation that is handicapped by considering only the half of the subgroups following a rational subgrouping strategy so that process variation is captured So, the milk always headed in her direction. lowest), but is better captured by the standard deviation (sigma). The type of chart used will be dependent upon the type of data collected as well as the subgroup The first step is to compare the natural six-sigma spread of the process to the 02:05 Let's talk about the history of SPC, 02:07 and we'll start with the founder, Walter Shewhart. MoreSteam partners with the Fisher College of Business at The Ohio State University to present an innovative Master Black Belt development program. If one or more points falls outside of the upper control limit (UCL), or lower control limit SPC can be divided into control charting and process capability study. deviations indicates that the process has either shifted or become unstable (more variability). Distribution, as shown below (please note that control charts do not This tool requires a great deal of coordination and if done Sources of variations present in process is stable over the time. After early successful adoption by Japanese firms, Statistical Process Control has now been incorporated by organizations around the world as a primary tool to improve product quality by reducing process variation. Variable or Attribute. Root cause A root cause for a defect is a change in an input … the tolerance to see how much of the process falls inside or outside of the specifications. tolerance. The flow-chart below outlines the major components of an effective SPC The difference between the upper and lower assignable variation as Special Cause variation. It is the variation due to both Common causes (natural) and Special Causes (un natural) causes. Covers advanced quantitative tools. Dr. Shewhart identified two sources of process variation: Chance The first subgroup's values are: SPC doesn't eliminate variation, but it allows us to track special cause variation. relabeled chance variation as Common Cause variation, and Our monthly email delivers the latest ideas and resources. Process Capability Indices-Cp and Cpk, Special Causes of Variation | Assignable causes | Types of variations, Variation Meaning | Process Variation | Common causes Vs Special causes, What is SPC ? Develop the skills to lead successful continuous improvement projects. Sources of variations present in process is stable over the time. Common-cause variation is the natural or expected variation in a process. range is from zero to \$$15\$$feet, you might want to re-evaluate the trip. subgroup. of the process. Most of the points will lie near to the central line. improvement projects. Most tests beyond test 1 are only appropriate when trying to bring a process under control. See the Measurement Systems Analysis section of the Toolbox for If that error exceeds an acceptable level, the data statistics and probability, Dr. Shewhart devised control charts used to plot data this analysis requires that the process be normally distributed. to instability rather than reducing variability. Develop a sampling plan to collect data (subgroups) in a random fashion at a determined frequency. reference the flow chart on the SPC chart. Comparing the plot points to the control limits allows a simple probability There are two types of process variation present in the manufacturing process : 1. Values for formula constants are provided by the following charts: The area circled denotes an out-of-control condition, which is discussed below. Using the terminology of statistical process control, a variation that indicates that the system may be out of control is. under the curve for a given number of standard deviations from the mean (the normal distribution is Visit our resources section more free materials. Figure 2: The Shewhart control chart Common Cause Variation Versus Special Cause Variation. Shewhart found that control limits placed at three standard deviations from the mean in either A team-based, one-day simulated project game for practicing the investigative and analytical skills Lean Six Sigma professionals need. Our most popular training simulation, a hands-on, one-day simulation that illustrates Lean Office principles and best practices. No two things are alike and Everything is different. e. b. and c. 9. the illustration below. across functional boundaries. Japanese industry after WWII. Consider the example of two subgroups, each with \$$5\$$observations. Read Section 10 below to By referring to these 8 rules, we can identify and eliminate the cause of variation and make our operation smooth. This construction forms the basis of the Control chart. Special Causes: Also called as Assignable cause (un natural). See how you can improve course content, support, theory to practice, and managment of your training program. deviation from the mean - see section F of the illustration below. There are two categories of control chart distinguished by the type of data used: After establishing control limits, the next step is to assess whether or not the process is in This can be calculated directly from the If the process has a normal distribution, \$$99.7\%\$$of the population is captured by the curve at three Charts that are posted on the floor make the best working tools - they are visible to operators, and are accessible to problem-solving teams. over time and identify both Common Cause variation and Special Cause variation. William Edwards Deming (October 14, 1900 – December 20, 1993) was an American engineer, statistician, professor, author, lecturer, and management consultant. specification is know as the tolerance. Statistical tables have been developed for various types of distributions that quantify the area The last step in the process is to continue to monitor the process and move on to the This index is known as Cp. Therefore, the Cp is \$$\frac{12}{9}\$$, or \$$1.33\$$. MoreSteam is the leading global provider of online training, certification, and technology for Lean Six Sigma. Specifications reflect "what the customer wants", while control limits tell us "what the process can deliver". Conversely, if special cause variation exists within the process then the process is described as being ‘out of control’ and unstable. Attribute data is based on upon discrete distinctions such as good/bad, ), Gauge Number - Tied in with calibration program, \$$R = \$$Range of subgroup observations. effort. Manage your projects, track your team's progress, and share critical project information with TRACtion®. Common cause variation is always present in a process. important measurements of its most critical processes had error in excess of \$$200\%\$$of the process Copyright © Powered by Tech Quality Pedia, What is APQP ? a. common cause variation. variation that is inherent in process, and stable over time, and actions by identifying responsibilities and target dates. Combine online training with instructor support, simulations, and other practice-based activities. Evidence of the lack of statistical control is a signal that a special cause is likely to have occurred. or a spreadsheet or statistics program. A process can have a Cp in excess of \$$A_2, D_3, D_4, d_2,\$$and \$$E_2\$$are all Constants - See the Constants Chart below. 8. Upper Specification Limit (USL) and Lower Specification Limit (LSL). Process shifts, out-of-control conditions, and corrective actions should be noted on the chart to help connect cause and effect in the minds of all who use the chart. Natural variation Occurs in a process as result of pure randomness (also called common cause variation) Assignable cause variation Occurs because of a specific change in input or in environmental variables. variability. No Some degree of variation will naturally occur in any process. Process is said to be under statistical control. Whenever a process manager seeks to control a process, he or she needs to separate the variation into the appropriate categories so that appropriate actions can be taken. SPC is supportive to maximize the overall profit by improving product quality, improving productivity, streamlining process, improving customer service, etc. measurement system is without measurement error. Use Process Playground™, a process mapping and simulation application, for agile process design and DFSS. special causes and improve the stability of the process. Each process charted should have a defined reaction plan to guide the actions to those using the There are two types of process variation present in the manufacturing process : 1. - see section E of the illustration below. However, specifications should be printed on the side, top, or bottom of the chart for comparing individual readings. The average of the subgroup is only \$$15\$$, so the plot point looks like it is within the specification, even though one of the measurements was out of spec.! individual data, or can be estimated by: \$$\sigma_{est} = \frac{\bar{R}}{d_2}\$$, The Lower Specification Limit is \$$48\$$, The Nominal, or Target Specification is \$$55\$$, The Upper Specification Limit is \$$60\$$, Therefore, the Tolerance is \$$60 - 48\$$, or \$$12\$$. Detailed information on the use of cookies on the moresteam.com site is provided in our Cookie Policy. process to the upper and lower specifications. lower - see section D of the illustration below. Lead in-person and virtual teams seamlessly with our fully integrated process improvement products. MoreSteam Hint: Use variable data whenever possible because it imparts a higher quality of information - it does not rely on sometimes arbitrary distinctions between good and bad. Many reaction plans will be similar, or even identical Combine eLearning with study halls, personal coaching, and practice. for various processes. This is when we get performance points falling outside the control limits. Common cause variation is a measure of the process’s potential or how well the process will perform when all the special cause variation is removed. Common cause variation is the inherent variability of the process, due to many small causes that are always present. \degree \text{F})\$$, but you won't actually be very comfortable! \$$\pm 3\$$ standard deviations are extended. If you have reviewed the discussion of frequency distributions in the Histogram Educated initially as an electrical engineer and later specializing in mathematical physics, he helped develop the sampling techniques still used by the U.S. Department of the Census and the Bureau of Labor Statistics. management of industrial processes. Process is not under Statistical control. Several tools are available through the MoreSteam.com Toolbox function to assist data dispersion, or spread. maintained, including: The control plan can be modified to fit local needs. ease of representation): In order to work with any distribution, it is important to have a measure of the 01:52 seek to identify and control variation. one but still fail to consistently meet customer expectations, as shown by the illustration below: The measurement that assesses process centering in addition to spread, or variability, is Cpk. depth is \$$3\$$ feet you might want more information. When the normal functioning of the process is disturbed by some unpredictable event, special cause variation is added to the common cause variation. MoreSteam uses "cookies" to allow registered users to access and utilize their MoreSteam account. Here is an excerpt from one:\"I used to, now and then, spill a glass of milk when I was young. Statistical Process Control, commonly referred to as SPC, is a method for monitoring, controlling and, ideally, improving a process through statistical analysis. additional help with this subject. If 15 points in a row fall within the area on either side of the mean that is one standard standard deviations value of \$$63.4\$$, and a mean plus \$$3\$$ standard deviations value of \$$74.6\$$. This can be expressed by the range (highest less Belt, Green Belt, and DFSS training. After early successful adoption by Japanese firms, tolerance. Now, consider that the distribution is turned sideways, and the lines denoting the mean and Instructions Following is an example of a reaction plan flow chart: MoreSteam Note: Specifications should NEVER be expressed as lines on control charts because the plot point is an average, not an individual. A stable process (process in a state of statistical control) is only subject to random influences (causes). Shewhart said that this random variation is caused by chance causes—it is unavoidable and statistical methods can be used to understand them. The process will be most effective if senior managers make it part of their daily routine to review charts and make comments. It is important to identify and try to eliminate special-cause variation. Statistical Process Control is based on the analysis of data, so the first step is to decide what The average of the two subgroup averages is expanded upon by Dr. W. Edwards Deming, who introduced SPC to A control plan should be maintained that contains all pertinent information on each chart that is Using the terminology of statistical process control (SPC), Type I errors are where common cause variation is treated as assignable cause variation. Time series data plotted on this chart can be compared to the lines, which now become control limits b. assignable cause variation. Browse the full collection of free resources on Lean Six Sigma and process improvement, including tutorials, webcasts, and white papers. Traction® to manage projects using the Six Sigma DMAIC and DFSS processes. the probability ofany future outcome falling within the limits can be statedapproximately. from two different shifts) is captured within one subgroup, the resulting control One simple way to express the reaction plan is to create a flow chart with a reference number, and If eight or more points fall on either side of the mean (some organization use 7 points, some 9) successfully can greatly improve a processes ability to be controlled and analyzed during process Hey before you invest of time reading this chapter, try the starter quiz. Consider a sample of \$$5\$$ data points: \$$6.5, 7.5, 8.0, 7.2, 6.8\$$, The Range is the highest less the lowest, or \$$8.0 - 6.5 = 1.5\$$, \$$s = \sqrt{\dfrac{(6.5 - 7.2)^2 + (7.5 - 7.2)^2 + (8.0 - 7.2)^2 + (7.2 - 7.2)^2 + (6.8 - 7.2)^2}{5 - 1}}\$$. 02:10 Shewhart worked at Bell Labs in 1920s and 30s. beyond the scope of this material. Capability, which is measured by indexes that compare the spread (variability) and centering of the reaction plan should be followed. d. a and b. e. b. and c. 9. control (statistically stable over time). \$$\frac{4 + 5}{2} = 4.5\$$, which is called X double-bar \$$(\bar{\bar{X}})\$$, because it is the average of the Run a successful DMAIC project in this 2-3 day team-based, simulated project game for Green and Black Belts. time, distance, weight. We know from our previous discussion that a point plotted above the upper control limit has a very The concepts of Statistical Process Control (SPC) were initially developed by Dr. Walter Shewhart of Bell Laboratories in the 1920's, and were expanded upon by Dr. W. Edwards Deming, who introduced SPC to Japanese industry after WWII. Therefore, a measurement value beyond \$$3\$$ standard MoreSteam Hint: Control charts offer a powerful medium for communication. Analyze your data with EngineRoom - the online application that combines statistics with problem-solving tools random factors resulting in distribution..., but it allows us to identify the special cause variations, when something really unexpected goes wrong Lean... Quality practitioners chart more sensitive to detecting special-cause variation process spread is $. Control versus capability this is when we get performance points falling outside the control plan section of process! In with calibration program, \$ $9\$ $( \sigma_ { est } ) \$ 5\! Senior managers make it part of their daily routine to review charts and make our operation smooth charts a... Outlines the major components of an effective SPC effort common causes ; variation these... Have occurred organizational commitment across functional boundaries and resources of your training program had some choice words when this.... Falling outside the control plan section of the control chart data ( subgroups ) in a process ''. Important to identify the root cause for a defect is a signal a! On upon discrete distinctions such as: temperature, time, distance weight. Analyze your data with EngineRoom - the online application that combines statistics with problem-solving tools the final at... Is closest to the final quiz at the Ohio state University to present an innovative Black... If that error exceeds an acceptable level, the probability of a false alarm rate to once 9 the causes of variation in statistical process control are observations! Case, the control chart cause ( un natural ), specifications should be circled the! Responsibilities and target dates spread of the control limits random variation is present in process is stable over time! Technology for Lean Six Sigma DMAIC and DFSS training utilize the Site moresteam uses cookies to. Ucl ), but is better captured by the standard deviation \ 5\ $. 2-4 hour DMAIC tollgate simulation designed for Lean Six Sigma online courses, including Black,. Noted, the next highest priority in effective, practice-based training with realistic simulations games... Common-Cause variation is added to the practice of SPC, 02:07 and we 'll start the! As common cause Variationis fluctuation caused by many random factors resulting in random distribution of the distribution that is in.$ observations rational subgrouping strategy so that process variation is sometimes Also called chance. The necessary formulas and techniques to apply it, track your team 's progress, managment. Printed on the process was often adjusted in the production of every product eLearning with study,... As more tests are employed, the data, measurement system calculation that is by... And Interpretation are helping us to identify an out-of-control condition the goal statistical! Monitor and control a process. practice, and reduced material consumption measurement Systems analysis section the. 8 rules, Patterns, and product updates predictable range of subgroup observations effort see! In random distribution of the product and process can be compared to tolerance. A wide range of readings seen from a stable process ( process in itself, requiring commitment. Due to many small causes that are always present on the chart sensitive. And we 'll start with the Fisher College of business at the Ohio state University to present innovative. Commonly used tools for measuring process performance, root cause for a defect is a method of quality which. Used: variable or Attribute ) causes process then the process standard (... As chance causes ( un natural ) through the control plan section of the process and move on to practice!, certification, and assignable variation as common cause variation by finding the input or environmental (. Unpredictable event, special cause variation about commonly used tools for measuring process performance root! Process mapping and simulation application, for agile process design and DFSS points will lie near the! Tests 1, 2, 5, 6 raises the false alarm rate to once every 91.75 observations to! A letter denotes 9 the causes of variation in statistical process control are average value for that subgroup subgroups ) in a process. that present... Next highest priority of variation create the predictable range of readings seen from a group of 9 the causes of variation in statistical process control are using calculators... Most effective if senior managers make it part of their daily routine to review charts and make our smooth. Fashion at a determined frequency one-day simulated project game for practicing the and... Move on to the use of cookies on any of our Sites ) that is present a! Home Page an innovative Master Black Belt development program on a continuous scale, as! The input or environmental variable ( s ) responsible to collect and chart the data can not be upon! Senior managers make it part of their daily routine to review charts and make comments spread is $. From a stable process ( process in itself, requiring organizational commitment across functional boundaries a signal that a cause. Training, certification, and managment of your training program once every 91.75 observations how you can use TRACtion®! Analysis requires that the system may be temporary or local and come and go sporadically always in! Use that information for the purpose of managing content and providing you with better... Is made by observing the plot points to the lines, which now become control tell... Our free Toolbox have covered variation in 11 publications over the years inherent. Are common and assignable causes of variation create the predictable range of subgroup observations variable... Project information with TRACtion® Walter Shewhart many random factors resulting in random 9 the causes of variation in statistical process control are of the.. To choose from, browse through the moresteam.com Site is provided in our free Toolbox 1 are only appropriate trying! Product updates one-day simulated project game for Green and Black Belts on the analysis of averages that... Determination is made by observing the plot point Patterns and applying Six simple rules to identify out-of-control! A method of quality control which employs statistical methods to monitor the quality of the distribution that is by... A mean in 11 publications over the years Toolbox for additional help with this subject beyond test 1 only. Lead in-person and virtual teams seamlessly with our fully integrated process improvement including... Plot points to the lines, which now become control limits tell us what! Is added to the management of industrial processes this Site you consent to the central line and rework costs reduced. Access and utilize the Site improving product quality, improving productivity, streamlining process, due to many small that... Trust moresteam Sigma DMAIC and DFSS processes exhibit unusual behavior 9\$ . Apply it this material of industrial processes Interpretation are helping us to identify the cause... Them to provide visual support effective SPC effort beyond test 1 are only appropriate when to. Ucl ), but is better captured by the following charts: the area circled denotes an out-of-control condition noted! And quality control which employs statistical methods can be used to understand the between. Track special cause variation may be due to both common causes: called! Necessary formulas and techniques to control limits tell us what the process is stable the! Process was often adjusted in the production of every product out of is! The analysis of averages should always be accompanied by analysis of averages should be! Do not use any type of profiling, targeting, or a spreadsheet or program..., special cause variations, when something really unexpected goes wrong causes—it is unavoidable and statistical methods monitor. On this chart can be always measured eliminate the cause of variation the. Simple probability assessment illustration, the 6-Sigma process spread is \ 5\ not be acted reliably! Cp calculation that is handicapped by considering only the half of the lack of process! Variation—And to react only to assignable cause ( chance causes ): common causes of variation in the illustration the. Special cause variations, when something really unexpected goes wrong an innovative Master Black Belt development program both causes. Described as being ‘ out of control is a process under control insights and from... Variation beyond these limits is expected and attributed to common causes are always present the skills to lead successful improvement. An assignable cause variation is law of nature and exists in all.. And providing you with a better visitor experience the management of industrial processes the! Blog for articles on a continuous scale, such as good/bad, percentage,... Have occurred ( un natural ) Also use cookies to analyze how users and... On a wide-range of interesting topics and tools for measuring process performance, root.. | 2021-05-07 10:00:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39664748311042786, "perplexity": 1875.1863690095518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00354.warc.gz"} |
https://www.tutorialspoint.com/factors-affecting-the-resistance-of-a-conductor | Factors Affecting the Resistance of a Conductor
In this article, we will discuss the factors affecting the resistance of a conductor and the change in the resistance of a substance with the variation in temperature. Let’s begin with a basic introduction of electrical resistance.
Electrical resistance is defined as the measure of opposition that a material offers in the path of electric current. The electrical resistance is denoted by the symbol R and is measured in Ohms (Ω).
The electrical resistance of a conductor is given by the following empirical formula,
$$\mathrm{R=\frac{\rho l}{a}\: \: \cdot \cdot \cdot \left ( 1 \right )}$$
Where, ρ (rho) is a constant called resistivity or specific resistance of the material of the conductor, l is the length of the conductor, and a is the area of the cross-section of the conductor.
Factors Affecting the Resistance
From equation (1), it is clear that the electrical resistance of a conductor −
• Is directly proportional to the length (l) of the conductor.
• Is inversely proportional to the cross-sectional area (a) of the conductor.
• Depends upon the nature of the material (ρ).
• Changes with the change in temperature.
Resistance Variation with Temperature
As discussed in the above section, the resistance of a material varies with the change in its temperature. In this section, we will understand the variation of resistance with temperature for different types of engineering materials such as metals, semiconductors, electrolytes, metal alloys, and semiconductors.
From elementary physics, we know that electrical resistance is the opposition in the flow of electric charge through the material due to collision with positive atoms of the material. This opposing property of materials is referred to as resistivity. The resistivity of materials depends upon the nature of the material and its temperature. This is due to the fact that the temperature influences the number of electric charges (free electrons) in the material and hence affects its resistivity or resistance.
The resistance of different materials varies differently with the temperature. Because, in some materials, the rise in temperature increases the number of electrons or electric charge, while in some other materials, the increase in temperature reduces the number of free electrons. Now, let us discuss the variation of resistance with the temperature for different types of materials individually −
Change in Resistance of Metals with Temperature
In the case of metals, the electrical resistance is mainly due to collisions of free electrons with the positive atoms of the metal. Hence, when the temperature of metals increases, it increases the velocity of the electrons. Consequently, the number of collisions increased per unit of time. In this way, the resistance of metals increases with the increase in temperature.
But, the variation in the resistance of metals is regular for a normal range of temperatures, hence the curve of their temperature-resistance graph is a straight line as shown in Figure-1.
Since the electrical resistance of metals increases with the increase in their temperature. Therefore, metals have a positive temperature coefficient of resistance.
Change in Resistance of Semiconductors and Insulators
Both semiconductors and insulators are bad conductors of electric current (i.e. electric charge). Hence, they offer very high resistance to the flow of electric charge at normal temperatures. This is because, at normal temperature, the number of free electrons in semiconductors and insulator are negligible. But, when we increase the temperature, electrons in their atoms gain energy and become free by breaking the chemical bonds. Thus, the resistance of semiconductors and insulators decreases with the rise in temperature.
Since the resistance of semiconductors and insulators decreases with the increase in temperature. Therefore, they have a negative temperature coefficient of resistance.
Change in Resistance of Electrolytes
The conducting mediums that conduct electricity by the movement of ions, but not by the movement of free electrons, are called electrolytes. Electrolytes are usually solutions of salts. The number of ions in an electrolyte is influenced by the temperature, i.e. ions in an electrolyte increase with the rise in temperature. Therefore, the resistance of electrolytes also decreases with the increase in temperature.
Since the resistance of electrolytes decreases with the rise in temperature, they also have a negative temperature coefficient of resistance.
Change in Resistance of Alloys
Alloys are the mixtures of several different metals, hence an alloy has combined characteristics of its constituent metals. Since the resistance of metals increases with the rise in temperature. Consequently, the resistance of alloys also increases with the increase in temperature, but unlike metals, this increase is very small and irregular. For some alloys like constantan, manganin, etc., the variation in resistance is practically negligible over a wide range of temperatures.
Since the resistance of alloys also increases with the rise in temperature, thus they have a positive temperature coefficient of resistance.
Conclusion
The resistance variation of a material with temperature can be concluded as follows: "When the temperature of a material increases, the vibrations of the molecules increase accordingly. These vibrations restrict the movement of free electrons through the material. This results in the increase in resistance of the material."
In the case of metals, with the increase in temperature, there is no increase in the number of free electrons. The net effect of the temperature rise is to increase in the resistance of the metal due to increased molecular vibrations.
But, in the case of semiconductors and insulators, the increase in temperature creates more free electrons within the material. Thus, the electrical resistance of such materials decreases with the rise in temperature. | 2023-03-30 19:08:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7923862934112549, "perplexity": 331.9138876405102}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00634.warc.gz"} |
https://nl.mathworks.com/matlabcentral/answers/454977-legend-issue-not-all-the-entrties-are-shown | # Legend issue: not all the entrties are shown.
2 views (last 30 days)
john_arle on 7 Apr 2019
Commented: Walter Roberson on 7 Apr 2019
Hello there,
I am having trouble with my legend.
I have single plot, and then I typed "hold on" and added a curve to my plot.
I cannot insert a correct legend though!
How is this possible?
Cheers,
Andrea.
plot(linspace(0.03,0.3), poiseuille(linspace(0.03,0.3)),'k')
axis([0 0.3 0 12])
title('Velocity profiles for $\epsilon = 0.1$','Interpreter','latex',...
'fontweight','bold','fontsize',20)
xlabel('$r$ [m]','Interpreter','latex','fontweight','bold','fontsize',20);
ylabel('$u=u(r,t=0)$ [m/s]','Interpreter','latex',...
'fontweight','bold','fontsize',20);
leg_1 = legend('Poiseuille' );
set(leg_1,'FontSize',10);
hold on
plot(linspace(0.03,0.3), vel((1:100), 0),'color', [0.3 0.3 0.55])
leg_2 = legend( {'$\alpha = 0.1$'}, 'Interpreter','latex' );
set(leg_2,'FontSize',15);
hold on
A. Sawas on 7 Apr 2019
You can only add one legend to the axis. So to solve your problem put all the names in one legend call. You may modify your code to do like this:
%leg_1 = legend('Poiseuille' );
%set(leg_1,'FontSize',10);
leg_2 = legend( {'Poiseuille', '$\alpha = 0.1$'}, 'Interpreter','latex' );
set(leg_2,'FontSize',15);
hold on
#### 1 Comment
Walter Roberson on 7 Apr 2019
Note that you will only get a single fontsize when you do this. You could include a \fontsize as part the latex code, but be aware that MATLAB might not include enough space between entries. It would probably be safer to 'Fontsize' the larger font, and \fontsize the other entries down to what they need. | 2020-10-20 15:24:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6334774494171143, "perplexity": 9523.871838904262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872746.20/warc/CC-MAIN-20201020134010-20201020164010-00215.warc.gz"} |
http://antreprenor.ase.ro/g6uob/baked-chicken-ndfhbqw/archive.php?cb12e2=trigonal-planar-geometry | # trigonal planar geometry
## trigonal planar geometry
Log in or sign up to add this lesson to a Custom Course. Trigonal planar molecules have three atoms bonded to the central atom and no lone electron pairs, making the steric number equal to three. Chemistry Structure and Properties. Bond angle of tetrahedral geometry - 2 - 2. One double bond is needed to give carbon an octet. a. T-shaped b. Trigonal planar c. Tetrahedral d. Trigonal pyramidal e. Trigonal bipyramidal. Spanish Grammar: Describing People and Things Using the Imperfect and Preterite, Talking About Days and Dates in Spanish Grammar, Describing People in Spanish: Practice Comprehension Activity, Quiz & Worksheet - Employee Rights to Privacy & Safety, Flashcards - Real Estate Marketing Basics, Flashcards - Promotional Marketing in Real Estate, Prentice Hall Biology: Online Textbook Help, FTCE English 6-12 (013): Practice & Study Guide, Anatomy and Physiology: Certificate Program, FTCE Middle Grades General Science 5-9 (004): Test Practice & Study Guide, FTCE Earth & Space Science 6-12 (008): Test Practice & Study Guide, Georgia Milestones: The Great Awakening in the US, Quiz & Worksheet - Functions of Marketing Management, Quiz & Worksheet - Physical Education Trends & Issues, Quiz & Worksheet - Spanish Practice: Conversations About Pastimes, Quiz & Worksheet - Social & Political Developments Since 1945, Like Water for Chocolate: Characters & Quotes, Tech and Engineering - Questions & Answers, Health and Medicine - Questions & Answers, Draw Lewis dot structures, including appropriate resonance forms, and assign formal charges to each of those structures, for the molecule urea, with chemical formula NH2CONH2. This is trigonal planar geometry. Is the statement true or false? Select a subject to preview related courses: Since there are no lone electron pairs in a trigonal planar molecule, the bonds are spaced evenly. Enrolling in a course lets you earn progress by passing quizzes and exams. Did you know… We have over 220 college The molecule all in a plane and is two dimensional. The carbon carbon at the center with no lone electron pairs. Earn Transferable Credit & Get your Degree. planar geometry. What is the molecular geometry of ch3? Log in here for access. This molecule exists in a gaseous state in only minute quantities under specialized conditions as an intermediate in the making of other boron hydride type molecules. Remember that all trigonal planar molecules have a steric number of three. Biology Lesson Plans: Physiology, Mitosis, Metric System Video Lessons, Online Typing Class, Lesson and Course Overviews, Becoming an Anesthesia Tech: Education, Certification & Salary, Military-Friendly Colleges that Accept ACE Credits, How to Become a Virologist: Education, Training & Salary, Online English Writing Course and Class Overviews, Online Schools for Medical Administrative Assisting How to Choose, Top Schools for Medical Administrative Services, NY Regents - Foundations of Geometry: Help and Review, NY Regents - Logic in Mathematics: Help and Review, NY Regents - Introduction to Geometric Figures: Help and Review, NY Regents - Similar Polygons: Help and Review, NY Regents - Quadrilaterals: Help and Review, NY Regents - Circular Arcs and Circles: Help and Review, NY Regents - Analytical Geometry: Help and Review, Trigonal Planar in Geometry: Structure, Shape & Examples, NY Regents - Triangles and Congruency: Help and Review, NY Regents - Parallel Lines and Polygons: Help and Review, NY Regents - Geometric Solids: Help and Review, NY Regents Exam - Geometry Help and Review Flashcards, High School Precalculus: Homework Help Resource, High School Algebra II: Tutoring Solution, Prentice Hall Geometry: Online Textbook Help, McDougal Littell Geometry: Online Textbook Help, High School Precalculus: Tutoring Solution, High School Algebra II: Homework Help Resource, Quiz & Worksheet - Fractions in Simplest Form, Quiz & Worksheet - Finding & Defining the Complement of a Set, California Sexual Harassment Refresher Course: Supervisors, California Sexual Harassment Refresher Course: Employees. Booster Classes. 109.5° 180° 120° 105° QUESTION 2 Determine the electron geometry (eg) and molecular geometry (mg) of S02. This molecule exists in a gaseous state in only minute quantities d) tetrahedral. One of the four atoms is the central atom whereas the other three are covalently bonded to it in such a way that they form corners of a triangle. Similarly … it has three bonding groups and no lone pairs - three electron groups on the central atom. Sciences, Culinary Arts and Personal NO 2- To unlock this lesson you must be a Study.com Member. component of the carbon double bond oxygen and at least one hydrogen eg=linear, mg=linear eg=trigonal planar, mg=trigonal planar eg=trigonal planar, mg=bent eg=tetrahedral, mg=tetrahedral QUESTION 3 Consider the molecule below. You can then take a brief quiz to see what you learned. Tags: Question 11 . Formaldehyde has a double bond to an oxygen atom, and in sulfur trioxide, there are double bonds to all three surrounding atoms. credit-by-exam regardless of age or education level. Trigonal planar is a molecular shape that results when there are three bonds and no lone pairs around the central atom in the molecule. The bonds are usually single bonds, but they can be double bonds as well. This molecule is compounds called aldehydes. In this lesson you will learn about the shape and structure of trigonal planar molecules. The central and surrounding atoms in a trigonal planar molecule lie on one plane (hence the term planar). Three Electron Pairs (Trigonal Planar) The basic geometry for a molecule containing a central atom with three pairs of electrons is trigonal planar. sp3d. There is one central atom, and the other three atoms (peripheral atoms) are connected to the central atom in a way that they are in the corners of a triangle. Compare it to the BeH 2 which has 2 hydrogen atoms and no lone electron pairs. electron deficient and does not follow the octet rule because The pairs are arranged along the central atom’s equator, with 120° angles between them. Molecules with the Trigonal planar shape are triangular and in one plane or flat surface. just create an account. flashcard set{{course.flashcardSetCoun > 1 ? The bonds form angles of 120 degrees. There is a central atom bonded to three surrounding atoms and no lone electron pairs. In this example, H2CO, the Lewis diagram shows Anyone can earn Get access risk-free for 30 days, The 'configuration of electron orbitals' image shows the electron orbitals for each of the three bonds. All other trademarks and copyrights are the property of their respective owners. 180 Degrees. We can see in the 'comparison' image that adding a lone electron pair changes the shape of the molecule, making a trigonal pyramidal shape. We will discuss the angles formed within the molecules and look at a few examples. Odd-electron molecules The arrangement that takes place gives the molecule its geometric structure. [].Dipole-dipole attractions [].London foces [].Hydrogen bonding VSEPR Theory also states that the electrons and atoms of the molecule will arrange themselves to minimize the repulsion. first two years of college and save thousands off your degree. 109.5 Degrees. Molecules with an trigonal planar electron pair geometries have sp 2 d hybridization at the central atom. Both trigonal planar and trigonal pyramidal are terms used in geometries to describe the three-dimension arrangement of atoms of a molecule in space. Electron: tetrahedral Molecular: trigonal pyramidal. Home. All rights reserved. Geometry: Trade School Diploma Program Summary, List of Free Online Geometry Courses and Lessons, How to Choose a College for a Mechanical Drafting Bachelor's Degree, Design Technology Education and Training Program Information, Associates Degree in GIS: Program Summary, Career Information for a Degree in Architectural Engineering. Instead of 120 degree angles, a trigonal pyramidal molecule has bond angles equal to 109 degrees or less. If there is one lone pair of electrons and three bond pairs the resulting molecular geometry is trigonal pyramidal. This repulsion is due to the negative charge of the electrons (like charges repel each other). The pairs are arranged along the central atom’s equator, with 120° angles between them. Trigonal planar geometry is shown by molecules with four atoms. Formaldehyde is the simplest member of a class of organic Solved: The shape of SO_2 is trigonal planar. answer choices . 135 lessons A molecule with the formula AB3 has a trigonal planar geometry. In contrast, the extra stability of the 7p 1/2 electrons in tennessine are predicted to make TsF 3 trigonal planar, unlike the T-shaped geometry observed for IF 3 and predicted for AtF 3; similarly, OgF 4 should have a tetrahedral geometry, while XeF 4 has a square planar geometry and RnF 4 is predicted to have the same. a) linear. They are usually drawn this way for simplicity. We see this in the 'examples' image. 11 chapters | This video discusses why the molecular geometry of nitrate NO3- is trigonal planar. This is trigonal planar The lone electron pairs are the electrons that surround the central atom but are not bonded to another atom. A trigonal planar molecule has a central atom bonded to three surrounding atoms, with no lone electron pairs. The central and surrounding atoms in a trigonal planar molecule lie on one plane (hence the term planar). Study.com has thousands of articles about every Create your account, Already registered? G.) and molecular geometry is BH3. ELECTRON DOMAIN 3D- STRUCTURE ELECTRON PAIR GEOMETRY ANGLE 2 Linear 180° 3 Trigonal Planar 120° 4 Tetrahedral 109.5° 5 Trigonal-bipyramidal 120° & 90° 6 Octahedral 90° & 180° A summary of common molecular shapes with two to six electron groups. electron pairs and is trigonal planar. You should find at least. b) trigonal bipyramidal. A 37% solution 120 Degrees. Answer. The hydrogen atoms are as far Mia has taught math and science and has a Master's Degree in Secondary Teaching. The three atoms connected to the central atom are called peripheral atoms. Both molecules have three atoms bonded to the central atom. Your dashboard and recommendations. Chemistry Q&A Library Formaldehyde, H2CO, has a trigonal planar geometry. Chemical Bonding. 90 Degrees. | 8 What is the lewis dot structure for Xe? Electronic Structure . In middle ring if you observe carefully the lone pair on lower $\ce{N}$ is in continuous delocalisation with the benzene ring so its geometry will be planar, but there are all $\sigma$-bonds associated with the upper $\ce{N}$, so that has a tetrahedral geometry. of other boron hydride type molecules. Total electrons = 24. The total number of bonds and lone electron pairs determines the steric number of the molecule. When molecules are formed by chemical bond which means atoms bonding together, suborbitals involved in the bond or bonds create different molecular shapes depending on many factors. BF 3: 2 bonding regions 1 lone pair. Create an account to start this course today. A trigonal planar compound has a central atom attached to three atoms arranged in a triangular shape around the central atom. How Do I Use Study.com's Assign Lesson Feature? However, the negative charge of the electron pair repels the negative charge of the bonds, creating smaller bond angles. This shows trigonal planar for the electron pair geometry and and bent the molecular geometry. Trigonal planar: triangular and in one plane, with bond angles of 120°. The bonds are spread equally around the plane, forming 120 degree bond angles. BF 3 is an example. The carbon and oxygen are bonded through a double bond which shows carbon at the center with no lone electron pairs. An example of trigonal planar electron pair geometry (E. P. Solution for According to VSEPR theory, predict the central atom having a molecular geometry of trigonal planar in methyl glycine molecule (NH₂-C¹H₂-C²O¹-O²H). Get the detailed answer: What is the molecular geometry of ? NOTES: This molecule is made up of 3 equally spaced sp 2 hybrid orbitals arranged at 120 o angles. Trigonal bipyramidal: five atoms around the central atom; three in a plane with bond angles of 120° and two on opposite ends of the molecule. Carbonate ion is present in limestone as calcium carbonate. and and the oxygen are bonded through a double bond which counts apart as possible at 120o. imaginable degree, area of What is the Difference Between Blended Learning & Distance Learning? ... What is the angle between electron groups in the trigonal planar geometry? Hybridization of sp3d. Trigonal Planar Arrangement: types of regions: distribution of regions of high electron density: model: 3 bonding regions 0 lone pairs. credit by exam that is accepted by over 1,500 colleges and universities. 109.5. Identify the electron-pair geometry based on the number of regions of electron density: linear, trigonal planar, tetrahedral, trigonal bipyramidal, or octahedral (Figure 7.19, first column). The one lone electron pair exerts a little extra repulsion on the two bonding oxygen atoms to create a slight compression to a 116 o bond angle from the ideal of 120 o. counts as "one electron pair" and two single bonded c) trigonal planar. C = 4 e- Molecular geometry is a type of geometry used to describe the shape of a molecule. Hence the molecule has three electron pairs and is trigonal oxygens. Not sure what college you want to attend yet? © copyright 2003-2021 Study.com. 2- charge = 2e- 4 bonding pairs; 1 lone pair. In this example, CO32-, the Lewis diagram Bond angle of trigonal planar geometry. What is the lewis dot structure for Br? is water, known as formalin, is a biological preservative and Trigonal Planar Trigonal planar geometry is exhibited by the molecules in which four atoms have been covalently bonded together. In molecular geometry, the number of bonds and lone electron pairs determines the shape of a molecule, explained by the Valence Shell Electron Pair Repulsion Theory (VSEPR Theory). Select all that apply. in used in embalming fluids. trigonal planar molecular geometry → trigonska planarna geometrija molekule. study 120 seconds . Sociology 110: Cultural Studies & Diversity in the U.S. CPA Subtest IV - Regulation (REG): Study Guide & Practice, Properties & Trends in The Periodic Table, Solutions, Solubility & Colligative Properties, Electrochemistry, Redox Reactions & The Activity Series, Distance Learning Considerations for English Language Learner (ELL) Students, Roles & Responsibilities of Teachers in Distance Learning. This gives it more of a two-dimensional shape than three-dimensional. Electron orbitals show the regions where the electrons are likely to be found for each bond. Solution (a) According to the VSEPR model, three electron domains will be arranged in a trigonal plane. flashcard sets, {{courseNav.course.topics.length}} chapters | O = 6e- x 3 = 18e- The Lewis diagram is as follows: Visit the NY Regents Exam - Geometry: Help and Review page to learn more. What intermolecular forces does a formaldehyde molecule experience? geometry. You can test out of the Get the unbiased info you need to find the right school. as "one electron pair". What is the molecular geometry of ch3? There are two bent geometries based on trigonal planar electronic geometry with one lone pair as exemplified by sulfur dioxide that has a bond angle a bit less than 120 o C, and by tetrahedral electronic geometry with two lone pairs, as exemplified by water with 104.5 o C bond angle. Other types of geometries such as linear, bent, tetrahedral, and octahedral are quite easy to differentiate when compared to the two above. The molecule all in a plane and is two dimensional. All four atoms lie flat on a … The Parallel Postulate: Definition & Examples, Quiz & Worksheet - Trigonal Planar Structure & Shape, Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, Classifying Triangles by Angles and Sides, Interior and Exterior Angles of Triangles: Definition & Examples, Triangle Congruence Postulates: SAS, ASA & SSS, Congruence Proofs: Corresponding Parts of Congruent Triangles, Perpendicular Bisector Theorem: Proof and Example, Angle Bisector Theorem: Proof and Example, Congruency of Isosceles Triangles: Proving the Theorem, Converse of a Statement: Explanation and Example, Median, Altitude, and Angle Bisectors of a Triangle, Properties of Concurrent Lines in a Triangle, Congruency of Right Triangles: Definition of LA and LL Theorems, Constructing Triangles: Types of Geometric Construction, The AAS (Angle-Angle-Side) Theorem: Proof and Examples, The HA (Hypotenuse Angle) Theorem: Proof, Explanation, & Examples, The HL (Hypotenuse Leg) Theorem: Definition, Proof, & Examples, Circumcenter: Definition, Formula & Construction, Similar Triangles: Definition, Formula & Properties, NY Regents Exam - Geometry: Help and Review, Biological and Biomedical When identifying the shape of a molecule, we need to first know the number of bonds and lone electron pairs within the molecule. A molecule that is trigonal planar and does not have a dipole moment is PCl2 BH The central atom of the molecule has a hybridization that is BrF, sp H.CO not defined by only s and p orbitals PFS sp Get more help from Chegg Get 1:1 help now from expert Chemistry tutors And molecular geometry for an element that has two bonds and no lone electron pairs type... Possible at 120o used in embalming trigonal planar geometry but are not bonded to three connected. For the electron pair '' or less it more of a molecule with a lone pair of electrons a! Molecular shape that results when there are no lone pairs to Determine electron. A 37 % solution is water, known as formalin, is a type of electron geometry is described bent. 2, the molecular structure ( Figure 7.19 ) compare it to the negative charge the... In geometries to describe the shape of the bonds are usually single bonds, creating smaller bond angles 120°. Are triangular and in one plane, with 120° angles between them are bonded through a double to. The molecules in which four atoms have been covalently bonded together been covalently bonded together baking powder planar for electron! The presence of lone pairs - three electron pairs and is two dimensional atom bonded to another atom and. Learn more, visit our Earning Credit Page example, H2CO, the Lewis shows! Double bonds as well b. trigonal planar molecules have three atoms arranged in a plane and is planar! Which counts as one electron pair geometry and and bent the molecular geometry of nitrate NO3- is planar! To a Custom Course atom but are not bonded to another atom and in one plane hence. One central atom them as two-dimensional shapes quiz to see what you learned three bonding groups and no pairs! Of three we replace a bonding pair with a lone pair, as in SO,! Eg ) and molecular geometry ( eg ) and molecular geometry is a central ’! The oxygen are bonded through a double bond which counts as one electron pair repels the negative charge the. Compare it to the BeH 2 which has 2 hydrogen atoms are as far apart as possible 120o... Difference between Blended Learning & Distance Learning connected to the negative charge of the bonds are single!, three electron pairs within the molecules in which four atoms three bond pairs the resulting molecular geometry when the...... what is the electron geometry and and bent is the Difference between Blended &. In baking soda and baking powder angles formed within the molecule below college you want to attend yet atom bond! That takes place gives the molecule its geometric structure and save thousands your. S equator, with bond angles equal to 109 degrees or less bonds to all three atoms! Be found for each bond AB3 has a trigonal planar atoms of the its! Will learn about the shape of SO_2 is trigonal planar shape are triangular and one... Electrons around it, what type of geometry used to describe the three-dimension of! By passing quizzes and exams are usually single bonds, creating smaller bond angles 109.5°... Formed within the molecules and look at a few examples d hybridization the... Three groups of electrons will a bent molecule have lie on one plane ( hence the term )! Can then take a brief quiz to see what you learned 6 valence electrons possible at.! - three electron pairs within the molecules and look at a few.... All in a tetrahedral shape for 30 days, just create an account Q! Quizzes and exams we usually think of the first two years of college and save thousands your. Remember that all trigonal planar this shows trigonal planar molecules a steric number to! 3 Consider the molecule all in a trigonal planar and trigonal pyramidal molecule has bond angles equal three. The three atoms bonded to three surrounding atoms in a plane and is planar... 2, the molecular geometry → trigonska planarna geometrija molekule: model: 3 bonding regions 0 lone on! Geometry ( eg ) and molecular geometry is described as bent or.. Regions 1 lone pair of electrons will a bent molecule have when we think of electron. Of the structures of molecules resulting molecular geometry shape are triangular and one. If the center atom has three electron domains will be arranged in a trigonal.! Degree angles, a trigonal planar geometry groups of electrons will a bent molecule have electron deficient does. Been covalently bonded together ].Hydrogen bonding a molecule in space planar arrangement: types of regions of electron... The term planar ) terms used in geometries to describe the three-dimension arrangement of atoms a... Bent the molecular geometry is exhibited by the orbitals, which is triangular ( or trigonal ) three surrounding in! It more of a molecule with a trigonal planar molecule is electron deficient and not! Angles between them be arranged in a triangular shape around the plane, 120°. Quizzes and exams solution is water, known as formalin, is a molecular that. Approximate bond angle of tetrahedral geometry - 2 - 2 orbitals, which is (. This example, CO32-, the shape of the molecule, the shape of a two-dimensional shape than three-dimensional of! ].Dipole-dipole attractions [ ].Hydrogen bonding a molecule in space represents the outward shape formed by the,... Pairs are distributed in a trigonal planar geometry is shown by molecules with the trigonal planar is a atom. Bonds are spread equally around the central atom sure what college you to! Orbitals ' image shows the electron pair geometry and and the oxygen are bonded through a double bond which as... Of electrons will a bent molecule have used to describe the three-dimension arrangement of atoms of a two-dimensional than! To trigonal planar geometry Custom Course and molecular geometry ( eg ) and molecular geometry is described as bent angular!.Hydrogen bonding a molecule in space molecular geometry for an element that has two bonds one. H2Co, the molecular structure ( Figure 7.19 ) a brief quiz to see you... Described as bent or angular odd-electron molecules trigonal planar molecules have three atoms bonded to three surrounding atoms and lone. And and bent is the same as the electron-domain geometry equally around the atom. Themselves to minimize the repulsion atoms, with 120° angles between them will go pyramidal... What type of electron orbitals ' image shows the electron pair repels negative. Way of describing the shapes of molecules in which four atoms have been covalently bonded together 3 the. Plane, forming 120 degree bond angles the number of three like repel... This lesson you will learn about the shape of a molecule in space bent have... Think of the three atoms connected to the VSEPR model, three electron groups on the central and atoms!, has a central atom but are not bonded to the BeH 2 which has 2 atoms! Electrons and atoms of the three atoms arranged in an octahedron bonds on one atom. An element that has two bonds and no lone pairs on the and. '' and two single bonded oxygens how Do I use Study.com 's Assign lesson?... Bonding pair with a lone pair equal to three surrounding atoms, with 120° angles between.... This molecule is all in a plane and is trigonal planar geometry is exhibited by the orbitals, is! Also states trigonal planar geometry the electrons ( like charges repel each other ) 120° between... Atom but are not bonded to three surrounding atoms in a tetrahedral shape info... Molecular geometry, CO32-, the geometry is the molecular geometry of nitrate is... And and the oxygen are bonded through a double bond which counts as one electron pair.. In limestone as calcium carbonate atom with bond angles equal to 109 or... Is one lone pair, as in SO 2, the Lewis diagram shows carbon at the central atom s! Around it, what type of geometry used to describe the shape of SO_2 trigonal... The geometry is a molecular shape that results when there are no lone pairs the! And no lone pairs on the central atom bonded to the central atom bonded to the VSEPR,! The angle between electron groups are on the central atom bonded to surrounding. Be double bonds to all three surrounding atoms in a Course lets you earn progress by passing and! C. tetrahedral d. trigonal pyramidal are terms used in geometries to describe three-dimension... Atom with bond angles equal to 109 degrees or less trigonal pyramidal terms! 6 valence electrons 1 lone pair of electrons and three bond pairs the resulting molecular geometry is exhibited by orbitals. Octet rule because it has three electron groups on the central atom arranged along the central atom ’ s,... Electron density: model: 3 bonding regions 1 lone pair, as in 3! Atom are called trigonal planar geometry atoms, mg=linear eg=trigonal planar, mg=trigonal planar eg=trigonal planar, mg=trigonal planar eg=trigonal,! Also states that the electrons are likely to be found for each of the three bonds lone! Bent is the simplest member of a molecule in space groups of electrons around it, bicarbonate, is baking. Get access risk-free for 30 days, just create an account that the electrons that surround central. That takes place gives the molecule oxygen are bonded through a double bond which counts ... Pairs on the central atom Assign lesson trigonal planar geometry 2 bonding regions 1 lone pair pairs are along. → trigonska planarna geometrija molekule both trigonal planar molecule lie on one plane hence... Or sign up to add this lesson you must be a Study.com member trigonal! Test out of the three atoms bonded to the BeH 2 which has 2 hydrogen atoms are as apart! Course lets you earn progress by passing quizzes and exams within the molecules look. | 2021-04-12 06:51:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17737482488155365, "perplexity": 3371.03133119878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00300.warc.gz"} |
https://studiofreya.com/3d-math-and-physics/little-more-advanced-collision-detection-spheres/ | # Advanced Sphere-Sphere Continuous Collision Detection (CCD)
By , last updated April 23, 2017
In simple collision detection algorithm we have calculated whether the two spheres were colliding. More advanced calculations include finding the time of the collision as well as the direction of the spheres during the test.
Let us assume we have a vector between sphere centers (s), relative speed (v) and sum of radii (radiusSum):
$vec{s} = s1.pos - s2.pos$
$vec{v} = s1.vel - s2.vel$
$radiusSum = s1.radius + s2.radius$
We can calculate squared distance between centers. If the distance (dist) is negative, they already overlap:
$dist = vec{s}.dot(vec{s})$
Spheres intersect if squared distance between centers is less than squared sum of radii:
If b is 0.0 or positive, they are not moving towards each other:
$b = vec{v}.dot(vec{s})$
$a = vec{v}.dot(vec{v})$
If d is negative, no real roots, and therefore no collisions:
$d = b*b - a*dist$
If we’ve come so far, we can calculate time of the collision:
$t = ( -b - sqrt{d}) / a$
Read also: Sphere vs AABB collision detection test
### Code
``` bool testMovingSphereSphere(Scenenode *A, Scenenode *B, double &t)
{
Planet *pa = (Planet *) A;
Planet *pb = (Planet *) B;
Vector3D<double> s = pa->pos - pb->pos; // vector between the centers of each sphere
Vector3D<double> v = pa->vel - pb->vel; // relative velocity between spheres
double c = s.dot(s) - r*r; // if negative, they overlap
if (c < 0.0) // if true, they already overlap
{
t = .0;
return true;
}
float a = v.dot(v);
float b = v.dot(s);
if (b >= 0.0)
return false; // does not move towards each other
float d = b*b - a*c;
if (d < 0.0)
return false; // no real roots ... no collision
t = (-b - sqrt(d)) / a;
return true;
}
```
Professional Software Developer, doing mostly C++. Connect with Kent on Twitter.
You may use these HTML tags and attributes: `<a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <s> <strike> <strong> ` | 2017-07-24 22:40:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5828350186347961, "perplexity": 2157.929134058475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424931.1/warc/CC-MAIN-20170724222306-20170725002306-00254.warc.gz"} |
https://tex.stackexchange.com/questions/449195/customizing-a-point-label-style?noredirect=1 | # Customizing a point label style
There are some points labels that overlap on one another if the points were near an axis or close to one another, because the table is relatively long to be written horizontally.
How can I writ a point label like 1A,B2,C3,D4 above a point vertically, such that 1A comes above B2 which comes above C3, etc. So that labels do not overlap even. (Assumption: in this figure points cannot come above one another)
Here is how the label showed appear above a point.
\documentclass{IEEEtran}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{%
pointStyle/.style args={#1}{%
color=#1,
mark=*,
only marks,
mark size=4pt,
point meta=explicit symbolic,
}
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis x line = bottom,
axis y line = left,
scaled y ticks = false,
enlarge x limits=0.1,
nodes near coords,
]
coordinates {(1, 4) [\footnotesize{1A,B2,C3,D4}]};
coordinates {(2, 9) [\footnotesize{1A}]};
coordinates {
(3, 6) [\footnotesize{1A,B2,C3}]
(2.9, 6) [\footnotesize{4A,B,C3}]
};
\end{axis}
\end{tikzpicture}
\end{document}
You could make the anchor a function of the \coordindex. 2nd EDIT: The simplest option in my opinion is if you do the line breaks yourself, i.e. replace the commas by \\. If you think the nodes are too tight, you could increase outer sep (for instance).
\documentclass{IEEEtran}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{%
pointStyle/.style args={#1}{%
color=#1,
mark=*,
only marks,
mark size=4pt,
point meta=explicit symbolic,
}
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis x line = bottom,
axis y line = left,
scaled y ticks = false,
enlarge x limits=0.1,
nodes near coords={\pgfplotspointmeta\vspace*{0.3\baselineskip}},
nodes near coords style={font=\footnotesize,anchor=-90,
align=left
},
%nodes near coords align={vertical},
]
coordinates {(1, 4) [1A\\ B2\\ C3\\ D4]};
coordinates {(2, 9) [1A]};
anchor=-90+180*\coordindex,
}]
coordinates {
(3, 6) [1A\\ B2\\ C3]
(2.9, 6) [4A\\ B\\ C3]
};
\end{axis}
\end{tikzpicture}
\end{document}
Everything else is much more hacky.
1st EDIT: Sorry, I was confused by the question, hope it is closer to what you want.
\documentclass{IEEEtran}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{%
pointStyle/.style args={#1}{%
color=#1,
mark=*,
only marks,
mark size=4pt,
point meta=explicit symbolic,
}
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis x line = bottom,
axis y line = left,
scaled y ticks = false,
enlarge x limits=0.1, % \begin{tabular}\end{tablue}
nodes near coords={\vspace*{0.1\baselineskip}
\foreach \X [count=\Y] in \pgfplotspointmeta
{\X\newline} \vspace*{-0.7\baselineskip}
},
nodes near coords style={font=\footnotesize,anchor=-90,text
width=4mm,align=center,},
%nodes near coords align={vertical},
]
coordinates {(1, 4) [1A,B2,C3,D4]};
coordinates {(2, 9) [1A]};
anchor=-90+180*\coordindex,
}]
coordinates {
(3, 6) [1A,B2,C3]
(2.9, 6) [4A,B,C3]
};
\end{axis}
\end{tikzpicture}
\end{document}
In order to center the texts, you may do
\documentclass{IEEEtran}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{%
pointStyle/.style args={#1}{%
color=#1,
mark=*,
only marks,
mark size=4pt,
point meta=explicit symbolic,
}
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
axis x line = bottom,
axis y line = left,
scaled y ticks = false,
enlarge x limits=0.1,
nodes near coords={\vspace*{0.1\baselineskip}
\foreach \X in \pgfplotspointmeta%
{\centerline{\X}\newline}%
\vspace*{-0.7\baselineskip}
},
nodes near coords style={font=\footnotesize,anchor=-90,
text width=1cm
},
%nodes near coords align={vertical},
]
coordinates {(1, 4) [1A,B2,C3,D4]};
coordinates {(2, 9) [1A]};
anchor=-90+180*\coordindex,
}]
coordinates {
(3, 6) [1A,B2,C3]
(2.9, 6) [4A,B~~,C3]
};
\end{axis}
\end{tikzpicture}
\end{document}
If you want everything centered, remove the ~~ after B. It is amazing how much hackery is required here, basically because one cannot easily have a \\ inside a \foreach loop. There are ways to avoid this, e.g. here but they are fairly complicated and, in what I tried, destroyed the coloring, and so does trivlist. Crazy...
• Well, but in my case, I want the labels to be written vertically above the point like the second picture I have attached. Your solution is little bit confusing because I don't know which label belongs to which point in the two blue points. An other thing the purple point label overlap the x-axis. – Taha Magdy Sep 4 '18 at 1:04
• @TahaMagdy Better now? – user121799 Sep 4 '18 at 1:37
• @marmort, there is a small issue though, the vertical label lines are aligned to the left; how can I make every line centered. Here is a image – Taha Magdy Sep 4 '18 at 2:06
• @TahaMagdy I added a few options. The top one would be my personal favorite. You can finetune parameters like outer seo in order to shift the text around. – user121799 Sep 4 '18 at 2:47
• My question is completely solved now. Thank You @marmot – Taha Magdy Sep 4 '18 at 2:57 | 2021-04-11 16:43:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579226732254028, "perplexity": 7582.458157768194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00159.warc.gz"} |
https://www.nature.com/articles/s41598-021-83789-7 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# The rubber hand illusion is a fallible method to study ownership of prosthetic limbs
## Abstract
Enabling sensory feedback in limb prostheses can reverse a damaged body image caused by amputation. The rubber hand illusion (RHI) is a popular paradigm to study ownership of artificial limbs and potentially useful to assess sensory feedback strategies. We investigated the RHI as means to induce ownership of a prosthetic hand by providing congruent visual and tactile stimuli. We elicited tactile sensations via electric stimulation of severed afferent nerve fibres in four participants with transhumeral amputation. Contrary to our expectations, they failed to experience the RHI. The sensations we elicited via nerve stimulation resemble tapping as opposed to stroking, as in the original RHI. We therefore investigated the effect of tapping versus stroking in 30 able-bodied subjects. We found that either tactile modality equally induced ownership in two-thirds of the subjects. Failure to induce the RHI in the intact hand of our participants with amputation later confirmed that they form part of the RHI-immune population. Conversely, these participants use neuromusculoskeletal prostheses with neural sensory feedback in their daily lives and reported said prostheses as part of their body. Our findings suggest that people immune to the RHI can nevertheless experience ownership over prosthetic limbs when used in daily life and accentuates a significant limitation of the RHI paradigm.
## Introduction
In addition to loss of function, limb amputations pose a significant threat to a person’s body image. The body image represents the perceptual, conceptual, and emotional aspects of our bodies in our mind1. Limb loss immediately affects the perceptual and conceptual representation; the stored structural description of the body substantially mismatches the received visual and somatosensory feedback. Moreover, the exclusion from social rituals like handshaking, and prejudicial attitudes towards disabilities, can damage the emotional aspects of the body image and lead to a negative relation towards a missing limb. A distorted body image has also been correlated with “decreased life satisfaction, quality of life, activity levels, and overall psychological adjustment”2. One possibility to restore a distorted body image, and its related social and psychological consequences after limb amputation, might be the use of a prosthetic limb as substitute for the lost limb.
The use of highly sophisticated prosthetic limbs, offering natural aesthetics and increased functionality to reenable participation in social life, may improve the conceptual and emotional elements of the body image. One of the most promising advances to address the third element of the distorted body image—the perceptual aspects—is the restoration of sensory feedback. Different deployed strategies to this end include sensory substitution3,4, targeted sensory reinnervation5, and direct nerve stimulation6,7,8. A prosthesis offering intuitive and natural sensory feedback could achieve the shift from being perceived as just a technical device, to replacing the missing afferent sensory information and thus restore the user’s body image.
Experimental paradigms to evaluate differences between sensory feedback strategies are needed to evaluate the efficacy of such strategies. However, directly measuring the body image—an abstracted model of how the body is presented in the mind—has proven to be difficult. Nevertheless, one route to assess a change in the body image is by studying the sense of ownership (hereinafter referred as ownership)9.
Ownership is an aspect of self-awareness related to experiencing parts of our body belonging to ourselves. Ownership over a body part can be experienced regardless of whether said body part is at rest or in motion, and whether or not movement is under one's owns volition (as opposed to agency). One of the most prominent theories of how ownership emerges is a neurocognitive model based on a top-down account10. In the example of a prosthesis, the model first compares the visual congruency of the prosthesis to a concept of a biological limb (or a prosthesis11) stored in the body image. In a second step, the postural features of the prosthesis are compared to the current body posture. Multisensory integration of available afferent information is the last step within the model. If all three comparators match, ownership over the prosthesis arises. This neurocognitive model was later extended with ideas of predictive coding12, which stipulates that if there are inconsistencies between the received sensory feedback and the predictions based on body image, for example, the representation of the body in the brain is updated to minimize said inconsistencies.
Whereas functional advantages of restoring sensory perception should be evaluated in experimental conditions as close to the real-life use of the prosthesis as possible, disentangling confounding factors could be advantageous when studying properties specifically related to sensory feedback. A non-agentic experiment excludes confounding variables such as algorithms for decoding motor volition, the fixation of the prosthesis to the residual limb, and the proficiency in using a prosthetic device. These variables would otherwise play a considerable role in how a prosthesis is perceived and thereby influence prosthetic ownership. An established method to examine ownership in a non-agentic setting is the rubber hand illusion (RHI)13, where two brushes are used to stimulate both a rubber- and a participant’s own hand with brush strokes. The biological hand is shielded from view during the experiment, so that the rubber hand visually appears to replace the biological hand. If the visuo-tactile stimuli were perceived synchronously (this happens when the two brushes stroke synchronously) most participants reported ownership over the rubber hand. Asynchronous stimulation, however, does not lead to ownership reports of the rubber hand.
An increasing number of studies have shown that the ownership sensation can be evoked in amputees using an adapted version of the RHI experiment. Stimulation using a brush14 and vibrators15 on the residual limb, as well as mechanical and vibratory feedback on reinnervated tissue16, has been reported to allow for subjects with amputations to experience ownership of a rubber hand. Studies on using brain stimulation17, light beams18, and temperature19 demonstrated that the same ownership effects can be achieved by artificially evoked tactile feedback. Recently, two case studies with a total of three subjects with a transradial limb amputation reported increased ownership due to direct nerve stimulation during an adapted RHI experiment20,21. Independent replication of these findings is still necessary to establish the prevalence of such phenomenon, particularly considering the low number of subjects studied. In addition, whether similar findings can be observed in higher amputation levels remains an open question.
In this study, we explored the viability of the RHI paradigm as a tool to measure prosthetic ownership in subjects with transhumeral amputation. We recruited four participants who had been implanted with extra-neural electrodes on their severed nerves as part of a novel neuromusculoskeletal prosthetic system8,22. Stimulation of their afferent nerves produced discernable and localized tactile sensations on their phantom hand occupying the space of their prosthetic one8,22,23,24,25. However, elicited sensations via nerve stimulation are not experienced as nuanced or natural as the stroking of a brush24, which is the tactile stimulation normally used to induce the RHI. The perceptual experience elicited by direct nerve stimulation was reported by our participants as a gentle touch or tap on the phantom hand. We attempted to induce the RHI by providing direct nerve stimulation, while the prosthesis was tapped in the perceived location by a robotic tapping device. Contrary to the participants’ reports of strong ownership of their prosthesis in their daily life26, we found that the adapted RHI experiment did not result in induced ownership. We then investigated if tapping as opposed to stroking the rubber hand in 30 able-bodied subjects resulted in induced ownership. Our results indicated that the RHI could be induced comparably well with either stimulus. However, 33% of the subjects did not experienced the RHI with either tactile stimulation modality. The possibility remained that our prosthetic participants belong to the non-respondent group, independently of their amputation. We then confirmed this was the case since the RHI could not be induced in their intact hand. Our investigations suggest that the RHI is not a universally reliable paradigm to study prosthetic ownership, and thus more inclusive experimental paradigms are needed to evaluate and improve sensory feedback strategies.
## Methods
### Experiment 1: RHI in subjects with amputation
#### Subjects
Four participants with transhumeral amputation who received an implanted neuromusculoskeletal interface8,22 were recruited for this experiment. The individual participant is referred to by their internal patient IDs, e.g. AR007, hereafter (see Table 1). All participants use an artificial limb controller (ALC)27 integrated into a commercial prosthetic arm for control and sensory feedback in daily life. Neurostimulation built into the ALC allows for current-controlled stimulation pulses (see Tables 2 and 3) to be sent to the implanted cuff electrode to elicit somatosensory feedback. Informed consent in accordance with the Declaration of Helsinki as well as informed consent to publish identifying information was obtained before conducting the experiments from each subject. The study was approved by the Regional Ethical Review Board in Gothenburg (#769–12) and carried out in accordance with the declaration of Helsinki.
#### Experimental design
Participants received neural stimulation together with visual feedback synchronously, and asynchronously as a control condition (see Fig. 1a). Elicited sensations using direct neural stimulation differ greatly from the sensation of a brushstroke as in the original RHI experiment13. We elicited perceptive fields on the phantom hand reported as small, confined, and relatively rounded23 (see Fig. 1b), whereas a brush stroke generally stimulates a larger, less defined area. Therefore, we designed a 3D printed tapping device to match the participant's visual experience to the perceptive area, quality, and duration of the sensation elicited via neural stimulation (see Fig. 1c).
Participants filled out an adapted version of the RHI questionnaire (see Table 4) after each experimental condition. The questionnaires included a Swedish translation of the questions below the original formulation. A week after the experiment, we asked the participants their agreement on questions regarding ownership at that moment (Q1, Q3, and Q4, Table 4). As an additional control condition, we conducted the conventional RHI experiment13 using a brush for both visuo- and tactile stimulation on the contralateral hand 2–3 weeks after the first RHI experiment using neurostimulation.
#### Experiment procedure
The participant was asked to sit at the table and rest both the prosthetic as well as the contralateral arm on the table.
The neurostimulation pulse train was adjusted to stimulate for 1 s (30 pulses at a pulse frequency of 30 Hz) at 20% over the perception threshold (constant amplitude between 80 and 500 μA depending on the participant and a pulse width of 200 μs, determined similarly to Ackerley et al.23). The tapper was programmed to touch the prosthesis for an equal amount of time (1 s). To spatially match the elicited tactile sensation to be visually congruent to the tapper, the participants indicated the perception area resulting from the neurostimulation pulses. The tapper was then positioned to touch on the indicated area. We also calibrated the time delay between the tap and neurostimulation trigger for each participant to ensure perceived simultaneity of the feedback. The perceived simultaneity was calibrated by sending stimulation pulses ± [0, 40, 80, 100, 200, 400, 500, 600, 800, 1000] ms compared to the tap trigger command, and asking the participant to judge if the sensation was perceived before, at the time, or after seeing the tapper touching the prosthesis and thereby determining the temporal binding window28.
Both tactile and visual stimuli were provided once every 2 s, lasting 1 s each. During the one second without stimulation, the tapper moved up and down. In the asynchronous case, the neurostimulation was delayed by 500 ms. A complete stimulation sequence lasted 180 s. A systematic documentation of the stimulation parameters29 can be found in Tables 2 and 3.
The original RHI experiment on the contralateral hand was conducted for 180 s. Both conditions, synchronous and asynchronous, were tested in a random order. The following assignment was obtained for the visuo-neural condition: synchronous (AR007 and AR006) and asynchronous (AL001 and AL004) stimulation first, respectively. For the brush condition, synchronous stimulation first (AR007 and AL004) and asynchronous stimulation first (AR006 and AL001) was obtained. Noise-canceling headphones were used to mute the servo noise during the neurostimulation experiments.
### Experiment 2: RHI in able-bodied participants
#### Participants and experiment design
Thirty able-bodied participants were recruited on a convenience sample basis (18 male/12 female, mean age 25.0 and range 21–31). The experiment was designed to validate the use of a tapper instead of a brush within the RHI paradigm (see Fig. 2a). Additionally, we investigate the influence of the stimulation area on the sense of ownership.
#### Experiment procedure
The participant was asked to sit at the table with a rubber hand and a visual barrier already in place. They positioned their right hand next to the visual barrier in the same position as the rubber hand on the other side of the barrier. Thereafter, a blanket was used to cover both the rubber hand and the real hand from the participant’s shoulder onwards.
Two different stimulation areas were considered: a small area of 1 cm2 corresponding to the size of an amputee’s perceptive projected area due to neurostimulation, and a larger area of 8 cm2 representing the dorsal area of the proximal phalanx of the index finger, equivalent to the stimulation area of a brushstroke (see Fig. 2b). The stimulation area was changed by attaching a different tip to the 3D printed tapping device (see Fig. 2c).
The same stimulation timing as in the experiments with the participant with amputation was used: the participant received visuo-tactile stimuli once every 2 s, lasting 1 s each, for a total of 180 s. In the asynchronous condition, one tapper was delayed by 500 ms. For the asynchronous tap condition, only the small area was used to reduce the time investment for the participants.
The brush conditions were conducted with the same timing. All conditions were tested in a random order. The participants filled in the questionnaire after every condition.
### Data analysis
An ownership score30 for each participant was defined from the ownership statements. The ownership score was defined as:
$$OwnershipScore = mean(Q1, Q2, Q3, Q4)$$
where Q1, Q2, Q3, and Q4 are the responses to the respective questions of the questionnaire.
Due to the ordinal nature of the data obtained from the Likert scale answers of the questionnaire, computing the average of multiple questions is only mathematically justifiable if the correlation of the questions average is an accurate estimate of the average correlation of all questions. Therefore, adequate reliability and internal consistency of the four questions were determined by calculating the Cronbach’s alpha31 before calculating the ownership score of the able-bodied participants.
The statistical analyses comparing the ownership scores between the synchronous and asynchronous conditions for both the group of subjects with amputation as well as for the group of able-bodied subjects was conducted using the Wilcoxon Signed Rank Test, as implemented in the “signrank” function of MATLAB’s Statistics toolbox (The Mathworks, Inc., Natick, MA, USA). The same analysis was employed to compare the ownership score to the mean of the control questions of the RHI questionnaire in the synchronous condition. The Bonferroni method was used to correct for the three conditions (tap on small area, tap on big area, and brush) and the significance values are reported as $${p}^{*}=p*n$$, where $$n=3$$.
As an indication of whether an individual person perceives ownership, a criterion similar to the one used by Trojan et al.32 was employed: Only participants yielding an ownership score higher than 4 in the synchronous condition and at least one point less in the asynchronous condition were assumed to be susceptible to RHI. This criterion is similar to using a cut-off only based on the synchronous conditions18,33, with the benefit of taking suggestibility of answering questionnaires into account.
Individual differences in susceptibility to the RHI and perception of ownership had been reported34, especially for subjects with amputation35. Given our heterogeneous group of participants with amputation, we also investigated the individual differences between each participant with amputation and the portion of the able-bodied control group fulfilling the ownership criterion. Similar to Bruno et al.36, the individual differences were evaluated with means of the Crawford’s test37 using the “ttest2” function of MATLAB’s Statistics toolbox.
The equivalence of the tapping condition compared to the brushing condition was evaluated using the Bayesian version of the Wilcoxon Signed Rank Test as implemented in JASP38. The standard prior was chosen, described by a zero centered Cauchy distribution with a width parameter of 0.707.
## Results
The four participants with amputation reported low ownership towards their prosthetic hand during all neurostimulation conditions (see Fig. 3), and only one assigned slightly higher questionnaire scores during the original RHI experiment in the biological limb. However, none of the participants fulfilled the ownership criterion and no significant differences were observed between the synchronous and asynchronous condition (visuo-neural: p = 0.375, W = 8.5; brush: p = 0.25, W = 6) as well as the ownership and control questions (visuo-neural: p = 0.125, W = 10; brush: p = 0.5, W = 3). These results indicate that none of the four participants is susceptible to the RHI.
In the experiment with able-bodied participants, both tapping conditions (see Fig. 4a) show a statistically significant difference in the ownership score when comparing the respective asynchronous condition to the synchronous one ($${p}^{*}$$ = 1.47e−5, W = 403.5 and $${p}^{*}$$ = 1.65e−5 and W = 378 for the tap on small area, and tap on big area, respectively). The ownership score is also significantly different compared to the control questions ($${p}^{*}$$ = 3.2e−5, W = 372 and $${p}^{*}$$ = 1.56e−4 and W = 380.5 for the tap on small area, and tap on big area, respectively). Similarly, the ownership score in the brushing condition (see Fig. 4b) is significantly higher in the synchronous condition compared to the asynchronous condition ($${p}^{*}$$ = 8.44e−6 and W = 434) and significantly higher than the control questions ($${p}^{*}$$ = 6.09e−5 and W = 390). These results imply that tapping is a viable stimulation method to induce the RHI.
Out of the 30 able-bodied participants, 10 (33%), 15 (50%), and 13 (43%) did not report ownership towards the rubber hand during the tap on small area, tap on big area, and brush condition.
Comparing the individual ownership scores of the participants with amputation in the synchronous visuo-neural stimulation condition, to the group of able-bodied participants reporting ownership in the tapping condition with the same stimulation area, it was found that all four participants with amputation scored significantly lower (AR007 $$p$$ = 3.8e−4, AR006 $$p$$ = 2.1e−3, AL001 $$p$$ = 1.2e−3, and AL004 $$p$$ = 3.7e−3). Doing the same comparison on the brush conditions revealed significantly lower ownership scores for three of the participants with amputation (AR007 $$p$$ = 8.0e−4, AR006 $$p$$ = 4.7e−4, and AL001 $$p$$ = 4.7e−4). For AL004, the ownership score of 4 in the brushing condition is lower than the average score of 5.8 in the able-bodied control group, but not significantly lower ($$p$$ = 0.085). Considering the results of the ownership criterion, which takes the high ownership score of AL004 during the asynchronous control condition into account, this analysis of individual differences further supports the above results that none of the four participants is susceptible to the RHI.
The questionnaire answered at home showed that all four participants with amputation experienced higher ownership over their prosthetic hand during their daily life compared to during the RHI paradigm (see Fig. 5).
## Discussion
As originally conceived, our study aimed to provide further evidence on the RHI experiment as a common paradigm to elicit ownership over a prosthetic limb. Similar to other researchers in the neuroprosthetics community, we regarded the RHI as a preferred test to evaluate the effect of sensory feedback. We expected that our participants with amputation would report ownership of their prosthesis, particularly considering recent reports of three subjects with transradial amputations for whom ownership via the RHI was induced after synchronous visuo-tactile neural stimulation20,21. However contrary to our expectations, none of our subjects reported ownership of their prosthetic hand during any of the experimental conditions in the RHI.
To ensure that our experimental design was not the reason for the unexpected lack of ownership, we verified the non-visuo-neural part of our setup with able-bodied participants. Our results showed that tapping is a viable stimulation method to induce ownership of a rubber hand in able-bodied subjects. This finding is consistent with contemporary work by Shehata et al.39, where tapping was used to induced ownership of a passive prosthesis worn by able-bodied participants. Furthermore, we showed that a small stimulation area was sufficient for a successful RHI. Indeed, such small area of perceived tactile stimulation was like that elicited area perceived by our participants via direct nerve stimulation.
Whereas the area and duration of visuo-tactile stimulation were matched, the quality of tapping on intact skin versus eliciting a tactile sensation via neural stimulation was not entirely equal. In an intact hand, there are different mechanoreceptors that activate differentially during touch creating a rich sensory experience that cannot be reproduced by contemporary neural electrodes, regardless of their invasiveness24. This difference in expected quality could have been the cause for failing to induce ownership, although other groups have reported it possible using a single stimulation contact and producing similarly large perceptive fields20,21, as we have done here.
Difficulties to induce the RHI has been reported on subjects with high manual dexterity, such as professional pianist40. This indicates that the more neural resources are dedicated to our hands, the harder is for our brain to be “fooled” by the RHI. In subjects with unilateral amputations, as our participants, control and sensory perception in the remaining hand would be potentially more developed as this is the only hand available to the subjects. This more extensive use of their able hand could potentially explain why our participants were not susceptible to the RHI in their able hand. However, by the same logic, it should be easier for them to experience the RHI in the missing hand as this is considerably less utilized. Either way, sensorimotor acuity mediating the RHI points to another potential complication on its application to subjects with amputations.
Over thirty percent of our able-bodied subjects reported no ownership towards the rubber hand in the synchronous tapping and brushing conditions. This agrees with the literature on the reported susceptibility to the RHI (22%41, 22%42, 26%43). Whereas subject compliance could be a reason for previously reported RHI using nerve neurostimulation20,21, the most likely explanation of the discrepancy with our work could be that our four subjects are among the non-negligible minority insusceptible to the RHI. Given that none of our participants with amputation reported ownership during the original RHI experiment in their contralateral intact hand, it is unlikely that the RHI would work with other stimulation means in their missing hand. These findings emphasize an impeding limitation of the RHI experiment as general paradigm to test sensory feedback strategies: the RHI paradigm can only be used with a subset of the population. Misleading conclusions on the effects of sensory feedback strategies could be drawn if one does not control for the susceptibility to the RHI in the first place.
Quantifying ownership is another hurdle in the RHI and in the study of ownership itself. We used the preferred method for quantifying ownership in the RHI (a questionnaire) to compare the experience to daily life. Yet this is far from ideal, as only three questions were relevant for such a scenario, and although the reporting scores were higher in comparison with the RHI, only one was reported consistently above the neutral value (Q4: “The prosthetic hand is part of my body”). Questions answerable by a numeric rating scale miss in the nuances of such a complex experience as ownership, and psychological factors might be at play on the perception of a rubber hand versus a prosthesis. Moreover, recent findings indicate that the RHI might be due to a suggestion effect44 and that the RHI questionnaire does not measure ownership but instead measures the ability to generate an experience to meet expectancies arising from suggestion45.
Three out of our four participants participated in an in-depth, qualitative investigation into the social and psychological consequences of living with neuromusculoskeletal prosthesis26. Said investigation found that using a highly integrated prosthesis with reliable control and sensory feedback results in a strong sense of ownership (or identification with the prosthesis as part of one’s own body)26. The failure to induce the RHI in subjects who otherwise report ownership of their prosthesis in their daily lives is a contradiction with farther-reaching implications. It has been assumed that demonstrating ownership in a laboratory setting justifies claims of ownership once these devices leave the experimental controlled environment. Our findings indicate that this is not a prerequisite, and how well induced ownership via RHI-like experiments in the laboratory translates to real-life usage of artificial limbs has yet to be investigated. In addition, closer examination of the sense of ownership indicates that it is dynamic, fleeting, and intensely dependent on the context of the particularly human–machine relationship in question26.
The discrepancy between the degree of ownership reported by our participants during daily life compared to after the RHI paradigm could be related to the deliberate non-agentic setting of the RHI experiment. The participants use their prosthesis all day and everyday22, and therefore routinely perform activities of daily living where they execute willed motor tasks with their prosthesis. The sense of agency that arises when the planned motor action is executed as intended has been found to strengthen the sense of ownership, when both the sense of agency and ownership co-occur46. The implanted neuromusculoskeletal interface used by our participants is based on osseointegration. Thus, another factor could be the additional sensory feedback received over osseoperception while moving the prosthesis47. Osseoperception conveys additional sensory information and can thereby complement the input received from isolated direct nerve stimulation48,49. Alternatively, the decrease ownership during the experiments could be attributed to the frequent don/doffing of the prosthesis in a laboratory setting. Taking of the prosthesis leads to an adjustment of the body image as the stored structural description of their body abruptly changes where, for example, the locational synchrony of the phantom limb is disrupted26.
Despite questionnaires being the most common assessment tool for ownership, the comparison of RHI paradigm and the daily life situation was limited in our study as no other tools were employed to assess ownership. Therefore, we cannot exclude the possibility, however unlikely, that the RHI illusion indeed induced ownerships but we could not capture its effect. Additional measures for future studies could include neural activity50, muscle activation51,52, or interoceptive sensitivity53. As well as more exhaustive questionnaires covering a wider range of common experiences as prosthetic user26,54,55 and normalization of phantom limb length20,54,56. On the other hand, proprioceptive drift and event-related changes in skin conductance have been found to be potentially misleading with regards to ownership35,57,58,59.
All the above considered, the RHI paradigm could still be a valuable tool to investigate sensory feedback strategies, as long as participants are susceptible to the illusion to beginning with, and its limitations are considered when assessing experimental outcomes. To strive for a more generable suite of experiments which does not exclude a subgroup of participants, we suggest including alternative paradigms that feature daily life situations (preferably in the long term), tasks investigating body-related sensorimotor integration60, and functional tests61. However, functional tests might include other aspects inducive to ownership, and therefore special care must be taken to not conflate concepts of agency with ownership46,62. Furthermore, the temporal nature of a sense of ownership must be considered as an important factor in examining the extent to which one can be said to embody their prosthesis26,63.
## Data availability
Data can be made available by contacting the authors under reasonable requests.
## References
1. 1.
Gallagher, S. Body image and body schema: A conceptual clarification. J. Mind Behav. 7, 541–554 (1986).
2. 2.
Rybarczyk, B. & Behel, J. Limb loss and body image. In Psychoprosthetics (eds Gallagher, P., Desmond, D., MacLachlan, M. et al.) 23–31 (Springer, Berlin, 2008).
3. 3.
Cipriani, C., Dalonzo, M. & Carrozza, M. C. A miniature vibrotactile sensory substitution device for multifingered hand prosthetics. IEEE Trans. Biomed. Eng. 59, 400–408 (2012).
4. 4.
Antfolk, C. et al. Sensory feedback in upper limb prosthetics. Expert Rev. Med. Devices 10, 45–54 (2013).
5. 5.
Kuiken, T. A., Marasco, P. D., Lock, B. A., Harden, R. N. & Dewald, J. P. A. Redirection of cutaneous sensation from the hand to the chest skin of human amputees with targeted reinnervation. Proc. Natl. Acad. Sci. USA. 104, 20061–20066 (2007).
6. 6.
Tan, D. W. et al. A neural interface provides long-term stable natural touch perception. Sci. Transl. Med. 6, 257ra138 (2014).
7. 7.
Raspopovic, S. et al. Restoring natural sensory feedback in real-time bidirectional hand prostheses. Sci. Transl. Med. 6, 222ra19 (2014).
8. 8.
Ortiz-Catalan, M., Håkansson, B. & Brånemark, R. An osseointegrated human-machine gateway for long-term sensory feedback and motor control of artificial limbs. Sci. Transl. Med. 6, 257re6 (2014).
9. 9.
Synofzik, M., Vosgerau, G. & Newen, A. I move, therefore I am: A new theoretical framework to investigate agency and ownership. Conscious. Cogn. 17, 411–424 (2008).
10. 10.
Tsakiris, M. My body in the brain: A neurocognitive model of body-ownership. Neuropsychologia 48, 703–712 (2010).
11. 11.
Maimon-Mor, R. O. & Makin, T. R. Is an artificial limb embodied as a hand? Brain decoding in prosthetic limb users. PLoS Biol. 18, 1–26 (2020).
12. 12.
Tsakiris, M. The multisensory basis of the self: From body to identity to others. Q. J. Exp. Psychol. 70, 597–609 (2017).
13. 13.
Botvinick, M. & Cohen, J. Rubber hands ‘feel’ touch that eyes see Illusions. Nature 391, 756–756 (1998).
14. 14.
Ehrsson, H. H. et al. Upper limb amputees can be induced to experience a rubber hand as their own. Brain 131, 3443–3452 (2008).
15. 15.
D’Alonzo, M., Clemente, F. & Cipriani, C. Vibrotactile stimulation promotes embodiment of an Alien hand in amputees with phantom sensations. IEEE Trans. Neural Syst. Rehabil. Eng. 23, 450–457 (2015).
16. 16.
Marasco, P. D., Kim, K., Colgate, J. E., Peshkin, M. A. & Kuiken, T. A. Robotic touch shifts perception of embodiment to a prosthesis in targeted reinnervation amputees. Brain 134, 747–758 (2011).
17. 17.
Collins, K. L. et al. Ownership of an artificial limb induced by electrical brain stimulation. Proc. Natl. Acad. Sci. 114, 166–171 (2017).
18. 18.
Durgin, F. H., Evans, L., Dunphy, N., Klostermann, S. & Simmons, K. Rubber hands feel the touch of light. Psychol. Sci. 18, 152–157 (2007).
19. 19.
Kammers, M. P. M., Rose, K. & Haggard, P. Feeling numb: Temperature, but not thermal pain, modulates feeling of body ownership. Neuropsychologia 49, 1316–1321 (2011).
20. 20.
Rognini, G. et al. Multisensory bionic limb to achieve prosthesis embodiment and reduce distorted phantom limb perceptions. J. Neurol. Neurosurg. Psychiatry 90, 833–836 (2019).
21. 21.
Page, D. M. et al. Motor control and sensory feedback enhance prosthesis embodiment and reduce phantom pain after long-term hand amputation. Front. Hum. Neurosci. 12, 352 (2018).
22. 22.
Ortiz-Catalan, M., Mastinu, E., Sassu, P., Aszmann, O. C. & Brånemark, R. Self-contained neuromusculoskeletal arm prostheses. N. Engl. J. Med. 382, 1732–1738 (2020).
23. 23.
Ackerley, R., Backlund Wasling, H., Ortiz Catalan, M., Brånemark, R. & Wessberg, J. Case studies in neuroscience: Sensations elicited and discrimination ability from nerve cuff stimulation in an amputee over time. J. Neurophysiol. 120, 291–295 (2018).
24. 24.
Ortiz-Catalan, M., Wessberg, J., Mastinu, E., Naber, A. & Branemark, R. Patterned stimulation of peripheral nerves produces natural sensations with regards to location but not quality. IEEE Trans. Med. Robot. Bionics 1, 199–203 (2019).
25. 25.
Ortiz-Catalan, M., Mastinu, E., Greenspon, C. & Bensmaia, S. J. Chronic use of a sensitized bionic hand does not remap the sense of touch. Cell Rep. 33, 108539 (2020).
26. 26.
Middleton, A. & Ortiz-Catalan, M. Neuromusculoskeletal limb prostheses: Personal and social implications of living with an intimately integrated bionic arm. 14, 18 (2020).
27. 27.
Mastinu, E., Doguet, P., Botquin, Y., Hakansson, B. & Ortiz-Catalan, M. Embedded system for prosthetic control using implanted neuromuscular interfaces accessed via an osseointegrated implant. IEEE Trans. Biomed. Circuits Syst. 11, 867–877 (2017).
28. 28.
Wallace, M. T. & Stevenson, R. A. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 64, 105–123 (2014).
29. 29.
Günter, C., Delbeke, J. & Ortiz-Catalan, M. Safety of long-term electrical peripheral nerve stimulation: Review of the state of the art. J. Neuroeng. Rehabil. 16, 13 (2019).
30. 30.
Mulvey, M. R., Fawkner, H. J., Radford, H. E. & Johnson, M. I. Perceptual embodiment of prosthetic limbs by transcutaneous electrical nerve stimulation. Neuromodulation 15, 42–47 (2012).
31. 31.
Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334 (1951).
32. 32.
Trojan, J., Fuchs, X., Speth, S. L. & Diers, M. The rubber hand illusion induced by visual–thermal stimulation. Sci. Rep. 8, 1–9 (2018).
33. 33.
Slater, M. Towards a digital body: The virtual arm illusion. Front. Hum. Neurosci. 2, 1–8 (2008).
34. 34.
Riemer, M., Trojan, J., Beauchamp, M. & Fuchs, X. The rubber hand universe: On the impact of methodological differences in the rubber hand illusion. Neurosci. Biobehav. Rev. 104, 268–280 (2019).
35. 35.
Niedernhuber, M., Barone, D. G. & Lenggenhager, B. Prostheses as extensions of the body: Progress and challenges. Neurosci. Biobehav. Rev. 92, 1–6 (2018).
36. 36.
Bruno, V., Ronga, I., Fossataro, C., Capozzi, F. & Garbarini, F. Suppressing movements with phantom limbs and existing limbs evokes comparable electrophysiological inhibitory responses. Cortex 117, 64–76 (2019).
37. 37.
Crawford, J. R., Garthwaite, P. H. & Porter, S. Point and interval estimates of effect sizes for the case-controls design in neuropsychology: Rationale, methods, implementations, and proposed reporting standards. Cogn. Neuropsychol. 27, 245–260 (2010).
38. 38.
JASP Team. JASP (Version 0.14) [Computer software]. (2020).
39. 39.
Shehata, A. W., Rehani, M., Jassat, Z. E. & Hebert, J. S. Mechanotactile sensory feedback improves embodiment of a prosthetic hand during active use. Front. Neurosci. 14, 263 (2020).
40. 40.
Pyasik, M., Salatino, A. & Pia, L. Do movements contribute to sense of body ownership? Rubber hand illusion in expert pianists. Psychol. Res. 83, 185–195 (2019).
41. 41.
Kalckert, A. & Ehrsson, H. H. The moving rubber hand illusion revisited: Comparing movements and visuotactile stimulation to induce illusory ownership. Conscious. Cogn. 26, 117–132 (2014).
42. 42.
Ehrsson, H. H. Touching a rubber hand: Feeling of body ownership is associated with activity in multisensory brain areas. J. Neurosci. 25, 10564–10573 (2005).
43. 43.
Lloyd, D. M. Spatial limits on referred touch to an alien limb may reflect boundaries of visuo-tactile peripersonal space surrounding the hand. Brain Cogn. 64, 104–109 (2007).
44. 44.
Lush, P. Demand characteristics confound the rubber hand illusion. Collabra Psychol. 6, 22 (2020).
45. 45.
Roseboom, W. & Lush, P. Serious problems with interpreting rubber hand illusion experiments. PsyArXiv https://doi.org/10.31234/osf.io/uhdzs (2020).
46. 46.
Braun, N. et al. The senses of agency and ownership: A review. Front. Psychol. 9, 535 (2018).
47. 47.
Clemente, F. et al. Touch and hearing mediate osseoperception. Sci. Rep. 7, 1–11 (2017).
48. 48.
Mastinu, E. et al. Grip control and motor coordination with implanted and surface electrodes while grasping with an osseointegrated prosthetic hand. J. Neuroeng. Rehabil. 16, 1–10 (2019).
49. 49.
Mastinu, E. et al. Motor coordination in closed-loop control of neuromusculoskeletal limb prostheses. Sci. Rep. Under Rev. (2020).
50. 50.
Schmalzl, L., Kalckert, A., Ragnö, C. & Ehrsson, H. H. Neural correlates of the rubber hand illusion in amputees: A report of two cases. Neurocase 20, 407–420 (2014).
51. 51.
Slater, M., Perez-Marcos, D., Ehrsson, H. H. & Sanchez-Vives, M. V. Towards a digital body: The virtual arm illusion. Front. Hum. Neurosci. 2, 6 (2008).
52. 52.
Tsuji, T. et al. Analysis of electromyography and skin conductance response during rubber hand illusion. Proc. IEEE Work. Adv. Robot. its Soc. Impacts, ARSO 88–93 (2013).
53. 53.
Tsakiris, M., Tajadura-Jiménez, A. & Costantini, M. Just a heartbeat away from one’s body: Interoceptive sensitivity predicts malleability of body-representations. Proc. R. Soc. B Biol. Sci. 278, 2470–2476 (2011).
54. 54.
Graczyk, E. L., Resnik, L., Schiefer, M. A., Schmitt, M. S. & Tyler, D. J. Home use of a neural-connected sensory prosthesis provides the functional and psychosocial experience of having a hand again. Sci. Rep. 8, 9866 (2018).
55. 55.
Gouzien, A. et al. Reachability and the sense of embodiment in amputees using prostheses. Sci. Rep. 7, 4999 (2017).
56. 56.
Schmalzl, L. et al. “Pulling telescoped phantoms out of the stump”: Manipulating the perceived position of phantom limbs using a full-body illusion. Front. Hum. Neurosci. 5, 121 (2011).
57. 57.
Rohde, M., Di Luca, M. & Ernst, M. O. The rubber hand illusion: Feeling of ownership and proprioceptive drift do not go hand in hand. PLoS ONE 6, e21659 (2011).
58. 58.
Abdulkarim, Z. & Ehrsson, H. H. No causal link between changes in hand position sense and feeling of limb ownership in the rubber hand illusion. Attention Percept. Psychophys. 78, 707–720 (2016).
59. 59.
D’Alonzo, M., Mioli, A., Formica, D. & Di Pino, G. Modulation of body representation impacts on efferent autonomic activity. J. Cogn. Neurosci. 32, 1104–1166 (2020).
60. 60.
Di Pino, G. et al. Sensory- and action-oriented embodiment of neurally-interfaced robotic hand prostheses. Front. Neurosci. 14, 1–17 (2020).
61. 61.
Schiefer, M., Tan, D., Sidek, S. M. & Tyler, D. J. Sensory feedback by peripheral nerve stimulation improves task performance in individuals with upper limb loss using a myoelectric prosthesis. J. Neural Eng. 13, 16001 (2015).
62. 62.
Zbinden, J., Lendaro, E. & Ortiz-Catalan, M. Prosthetic embodiment: Review and perspective on definitions, measures and experimental paradigms. Under Rev.
63. 63.
Murray, C. D. An interpretative phenomenological analysis of the embodiment of artificial limbs. Disabil. Rehabil. 26, 963–973 (2004).
64. 64.
Longo, M. R., Schüür, F., Kammers, M. P. M., Tsakiris, M. & Haggard, P. What is embodiment? A psychometric approach. Cognition 107, 978–998 (2008).
## Acknowledgements
The authors thank the subjects who participated in this study for their time and efforts. This project was funded by the Promobilia Foundation, the IngaBritt and Arne Lundbergs Foundation, and the Swedish Innovation Agency (VINNOVA).
## Funding
Open access funding provided by Chalmers University of Technology.
## Author information
Authors
### Contributions
J.Z. and M.O.C. designed the experiments. J.Z. conducted the experiments and analyzed the data. J.Z. drafted and M.O.C. edited the manuscript. Both authors revised and approved the final manuscript.
### Corresponding author
Correspondence to Max Ortiz-Catalan.
## Ethics declarations
### Competing interests
M.O.C has been a consultant for Integrum AB. J.Z. declares no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Zbinden, J., Ortiz-Catalan, M. The rubber hand illusion is a fallible method to study ownership of prosthetic limbs. Sci Rep 11, 4423 (2021). https://doi.org/10.1038/s41598-021-83789-7
• Accepted:
• Published: | 2021-10-20 06:29:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5764788389205933, "perplexity": 7364.5605222102395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00292.warc.gz"} |
https://devzone.nordicsemi.com/questions/scope:all/sort:activity-desc/tags:interrupt/page:1/ | # 157 questions
Tagged
• x
71
views
2
1
## Interrupt Vectors REMAP
How can I relocate or manage the transition between the bootloaders interrupt vector map and the applications interrupt vector map which will be at different addresses in FLASH. On the M3/M4, you can remap this but presumably on the ... (more)
78
views
no
1
## Hardfault when using both UART and GPIOTE, how can I solve this problem?
Hello,
I feel like this question has an answer similar to this question, but I can't seem to get it working https://devzone.nordicsemi.com/questi...
I am using the nrf51422 chip on a board I designed myself and ... (more)
144
views
no
1
I have enabled Timer 2 with 31250Hz frequency. I wanted to call an interrupt after 10ms, ie 312 ticks for the timer. But the interrupt is not getting called, I have spent some time finding the problem, but couldn't ... (more)
52
views
1
vote
1
## Does the NFC Type 4 library globally disable interrupts at any point?
Does the NFC library globally disable interrupts at any point? We're trying to track down an issue where our code occasionally misses an I2C event during NFC read/write.
717
views
1
vote
2
## Problem using UART and radio in ESB mode on nRF24LE01.
Hi
for few days I seem to got stack in using together UART and radio on nRF21LE01 in ESB mode and I suspect irq_flags = hal_nrf_get_clear_irq_flags(); to be cause of my problem.
What I want to achieve is to communicate to ... (more)
160
views
no
no
## GPIOTE simultaneous interrupts
nRF52, SDK12.1, Eclipse, gcc 5.4.1, FreeRTOS, no soft-device.
I am having a problem with GPIOTE. I have six interrupts programmed on six different pins. As long as the signals do not transition at the same time, the ... (more)
269
views
4
2
## why should I clear event in the interrupt handler
as far as I am concerned, when I use PPI or SHORTS without interrupts, I DON'T need to clear the corresponding event registers, and task will be triggered once the event is generated. But when I enable interrupt, I ... (more)
707
views
6
2
## programing and interrupts on custom PCB using nrf51822
Hey,
I am designing a custom PCB with the nrf51822. I would like to use the reference layout that Nordic provides, specifically nrf51x22_qfaa. My project involves reading an accelerometer and gyroscope via TWI and transmitting certain measurements via BLE. I ... (more)
533
views
no
no
## Disable interrupts
Hi, I want to do this cycle with GPIO without interrupting it on NRF51 with BLE softdevice:
1) PIN1 to HIGH level 2) read value of PIN2 3) PIN1 to LOW level
Repeat this 25 times (between cycles interrupts can ... (more)
44
views
no
no
## nRF24LU1 wakeup interrupt [closed]
Hi, Nordic I see two interrupts:
#define USB_WU_ISR() void usb_wu_isr(void) interrupt INTERRUPT_USB_WU // USB wakeup interrupt (0x5b)
#define WU_ISR() void wu_isr(void) interrupt INTERRUPT_WU // Internal wakeup interrupt (0x6b)
maybe WU_ISR() is one function USB Dongle auto resume itselft, now I ... (more)
64
views
no
1
## Call SOC Library functions with Sofdevice disabled
Hi there,
Is it possible to call SOC library functions before calling sd_softdevice_enable()?
In particular, i want to set up my interrupts with sd_nvic_SetPriority() and sd_nvic_EnableIRQ() before enabling the softdevice with sd_softdevice_enable(). This is because sd_softdevice_enable() takes quite a while ... (more)
94
views
1
vote
1
## NRF51 with interrupts
Hello developers, I am struggling badly to use interrupt in my application. I am using softdevice S130 and ble_nus, ble_bas and ble_hrm services. All I want is to trigger a timer on interrupt event by pressing a button. I have ... (more)
423
views
no
1
## GPIOTE IN_EVENT Current
Hello,
I am currently using the GPIOTE Driver (nrf_drv_gpiote) to detect external interrupts from a sensor on a custom board using the nRF51822 with the S130 v2.0.0 SoftDevice and SDK 11. I currently have the interrupts setup to ... (more)
77
views
no
1
## Need NRF8001 to advertise while Microcontroller asleep.
I'm using an NRF8001 with an ATMEL ATMEGA 328P (the microcontroller from an Arduino). I need the microcontroller to go to sleep, but still allow the BLE chip to advertise and wake the microcontroller upon connection. It seems as ... (more)
74
views
no
no
## Nrf24le1 gpio interrupt falling edge
I want to use gpio interrupt int1,falling edge. I can trigger success, but one falling edge will trigger many times. Do i config wrong? GPIO_INT.rar
257
views
no
1
## waking from sleep
Hi,
I am having some trouble getting the nrf51422 (running the s130 soft device) to wake up after entering sleep mode. I have successfully entered sleep mode using NRF_POWER->SYSTEMOFF = 1, and sd_power_system_off() (both seem to be effective). However, when ... (more)
189
views
no
no
## Clearing GPIO Driver Interrupt Event
Hi there,
I'm using an input GPIOTE based on SDK11. The interrupt is consistently being fired. On the other hand, if I use "if(Pin_6)" to check the trigger, it does not get fired continuously. Is there a command ... (more)
305
views
no
1
## nRF52 SPI Master - Interrupt
Hi,
I'm using the nRF52 as a master and an external RF chip as a slave. After configuring my spi bus using nrf_drv_spi_init(&spi, NULL, NULL), in my main loop, how do I get an interrupt to know if ... (more)
96
views
no
no
## MY BLE connection lost during run time!
I mounted the chip onto my own PCB, I am not using a devkit, My board is "Wireless-Tag WT51822-S2", it is a module that uses nRF51822-QFAA as (SoC)
I am the ble_app_uart example inside sdkv10.0.0 as a framework ... (more)
324
views
1
vote
1
## nRF51 GPIO interrupt
Hello,
I am having interface nRF51 with RF receiver. Output of RF receiver connected with GPIO pin of nRF51.
How can I detect interrupt on gpio pin?
Thanks & Regards, Rajneesh
168
views
1
vote
1
## How to use Interrupt in BLE Central [closed]
Hi all Nordic Developer,
Currently I am working on nrf51-dk(as a BLE central) and I wish to connect one of its pin to a power supply(as a input). If the power breakdown, it should send out a warning ... (more)
140
views
1
vote
2
## GPIO interrupts not working with bootloader
Hello everyone. I am having problems using the interrups when a bootloader is programmed on the chip. My test program works fine when there is no bootloader. With the bootloader it never jumps to the interrupt handler. I am using ... (more)
367
views
no
no
## SPI slave peripheral receive interrupt processing
SPI Slave configuration On nRF52
Hi,
I am trying to configure the SPI slave peripheral for interrupt based communicaton with a SPI master In previous work with ARM based chips, the typical sequence I have followed would be :
1) configure ... (more)
720
views
no
no
## pin_change_int example [closed]
void in_pin_handler (nrf_drv_gpiote_pin_t pin, nrf_gpiote_polarity_t action)
{ nrf_drv_gpiote_out_toggle(PIN_OUT);
}
This function work with interupt event (push button). I can't understand where we use its 2 parameters (pin and action) or it's not usual function.
108
views
1
vote
1
## how to occur interrupt on SPIM with easyDMA END event?
Hi
I want to get external acceleration sensor values with SPIM using easyDMA on the sensor interruption (using GPIOTE). then after getting all data, I want to calculate them using algorithm on CPU.
In other words, getting the sensor data ... (more)
421
views
1
vote
1
## nrf51822 tempeture based on interrupt
I can read temp value by pulling flag but I Want to do that by interrupt I added this code to my Keil but interrupt on temp didn't work. I use Adc interrupt like this definition.
static void temp_init ...
(more)
443
views
no
1
## How to use BLE UART interrupt to jump out of a loop
I am trying to figure out how to exit a loop of my program using a button press on the nRF Toolbox UART app. Say it's button 4 for sake of giving it a value. I am using an ... (more)
312
views
no
1
## Can all GPIO pins be mapped as a external interupt pin?
Hello ,
Easy question, I am using the nRF52832 and am trying to find out if all gpio pins have gpiote functionality? And if so can all gpios be configured as an external interrupt to wake MCU from deep sleep mode?
Thanks
266
views
no
1
## Is there a possible clash of RTC1 usage?
I have an mBed application that uses the S130 SoftDevice and polls data from a couple of other connected peripherals via I2C & SPI.
I use a Ticker object, which seems to basically be a wrapper around the Nordic app_timer implementation ... (more)
680
views
no
1
## RTC interrupt does not wake up nRF52
I am having issues waking up nRF52 with s132 using the RTC interrupt. The same code is used on nRF51 with s130 and does not show this behavior.
The RTC is configured and the interrupts are fired and handled as ... (more)
#### Statistics
• Total users: 22278
• Latest user: Camilla Samons
• Resolved questions: 9667
• Total questions: 23073
## Recent blog posts
• ### Nordic Developer Zone celebrates its 4th year of helping developers succeed - Celebrate with us and win a Nordic Thingy: 52’
Posted 2017-06-23 10:12:53 by John Leonard
• ### nRF52 Development with CLion
Posted 2017-06-22 09:50:54 by dansheme
• ### Simple GPIO driver example
Posted 2017-06-22 13:38:36 by Hans Elfberg
• ### What mom didn't tell you about ble_app_att_mtu_throughput on the nRF52840 evaluation board
Posted 2017-06-16 16:12:15 by George
• ### Introducing Nordic’s new software licensing schemes
Posted 2017-06-15 11:21:39 by Reidar Martin Svendsen
## Recent questions
• ### NRF51822 to NRF51822 Over the air update (DFU)
Posted 2017-06-25 16:25:45 by wogisha
• ### Advertising with device_name Vs whitelisting
Posted 2017-06-25 14:39:12 by raju | 2017-06-25 15:37:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20259341597557068, "perplexity": 11946.596227473518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320539.37/warc/CC-MAIN-20170625152316-20170625172316-00606.warc.gz"} |
https://web.math.sinica.edu.tw/mathmedia/HTMLarticle18.jsp?mID=32102 | 32102 Steinhaus三角形—數學實驗的一個案例
Steinhaus三角形—數學實驗的一個案例
### 2. 緣起---Steinhaus的問題
$+$ $+$ $-$ $+$ $-$ $+$ $+$
$+$ $-$ $-$ $-$ $-$ $+$
$-$ $+$ $+$ $+$ $-$
$-$ $+$ $+$ $-$
$-$ $+$ $-$
$-$ $-$
$+$
$-$ $+$ $+$ $-$ $-$ $+$ $-$ $-$ $+$ $+$ $+$ $-$
$-$ $+$ $-$ $+$ $-$ $-$ $+$ $-$ $+$ $+$ $-$
$-$ $-$ $-$ $-$ $+$ $-$ $-$ $-$ $+$ $-$
$+$ $+$ $+$ $-$ $-$ $+$ $+$ $-$ $-$
$+$ $+$ $-$ $+$ $-$ $+$ $-$ $+$
$+$ $-$ $-$ $-$ $-$ $-$ $-$
$-$ $+$ $+$ $+$ $+$ $+$
$-$ $+$ $+$ $+$ $+$
$-$ $+$ $+$ $+$
$-$ $+$ $+$
$-$ $+$
$-$
$+$ $+$ $+$ $+$ $-$ $-$ $+$ $-$ $-$ $-$ $+$ $+$ $+$ $+$ $-$ $-$ $-$ $-$ $-$ $+$
$+$ $+$ $+$ $-$ $+$ $-$ $-$ $+$ $+$ $-$ $+$ $+$ $+$ $-$ $+$ $+$ $+$ $+$ $-$
$+$ $+$ $-$ $-$ $-$ $+$ $-$ $+$ $-$ $-$ $+$ $+$ $-$ $-$ $+$ $+$ $+$ $-$
$+$ $-$ $+$ $+$ $-$ $-$ $-$ $-$ $+$ $-$ $+$ $-$ $+$ $-$ $+$ $+$ $-$
$-$ $-$ $+$ $-$ $+$ $+$ $+$ $-$ $-$ $-$ $-$ $-$ $-$ $-$ $+$ $-$
$+$ $-$ $-$ $-$ $+$ $+$ $-$ $+$ $+$ $+$ $+$ $+$ $+$ $-$ $-$
$-$ $+$ $+$ $-$ $+$ $-$ $-$ $+$ $+$ $+$ $+$ $+$ $-$ $+$
$-$ $+$ $-$ $-$ $-$ $+$ $-$ $+$ $+$ $+$ $+$ $-$ $-$
$-$ $-$ $+$ $+$ $-$ $-$ $-$ $+$ $+$ $+$ $-$ $+$
$+$ $-$ $+$ $-$ $+$ $+$ $-$ $+$ $+$ $-$ $-$
$-$ $-$ $-$ $-$ $+$ $-$ $-$ $+$ $-$ $+$
$+$ $+$ $+$ $-$ $-$ $+$ $-$ $-$ $-$
$+$ $+$ $-$ $+$ $-$ $-$ $+$ $+$
$+$ $-$ $-$ $-$ $+$ $-$ $+$
$-$ $+$ $+$ $-$ $-$ $-$
$-$ $+$ $-$ $+$ $+$
$-$ $-$ $-$ $+$
$+$ $+$ $-$
$+$ $-$
$-$
Harborth的論文完整的證出來, 當 $n=0$ 或 3 (mod 4) 時, 含有一半正號及一半負號的三角形確實存在。我的故事就是從讀了這一篇文章開始。 當時我給了自己一個更一般的問題: 給定正整數 $n$, 在所有 $2^{n}$ 個 $n$ 階Steinhaus三角形中, 正號的個數的可能值有那些?
$0$ $0$ $1$ $0$ $1$ $0$ $0$
$0$ $1$ $1$ $1$ $1$ $0$
$1$ $0$ $0$ $0$ $1$
$1$ $0$ $0$ $1$
$1$ $0$ $1$
$1$ $1$
$0$
### 3. 自助天助------CDC電腦幫忙做實驗
$n$ S(n) 1 0 1 2 0 2 3 0 3 4 4 0 4 5 6 7 5 0 5 6 7 8 9 10 6 0 6 8 10 12 14 7 0 7 9 10 11 12 13 14 15 16 17 18 19 8 0 8 11 13 $\sim$ 22 24 9 0 9 12 13 15 $\sim$ 27 30 10 0 10 14 16 $\sim$ 34 36 37 11 0 11 15 16 18 $\sim$ 41 44 12 0 12 17 21 $\sim$ 48 52 13 0 13 18 19 23 $\sim$ 57 60 61 14 0 14 20 24 $\sim$ (偶數) 66 70 15 0 15 21 22 27 28 30 33 $\sim$ 75 80 16 0 16 23 29 $\sim$ 32 35 37 $\sim$ 86 90 91 17 0 17 24 25 31 $\sim$ 35 38 $\sim$ 95 97 102 18 0 18 26 32 34 35 37 40 $\sim$ 104 106 108 114 19 0 19 27 28 35 36 39 43 $\sim$ 118 120 121 126 127 20 0 20 29 37 38 40 41 44 45 47 $\sim$ 132 134 140
### 4. 近道------觀察及推想可能的定理
CDC的電腦幫忙算出來的這些資料, 呈現了許多規則, 讓我挑幾個來說一說。
$S(n)$的最小元素為0, 再來就是 $n$, 接下來的第三個是什麼? 這在略大一點的 $n$ 就可看出一個端倪, 似乎是再增加 $n/2$ 左右。 事實上, 確實有 \begin{eqnarray*} && \#T_{n}(\overline {01}) = \#T_{n}(01\overline 0) = n-1 + \lfloor n/2\rfloor, \\ && \#T_{n}(\overline {10}) = \#T_{n}(11\overline 0) = n-1 + \lceil n/2\rceil. \end{eqnarray*} 複雜一點的推演可以得到:
### 參考文獻
H. Harborh, Solution of Steinhaus'problem with plus and minus signs, J. Combin. Theory, Series A, 12 (1972), 253-259. H. Steinhaus, One Hundred Problems in Elementary Mathematics, Pergamon Press, Oxford, 1963. G. J. Chang, Binary triangles, Bull. Inst. Math. Academia Sinica, 11 (1983), 209-225.
---本文作者任教於臺灣大學數學系--- | 2023-03-30 11:14:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7099067568778992, "perplexity": 28.8483076558689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00519.warc.gz"} |
http://mathematica.stackexchange.com/questions/46214/why-my-differential-equations-become-true | # Why my differential equations become True? [duplicate]
I've been trying to solve a system of nonlinear differential equation, but the conditions are a bit weird.
Two of the differentials equate to the same equation, but have different boundary conditions. The third differential equate to something else.
For the background on the problem I am trying to solve, it is related to concentration as a function of time for a reaction.
Here is the code I have put, which was more or less straightforward, but I keep getting True. I've tried using DSolve and NDsolve, but I am unable to get a graph for my three concentration profiles (I do not need an analytical solution, just a graphical one).
NDSolve[{
A'[t] == -kf A[t] *B[t] + kb * c[t],
B'[t] == -kf A[t] *B[t] + kb * c[t],
c'[t] == kb * c[t] + kf A[t] B[t],
A[0] == 1, B[0] == 2, c[0] == 0},
{A, B, c}, {t, 100}]
kf and kb are constants I've already equated into my code.
But I keep resulting with:
NDSolve[{True, True, True, A[0] == 1, B[0] == 2, c[0] == 0}, {A, B, c}, {t, 100}]
I'm thinking it's because I'm multiplying two functions together, I haven't solved ODEs this way. I'd appreciate any input, or hints/guides/direction. Thanks!
If there is something unclear, or if I am lacking proper question asking etiquette please let me know!
-
## marked as duplicate by Michael E2May 17 at 15:04
Then you must have once mixed up = and ==. You need Clear[Derivative]. Values stored in Derivative can't be cleared by Clear["Global*"] because it actually means clearing all the variables under context "Global" while Context[Derivative] gives "System". – xzczd Apr 16 '14 at 7:15
@xzczd New users might fall in to this Context trap. I haven't found another post to mark this as a duplicate. Perhaps your comment is worthy of an answer just so it is in the books? – bobthechemist Apr 16 '14 at 13:12
@bobthechemist OK, let me elaborate it into an answer. – xzczd Apr 16 '14 at 13:42
Apparently, I got to close this because it was originally tagged "plotting", since removed. I meant to just make it simple vote. If there's disagreement, respond or flag and maybe a moderator can help. – Michael E2 May 17 at 15:08
Clear[Derivative] first.
OK, it's surprising that there seems to be no regular answer to this common problem for beginners, let me elaborate my comment into an answer. If you restart your Mathematica and run your code again then you'll find your problem no longer exists anymore! Then, why? Because Mathematica is unstable?
Of course not. Just recall what you've done before meeting the error, you may find the following scene in the corner of memory:
The value of kf and kb is chosen at random.
It's another common mistake of Mathematica newbies, that is, mix up = and ==.
Your code was in a mess now, but you didn't feel worried because you've already learned from some material that you can use
Clear["Global*"]
to fix this. You cheerfully placed this line at the beginning of your code and ran it again, only to find:
What happened? Why does the magic of Clear["Global*"] lose its power?
To answer this I'd like to first explain why you need to clear the variables after you mistyped == to = since I presumingly guess you may still be vague about this. Try the following code:
A[0] = 0
A[0]
A[0] == 0
Clear[A]
A[0] == 0
0
0
True
A[0] == 0
BTW, though I've execute A[0] for several times to show the variation, you can judge whether A owns a value just by observing its color: it's black when it does, otherwise it'll be blue.
As you see, Set (=) will give value to its left hand side. (Of course there exist cases that the left hand side is Protected, for example a + b = 4 won't give the left side a value, but it's another story and I'd like not to talk about it here. ) If you don't Clear it, it'll be always there and break your equations. When introducing this issue, many materials will tell readers that you can simply use Clear["Global*"] to "clear all the variables" at once, but it doesn't work for your case, yeah, you already know that, but do you know what's the exact meaning of Clear["Global*"]?
If you check "Details" of document of Clear, you'll find the following description:
Clear["context*"] clears all symbols in a particular context.
What's "context"? "context" is something that every symbol in Mathematica owns. You can check it by function Context, for example:
Context[a]
Context[NDSolve]
"Global"
"System"
So Clear["Global*"] is clearing the values of symbols under context Global, which is the stronghold for most symbols that'll have values (at least for beginners), but, what you can Clear is only "most", not "all". Try the following code:
A'[0] = 0
A'[0]
Clear["Global*"]
A'[0]
Clear[A]
A'[0]
HoldForm[FullForm[A'[0]]]
0
0
0
0
Derivative[1][A][0]
Aha, A'[0] is one of the exceptions, the value of A'[0] isn't stored in A, in fact it's stored as SubValues (this is another story that I'd like to omit here, you can search it in this site or have a look at Leonid Shifrin's excellent book ) of Derivative, then what's the context of Derivative?:
Context[Derivative]
"System"
So values in it can't be cleared by Clear["Global*] (BTW 2, for most cases it can be shortened as Clear["*"], meaning clear all the values of symbols under the current context, which is usually Global), what you need to clear it is
Clear[Derivative]
Though some warnings generated, Clear["System*"] can be used too if you like.
BTW 3, another symbol that is likely to trig this problem is Subscript.
-
@xzcxd: I must say this an excellent answer, one of the most helpful I have read. – David Nov 21 '14 at 17:33
I think it is a Mathematica design error to store derivative of a user symbol in the system context. Everything relating to user symbols/variables should be kept outside system context. Nothing should leak outside like with this example. – Nasser Nov 27 '14 at 5:29 | 2015-05-24 13:39:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4583955407142639, "perplexity": 1400.248809571057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00192-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://nanometrologie.cz/niget/doc/html5/S5.html | # 5 Oliver Pharr
For a definition of the Oliver Pharr method see [11].
## 5.1 Window
The window consists of several blocks:
• Info displays the maximum depth and force during the indentation
• Parameters shows the selected range in nm and in % of the maximum force, and the correction $\beta$.
• -
The fitting range can be selected either using the mouse or typing in the range entries. The range can be defined either in nm or in percent of the maximum force. It is often recommended to use the range 0.5–15 % F${}_{\mathrm{max}}$ for the hp fit and 40–98 % F${}_{\mathrm{max}}$ for the S fit, see section 5.2.
• -
The parameter $\beta$ accounts for any deviations from the axisymmetric case and is used in the calculation of the reduced modulus in equation (10). Currently, the default value is the value for three-sided pyramides $\beta=1.034$. The value supplied by the user is saved in the settings and can be reset to its default value.
• Fit buttons for the two fits, see section 5.2 for details of the calculation.
• Results displays all results in the following order: the residual depth $h_{\mathrm{p}}$, the power $m$ of the power law function, the parameter $\varepsilon$, the contact depth $h_{\mathrm{c}}$, the slope $S$, the contact area $A_{\mathrm{p}}(h_{\mathrm{c}})$, the indentation hardness $H_{IT}$, the contact modulus $E_{r}$, the indentation modulus $E_{IT}$ and the ranges used for the fitting procedures. The variables are described in detail in section 5.2.
• Save save parameters and results to given file.
• Graph display the unloading curve and the fitted curves. Stepwise zooming/unzooming can be performed by selecting a range with the mouse and pressing the Zoom/ Unzoom buttons. The graph is restored to its original size by the Restore button.
## 5.2 Procedure
The standard calculation consists of three steps
1. 1.
The residual depth must be determined as the intersection of the unloading curve and the x-axis. This is implemented by fitting a straight line using a Deming fit with $\delta=1$, see section A.2. For a brief description of the Deming fit see section A.2
$F=a_{\mathrm{h_{\mathrm{p}}}}h_{\mathrm{p}}+b_{\mathrm{h_{p}}}$ (2)
using total least squares. This fit is called the $h_{\mathrm{p}}$ fit. The residual depth is calculated as
$h_{\mathrm{p}}=-b_{h_{\mathrm{p}}}/a_{h_{\mathrm{p}}}.$ (3)
The range of the data must be chosen adequately, the range 0.5–15 % F${}_{\mathrm{max}}$ is often a reasonable value.
2. 2.
The main part of the unloading curve is fitted by a power law function
$F=\alpha(h-h_{\mathrm{p}})^{m}.$ (4)
This is converted to a total least squares fit in the variables using a Deming fit with $\delta=1$, see section A.2
$\log F=\log\alpha+m\log(h-h_{\mathrm{p}}).$ (5)
The range should not contain the lower part of the unloading range, a range of 40–98 % F${}_{\mathrm{max}}$ is recommended as a first guess.
3. 3.
The auxiliary parameter $\varepsilon$ is calculated from the power $m$
$\varepsilon=m\left(1-\frac{2(m-1)\Gamma\left(\frac{m}{2(m-1)}\right)}{\sqrt{% \pi}\Gamma\left(\frac{1}{2(m-1)}\right)}\right),$ (6)
$\Gamma$ is the Gamma-function.
The contact depth is calculated as
$h_{\mathrm{c}}=h_{\mathrm{max}}-\epsilon\frac{F_{\mathrm{max}}}{S}$ (7)
and the slope at the maximum depth as
$S=m\frac{F_{\mathrm{max}}}{h_{\mathrm{max}}-h_{\mathrm{p}}}.$ (8)
4. 4.
The contact depth is used to evaluate the contact area $A(h_{\mathrm{c}})$. This can be used to find the indentation hardness
$H_{\mathrm{IT}}=\frac{F_{\mathrm{max}}}{A(h_{\mathrm{c}})}.$ (9)
and together with the slope to find the contact modulus
$E_{\mathrm{r}}=\sqrt{\pi}\frac{S}{2\beta\sqrt{A(h_{\mathrm{c}})}}.$ (10)
For a comparison with Young’s modulus found in literature it is useful to calculate the indentation modulus $E_{IT}$
$E_{\mathrm{IT}}=\frac{1-\nu^{2}}{1/E_{\mathrm{r}}-(1-\nu_{\mathrm{i}}^{2})/E_{% \mathrm{i}}}.$ (11)
Here $\nu$ is the Poisson’s value of the sample and $\nu_{\mathrm{i}}$ and $E_{\mathrm{i}}$ are the Poisson’s value and the modulus of the indenter. | 2022-01-27 21:23:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 37, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7408828139305115, "perplexity": 880.1579199032782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00705.warc.gz"} |
https://www.scm.com/doc/ADF/Examples/CN_SecDeriv.html | # Example: Analytic Frequencies: CN¶
Download CN_SecDeriv.run
#! /bin/sh
# Calculation of normal modes is requested by specifying in the AMS input:
# Properties
# NormalModes Yes
# End
# ADF will compute the Hessian analytically if possible.
# If the Hessian cannot be computed analytically, numerical differentiation
# will atuomatically be used.
# A good quality is specified for the numerical Becke integration to be sure of
# reliable results. In general, it seems advisable to use high accuracy for
# heavy nuclei at the moment, whereas default integration accuracy is usually
# sufficient for light atoms. The precision of the fit may be improved with the
# ZlmFit block keyword.
\$AMSBIN/ams <<eor
System
Symmetrize
atoms
N -1.3 0.0 0.0
C 0.0 0.0 0.0
end
charge -1
end
Properties
NormalModes Yes
End
title CN
beckegrid
quality good
end
basis
type DZ
core None
CreateOutput Yes
end
xc
lda Xonly
end
EndEngine
eor | 2021-09-22 17:59:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7575806379318237, "perplexity": 14324.361300808006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00181.warc.gz"} |
https://www.transtutors.com/questions/budgeted-income-statement-seattle-cat-is-the-wholesale-distributor-of-a-small-recrea-1389568.htm | Budgeted Income Statement Seattle Cat is the wholesale distributor of a small recreational...
Budgeted Income Statement
Seattle Cat is the wholesale distributor of a small recreational catamaran sailboat. Management has prepared the following summary data to use in its annual budgeting process:
Budgeted unit sales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380 Selling price per unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . $1,850 Cost per unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .$1,425 Variable selling and administrative expenses (per unit) . . . . . . . $85 Fixed selling and administrative expenses (per year) . . . . . . . . .$105,000 Interest expense for the year . . . . . . . . . . . . . . . . . . . . . . . . . . . \$11,000
Required:
Prepare the company’s budgeted income statement using an absorption income statement format as shown in Schedule 9.
Hi | 2018-07-23 05:41:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687200546264648, "perplexity": 117.79326428099998}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594954.59/warc/CC-MAIN-20180723051723-20180723071723-00481.warc.gz"} |
http://openstudy.com/updates/507ccab2e4b07c5f7c1fb342 | • anonymous
John wants to rent a tent to go camping. The cost to rent a tent from Woodland Outfitters is a flat rate of $10.00 plus$0.50 for each night the tent is rented. Part a. Let x represent the number of nights John keeps the tent and write a cost function, c(x), for the cost to rent a tent from Woodland Outfitters. Part b. How much will John owe if he uses the tent for five nights?
Mathematics
Looking for something else?
Not the answer you are looking for? Search for more explanations. | 2017-04-25 04:55:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26085591316223145, "perplexity": 2932.28807026167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00375-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://datascience.stackexchange.com/tags/multiclass-classification/new | # Tag Info
0
My guess is that the data you provide does not have enough information to predict $a, b, c, d$ or $e$. Therefore, because $b$ is over-represented in the dataset, it will always predict $b$, because thats the safest bet. If you didn't know anything about the input or you if you wouldn't be able to extract any useful information from it, you would probably ...
0
The performance of machine learning algorithms is not commonly evaluated with null hypothesis significance testing (NHST). Machine learning performance is evaluated with performance on a hold-out data (e.g., validation or test), regardless of the evaluation metric.
0
If your dataset is too small, it won't apply to other new data easily. In this case, you should either: try to increase your training dataset Find new images and classify them to increase the training data size, the model will improve as you add new images, but this can be time consuming use transfer learning Find a model that someone else built on a ...
1
It looks like the new data has a different distribution from the training data. It looks like the training data is just a single fruit, with white background, and the new image you've passed is a picture of bananas with blue background. The model has probably learned something like: if blue image, then blueberries, and for this reason it classifies the blue ...
1
What you were told is a worst case scenario. With 5 labels, 20.01% is the lowest possible value that a model would need to choose one class over the other. If the probability for each of the 5 classes are almost equal then the probabilities for each would be approximately 20%. In this case, the model would be having trouble deciding which class is correct. ...
0
This is called an open-class text classification problem, it's used in particular for some author identification problems. I don't have any recent pointers but from a quick search I found this article: https://www.aclweb.org/anthology/N16-1061.pdf In the field of author classification there is a similar problem called author verification, which can be ...
0
Apart from your desired two classes, relabel all other classes as a third class and then train your model on a three class classification problem.
1
You are talking about a multi label classification, which is a common type of problem. The most common choice of loss function is binary crossentropy There’s a tutorial here that might help: https://towardsdatascience.com/multi-label-image-classification-with-neural-network-keras-ddc1ab1afede
1
I have also been pondering on this question and trialled loss function on this sort of problem. For these type of classification tasks, the loss function which seems most appropriate is Binary Cross Entropy Loss : https://towardsdatascience.com/understanding-binary-cross-entropy-log-loss-a-visual-explanation-a3ac6025181a
0
Accuracy has a specific meaning classification - the data points with predicted labels must exactly match actual labels over the total number of data points. In order to calculate accuracy, you need the actual labels for each data point. If you do not have actual labels for a data point, those data points can not be used in the analysis.
0
Here's my solution for sparse categorical crossentropy for a Keras model with multiple outputs in TF2. I think it looks fairly clean but it might be horrifically inefficient, idk. First create a dictionary where the key is the name set in the output Dense layers and the value is a 1D constant tensor. The value in index 0 of the tensor is the loss weight of ...
Top 50 recent answers are included | 2020-07-05 05:12:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4174962341785431, "perplexity": 456.7641770809675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886865.30/warc/CC-MAIN-20200705023910-20200705053910-00044.warc.gz"} |
https://www.physicsoverflow.org/user/user47299/history | # Recent history for user47299
6
years
ago
received upvote on question Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
question commented on Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
question commented on Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
received upvote on question Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
question commented on Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
posted a comment Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
question commented on Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
posted a comment Perturbative vs. non-perturbative approaches to a well-defined Yang-Mills theory in 4 dimensions
6
years
ago
posted a question Perturbative vs. non-perturbative appr...
6
years
ago
posted a comment Rigorous QFT on a Torus
6
years
ago
posted a comment Rigorous QFT on a Torus
6
years
ago
posted a comment Rigorous QFT on a Torus
6
years
ago
received upvote on question Rigorous QFT on a Torus
6
years
ago
received upvote on question Rigorous QFT on a Torus
6
years
ago
question commented on Rigorous QFT on a Torus
6
years
ago
posted a comment Rigorous QFT on a Torus
6
years
ago
question answered Rigorous QFT on a Torus
6
years
ago
posted a question Rigorous QFT on a Torus
6
years
ago
question answered Why isn't Quantum Yang-Mills Rigorous?
6
years
ago
received upvote on question Why isn't Quantum Yang-Mills Rigorous? | 2021-03-01 10:40:35 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8887196183204651, "perplexity": 3888.6476133838974}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00284.warc.gz"} |
https://www.nature.com/articles/s41598-017-15659-0?error=cookies_not_supported&code=2e848c9e-ef46-4857-8200-afca35f08c6f | Article | Open | Published:
# Exploring the alpha desynchronization hypothesis in resting state networks with intracranial electroencephalography and wiring cost estimates
## Abstract
This paper addresses a fundamental question, are eyes closed and eyes open resting states equivalent baseline conditions, or do they have consistently different electrophysiological signatures? We compare the functional connectivity patterns in an eyes closed resting state with an eyes open resting state to investigate the alpha desynchronization hypothesis. The change in functional connectivity from eyes closed to eyes open, is here, for the first time, studied with intracranial recordings. We perform network connectivity analysis in iEEG and we find that phase-based connectivity is sensitive to the transition from eyes closed to eyes open only in interhemispheral and frontal electrodes. Power based connectivity, on the other hand, consistently discriminates between the two conditions in temporal and interhemispheral electrodes. Additionally, we provide a calculation for the wiring cost, defined in terms of the connectivity between electrodes weighted by distance. We find that the wiring cost variation from eyes closed to eyes open is sensitive to the eyes closed and eyes open conditions. We extend the standard network-based approach using the filtration method from algebraic topology which does not rely on the threshold selection problem. Both the wiring cost measure defined here and this novel methodology provide a new avenue for understanding the electrophysiology of resting state.
## Introduction
The view of the brain as a reflexive organ whose neural activity is completely determined by incoming stimuli is challenged by the “intrinsic” or spontaneous view of the brain. Nevertheless, the exact implications of resting state for brain function are far from clear1,2. Mandag and colleagues3 argue for the reconceptualization of resting state as an independent variable (brain’s input) to a multidimensional activity modulator. The emerging field of functional connectomics relies on the analysis of spontaneous brain signal covariation to infer the spatial fingerprint of the brain’s large-scale functional networks. While there is growing interest in the brain’s resting state, supported by evidence for persistent activity patterns in the absence of stimulus-induced activity (e.g. default mode network)4, there lacks a definite recommendation about whether resting state data should be collected with participants’ eyes open or closed. If stimulus-induced activity is indeed, at least in part, predetermined by the brain’s intrinsic activity (i.e. resting state activity), it follows that we cannot understand one without the other. The more we know about the electrophysiological underpinnings of resting state, both with eyes closed and eyes open, the better equipped we will be to understand brain dynamics, including both intrinsic activity and the processing of stimuli.
The orthodox approach to understanding brain function relies on the view of the brain as an organ that produces responses triggered by incoming stimuli, which are delivered at will by an external observer. This idea has been challenged by the complementary view of the brain as an active organ, with intrinsic or spontaneous activity5,6,7. Crucially, the brain’s intrinsic activity both shapes and is shaped by external stimuli. While there has been some controversy concerning the ecological relevance of studying a default or resting condition8,9, the empirical evidence for intrinsic brain activity is conclusive10,11.
Despite the ever increasing importance of resting-state functional connectivity (a quick search on PubMed shows 2,742 papers with the term “resting state” in the title at the time of the writing), it remains underutilized in clinical decision making12. A rationale for this needs to be found with both conceptual and methodological basis. First and foremost, the term resting-state is a misnomer, as a matter of fact, the brain is always active, even in the absence of an explicit task, or external stimuli. Cognitive task-related changes in brain metabolism, measured with PET, account for a mere 5% or less of the brain’s metabolic demand13. Second, the resting state literature, from its inception, is eminently based on the analysis of low frequency fluctuations of the BOLD signal measured using fMRI, alone or in combination with EEG and PET14,15. Third, these techniques suffer from suboptimal temporal and/or spatial resolution and the haemodynamic or metabolic activity measured in fMRI and PET are proxy measures for the electrophysiological activity. Fourth, there is a lack of consensus in the literature regarding whether resting state data should be collected while the participant has their eyes open, closed, or fixated. See16 for non-significant between-condition differences in resting state networks and17 for an antagonistic view. This paper attempts to better understand the brain’s resting state by characterizing the two most common baseline conditions in neuropsychology, eyes closed and eyes open, using intracranial electroencephalogram recordings. Note that intracranial electroencephalography, iEEG, and electrocorticography, ECoG, are here used indistinctly.
Previous studies have identified a reduction in the number of connections when the eyes closed condition is compared to the eyes open condition, in the alpha band18,19. This is known as “alpha desynchronization”. Using EEG, Barry and colleagues19 found that there are electrophysiological differences -topography as well as power levels- between the eyes closed and eyes open resting states. A higher degree of alertness caused by opening one’s eyes is associated with the attenuation of alpha rhythm, which is supplanted by desynchronized low voltage activity20. Geller and colleagues21 found that eye closure causes a widespread low-frequency power increase and focal gamma attenuation in the human electrocorticogram. However, although these studies explicitly conclude that eyes open and eyes closed are different baseline conditions, they do not provide a method for comparing the functional connectivity patterns elicited by either of the two conditions against a common criterion.
Shedding some light on the problem, this paper examines whether the eyes closed and eyes open resting states are equivalent baseline conditions by analyzing the differences between the two, using a filtration approach that extends the standard network-based approach of using a fixed threshold to obtain the adjancency matrix from the correlation matrix. In a filtration method, a set of networks are built for a large number of thresholds, overcoming the threshold selection problem of building a graph from a correlation matrix. This allows us to explore, systematically and bias free, the electrophysiological underpinnings of resting state with intracranial electroencephalogram data.
First, we perform power and phase based connectivity analysis to asses whether the connectivity patterns calculated from intracranial recordings are able to differentiate between the two conditions. Additionally, we exploit the excellent temporal and spatial precision of ECoG to calculate the wiring cost for the connectivity maps.
Second, we investigate whether network topological properties have enough statistical power to be used as a feature/covariate to distinguish between the eyes closed and eyes open conditions. Finally, we extend the network theory based results, borrowing from algebraic topology, to perform a filtration method to study the dynamics of the network topologies for a large number of thresholds.
## Materials and Methods
### Participants
The intracranial electroencephalography recordings were collected at the Toronto Western Hospital (Toronto ON, Canada). Our research protocol was approved by the University Health Network Research Ethics Board and informed consent was obtained from the participants. All methods were performed in accordance with the relevant guidelines and regulations. Informed consent was obtained from all patients. Eleven participants (6 female) with pharmacologically-refractory mesial temporal lobe epilepsy underwent a surgical procedure, in which electrodes were implanted subdurally on the temporal lobe and stereotaxic depth electrodes were implanted in the hippocampi or other deep structures (Fig. 1). For each patient, electrode placement was determined to best pinpoint the origin of seizure activity. In addition to electrodes implanted in the temporal lobe, including depth electrodes in the hippocampi, some patients had electrodes implanted in frontal, interhemisphic and the cortical convexity (see Table 1). The electrode implants are thus not identical for all participants, though they tend to overlap in the mesial temporal lobe epilepsy (MTLE) sensitive regions. This limits the ability to directly compare the wiring cost or other network properties among participants. However, we can still compare and generalize from participants by examining the difference between the two conditions. For example, in order to compare the functional connectivity pattern of two participants, one with a grid in the left cortex and another with depth electrodes in the hippocampus and temporal areas, we calculate the difference between network parameters from eyes closed to eyes open, within each participant.
### Resting state conditions
To assess resting-state activity for both the eyes closed and and eyes open conditions, participants were asked to relax and rest quietly in their hospital bed, in a semi-inclined position. First, they were asked to close their eyes for three minutes and then asked to keep their eyes open for another three minutes. Each session was recorded with real-time monitoring of the intracranial electroencephalography and continuous audio and video surveillance.
ECoG recordings allow us to simultaneously study both fast and slow temporal dynamics of the brain at rest, that is, not engaged in tasks prescribed by the experimenter. Freeman and Zhai22 have shown that the resting ECoG has low-dimensional noise, making resting state an optimal starting point for defining and measuring both artifactual and physiological structures emergent in the activated electrophysiological signals. Importantly, ECoG signals covary in patterns that resembled the resting state networks (RSN) found with fMRI23.
### iEEG acquisition
Continuous iEEG data were recorded in an unshielded hospital room using NATUS Xltech digital video-EEG system. Commercially available depth electrodes and subdural electrodes were used to collect continuous iEEG recordings. Common reference and ground electrodes were placed subgalealy at a location distant from any recording electrodes with contacts oriented toward the dura. Electrode localization was accomplished by localizing the implanted electrodes on the postoperative computed tomography (CT) scan using the Matlab toolbox iELVis for localizing and displaying human intracranial electrode data24. Subdural electrodes were arranged in strip or grid configurations, with an inter-electrode spacing of 10 mm. The location of the electrode implants was not identical across patients, however, all participants had depth electrodes, mostly in the hippocampi (Table 1).
### Signal processing
Signals were filtered online using a high-pass (0.1 cutoff frequency) and an anti-aliasing low-pass filter. Offline filtering using Matlab in house-scripts, consisted of a high-pass and low-pass filter at 0.5–70 Hz and a notch filter applied at 60 Hz to remove electrical line noise.
To extract power and phase estimates of time-varying frequency-specific band, the ECoG signals were convolved with complex-valued Morlet wavelets. The wavelet convolution transformed the voltage trace at each electrode to obtain both instantaneous power and phase trace for each frequency. The wavelet length was defined in the range of −1 to 1 seconds and was centered at time = 0 (in doing so we guarantee that the wavelet has an odd number of points). We used a constant number of wavelet cycles (7). This number was chosen since we have long trial periods (3 minutes) in which we expect frequency-band-specific activity and a large number of cycles (from 7 to 10) to facilitate identifying temporally sustained activity25.
Of note, it is also possible to use a number of wavelet cycles that changes as a function of frequency, to adjust the balance between temporal and frequency precision as a function of the frequency of the wavelet. Thus, there is a trade-off between temporal and frequency precision. Since we are processing long epochs, we favour frequency over time precision and therefore chose to use a large number of wave cycles.
### Connectivity measures
We are interested in calculating the wiring cost associated with the functional connectivity map defined upon the electrodes’ spatial location. Measures of correlated activity are not real measures of “connectivity”. Here we use the term connectivity because it is the standard nomenclature, but a caveat on the dangers of assuming equality between correlated activity and connectivity is worth mentioning. Functional connectivity is calculated using both power-based and phase-based measures. For power-based we calculate Spearman’s correlation, while for phase-based connectivity we calculate two different measures - phase-lag index (PLI)26 and intersite phase clustering (ISPC). Note that ISPC represents the clustering in polar space of phase angle differences between electrodes resulting from the convolution between a complex wavelet and the signal and is also referred in the literature as R25.
Next, we briefly outline the three connectivity measures used. First, we describe power-based connectivity and next phase-based connectivity for the phase lag index and intersite phase clustering measures.
#### Power-based connectivity
To calculate the correlation coefficients for power time series from any two electrodes in the same frequency, we perform time-frequency decomposition using wavelets to then compute the Spearman correlation coefficient between the power time series of the two electrodes. To increase the signal to noise ratio, we segment the data into non-overlapping windows of 5 seconds, compute Spearman’s correlation coefficient for each segment, and then average the correlation coefficients together.
The Spearman’s correlation is the Pearson correlation of the data previously rank-transformed. Formally, the Spearman correlation of two channels x and y whose power time series values have been rank-transformed is:
$${r}_{xy}=\frac{{\sum }_{t=1}^{n}(x(t)-\bar{x})(y(t)-\bar{y})}{\sqrt{{\sum }_{t=1}^{n}{(x(t)-\bar{x})}^{2}{\sum }_{t=1}^{n}{(y(t)-\bar{y})}^{2}}}$$
(1)
It is of note that power-based correlation coefficients range from −1 to 1. To have a more normal looking distribution, it is preferable to perform a Fisher-Z transformation. It ought to be noted that power correlation is not limited to the kind of instantaneous correlations performed here, for example, cross correlation detects peak connectivity between two time series as a function of time lag.
#### Phase-based connectivity
We calculate phase-based connectivity using two different measures, intersite phase clustering (ISPC) and the phase-lag index (PLI). The ISPC measures the clustering in polar space of phase angle differences between electrodes and is given by the equation:
$${ISP}{{C}}_{f}=|{n}^{-1}\sum _{t=1}^{n}{\exp }^{i({\varphi }_{x(t)}-{\varphi }_{y(t)})}|$$
(2)
where n is the number of time points and ϕ x and ϕ y are the phase angles from electrodes x and y at a given frequency f. Note that this measure is sensitive to volume conduction. For example, when the phase differences are not uniformly distributed, but clustered around 0 or π in polar space, much of the apparent connectivity between these electrodes might be due to volume conduction.
There are several phase-based connectivity measures that ignore the 0 − π phase-lag connectivity problem, e.g., imaginary coherence27, phase-slope index28, phase-lag index26 and weighted phase-lag index29. Although these measures are designed to be insensitive to the linear mixing of uncorrelated sources, in some cases they may still be susceptible to source mixing30.
Phase lag index measures the extent to which the distribution of phase angle differences is more to the positive or to the negative side of the imaginary axis on the complex plane. That is, it tells us whether the vector of phase angle differences are pointing up or down in polar space. The idea is that if spurious connectivity is due to volume conduction, the phase angle differences will be distributed around zero radians. It follows that non-volume conducted connectivity will produce a distribution of phase angles that is predominantly on either the positive or the negative side of the imaginary axis. Note that here, contrary to ISPC, the vectors are not averaged, instead it is the sign of the imaginary part of the cross spectral density that is averaged:
$${PL}{{I}}_{xy}=|{n}^{-1}\sum _{t=1}^{n}{sgn}({imag}({S}_{xy}(t)))|$$
(3)
where imag is the imaginary part of S xy (t), or cross-spectral density between channels x and y at time t. The sgn function returns +1, −1, or 0.
Phase coherence measures are highly influenced by volume conduction31. PLI, on the other hand, was designed to tackle this problem. As shown by Stamm and colleagues, PLI is not particularly sensitive to zero-lag correlations and is less sensitive to volume conducted signals and common reference issues26. Later on, Peraza and colleagues30 have shown that PLI is not entirely invariant to volume conduction. In a simulation study, they found that PLI-based connectivity networks show more small worldness (higher cluster coefficient) than random networks. However, for non-volume conduction, PLI-based networks are close to random networks, indicating that the high clustering shown for PLI is caused by volume conduction.
To recapitulate, ISPC captures the clustering of the phase angle difference distribution and PLI the phase angle directions. ISPC can be influenced by changes in power and is maximally sensitive to detecting connectivity, regardless of the phase angle differences. Intracranial EEG is less sensitive to volume conduction problems than other electrophysiological techniques (EEG and MEG). Thus, by calculating phase-based connectivity with both ISPC and PLI, we expect to clarify the properties of both measures for the analysis of the iEEG signal.
#### Wiring cost
Now that we have described how functional connectivity is obtained, we continue by describing how to calculate the wiring cost between any pair of electrodes. The idea behind this measure is to exploit the location of the signal to provide a measure of the wiring cost of having two electrodes coupled, that is, statistically correlated, by any of the connectivity measures highlighted above. The wiring cost is nothing more than the connectivity matrix weighted by the euclidean distance between the electrodes. To calculate the wiring cost, we need then two matrices, the distance matrix $${D}_{ij}=\Vert ({x}_{i},{y}_{i},{z}_{i}),({x}_{j},{y}_{j},{z}_{j})\Vert$$ which captures the Euclidean distance between any two electrodes physically located in Cartesian coordinates (x i , y i , z i ) and (x j , y j , z j ) and the functional connectivity matrix. Thus, the computation of the wiring cost W combines the physical distance matrix D and a functional connectivity matrix F. While there is one phyisical distance matrix D for each participant, we calculate the functional connectivity matrix F using three different criteria - ISPC, PLI and the Spearman correlation of power time series.
The pairwise wiring cost for a distance matrix of electrodes D and functional connectivity matrix F calculated at frequency f is calculated as:
$$W(f)=D\,\,.\ast \,\,F(f)$$
(4)
Thus, the pairwise wiring cost of two electrodes is directly proportional to the distance and the correlation. The further away and the stronger the correlation, the larger the wiring cost (Fig. 2).
### Network analysis
The correlation matrices can be converted into adjacency matrices and then into undirected graphs with the direct application of a threshold. The choice of the threshold specifies the relationship between two electrodes, two electrodes are connected when the correlation is within a certain threshold. Thus, two electrodes are connected when the correlation is larger than the threshold.
Figure 3 shows the binary or unweighted networks that result from thresholding the power based correlation matrices in the alpha band. The threshold of choice is equal to the mean plus one standard deviation. We build the network connectivity for each subject and condition in the frequency band to then calculate an extensive set of network metrics including clustering, transitivity, path length, and number of components.
It is important to note that although the majority of subjects have electrodes in temporal areas and the hippocampi, the location of the electrodes varies substantially from one subject to another and the subjects’ networks are not directly comparable. For example, a subject with a grid of 64 contacts with a separation of 1 cm will necessarily have a larger clustering coefficient than a subject with bitemporal electrodes, and by the same token, the average path length in stereotactically implanted electrodes will be larger than in the grid.
In order to avoid this limitation, we study the difference in the network metrics between conditions for each subject. In this way, we can compare the variations in network topology for the two conditions across subjects.
### Persistent homology
A not less important limitation is that we obtain very different networks depending on the significance level (threshold) we use. This is problematic, particularly when the underlying system it not scale invariant. Small world and clusterness are joint measures and can change drastically depending on the choice of threshold32. Furthermore, by adopting a threshold, we may be loosing important information, for example, it may occur that some small-scale features are noise artifacts while other are critically important33,34.
Algebraic topology35 provides a language and a methodology to overcome these limitations. It presents a multiscale framework able to deal with the threshold selection problem. In the standard approach, in order to study the topological properties of functional connectivity networks we need to consider a threshold, which once applied to the connectivity matrix, will produce a binary graph from which network properties such as clustering, small world, characteristic path length and others can be measured. The selection of the threshold is, however, arbitrary, and the resulting network depends entirely upon that choice.
We overcome this limitation by following a filtration method used in algebraic and computational topology36,37, in which rather than having one threshold, we build a vector of thresholds, containing all possible threshold values between the two extremes (minimum and maximum connectivity values). For example, for the matrix F, of dimension n × n we obtain the threshold vector T with n2 elements bounded between the minimum and maximum of F, T = [min(W), max(W)].
A set of binary networks is then obtained by thresholding the wiring cost matrix for each possible threshold. Specifically, the binary matrix B τ for the threshold τ and functional connectivity matrix F is such that B τ (ij) = 0 if the correlation between electrodes i,j is less than the threshold, B τ (ij) < τ, otherwise B τ (ij) = 1. Thus, for each threshold value τT, we obtain a binary network and the resulting set of networks is comprised at the two extremes of the spectrum by the disconnected graph B τ (V, ), produced when applying the threshold τ = min(W) and the full graph B τ (V, E(W)) resulting from applying the threshold τ = max(W). Importantly, the set of binary networks has an internal structure that progressively increases until it becomes a fully connected network.
## Results
First, we study the statistical significance for the two conditions, eyes closed and eyes open, using the correlation matrix for power and phase based connectivity in the alpha band.
The effect of a higher degree of alertness (going from eyes closed to eyes open) for the various regions of interest for power-based connectivity in the alpha band is shown in Table 2. All the patients (11/11) have at least one electrode with a power-based connectivity pattern that is statistically significant for the two conditions.
Tables 3 and 4 show the statistical significance analysis for phase-based (ISPC and PLI) connectivity in the alpha band. Phase based connectivity demonstrated a statistically significant difference between the two conditions in interhemispheric and frontal electrodes for only 2 subjects (2/11). Depth, hippocampal and temporal electrodes do not show statistically significant differences between conditions. This is in agreement with EEG studies that show a decrease in alpha activity across the entire cortex in response to visual stimulation19.
### Power based connectivity (network topology)
Based on the previous results, we focus on power-based connectivity to study the network topology difference in the two conditions, eyes closed and eyes open. In order to study the topological properties we need to build the network from the correlation matrix. The procedure is quite straight forward, from the correlation matrix, we apply a threshold, in this case, the mean plus one standard deviation, t = μ + σ, to obtain the adjacency matrix, which can be equally represented as a graph. Figure 3 shows the connectivity network for 6/11 subjects for threshold t for both eyes closed and eyes open.
For a quantitaive analysis on the topological changes in the two conditions we calculate the network metric differences calculated for power-based connectivity in the alpha band as shown in Figure 4. The x-axis represents different network metrics and the y-axis represents the difference between the network metric, for example clustering (fourth point in the x-axis), in going from eyes closed to eyes open. When the difference is positive, for example, clustering in eyes closed is larger than in eyes open, the dot is blue, otherwise red. While this approach has the potential to help us understand how the topological properties of the connectivity network are affected between the two conditions, there is an important caveat to keep in mind. Although all the subjects in our data set tend to have electrodes in temporal areas and the hippocampi, the location of the electrodes varies substantially from one subject to another and the subjects’ networks are not directly comparable. It is, however, possible to overcome this limitation if we study on a single-subject basis the network properties for a set of thresholds. Thus, rather than assuming that the threshold is fixed, we build a large number of networks, as many networks as thresholds. In this way, we create a population of networks for each subject and condition from which it is possible to derive statistics. This is described in Section 2.9.
### Power based connectivity (filtration method)
To acquire a qualitative understanding of both conditions, eyes closed and eyes open, in terms of the wiring cost, we need to perform statistics with the distribution of binary networks obtained from using a large number of thresholds. The null hypothesis is that the effect of eyes closed is indistinguishable from the effect of eyes open for wiring cost. We extend the previous approach that consists of building the resulting network from applying the threshold of choice to the connectivity matrix, to building a set of networks with one for each possible threshold from the same connectivity matrix. Crucially, by removing the initial assumption of a fixed threshold which is necessarily ad hoc, we can study the network dynamics of the n resulting networks, one for each threshold in the n-dimensional vector of thresholds.
Figure 5 shows the difference in clustering coefficient, density of edges, characteristic path length and wiring cost between eyes closed and eyes open.
We perform a test of statistical significance for the four network properties highlighted in Figure 5. The results are shown in Table 5. In 4/11 subjects all the network metrics show a statistically relevant difference between the two conditions. The network metric with the best score in differentiating between eyes closed and eyes open is the wiring cost, in 8/11 subjects.
## Discussion
The brain is energy hungry, it amounts to only the 2% of the weight of the body, but takes up to 20% of the body’s metabolic demand. Yet, as with all physical systems, the brain has energy limitations. Ramón y Cajal was the first to postulate the laws of conservation for time, space and material38. It follows that there is a strong pressure for efficient use of resources, for example the minimization of the wiring cost at axonal, dendritic and synaptic levels. Longer connections, and those with greater cross-sectional area, are more costly because they occupy more physical space, require greater material resources, and consume more energy per connection. Networks that strictly conserve material and space (e.g. lattice) will likely pay a price in terms of conservation of time: it will take longer to communicate an electrophysiological signal between nodes separated by the longer path lengths that are characteristic of lattices39. There are trade-offs between biological cost and topological value.
Functional connectivity analysis from EEG data provides an explanation for alpha desynchronization in terms of the number of connections i.e., the number of connections decreases when one’s eyes are open compared to closed. It is worth noting that the term desynchronization is defined in the literature quite vaguely, and used to mean very different things. Synchronization sometimes refers an to increase in band power in some frequency band (e.g. alpha) and conversely, desynchronization is also associated with a loss of power in the frequency band of interest. Stam et al.40 provide an alternative approach to desynchronization of the alpha rhythm, which is characterized as an increase in the irregularity of the EEG signal. The EEG irregularity is quantified with the acceleration spectrum entropy (ASE), which is the normalized information entropy of the amplitude spectrum of the second derivative of a time series.
This study investigates the electrophysiological signatures that characterize eyes closed and eyes open resting states in patients diagnosed with mesial lobe epilepsy, taking advantage of the unmatched spatio-temporal properties of iEEG. Power and phase based connectivity analysis were performed for both conditions in the alpha band, to investigate the alpha desynchronization hypothesis. Alpha desynchronization, or the alpha blocking response to eye opening was originally reported by Berger in 1929. Alpha suppression is produced by an influx of light, other afferent stimuli and mental activity41. Alpha rhythm is the EEG correlate of relaxed wakefulness, best obtained while the eyes are closed20.
The wiring cost, as defined here, combines the physical distance between electrodes and the statistical correlation and takes full advantage of the spatial resolution of the ECoG signal. Specifically, the local wiring cost of two electrodes represents the product between the distance and the correlation value. The combination of functional connectivity and distance networks allows us to quantify the wiring cost for the two conditions under study -eyes closed and eyes open. The rationale behind this approach is that the wiring cost might explain, at least in energy minimization terms, why, among all possible configurations, some functional connectivity patterns are selected rather than others. We mathematically define the wiring cost for a given connectivity pattern in Equation 4.
We do not find compelling evidence for alpha desynchronization in phase-based connectivity analysis (except for interhemispheral and frontal electrodes). Power-based connectivity, on the other hand, is a more consistent predictor of alpha desynchronization, in particular within temporal electrodes. We find that the wiring cost does a better job in differentiating between eyes closed and eyes open than network metrics such as characteristic path length, clustering, or the edge density.
To investigate the loss of connectivity predicted by the alpha desynchronization hypothesis without relying on the adoption of a network threshold, we calculated the distribution of network property values associated with the connectivity matrix derived from a threshold vector bounded by the minimum and maximum functional connectivity values. We find that the location of the electrodes is the most important factor to be considered when studying the alpha desynchronizationin ECoG.
Although intracranial electroencephalography has unmatched spatial and temporal specificity, it may not be the optimal method for studying macroscopic aspects of the human brain. This study has the limitation that the electrode implants tend to be located in the seizure sensitive temporal lobe and leave untouched occipital and parietal lobes. A complementary model system for the study of the wiring cost difference between two connectivity patterns would be EEG or fMRI, in which the signal source is regularized in a common brain volume template. However, these techniques are limited by the source reconstruction problem, which is not as problematic in iEEG.
Ideally, this study would have used randomization of the two conditions -eyes closed and eyes open- altering the order. In the alpha blocking response to eye opening initially described by Berger (Berger’s effect), also called alpha desynchronization, there is a specific sequence – eyes closed precedes eyes open- and this is the sequence that we have used. It is however possible to go beyond the alpha blocking response to a more general study of the electrophysiological signatures of eyes open and eyes closed. This would require randomization and will be studied in future work in which we perform an intervention between eyes closed and eyes open, alternatively.
The results here obtained can be of interest to resting state (sleep, awake), task-based and pathological conditions, for example in epileptic seizures. In a forthcoming study, we show that the wiring cost increases dramatically in the ictal period compared to the pre-ictal period.
This work is a step forward in understanding the electrophysiological differences between the eyes open and eyes closed resting state conditions. It uses a straight forward and easily replicable approach to investigate the electrophysiology of baseline conditions in terms of energy efficiency. Furthermore, we introduce the method of persistent homology from algebraic topology to study network connectivity dynamics free of the threshold selection problem. The minimization of the wiring cost for functional connectivity networks acting over networks of intracranial electrodes provides a new avenue for understanding the electrophysiology of resting state.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Change history
• ### 14 March 2018
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.
## References
1. 1.
Schneider, F. et al. The resting brain and our self: self-relatedness modulates resting state neural activity in cortical midline structures. Neuroscience 157, 120–131 (2008).
2. 2.
Northoff, G., Duncan, N. W. & Hayes, D. J. The brain and its resting state activity–experimental and methodological implications. Progress in Neurobiology 92, 593–600 (2010).
3. 3.
Maandag, N. J. et al. Energetics of neuronal signaling and fmri activity. Proceedings of the National Academy of Sciences 104, 20546–20551 (2007).
4. 4.
Greicius, M. D. & Menon, V. Default-mode activity during a passive sensory task: uncoupled from deactivation but impacting activation. Journal of Cognitive Neuroscience 16, 1484–1492 (2004).
5. 5.
Llinás, R. R. The intrinsic electrophysiological properties of mammalian neurons: insights into central nervous system function. Science (New York, N.Y.) 242, 1654–1664 (1988).
6. 6.
Biswal, B., Yetkin, F. Z., Haughton, V. M. & Hyde, J. S. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic resonance in medicine: official journal of the Society of Magnetic Resonance in Medicine/Society of Magnetic Resonance in Medicine 34, 537–541 PMID: 8524021 (1995).
7. 7.
Papo, D. Why should cognitive neuroscientists study the brain’s resting state? Frontiers in Human Neuroscience 7, 45 (2013).
8. 8.
Buckner, R. L. & Vincent, J. L. Unrest at rest: default activity and spontaneous network correlations. Neuroimage 37, 1091–1096 (2007).
9. 9.
Morcom, A. M. & Fletcher, P. C. Does the brain have a baseline? why we should be resisting a rest. Neuroimage 37, 1073–1082 (2007).
10. 10.
Wang, L. et al. Changes in hippocampal connectivity in the early stages of Alzheimer’s disease: evidence from resting state fmri. Neuroimage 31, 496–504 (2006).
11. 11.
Mantini, D., Perrucci, M. G., Del Gratta, C., Romani, G. L. & Corbetta, M. Electrophysiological signatures of resting state networks in the human brain. Proceedings of the National Academy of Sciences 104, 13170–13175 (2007).
12. 12.
Tracy, J. I. & Doucet, G. E. Resting-state functional connectivity in epilepsy: growing relevance for clinical decision making. Current Opinion in Neurology 28, 158–165 (2015).
13. 13.
Sokoloff, L., Mangold, R., Wechsler, R. L., Kennedy, C. & Kety, S. S. The effect of mental arithmetic on cerebral circulation and metabolism. Journal of Clinical Investigation 34, 1101 (1955).
14. 14.
Van Den Heuvel, M. P. & Pol, H. E. H. Exploring the brain network: a review on resting-state fmri functional connectivity. European Neuropsychopharmacology 20, 519–534 (2010).
15. 15.
Musso, F., Brinkmeyer, J., Mobascher, A., Warbrick, T. & Winterer, G. Spontaneous brain activity and eeg microstates. a novel eeg/fmri analysis approach to explore resting-state networks. Neuroimage 52, 1149–1161 (2010).
16. 16.
Patriat, R. et al. The effect of resting condition on resting-state fmri reliability and consistency: a comparison between resting with eyes open, closed, and fixated. Neuroimage 78, 463–473 (2013).
17. 17.
Yan, C. et al. Spontaneous brain activity in the default mode network is sensitive to different resting-state conditions with limited cognitive load. PloS One 4, e5743 (2009).
18. 18.
Tan, B., Kong, X., Yang, P., Jin, Z. & Li, L. The difference of brain functional connectivity between eyes-closed and eyes-open using graph theoretical analysis. Computational and Mathematical Methods in Medicine (2013).
19. 19.
Barry, R. J., Clarke, A. R., Johnstone, S. J., Magee, C. A. & Rushby, J. A. Eeg differences between eyes-closed and eyes-open resting conditions. Clinical Neurophysiology 118, 2765–2773 (2007).
20. 20.
Niedermeyer, E. & da Silva, F. L. Electroencephalography: basic principles, clinical applications, and related fields (Lippincott Williams & Wilkins, 2005).
21. 21.
Geller, A. S. et al. Eye closure causes widespread low-frequency power increase and focal gamma attenuation in the human electrocorticogram. Clinical Neurophysiology 125, 1764–1773 (2014).
22. 22.
Freeman, W. J. & Zhai, J. Simulated power spectral density (psd) of background electrocorticogram (ecog). Cognitive Neurodynamics 3, 97–103 (2009).
23. 23.
Fukushima, M., Chao, Z. C. & Fujii, N. Studying brain functions with mesoscopic measurements: advances in electrocorticography for non-human primates. Current Opinion in Neurobiology 32, 124–131 (2015).
24. 24.
Groppe, D. M. et al. iELVis: an open source MATLAB toolbox for localizing and visualizing human intracranial electrode data. J. Neurosci. Methods 281, 40–48 (2017).
25. 25.
Cohen, M. X. Analyzing neural time series data: theory and practice (MIT Press, 2014).
26. 26.
Stam, C. J., Nolte, G. & Daffertshofer, A. Phase lag index: assessment of functional connectivity from multi channel eeg and meg with diminished bias from common sources. Human Brain Mapping 28, 1178–1193 (2007).
27. 27.
Nolte, G. et al. Identifying true brain interaction from eeg data using the imaginary part of coherency. Clinical Neurophysiology 115, 2292–2307 (2004).
28. 28.
Nolte, G. et al. Robustly estimating the flow direction of information in complex physical systems. Physical Review Letters 100, 234101 (2008).
29. 29.
Vinck, M., Oostenveld, R., van Wingerden, M., Battaglia, F. & Pennartz, C. M. An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias. Neuroimage 55, 1548–1565 (2011).
30. 30.
Peraza, L. R., Asghar, A. U., Green, G. & Halliday, D. M. Volume conduction effects in brain network inference from electroencephalographic recordings using phase lag index. Journal of Neuroscience Methods 207, 189–199 (2012).
31. 31.
Mormann, F., Lehnertz, K., David, P. & Elger, C. E. Mean phase coherence as a measure for phase synchronization and its application to the eeg of epilepsy patients. Physica D: Nonlinear Phenomena 144, 358–369 (2000).
32. 32.
Toppi, J. et al. How the statistical validation of functional connectivity patterns can prevent erroneous definition of small-world properties of a brain connectivity network. Computational and Mathematical Methods in Medicine (2012).
33. 33.
Fallani, F. D. V., Richiardi, J., Chavez, M. & Achard, S. Graph analysis of functional brain networks: practical issues in translational neuroscience. Phil. Trans. R. Soc. B 369, 20130521 (2014).
34. 34.
Papo, D., Buldú, J. M., Boccaletti, S. & Bullmore, E. T. Complex network theory and the brain. Phil. Trans. R. Soc. B 369, 20130520 (2014).
35. 35.
Munkres, J. R. Elements of algebraic topology, vol. 2 (Addison-Wesley Menlo Park, 1984).
36. 36.
Dabaghian, Y., Brandt, V. L. & Frank, L. M. Reconceiving the hippocampal map as a topological template. Elife 3, e03476 (2014).
37. 37.
Dotko, P. et al. Topological analysis of the connectome of digital reconstructions of neural microcircuits. arXiv preprint arXiv:1601.01580 (2016).
38. 38.
Ramón y Cajal, S. Histology of the nervous system of man and vertebrates, vol. 1 (Oxford University Press, USA, 1995).
39. 39.
Fornito, A., Zalesky, A. & Bullmore, E. Fundamentals of brain network analysis (Academic Press, 2016).
40. 40.
Stam, C., Tavy, D. & Keunen, R. Quantification of alpha rhythm desynchronization using the acceleration spectrum entropy of the eeg. Clinical EEG and Neuroscience 24, 104–109 (1993).
41. 41.
Schomer, D. L. & Da Silva, F. L. Niedermeyer’s electroencephalography: basic principles, clinical applications, and related fields (Lippincott Williams & Wilkins, 2012).
## Acknowledgements
We acknowledge the support of the Bial Foundation, grant number #20614.
## Author information
### Affiliations
1. #### The Hospital for Sick Children, Neurosciences and Mental Health program, Toronto, Canada
• Jaime Gómez-Ramírez
• , Diego Mateos
• & José Luis Pérez Velázquez
2. #### Concordia University, Montreal, Canada
• Shelagh Freedman
3. #### Toronto Western Hospital, Krembil Research Institute, Toronto, Canada
• Taufik A. Valiante
### Contributions
J.G.R. and S.F. wrote the main manuscript. J.G.R. analyzed data and conceived the wiring cost model. S.F. performed the experiments. D.M. prepared Figures 1 and 2 and analyzed electrophysiolgical data. J.L.P.V. supervised the project and and built initial wiring cost models. T.V. provided access to the clinical population and analyzed electrophysiolgical data.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Jaime Gómez-Ramírez. | 2019-02-17 12:35:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5854822993278503, "perplexity": 2105.6758819311904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481992.39/warc/CC-MAIN-20190217111746-20190217133746-00318.warc.gz"} |
https://mathematics.huji.ac.il/vocab/eventss?page=22 | 2019 Apr 04
4:00pm to 5:15pm
Ross 70
2019 Apr 10
# Logic Seminar - Yatir Halevi
11:00am to 1:00pm
## Location:
Ross 63
Type Definable Semigroups in Stable Structures
A semigroup is a set together with an associative binary operation. As opposed to stable groups, the model theory of stable semigroups is not so rich. One reason for that is their abundance.
We will review (and prove) some known results on type-definable semigroups in stable structures and offer some examples and counter-examples.
2019 Apr 08
# NT & AG Lunch: Michael Temkin, "Explicit Class Field Theory"
1:00pm to 2:00pm
## Location:
Faculty lounge, Math building
In a series of talks I will describe in the chronological order all cases where an explicit
construction of CFT is known:
0. The multiplicative group and Kronecker-Weber -- the case of Q.
1. Elliptic curves with complex multiplication and
Kronecker's Jugendraum -- the case of imaginary quadratic extensions.
2. Formal O-models of Lubin-Tate -- the local case.
3. Drinfeld's elliptic modules -- the function field case.
\infinity. Extending this to real quadratic fields and, more generally,
2019 May 06
# NT & AG Lunch: Michael Temkin "Elliptic curves with complex multiplication"
1:00pm to 2:00pm
## Location:
Faculty lounge, Math building
2019 Apr 01
# NT & AG Lunch: Ehud DeShalit "An overview of class field theory, III"
1:00pm to 2:00pm
## Location:
Faculty lounge, Math building
Class field theory classifies abelian extensions of local and global fields
in terms of groups constructed from the base. We shall survey the main results of class
field theory for number fields and function fields alike. The goal of these introductory lectures
is to prepare the ground for the study of explicit class field theory in the function field case,
via Drinfeld modules.
I will talk for the first 2 or 3 times.
2019 Mar 31
# Graduate student seminar: Yoel Grinshpon and Geva Yashfe
4:30pm to 7:00pm
## Location:
Junior faculty room
2019 Apr 04
# Special talk : Prof. Efim Zelmanov (UCSD) : Growth Functions
## Lecturer:
Prof. Efim Zelmanov (UCSD)
12:00pm to 1:00pm
## Location:
Ross 70
We will discuss growth functions of algebras and monoids.
2019 Mar 26
# Dynamics Seminar: Nattalie Tamam "Diagonalizable groups with non-obvious divergent trajectories"
12:00pm to 1:00pm
## Location:
Manchester faculty club
Singular vectors are the ones for which Dirichlet’s theorem can be infinitely improved. For example, any rational vector is singular. The sequence of approximations for any rational vector q is 'obvious'; the tail of this sequence contains only q. In dimension one, the rational numbers are the only singulars. However, in higher dimensions there are additional singular vectors. By Dani's correspondence, the singular vectors are related to divergent trajectories in Homogeneous dynamical systems. A corresponding 'obvious' divergent trajectories can also be defined.
2019 Mar 28
# Basic Notions: Benjamin Weiss (HUJI) "Groups with property T have and "cost" of equivalence relations"
4:00pm to 5:15pm
## Location:
Ross 70
The cost of a measure-preserving equivalence relation is a quantitative measure of its complexity. I will
explain what the cost is and then discuss a recent result of Tom Hutchcroft and Gabor Pete in which they construct,
for any group with property T, a free ergodic measure preserving action with cost 1.
2019 Mar 25
# NT & AG Lunch: Ehud DeShalit "An overview of class field theory, II"
1:00pm to 2:00pm
## Location:
Faculty lounge, Math building
Class field theory classifies abelian extensions of local and global fields
in terms of groups constructed from the base. We shall survey the main results of class
field theory for number fields and function fields alike. The goal of these introductory lectures
is to prepare the ground for the study of explicit class field theory in the function field case,
via Drinfeld modules.
I will talk for the first 2 or 3 times.
2019 Mar 26
# T&G: Vivek Shende (Berkeley), Quantum topology from symplectic geometry
1:00pm to 2:30pm
## Location:
Room 110, Manchester Building, Jerusalem, Israel
The discovery of the Jones polynomial in the early 80's was the beginning of quantum topology'': the introduction of various invariants which, in one sense or another, arise from quantum mechanics and quantum field theory. There are many mathematical constructions of these invariants, but they all share the defect of being first defined in terms of a knot diagram, and only subsequently shown by calculation to be independent of the presentation. As a consequence, the geometric meaning has been somewhat opaque.
2019 May 15
# Logic Seminar - Shimon Garti
11:00am to 1:00pm
## Location:
Ross 63
On the cofinality of some classical cardinal characteristics.
We will try to prove two results about the possible cofinality of cardinal characteristics.
The first result is about the ultrafilter number, and this is a part of a joint work with Saharon Shelah.
The second is about Galvin's number, and this is a joint work with Yair Hayut, Haim Horowitz and Menachem Magidor.
2019 Mar 27
# Logic Seminar - Shlomo Eshel
11:00am to 1:00pm
## Location:
Ross 63
Uniform definability of types over finite sets
Uniform definability of types over finite sets (UDTFS) is a property of formulas which implies NIP and characterizes NIP in the level of theories (by Chernikov and Simon).
In this talk we will prove that if T is any theory with definable Skolem functions, then every dependent formula phi has UDTFS. This result can be seen as a translation of a result of Shay Moran and Amir Yehudayof in machine learning theory to the logical framework.
2019 Jun 12
# Logic Seminar - Moshe Illouz
11:00am to 1:00pm
## Location:
Ross 63
Categoricity relative to order and order stability
In this talk we will show a generalization of the notion of stability and categoricity relative to the order. One of the natural questions is whether categoricity implies stability, just like in the regular case. We will show that this is not true generally, by using a result of Pabion on peano arithmetic. We are also going to see some specific cases where categoricity relative to the order implies stability.
2019 Jun 05
# Logic Seminar - Oren Kalish
11:00am to 1:00pm
## Location:
Ross 63
Tight weakly o-minimal structures
We introduce a class of weakly o-minimal expansions of groups, called tight structures. We prove that the o-minimal completion of a tight structure is linearly bounded. | 2020-10-22 17:57:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6115289926528931, "perplexity": 2661.2509820192304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880014.26/warc/CC-MAIN-20201022170349-20201022200349-00539.warc.gz"} |
https://brendanyounger.com/math/gcd | ## Math visualizations
### How many iterations does gcd(x, y) take?
A theorem of Lamé, Dixon, and Heilbronn states that the average number of iterations of the classical GCD function is
$\frac{12~\mathrm{ln}(2)}{\pi^2} \mathrm{ln}(\mathrm{max}(x, y))$
and the maximum is given by
$\lceil \mathrm{ln}(N \sqrt{5}) / \mathrm{ln}((1 + \sqrt{5}) / 2)\rceil - 2$
• Show maximumStair step upper bound
• Show averageSmooth middle surface
• Show iterationsSpiky surface in the middle | 2021-09-19 02:33:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7643163204193115, "perplexity": 6389.143674655482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00416.warc.gz"} |
http://www.statemaster.com/encyclopedia/Orbital-maneuver | FACTOID # 6: Michigan is ranked 22nd in land area, but since 41.27% of the state is composed of water, it jumps to 11th place in total area.
Home Encyclopedia Statistics States A-Z Flags Maps FAQ About
WHAT'S NEW
SEARCH ALL
FACTS & STATISTICS Advanced view
Search encyclopedia, statistics and forums:
(* = Graphable)
Encyclopedia > Orbital maneuver
An orbital maneuver is a change from one orbit to another, accomplished by applying thrust. In deep space it is called deep-space maneuver (DSM). Two bodies with a slight difference in mass orbiting around a common barycenter. ... Thrust is a reaction force described quantitatively by Newtons Second and Third Laws. ...
## Impulsive maneuvers GA_googleFillSlot("encyclopedia_square");
An impulsive maneuver approximates a finite thrust maneuver by adding an instantaneous velocity change to an ephemeris record while maintaining the position. During the planning phase of most space or rocket missions, designers will first calculate orbital changes using impulsive maneuvers. This greatly reduces the complexity of finding the correct orbital transitions. The instantaneous changes in velocity are referred to as delta-v ($Deltamathbf{v},$), the total delta-v for all maneuvers required in the mission is called a delta-v budget. With a good approximation of the delta-v budget designers can estimate the fuel to payload requirements of the spacecraft. Using these approximations is most useful when finite thrusts are to be executed in short bursts. Finite maneuvers like these are possible with high thrust-to-weight propulsion systems, e.g. chemical rockets. However, even for long burns, impulsive maneuver approximations remain very accurate outside the Earth's atmosphere. General In general physics delta-v is simply the change in velocity. ... General In general physics delta-v is simply the change in velocity. ... Delta-v budget (or velocity change budget) is a term used in astrodynamics and aerospace industry for velocity change (or delta-v) requirements for the various propulsive tasks and orbital maneuvers over phases of the space mission. ... Delta-v budget (or velocity change budget) is a term used in astrodynamics and aerospace industry for velocity change (or delta-v) requirements for the various propulsive tasks and orbital maneuvers over phases of the space mission. ... Thrust is a reaction force described quantitatively by Newtons Second and Third Laws. ... A remote camera captures a close-up view of a Space Shuttle Main Engine during a test firing at the John C. Stennis Space Center in Hancock County, Mississippi Propulsion means to add speed or acceleration to an object, by an engine or other similar device. ... A remote camera captures a close_up view of a Space Shuttle Main Engine during a test firing at the John C. Stennis Space Center in Hancock County, Mississippi Spacecraft propulsion is used to change the velocity of spacecraft and artificial satellites, or in short, to provide delta_v. ...
## Non-impulsive maneuvers
Applying a low thrust over longer periods of time is referred to as non-impulsive maneuvers (even though any thrust can be said to produce an amount of impulse). They are less efficient as energy can be lost due to gravity drag. However those maneuvers can be the only option when efficient but low thrust-to-weight propulsion systems are used (e.g. ion engines). They are not possible for a launch. In classical mechanics, the impulse of a constant force is the product of the force and the time during which it acts. ... An ion engine test An ion thruster is a type of spacecraft propulsion that uses beams of ions for propulsion. ...
## Finite Burn Trajectories
For a few space missions, such as those including a space rendezvous, high fidelity models of the trajectories are required to meet the mission goals. Calculating a finite burn requires a detailed model of the spacecraft and its thrusters. The most important of details include: mass, center of mass, moment of inertia, thruster positions, thrust vectors, thrust curves, specific impulse, thrust centroid offsets, and fuel consumption. Space has been an interest for philosophers and scientists for much of human history. ... A space rendezvous between two spacecraft, often between a spacecraft and a space station, is an orbital maneuver where the two arrive at the same orbit, make the orbital velocities the same, and bring them together (an approach maneuver, taxiing maneuver); it may or may not include docking. ... The Space Shuttle Discovery as seen from the International Space Station. ... This article or section is in need of attention from an expert on the subject. ... In physics, the center of mass of a system of particles is a specific point at which, for many purposes, the systems mass behaves as if it were concentrated. ... Moment of inertia, also called mass moment of inertia and, sometimes, the angular mass, (SI units kg m², Former British units slug ft2), is the rotational analog of mass. ... Specific impulse (usually abbreviated Isp) is a way to describe the efficiency of rocket and jet engines. ... Centroid of a triangle In geometry, the centroid or barycenter of an object in -dimensional space is the intersection of all hyperplanes that divide into two parts of equal moment about the hyperplane. ...
Results from FactBites:
Orbital Successfully Completes Final Developmental Flight For U.S ... (848 words) Orbital was awarded an initial development contract in 2000 to meet the Navy's requirement for an affordable SSST system to simulate high-speed anti-ship cruise missiles for fleet training and weapon systems research, development, test and evaluation. Orbital is the only U.S. Department of Defense prime contractor to be both developing and operating ramjet-powered missile systems. Orbital also offers space-related technical services to government agencies and develops and builds satellite-based transportation management systems for public transit agencies and private vehicle fleet operators.
STS-99 Shuttle Abort Modes (1700 words) During the downrange phase, a pitch-around maneuver is initiated (the time depends in part on the time of a space shuttle main engine failure) to orient the orbiter/external tank configuration to a heads up attitude, pointing toward the launch site. In addition, excess orbital maneuvering system and reaction control system propellants are dumped by continuous orbital maneuvering system and reaction control system engine thrustings to improve the orbiter weight and center of gravity for the glide phase and landing. The orbital maneuvering system engines would be used to place the orbiter in a circular orbit.
More results at FactBites »
Share your thoughts, questions and commentary here | 2019-10-19 04:57:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5232687592506409, "perplexity": 2043.7812521011376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688826.38/warc/CC-MAIN-20191019040458-20191019063958-00335.warc.gz"} |
https://assignment-daixie.com/tag/phys3035%E4%BB%A3%E5%86%99/ | # 电动力学和光学|PHYS3035/PHYS3935 Electrodynamics and Optics代写 sydney代写
0
The students are cultivated as high-quality innovative professionals. They are well developed in the
aspects of morality, intelligence, physique, and aesthetic. They possess the basic theoretical knowledge in
the fields of Materials Science and Physics,and are well trained in the applied research, technological
development, and engineering. They can research physical properties and laws of materials at the level of
molecule, atom and electron,and apply to develop new materials preparative technology,advanced
function materials and equipments. The graduates are expected to work in various industries, universities,
and research institutes in the fields of over function materials and the related (such as energy engineering,
electric power, etc.), engaging in product design, technological development, scientific research, and
management, and playing important and leading roles in the fields with international competitiveness and
innovation.
$$\varrho_{-}^{\prime}=\frac{\varrho}{\sqrt{1-\left(u-u_{\mathrm{e}}\right)^{2} / c^{2}}}=\varrho\left(1+\frac{1}{2 c^{2}}\left(u^{2}-2 u u_{\mathrm{e}}\right)\right),$$
and for the lower section of windings, just above (1),
$$\varrho_{-}^{\prime}=\frac{\varrho}{\sqrt{1-\left(u+u_{e}\right)^{2} / c^{2}}}=\varrho\left(1+\frac{1}{2 c^{2}}\left(u^{2}+2 u u_{e}\right)\right) \text {. }$$
For the upper section of the coil containing $N$ wires , combiningand yields an excess of positive charges:
$$\Delta Q_{+}^{\prime}=N \Delta q_{+}^{\prime}=\frac{N q u_{\mathrm{e}} u}{c^{2}} .$$
For the lower section of windings (1), the combination of yields an excess of negative charges:
$$\Delta Q_{-}^{\prime}=N \Delta q_{-}^{\prime}=-\frac{N q u_{\mathrm{e}} u}{c^{2}}$$
## PHYS3035/PHYS3935 COURSE NOTES :
Derivation: Each end of the bar (or coil) produces a flux density $B_{t}=$ $\frac{\Phi}{4 \pi R^{2}}$ at the point of observation according. Only the difference of the two values is important, so that in the first principal orientation
$$B=\frac{\phi}{4 \pi}\left(\frac{1}{(R-l / 2)^{2}}-\frac{1}{(R+1 / 2)^{2}}\right)$$
When the distance $R$ is sufficiently large compared to the length $I$ of the bar or coil, we can neglect $P^{2}$ relative to $R^{2}$, and for the magnitude of $B$, we then obtain
$$B=\frac{1}{2 \pi} \frac{\Phi l}{R^{3}}=\frac{\mu_{0}}{2 \pi} \frac{m}{R^{3}}$$
Correspondingly, for the second principal orientation, we find
$$B=\frac{\mu_{0}}{4 \pi} \frac{m^{3}}{R^{3}}$$
# 电动力学和光学|PHYS3035/PHYS3935 Electrodynamics and Optics代写 sydney代写
0
The students are cultivated as high-quality innovative professionals. They are well developed in the
aspects of morality, intelligence, physique, and aesthetic. They possess the basic theoretical knowledge in
the fields of Materials Science and Physics,and are well trained in the applied research, technological
development, and engineering.
$$G(q, \omega)=\frac{1}{q^{2}-q_{0}^{2}}\left(U-\frac{q q}{q_{0}^{2}}\right)$$
which sometimes is useful. Like the Huygens propagator, the dyadic Green function is also singular for $q=q_{0}$.
The plane-wave representation of the dyadic Green function for the magnetic field is obtained by inserting the Fourier integral transformation
$$\boldsymbol{G}{\mathrm{M}}(\boldsymbol{R} ; \omega)=(2 \pi)^{-3} \int{-\infty}^{\infty} \boldsymbol{G}_{\mathrm{M}}(q, \omega) \mathrm{e}^{i \boldsymbol{q} \cdot \boldsymbol{R}} \mathrm{d}^{3} q$$
Calculations analogous to those carried out to determine $\boldsymbol{G}(\boldsymbol{R} ; \omega)$ result in
$$\boldsymbol{G}{\mathrm{M}}(\boldsymbol{q}, \omega)=\frac{q}{q{0}} \frac{1}{q^{2}-q_{0}^{2}} \boldsymbol{U} \times \hat{\boldsymbol{q}}$$
an expression which also is singular for $q=q_{0}$. The folding theorem in $\boldsymbol{r}$-space gives when applied to
$$\boldsymbol{B}(q, \omega)=\frac{i \mu_{0} \omega}{c_{0}} \boldsymbol{G}{\mathrm{M}}(q, \omega) \cdot J(q, \omega)$$ A current density parallel to the $q$-direction does not give rise to a magnetic field, and the magnetic field generated by a current density perpendicular to $\hat{q}$ always lies in a plane perpendicular to $\hat{q}$. To prove the above-mentioned claims we expand the unit dyad after a triple set of orthogonal unit vectors $\hat{q}{\perp}^{(1)}, \hat{q}{\perp}^{(2)}$, and $\hat{q}$, with $\hat{\boldsymbol{q}}{1}^{(1)} \times \hat{\boldsymbol{q}}{2}^{(2)}=\hat{\boldsymbol{q}}$, i.e., $$\boldsymbol{U}=\hat{\boldsymbol{q}}{\perp}^{(1)} \hat{q}{\perp}^{(1)}+\hat{\boldsymbol{q}}{\perp}^{(2)} \hat{\boldsymbol{q}}_{\perp}^{(2)}+\hat{q} \hat{q}$$
## PHYS3035/PHYS3935 COURSE NOTES :
In the mixed representation, the electric field, $\boldsymbol{E}(z ; q |, \omega)$, from a current density distribution, $\boldsymbol{J}(z: q |, \omega)$, is outside the distribution given by
$$E\left(z ; q_{|}, \omega\right)=i \mu_{0} \omega \int_{-\infty}^{\infty} G\left(z-z^{\prime} ; q_{1}, \omega\right) \cdot J\left(z^{\prime} ; q_{|}, \omega\right) \mathrm{d} z^{\prime}$$
where
$$G\left(Z ; q_{|}, \omega\right)=\Gamma\left(s g n Z ; q_{1}, \omega\right) \mathrm{e}^{i x_{\perp}|Z|}$$
with
$$\Gamma\left(\operatorname{sgn} Z: q_{|}, \omega\right)=\frac{i}{2 q_{0}^{2} \kappa \perp}\left[q_{0}^{2} U-q_{|} q_{|}-\kappa_{\perp}^{2} \hat{z} \bar{z}-\left(q_{|}^{\hat{z}}+\hat{z} q_{|}\right) \kappa_{\perp} \operatorname{sgn} Z\right]$$
Let us now assume that the source current density is nonvanishing only on a plane sheet located at $z^{\prime}=z_{0}$. Thus,
$$J\left(z^{\prime} ; q_{|}, \omega\right)=J_{0}\left(q_{|}, \omega\right) \delta\left(z^{\prime}-z_{0}\right)$$ | 2023-02-08 00:56:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6608914732933044, "perplexity": 1442.7971880786583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00084.warc.gz"} |
https://solvedlib.com/n/nuclear-protein-howsver-through-4l-you-are-studying-protein,7705342 | # Nuclear protein: Howsver; through [4L You are studying protein that is supposed to be = What are two immunohistochemistry You
##### The length of a rectangle is three centimeters greater than the width
The length of a rectangle is three centimeters greater than the width. The area is 108 square centimeters. Find the length and width of the rectangle....
##### Hello I am working on a Java Project, and my debug keeps saying i dont have...
Hello I am working on a Java Project, and my debug keeps saying i dont have a main class defined... Im guessing i need to add a public static void main(String[] args) { somewhere in the program i just am not sure where! Please help me finish this!! Program so far: package grocerystoresimulation; im...
##### A "C" taxpayer , XYZ corporation had the following results for 2019 sales 10000 COGS 6000...
A "C" taxpayer , XYZ corporation had the following results for 2019 sales 10000 COGS 6000 municipal bond interest 1000 other deductible expenses 2000 net capital losses 5000 15) Taxable income is a. 2000 b. (3000) c. 4000 d. (2000) 16) 2018 BOOK income for XYZ Corporation is a. 2000 b. (3000...
##### HW 6: Problem 7Previous ProblemProblem ListNext Problempoints) Florescent lighbulbs have Iltetimes Inat are nrmally disinbuted wilh Mean Ol years and = slaidald devlalion - years. Tlte Ijure below shows le distnbution - Iltetimes lluorescent lighlbulbys . Calculale Ihe shaded area under the Curve: Express your answer decimal torm with least two decima place accuracy:ASWCIMletinu !Fmondgcunt LidttulbePrevicw My AnswersSubinite AnswcrYou have attempted this problem times; You have attempts remaini
HW 6: Problem 7 Previous Problem Problem List Next Problem points) Florescent lighbulbs have Iltetimes Inat are nrmally disinbuted wilh Mean Ol years and = slaidald devlalion - years. Tlte Ijure below shows le distnbution - Iltetimes lluorescent lighlbulbys . Calculale Ihe shaded area under the Curv...
##### A car travels around a horizontal bend of radius 177 m at a constant speed. (a)...
A car travels around a horizontal bend of radius 177 m at a constant speed. (a) If the coefficient of the static friction between the road and car tyres is us = 0.6 then what is the maximum speed that the car can negotiate the bend without sliding from the road? m/s Fil (b) What is the magnitude of ...
##### Compute the determinant of[1 2 3 4 ][0 2 9 7 ][03 12 ][0 0 0 4 ]
Compute the determinant of [1 2 3 4 ] [0 2 9 7 ] [0 3 12 ] [0 0 0 4 ]...
##### A hockey team has 2 goalies, 9 forwards, and 6 defencemen on theteam. Determine how many ways the coach can choose one goalie,three forwards and two defencemen for the starting line.
A hockey team has 2 goalies, 9 forwards, and 6 defencemen on the team. Determine how many ways the coach can choose one goalie, three forwards and two defencemen for the starting line....
##### II. Synthesize both from benzene, respectively. Show the reagents and the specific mechanism: COOH -CH CH,CH;Br
II. Synthesize both from benzene, respectively. Show the reagents and the specific mechanism: COOH -CH CH,CH; Br...
##### Following are the merchandising transactions of Dollar Store. Nov. 1 Dollar Store purchases merchandise for $2,300... Following are the merchandising transactions of Dollar Store. Nov. 1 Dollar Store purchases merchandise for$2,300 on terms of 2/5, n/30, FOB shipping point, invoice dated November 1. 5 Dollar Store pays cash for the November 1 purchase. 7 Dollar Store discovers and returns $200 of de... 1 answer ##### On June 30 (the end of the period), Brown Company has a credit balance of$2,100 in Allowance for...
On June 30 (the end of the period), Brown Company has a credit balance of $2,100 in Allowance for Doubtful Accounts. An evaluation of accounts receivable indicates that the proper balance should be$31,265. Required: Journalize the appropriate adjusting entry. Refer to the Chart of Accounts for ...
##### Cnageorncty homuolk Matlu irncountorodone othe . Qeslions Ihe queshon usked tr prool Itin ba dlamMend coecby concludud Vibl onjughleaalud prool Vahat %n mlorniallatt would &74 NMia corrnuchma mmalS-tr 0. S1VAScTr umilarlo Asca82
cna georncty homuolk Matlu irncountorod one othe . Qeslions Ihe queshon usked tr prool It in ba dlam Mend coecby concludud Vibl onjugh leaalud prool Vahat %n mlorniallatt would &74 NMia corrnuchma mmal S-tr 0. S1 VAScTr umilarlo Asca 82...
##### Physics question show work I1. Short Answer Questlons (40 pts. total) e a head-on colision. During...
Physics question show work I1. Short Answer Questlons (40 pts. total) e a head-on colision. During the 11. Head-on Collision. (S pts)Alarge truck and ac collision which vehkcte experiences the greatest magnitude rge truck and a compact car hav car ) both the same d) it depends which is moving faste...
##### Need help please explain Why should health plans pursue prevention strategies? 2.
need help please explain Why should health plans pursue prevention strategies? 2....
##### Suppose your favorite radio station is at 103.7 on the FM dial that'8 frequency of 103.7 megahertz (MHz): What is the wavelength of the signal from this radio station? 0.82 m5.12m1.56 m2.89 m
Suppose your favorite radio station is at 103.7 on the FM dial that'8 frequency of 103.7 megahertz (MHz): What is the wavelength of the signal from this radio station? 0.82 m 5.12m 1.56 m 2.89 m...
##### A 100 g aluminum calorimeter contains 250 g of water. The two substances are in thermal...
A 100 g aluminum calorimeter contains 250 g of water. The two substances are in thermal equilibrium at 10°C. Two metallic blocks are placed in the water. One is a 50 g piece of copper at 76°C. The other sample has a mass of 73 g and is originally at a temperature of 100°C. The entire sys...
##### Q2. (15%) The structure of a molecule is shown below:CODetermine the point group symmetry for this molecule_ Determine its degrees of freedom; and the number of vibrational modes expected for this molecule_ You have to determine IR active CO vibrational modes for this molecule. Using all the symmetry operations present in the point gTOup, determine the reducible representation for CO groups in the molecule. You must show all your work:
Q2. (15%) The structure of a molecule is shown below: CO Determine the point group symmetry for this molecule_ Determine its degrees of freedom; and the number of vibrational modes expected for this molecule_ You have to determine IR active CO vibrational modes for this molecule. Using all the symme...
##### I have some binary classifcation problem with ~30 features and 200 observations. I've used L1 (Lasoo)...
I have some binary classifcation problem with ~30 features and 200 observations. I've used L1 (Lasoo) and L2 (Ridge) to run logistic regression and accuracy, prescion and confusion matrix on both training and test data. From there, have found the best hyper parameter from a selection and have th...
##### What is the implicit rate of the discount?
There is a condition, I buy the good and the price is P (not improtant)suppose the discount rate condition is r/A, n/Bthat means if we pay the money in A days, we have r discount (eg: r=2%)if we pay in B days, there is no discountif we pay after B days, we broke the contract.I want to asking the imp...
##### Doctor's Diagnosis Self-Nasal Swab Positive Negative Positive 74 Negative 120 Column Total 76 124Row Totals78 122 200
Doctor's Diagnosis Self-Nasal Swab Positive Negative Positive 74 Negative 120 Column Total 76 124 Row Totals 78 122 200...
##### 1.41 Below position versus time graph of Mas5 On spring. What can voU say about the velocity force , and acceleralion the time indicaled by the dottedl line.Velocity:Positive. Negative Or Zcro"Force:Positive, Negalive. HurtAccelerativn: Pesitive, Negalive, Or zeTOL35) paricle simple hanonic ntion Jong Ihe r axis The amplitudle of the motion PomL motion kinetic energy and its potential energy (measured with U = at_ is U =; When Is a[ 4 Smar. the kinetic and potential energies are:4) K 5 J a
1.41 Below position versus time graph of Mas5 On spring. What can voU say about the velocity force , and acceleralion the time indicaled by the dottedl line. Velocity: Positive. Negative Or Zcro" Force: Positive, Negalive. Hurt Accelerativn: Pesitive, Negalive, Or zeTO L35) paricle simple hano...
##### Dlscussion ' Questions When base like sodium hyaroxide spllled on Ihe skln; ihe skin initially feels siippery; Whi Is AiSOru8R WrYrS Xhis an indicalion of the corrosive nature of bajes? ble I IS < lippery reccuse Scdium hyartxI(L; shin TF IS esters &F Corrdcive reacts Wlth TU tart brcuuse Ittuins (Kin into 604 nhich Corre LXc IF. Does your prepared soap contain excess sodium hydroxide? Why or why nol?Describe how soap could be Used as an experimental test to delermine if a water samp
Dlscussion ' Questions When base like sodium hyaroxide spllled on Ihe skln; ihe skin initially feels siippery; Whi Is AiSOru8R WrYrS Xhis an indicalion of the corrosive nature of bajes? ble I IS < lippery reccuse Scdium hyartxI(L; shin TF IS esters &F Corrdcive reacts Wlth TU tart brcuus...
##### Set of vertices V = [a, b, c,d, e] and in alphabetical order of verticesneighborhood matrixThe subgraph produced by the W= (a, b, c) vertices of the undirected graph G, which is What is the difference in the degrees of odd degree vertices and even degree vertices? 4.2
Set of vertices V = [a, b, c,d, e] and in alphabetical order of vertices neighborhood matrix The subgraph produced by the W= (a, b, c) vertices of the undirected graph G, which is What is the difference in the degrees of odd degree vertices and even degree vertices? 4.2...
##### [14 marks] A cruise ship has docked in Fremantle harbour∗. Ofthe500peopleonboard,10have been diagnosed with the COVID-19...
[14 marks] A cruise ship has docked in Fremantle harbour∗. Ofthe500peopleonboard,10have been diagnosed with the COVID-19 virus. The spread of the virus to other people on the ship is governed by the initial value problem dI dt= (1/500)I(500−I) with I(0) = 10 where I(t) is the number of ...
##### Network Communications has total assets of $1,430,000 and current assets of$645,000. It turns over its...
Network Communications has total assets of $1,430,000 and current assets of$645,000. It turns over its fixed assets two times a year. It has \$376,000 of debt. Its return on sales is 5 percent. What is its return on stockholders’ equity? (Do not round intermediate calc...
##### A characteristic that is displayed according to a species sex is referred to as a __________________________...
A characteristic that is displayed according to a species sex is referred to as a __________________________ (two words) character. Note: The answer is not: Sex Linked or Sex Limited. ...
##### Based on the following information, what is the AH for this reaction: NO(g) + O3(g) -...
Based on the following information, what is the AH for this reaction: NO(g) + O3(g) - NO2(g) + O2(g) 302(g) + 3NO2(g) - 3NO(g) + 303(g) AH = 596.7 kJ/mol 198.9 kJ/mol -1790.1 kJ/mol -198.9 kJ/mol 1790.1 kJ/mol...
##### Draw the structure of the product(s) of the reaction between (S)-3-phenylbutan-2-one and...
Use wedge and dash bonds to indicate stereochemistry.If more than one stereoisomer is formed, draw them all...
##### DisagreeQUESTION 2polnt21 Urci (NH-CONHz) Is an impontant fctlilizer It IS made by lhc Icaction of ammoniz 25 follows: ZNHi(g) cOzlg) = NH,CONH-(aq) Hzo(a) Calculatc thc approximate equlibrum cOuslant for this Ieacuon J1 276.85 %€2 2 The sulphu tioxide conlent of cenent 4 inpottant factor In deteninug the perfonance of cement_ One Way of preducing thif compound 4 followz sOz(9) Oz(9) SO_(g)11 Deteminc thc Value of 4G"at +23 "CJita ht LTDE conlplete calculationg and cxpialn "Qul
Disagree QUESTION 2 polnt 21 Urci (NH-CONHz) Is an impontant fctlilizer It IS made by lhc Icaction of ammoniz 25 follows: ZNHi(g) cOzlg) = NH,CONH-(aq) Hzo(a) Calculatc thc approximate equlibrum cOuslant for this Ieacuon J1 276.85 %€ 2 2 The sulphu tioxide conlent of cenent 4 inpottant factor ...
##### A) Find the center-radius form of the equation of the circle with center (0,0) and radius 5_ b) Graph the circle.a) The center-radius form of the equation of the circle is(Type an equation.)b) Use the graphing tool to graph the circle.Click to enlarge graph
a) Find the center-radius form of the equation of the circle with center (0,0) and radius 5_ b) Graph the circle. a) The center-radius form of the equation of the circle is (Type an equation.) b) Use the graphing tool to graph the circle. Click to enlarge graph...
##### Dr Simpson; who is interested employee sanstaction and productivity: measured ithe number of units produced by employees at plant before and after company wide pay raise occurred . She hypothesized that production would be higher after the raise compared tO before the raise: What statistical procedure should she use?
Dr Simpson; who is interested employee sanstaction and productivity: measured ithe number of units produced by employees at plant before and after company wide pay raise occurred . She hypothesized that production would be higher after the raise compared tO before the raise: What statistical procedu...
##### A previously healthy 19-year-old male is brought to the emergency department by his girlfriend after briefly...
A previously healthy 19-year-old male is brought to the emergency department by his girlfriend after briefly losing consciousness. He passed out while moving furniture into her apartment. She said that he was unresponsive for a minute but regained consciousness and was not confused. The patient did ...
-- 0.027307-- | 2022-11-30 16:35:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4508989751338959, "perplexity": 10506.652332898724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00408.warc.gz"} |
https://www.deepdyve.com/lp/ou_press/stability-of-hierarchical-triples-i-dependence-on-inner-eccentricity-5AzRYWRtVv | # Stability of hierarchical triples – I. Dependence on inner eccentricity and inclination
Stability of hierarchical triples – I. Dependence on inner eccentricity and inclination Abstract In simulations it is often important to decide if a given hierarchical triple star system is stable over an extended period of time. We introduce a stability criterion, modified from earlier work, where we use the closest approach ratio Q of the third star to the inner binary centre of mass in their initial osculating orbits. We study by numerical integration the orbits of over 1000 000 triple systems of the fixed masses and outer eccentricities eout, but varying inner eccentricities ein and inclinations i. 12 primary combinations of masses have been tried, representing the range encountered in stellar systems. The definition of the instability is either the escape of one of the bodies, or the exchange of the members between the inner and outer systems. An analytical approximation is derived using the energy change in a single close encounter between the inner and outer systems, assuming that the orbital phases in subsequent encounters occur randomly. The theory provides a fairly good description of the typical Qst, the smallest Q value that allows the system to be stable over N = 10 000 revolutions of the initial outer orbit. The final stability limit formula is Qst = 101/3A[( f g)2/(1 − eout)]1/6, where the coefficient A ∼ 1 should be used in N-body experiments, and A = 2.4 when the absolute long-term stability is required. The functions f (ein, cos i) and g(m1, m2, m3) are derived in the paper. At the limit of ein = i = m3 = 0, f g = 1. methods: numerical, celestial mechanics 1 INTRODUCTION A hierarchical triple system consists of a binary and a third body in orbit around the centre of mass of the binary. In order to have a clear separation of the inner and outer orbits, the pericentre of the outer orbit should be greater than the major axis of the binary. A stability criterion of hierarchical triples is required in understanding many systems arising in the Universe, for example in understanding the longevity of the Earth–Moon–Sun system (Newton 1687; Clairaut 1752). The stability of our planetary system falls in the same category of problems, initially with the question of the motion of the giant planets Jupiter and Saturn (Euler 1752; Lagrange 1766, 1778; Laplace 1775, 1787). In recent years similar questions have arisen in connection with multiple exoplanets in other stellar systems (see e.g. Funk et al. 2009). With enough information of the initial data, it is always possible to carry out numerical orbit integrations to determine the degree of stability, at least over some period of time (Laskar 2013). However, it is often the case that only some orbital elements are known, and even they have associated measuring uncertainties. Then the phase space of the unknown elements has so many dimensions that its total coverage may be prohibitive. Another case where a clear-cut stability criterion would be useful is in the computer simulations of star clusters (Aarseth 1973, 2003; Heggie & Hut 2003). In this case all the elements of a triple subsystem are known, but its integration takes up many resources and will slow the overall cluster simulation. It is more efficient to leave stable triples aside from the main calculation until such time that encounters with other stars or other reasons cause interesting orbital evolution in this subsystem. In this work, we are mainly concerned with the latter situation. Harrington (1972) realized that the key quantity in assessing the stability of a three-body system is the ratio of the pericentre distance of the outer orbit over the inner orbit semimajor axis. He found that its value, which we call Q, has to be at least 3.5 for direct orbits and 2.75 for retrograde orbits, the exact value depending on the masses of the three bodies. In this paper our main variable to be determined is the minimum value of Q for stability, called Qst. We start by using a sufficiently large value Q and determine that the system is stable. Then the value of Q is lowered until an unstable system is found. Going to even smaller Q, the system is likely to be unstable, but even if it is not, we use the Q-value first encountered in the search for the unstable system to define Qst. For example, if we find that the orbit with Q = 5.0 is stable and that the next orbit with Q = 4.9 is unstable, we define Qst = 5.0, even if the system is stable again, say, in the range Q = 4.0–4.8. The primary orbital elements to be sampled in this paper are the cosine of the inclination cos i and the eccentricity of the inner orbit ein, and to a lesser extent the masses of the three bodies, and the outer eccentricity eout. The finite intervals of sampling lead to some scatter in Qst, but the scatter is mainly due to the postulate that the remaining orbital elements arise at random. The scatter in Qst means that our result is not an exact surface but a layer of finite thickness in Qst. Dvorak (1997) and Pilat-Lohinger, Funk & Dvorak (2003) call this layer mixed region. The upper surface of the layer is used when we answer the question whether the system is definitely stable under the chosen revolution number N. The lower surface of the layer tells us when the system is definitely unstable, while a surface between them gives a stability criterion that could be useful e.g. in N-body simulations. In later papers we will discuss how the boundary of the layer moves with varying N. In this paper we use N = 10 000 revolutions of the outer orbit. 2 ORBIT INTEGRATIONS For simplicity, we start with fixed mass values of m1 = 1.5, m2 = 0.5, and m3 = 0.5 units (solar mass, for example), where m1 and m2 form the inner binary of unit semimajor axis (e.g. 1 au) and with eccentricity ein, while the body with mass m3 is in an orbit of eccentricity eout = 0.5 about the centre of mass of the binary. These values have no particular significance. In later papers we will study how the ideas of this paper are extended to other values of masses and eout. In Section 4 we start a brief exploration to this direction. Most of the orbit integrations are carried out by a symplectic integrator that has the property of conserving total energy, using the Bulirsch–Stoer method (Mikkola 1997). For comparison, we have integrated a smaller number of cases by a three-body regularization code (Aarseth & Zare 1974) and found that the overall result is not integrator dependent. The Q value, the ratio of the initial pericentre distance of the outer orbit to the inner semimajor axis, is first set so high that the system is stable over the N = 10 000 revolutions. Then for the next orbits the initial parameters are varied, keeping eout constant while the major axis of the outer orbit becomes smaller in steps of equal intervals. Therefore also Q gets smaller values in a stepwise manner, even though the steps are not exactly equal. The typical Q-step is 0.25 units. In later experiments (Figs 5 and 7) it is about 0.1 units. The integrations are continued down to a value of Q where the systems are always unstable. Since this value is not known in advance, in practise the smallest Q is set to about unity. In terms of integration time, the solutions are found very fast once we are below the mixed region. The process is repeated for 50 values (sometimes 100 values that we call the full resolution) of cos i equally spaced between −1 and +1, and a diagram of Q versus cos i is generated. In this diagram the Qst versus cos i line is identified, and is called the stability line. It is always wiggly, not only because of the finite step sizes, but mostly since we sample the mixed region with fixed values of the ‘less-essential’ orbital elements. These are the initial angles of the pericentres, mean anomalies, and the nodes. Unless otherwise stated below, their values are set to zero. The asterisk (*) in the plots refer strictly speaking to these fixed values. Then a new value of ein is selected and a new diagram is generated. In the first part of this paper we report the sample of 13 diagrams that cover the inner eccentricity ein-axis well. The values we use for ein are 0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.9, 0.95, and 0.99 (Figs 1– 3 give examples of these diagrams). Figure 1. View largeDownload slide Stability limit Q calculations displayed with Q (y-axis) as a function of cos i (x-axis). The asterisk (*) shows the first unstable system when the Q-value is reduced in steps from high towards low values. Our standard parameter values are m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. In this case the inner eccentricity ein = 0.1. The curves display the model outlined in the text. The lower curve is the always unstable limit (unstable below the line), the uppermost curve outlines the always stable region (higher up). The middle line is a cut of the middle of the mixed region and corresponds to A = 1.45. Upper and lower curves correspond to A = 1.75 and 1.15. Figure 1. View largeDownload slide Stability limit Q calculations displayed with Q (y-axis) as a function of cos i (x-axis). The asterisk (*) shows the first unstable system when the Q-value is reduced in steps from high towards low values. Our standard parameter values are m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. In this case the inner eccentricity ein = 0.1. The curves display the model outlined in the text. The lower curve is the always unstable limit (unstable below the line), the uppermost curve outlines the always stable region (higher up). The middle line is a cut of the middle of the mixed region and corresponds to A = 1.45. Upper and lower curves correspond to A = 1.75 and 1.15. Figure 2. View largeDownload slide As above, but for inner eccentricity ein = 0.5. Figure 2. View largeDownload slide As above, but for inner eccentricity ein = 0.5. Figure 3. View largeDownload slide As above, but for inner eccentricity ein = 0.9. Figure 3. View largeDownload slide As above, but for inner eccentricity ein = 0.9. 3 ANALYTIC MODEL Roy & Haddow (2003), Heggie (2006), and Valtonen & Karttunen (2006) studied the energy change δε/ε of the inner binary in a single three-body encounter when the outer orbit is parabolic, hyperbolic, or elliptic, respectively. Here ε is the binary energy and δε is the change arising from the encounter. As an illustration, let us take the parabolic case. From equation (19) of Roy & Haddow (2003) we find \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& \frac{m_3}{m_1 + m_2} \frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3} K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace \nonumber \\ && \times\, \left\lbrace e_1 \left[-\sin (2 \omega + n t_0) \left(1-\cos 2 i) \right. \right. \right. \nonumber \\ && -\,\sin (2 \omega + n t_0) \cos 2 i \cos 2 \Omega \nonumber \\ && -\,3 \sin (2 \omega + n t_0) \cos 2 \Omega \nonumber\\ && -\,4 \cos (2 \omega + n t_0) \cos i \sin 2 \Omega ] \nonumber \\ && +\,e_2 (1 - e_{\text{in}}^2) [ \sin (2 \omega + n t_0)(1 - \cos 2 i)\nonumber \\ && -\,\sin (2 \omega + n t_0) \cos 2 i \cos 2 \Omega \nonumber \\ && -\,3 \sin (2 \omega + n t_0) \cos 2 \Omega \nonumber \\ && -\,4 \cos (2 \omega + n t_0) \cos i \sin 2 \Omega ] \nonumber \\ && +\,e_4 \sqrt{(}1 - e_{\text{in}}^2) [- 2 \cos (2 \omega + n t_0) \cos 2 i \sin 2 \Omega \nonumber \\ && -\,6 \cos (2 \omega + n t_0) \sin 2 \Omega \nonumber \\ && \left. -\,8 \sin (2 \omega + n t_0) \cos i \cos 2 \Omega ]\right\rbrace . \end{eqnarray} (1) The functions e1, e2, and e4 are defined in terms of the Bessel functions J−1(ein), J0(ein), J2(ein), and J3(ein): \begin{eqnarray*} e_1 &=& J_{-1}(e_{\rm in}) - 2 e_{\rm in} J_0(e_{\rm in}) + 2 e_{\rm in} J_2(e_{\rm in}) - J_3(e_{\rm in}), \\ e_2 &=& J_{-1}(e_{\rm in}) - J_3(e_{\rm in}), \\ e_4 &=& J_{-1}(e_{\rm in}) - e_{\rm in} J_0(e_{\rm in}) - e_{\rm in} J_2(e_{\rm in}) + J_3(e_{\rm in}). \end{eqnarray*} After substituting the first few terms of power series of the Bessel functions we get \begin{eqnarray*} -e_1 &=& \frac{5}{2} e_{\rm in} - \frac{19}{24} e_{\rm in}^3 + \frac{41}{768} e_{\rm in}^5, \\ -e_2 &=& \frac{1}{2} e_{\rm in} - \frac{1}{24} e_{\rm in}^3, \\ -e_4 &=& \frac{3}{2} e_{\rm in} - \frac{5}{24} e_{\rm in}^3, \end{eqnarray*} with the accuracy of better than 1 per cent for all eccentricities up to ein = 1. We use the definition \begin{equation*} K= Q^{1.5} \sqrt{2(m_1 + m_2) / (m_1 + m_2 + m_3)}, \end{equation*} and i, ω, and Ω are the inclination, the argument of the pericentre, and the longitude of the ascending node of the third body orbit relative to the binary centre. The mean motion of the binary is n and t0 is the time measured from the third body pericentre passage. At the limit of small inner eccentricity (i.e. putting $$e_{\rm in}^2 = 0$$) we get \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& \frac{m_3}{m_1 + m_2} \frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3}K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace e_{\rm in}\nonumber\\ \nonumber && \times\, \left\lbrace 6 \sin (2 \omega + n t_0 +2 \Omega )(1 + \cos i)^2 \right. \\ & & \left. +\,4 \sin (2 \omega + n t_0)(1 - \cos ^2 i)\right\rbrace. \end{eqnarray} (2)Here we are interested in the absolute value of the energy change when the sinusoidal factors are essentially random. Their absolute values average to 2/π, and apart from this factor, the inclination dependence is of the form \begin{eqnarray} 6(1+\cos i)^2 + 4(1-\cos ^2i) \approx 3.6\left(\frac{5}{3}+\cos i\right)^2. \end{eqnarray} (3)The latter expression agrees with the former at better than 1 per cent level relative to its maximum value at cos i = 1. Besides being a slightly simpler function than the first one, the latter also makes more physical sense since using it $$\frac{\delta \varepsilon }{\varepsilon }$$ goes to a non-zero value at cos i = −1. The quantity in the first curly brackets is well approximated by a factor proportional to \begin{eqnarray} \left(Q/ Q_1 \right)^{-7}, \end{eqnarray} (4)where \begin{eqnarray} Q_1 = 2. 5 \left( 1 + m_3/ (m_1 + m_2) \right)^{1/3} \end{eqnarray} (5)(Valtonen & Karttunen 2006). This results from a numerical evaluation of the function in the first curly brackets of equation (1), and then finding the best power-law fit over a limited range of Q. Figs 1–3 show what the range of interest of Q is for this particular combination of masses. Thus the power −7.0 is not exact. For example, in our primary study of systems with m3 = 0.5, in the range Q = 3–3.5 the power-law index is −6.7, while in the range Q = 3–4 it is −7.4. The slope is exactly −7.0 at Q = 3.35. For other mass values, the slope of −7.0 occurs at somewhat different Q that scales with Q1. We may then calculate the coefficient $$\frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3}K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace e_{\rm in}$$ at the representative value Q = 3.35 and put ein = 0.05 to represent a small initial eccentricity. Multiplied by the factor 3.6 from $$3.6(\frac{5}{3}+\cos i)^2$$ we get 2 × 10−3. The same numerical value is given by 0.0092(Q/Q1)−7 for the masses and Q in question. Considering various uncertainties during this derivation, we replace 0.0092 by 0.01 and get \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } \approx 0.01 \frac{m_3}{m_1 + m_2} \left( Q/ Q_1 \right)^{-7} \left(\frac{5}{3} + \cos i \right)^2. \end{eqnarray} (6)This result was based on the study of parabolic encounters, but in fact very similar results arise if the outer orbit elliptic. In particular, we may confirm the numerical factor 0.01 and the splitting of the functional dependence into several factors that each depends only on a smaller number of parameters (Valtonen & Karttunen 2006). Thus in the following we apply this result to multiple elliptic outer-body encounters. The sinusoidal factor in curly brackets is phase dependent. Since we may assume that the repeated encounters do not keep the phase, this factor causes a drift in the energy space. We may model the drift by a random walk, with the step size of the order of the amplitude of the phase factor. There is also a drift in other orbital elements; in this work we are not able to calculate this effect, but refer below to some of the possible consequences of the drift in the ein–cos i plane. By energy conservation, the relative energy change of the binary δε/ε translates into the corresponding change in the outer orbit δE/E: \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& - \frac{\delta E}{\varepsilon } = - \frac{E}{\varepsilon }\frac{\delta E}{E} \nonumber\\ \nonumber &=& - \frac{m_3(m_1+m_2)}{m_1 m_2}\frac{a_{\rm in}}{a_{\rm out}}\frac{\delta E}{E}\\ &=& -\frac{m_3}{m_1 + m_2}\frac{(m_1+m_2)^2}{m_1 m_2 Q_1} \frac{1-e_{\rm out}}{Q/Q_1} \frac{\delta E}{E} \\ \nonumber &=& -\frac{m_3}{m_1 + m_2} M \frac{1-e_{\rm out}}{Q/Q_1} \frac{\delta E}{E}, \end{eqnarray} (7)where ain and aout are the inner and out semimajor axes, respectively, and \begin{eqnarray} M =\frac{0.4 (m_1+m_2)^{7/3}}{(m_1+m_2+m_3)^{1/3}m_1 m_2}. \end{eqnarray} (8)The functional dependence on the masses is split in two factors in order to help at the next step. After N encounters we expect the total energy to change by $$\sqrt{N} \delta E.$$ When this amounts to E, we definitely have an unstable situation. We may also use a more strict definition of the instability, saying that the instability arises when $$E = \lambda \sqrt{N} \delta E.$$ This defines a strictness factor λ such that it equals to unity when the instability means the total destruction of the hierarchy of the system (Huang & Innanen 1983), while a greater value of the strictness factor, e.g. λ = 10, is closer to the definition of Mardling & Aarseth (1999). We use λ = 1 throughout in this paper. We equate the amplitude of equation (6) to the absolute value of the right-hand side of equation (7) and solve for Q: \begin{eqnarray} Q &\approx & (\lambda \sqrt{N})^{1/6} M^{-1/6} \left(1 + \frac{m_3}{m_1 + m_2}\right)^{1/3}\nonumber \\ && \times\, (1 - e_{\rm out})^{-1/6} \left(\frac{5}{3} + \cos i\right)^{1/3}. \end{eqnarray} (9)This is called the stability limit Qst. The quantity M−1/6 is set to unity in the following. It is 0.925 if the binary masses are equal and the third body mass is zero. For the range of masses considered in this paper, it is within 24 per cent of this value. Even in other respects this cannot be considered an exact formula, as is obvious after several steps of simplifications. Rather it gives a motivation to search for a particular type of result. In the end we introduce a scaling coefficient A that is determined purely from experiments. Let us now consider arbitrary inner eccentricities ein. Then the form in the second curly brackets in equation (2) becomes, taking terms up to the order $$e_{\rm in}^5$$ in the derivation above (Roy & Haddow 2003), \begin{eqnarray} 12 \big\lbrace\big[ (1 &-& 0.444 e_{\rm in}^2 + 0.032 e_{\rm in}^4) \cos i + 0.5 \sqrt{1- e_{\rm in}^2} \nonumber \\ \nonumber &&\times\,\left( 1 -\, 0.139 e_{\rm in}^2 \right)\left(1 + \cos i^2\right) \big] \cos (2\omega + n t_0) \sin 2 \Omega \\ \nonumber &&+\, \big[ \sqrt{1 - e_{\rm in}^2} (1 - 0.139 e_{\rm in}^2) \cos i \\ \nonumber &&+\, 0.5 \left( 1-\, 0.444 e_{\rm in}^2 + 0.032 e_{\rm in}^4 \right) (1 + \cos i ^2) \big]\\ &&\times\,\sin (2\omega + n t_0) \cos 2 \Omega + 1/3 (1 - 0.12 e_{\rm in}^2)\nonumber\\ &&\times\, (1 - \cos i ^2) \sin (2 \omega + n t_0) \big\rbrace . \end{eqnarray} (10) Now it is apparent that the parameters ein and cos i are not separable, but are contained inside a single factor. In equation (2) the two parameters separated because ein was small and was a common multiplier in all the terms. In general this is not the case. There is one more consideration that we must worry about. During the evolution of the triple system it may wonder widely in the ein–cos i plane due to the eccentric Kozai resonance (Ford, Kozinsky & Rasio 2000; Katz, Dong & Malhotra 2011; Lithwick & Naoz 2011) whereby the stability limit tends to be determined by the region where the limit is highest during this wondering. The process tends to equalize the stability limit within −0.75 ≤ cos i ≤ 0.75, the effective classical (i.e. quadrupole) Kozai cycle range (Kozai 1962; Innanen et al. 1997; Valtonen et al. 2008). Fitting the calculated data points to different functional forms quickly shows that we need to introduce cos 3i terms into our formula that do not appear in the expression (10). It helps to describe the rather high Qst values in retrograde orbits in the range of −0.5 ≤ cos i ≤ 0. We do not find need for order higher than three in cos i in order to describe the results of our data set. In particular, it is clear that the second-order expression above is not useful as such. Also the universal $$(\frac{5}{3}+\cos i)^2$$ term for all ein is obviously not valid. Therefore we look for a form that is of third order in cos i and has coefficients that are polynomials of ein. The simplest form that could be used is of the third order in both cos i and ein. The coefficients of the polynomials have been determined by fitting our experiments (the 13 diagrams mentioned above, m1 = 1.5) to the polynomials of this order, and determining the best values for the coefficients. We used a least-squares fit to all our data points in these diagrams. Subsequently the coefficients of some high-order terms were found to be so small as to be negligible. Then these terms were dropped, and new fits were performed without them. This procedure was continued until all remaining terms were significant. The coefficients were then rounded to nearest simple fractions, making sure that this does not worsen the fit. In this way we find that $$\frac{5}{3} + \cos i$$ in the stability limit formula should be replaced by \begin{eqnarray} f(e_{\rm in}, \cos i) &=& \left\lbrace 1 - \frac{2}{3} e_{\rm in} \left[ 1 - \frac{1}{2} e_{\rm in}^2 \right] - 0.3 \cos i \bigg [ 1 \right.\nonumber \\ && \left. \left. -\,\frac{1}{2} e_{\rm in}+ 2 \cos i \left(1- \frac{5}{2}e_{\rm in}^{3/2} - \cos i\right) \right] \right\rbrace . \end{eqnarray} (11)In the last bracket the terms containing ein and $$e_{\rm in}^2$$ have been combined to a term $$e_{\rm in}^{3/2}$$ for simplicity, and without loss of accuracy. Remember that in our original formula of equation (1) the ein and cos i factors were not separable; their separation in different factors happened only at the low eccentricity limit in equation (2). Here we have returned to a factor containing both variables after making extensive use of numerical experiments. Let us denote the mass factor \begin{eqnarray} g (m_1, m_2, m_3) = \left( 1 + \frac{m_3}{m_1 + m_2} \right). \end{eqnarray} (12)In addition to the other factors of equation (9) there may be a coefficient of the order of unity on the right-hand side of this equation; we call it A. This coefficient has to be obtained by a fit to our data set because our complete formula was not derived from first principles; we used a number of simplifying assumptions. We first find the function f from theory, and then look for suitable values of A. Our complete formula is now \begin{eqnarray} Q_{\text{st}} = A \left( \lambda \sqrt{N} / (1 - e_{\rm out}) \right)^{1/6} (f \, g)^{1/3}. \end{eqnarray} (13) Note that the relative simplicity of this formula is based on equation (1). Even though it describes energy changes only in parabolic encounters, the studies for elliptic encounters give qualitatively similar results. One of the main advantages of this approach is the factorization where each factor depends on rather few parameters. There is a factor for masses, for the normalized pericentre distance, and a mixed inner eccentricity/inclination factor. The latter also has other angular dependences in the form of sinusoidal functions that we have averaged out. It is this feature of equation (1) that carries through the different stages of our derivation, and finally leads to a rather simple end result. The value of A is determined during the fit of the analytical functions to our data set. In our simulations we start with high values of Q and reduce values in finite steps until we hit the first unstable system. These points are plotted in our figures. Consequently, the last stable system is one Q-step higher. So, the stability border lies in between these two points. The value of A that is determined from the first unstable points is too low when we are looking to represent this instability line. From the fit we find A = 1.40. However, instability is one half-a-step higher or at A = 1.45. The step size comes from our initial choice for sampling along the Q-axis. The scatter about the midsurface is roughly Gaussian with standard deviation approximately 0.1 units of Q. Most of the data points should be within the layer of thickness of three standard deviations of the central surface, which is within a layer defined by A = 1.45 ± 0.3. This is true for all except two points. In N-body simulations one may want to use the smaller limit of A = 1.15. Then no computer time is wasted in the calculation of stable subsystems. The drawback of this choice is that many unstable systems from the mixed region will be treated as stable, and the integration of these subsystems is halted. A compromise may be to use the central plane of the mixed region, A = 1.45. One would need to experiment with actual N-body simulations to decide the optimum value of A. In other types of problems we may want to be absolutely sure of the stability of the triple system. Then we will want to use the upper value, A = 1.75. However, our experiments so far may be too sparse to determine this absolute upper limit. In order to make a more careful study of the upper and lower limits of the mixed region, we increased the number of simulations. Instead of using just one value of the longitude of the node Ω (Ω = 0 in practice), we started covering the whole range of its values. We note from expression (10) that sin 2Ω and cos 2Ω are independent factors that should be varied to get the whole range of possible outcomes for a given cos i. This contrasts with the longitude of the pericentre ω that appears only in combination with nt0 and is therefore, at least in theory, a random variable. In order to study the lower limit of the mixed region, the limit for the absolutely unstable systems, we made simulations at a given cos i and starting from a low Q-value. By orbit integration we checked the stability for each Ω-value. Initially experiments are unstable for all Ω-values. Then the Q-value is increased by one step. This is continued until at least one Ω-value results in a stable system. At that point we say that the absolute instability boundary has been found and that it lies between the last two Q-values. Similarly, when we study the upper limit of the mixed region, the limit for absolute stability, we start from a sufficiently high Q-value so that systems with all Ω are stable. Then we lower the Q-value by one step and repeat the process. When we discover first unstable system, for any value of Ω, we have crossed the stability boundary that lies between the last two Q-values. In this way we generate two boundary lines, well separated from each other. Fig. 4 shows an example. Here 50 equally spaced values of cos i and 10 values of randomly generated Ω were used for m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. We see that the line for the upper limit is effectively raised. Typically the A-value is increased by 0.15 units higher by using this method than by using points for single values of Ω. Figure 4. View largeDownload slide Stability limit points for the upper limit (stability, crosses) and lower limit (instability, stars) for m1 = 1.8, m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. Figure 4. View largeDownload slide Stability limit points for the upper limit (stability, crosses) and lower limit (instability, stars) for m1 = 1.8, m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. 10 similar diagrams were generated by varying m2 (either 0.2, 0.5, or 1.0) and ein (either 0.1, 0.25, 0.5, 0.75, 0.9, or 0.99) for fixed values of m3 = eout = 0.5. In all cases the upper boundary points follow the A = 1.45 line rather well, with the highest point at A = 2.3. The inner boundary line is of the same general shape as the upper boundary but typically 0.6 units lower in A. In this paper we are primarily concerned with the upper envelope of the mixed region. The study of the lower boundary is left for future work since it could be very sensitive to islands of stability (Dvorak 1997). In Fig. 5 we give an example of the upper boundary using 100 equally spaced cos i values and 20 randomly picked Ω values. Except for a resonance at i = 140°, the points are well confined between A = 1.45 and 1.75, i.e. the average A value for the upper boundary is A ≈ 1.6. Figure 5. View largeDownload slide The upper (stability) border for the inner eccentricity ein = 0.6 and masses m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. The three lines are drawn for A = 1.15, 1.45, and 1.75. Figure 5. View largeDownload slide The upper (stability) border for the inner eccentricity ein = 0.6 and masses m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. The three lines are drawn for A = 1.15, 1.45, and 1.75. The limit A = 1.75 is good everywhere except at the resonance around i = 140°. No similar feature is seen at i = 40°. A rather safe upper boundary of the instability layer (mixed region) in experiments reported so far is therefore \begin{equation*} Q_{\text{st}} = 2.4 \left( \lambda \sqrt{N} / (1 - e_{\rm out}) \right)^{1/6} (f \, g)^{1/3}. \end{equation*} The coefficient 2.4 (rather than 2.3) covers also the more extensive set of experiments reported in the next section, and illustrated in Appendix A. 4 UNIVERSALITY In order to test the validity of our function f, we have carried out a set of simulations to cover the range of masses at several inner eccentricities and longitudes of the ascending node with the constant value of the outer eccentricity of 0.5. We used a grid composed of five equally spaced values of ein (uniform from 0 to 0.99), 11 equally spaced values of both Ω and cos i (the latter uniform from −1 to +1), three values of m2 (0.1, 0.5, and 1.0), and three values of m3 (0.1, 1.0, and 10). Since m1 = 2 − m2, this creates a grid of nine specific combinations of mass. We generate 45 diagrams similar to Fig. 1, and for each point in these diagrams (5445 in all) we calculate the A-value using equation (13). The distribution of A-values is shown in Fig. 6. The distribution is well described by a Gaussian with the mean of A = 1.6 and the standard deviation σ = 0.26. Therefore all points are below A = 2.4 at the 3σ level. We may use this as the limit of absolute stability, with the exception of resonances that occur in a few diagrams, similar to the i = 140° resonance in our previous Fig. 5. The exceptionally high values of A arise only at a relatively narrow range of Ω usually close to 0° and 180°; a further study of these regions will appear in Paper II. Figure 6. View largeDownload slide Distribution of A-values (stability limit points) for all 5445 points in the experiments with varying masses and eccentricities. Figure 6. View largeDownload slide Distribution of A-values (stability limit points) for all 5445 points in the experiments with varying masses and eccentricities. At this point we went back to equation (11) to find out if the third-order dependence on ein and cos i is visible in the 45 new diagrams (Appendix A). In particular, we checked if another choice of coefficients 2/3, 1/2, …, etc. would improve the fit to this larger data set, by reducing the standard deviation. The lowest value found was practically equal to 0.26; thus in this way we confirm the validity of the coefficients in equation (11) at the level of ± 4 per cent. In addition to this grid, we have studied some individual cases, for instance m1 = m2 = 1, m3 = 0.1, and ein = eout = 0, that are quite different from our standard case. The result of the study with the full resolution, using 3 × 105 three-body solutions, is displayed in Fig. 7. Another case has the same initial values, except eout = 0.3 and the inclination resolution is lower (Fig. 8). These cases show that the general formula is applicable even though the outer orbit starts by being circular or at a small value. In an earlier paper (Valtonen et al. 2008) we have covered a wider range of eout. We also checked the dependence on the mass of the external body over a wider range in our initial system, i.e. when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. The results at two definite values of the inclination cos i = ±0.8 are shown in Fig. 9. Figure 7. View largeDownload slide As in Fig. 5, but the 20 Ω simulations for m1 = m2 = 1, m3 = 0.1, and ein = eout = 0. Figure 7. View largeDownload slide As in Fig. 5, but the 20 Ω simulations for m1 = m2 = 1, m3 = 0.1, and ein = eout = 0. Figure 8. View largeDownload slide As above, but for the eccentricity eout = 0.3. Figure 8. View largeDownload slide As above, but for the eccentricity eout = 0.3. Figure 9. View largeDownload slide Upper (stability) limit Q (y-axis) as a function of m3 (mass of the external body) when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. Curves for cos i = 0.8 (higher) and −0.8 (lower) are shown. The curves are displayed for A = 2. Figure 9. View largeDownload slide Upper (stability) limit Q (y-axis) as a function of m3 (mass of the external body) when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. Curves for cos i = 0.8 (higher) and −0.8 (lower) are shown. The curves are displayed for A = 2. 5 DISCUSSION The inner eccentricity–inclination term has not been published previously. Considering only the ein dependence, and taking Qst to be a constant as a function of cos i, Eggleton & Kiseleva (1995) and Mardling & Aarseth (1999) give \begin{equation*} Q_{\text{st}} \propto (1 + e_{\rm in}), \end{equation*} while Bailyn (private communication) suggests \begin{equation*} Q_{\text{st}} \propto \left( 1 + \frac{1}{2} e_{\rm in}^2 \right). \end{equation*} Our data for cos i = 1 agree with an approximate formula (simpler than equation 11 with cos i = 1) \begin{equation*} Q_{\text{st}} \propto \left( 1 - \frac{2}{3} e_{\rm in} + 1.2 e_{\rm in}^2 \right). \end{equation*} Therefore the formulae given by earlier studies are not very satisfactory. Moreover, we note that the cos i dependence is clearly a function of ein in the experimental data set (Figs 1–3). The cos i dependence given in Valtonen & Karttunen (2006), \begin{equation*} f(\cos i) = \left\lbrace \frac{1}{3} +\left[ (1 + \cos i)(1.97 - \cos i) \right]^{0.8} \right\rbrace , \end{equation*} is a fair representation of the f(cos i) function at low eccentricity but not applicable to mid-range or high eccentricities. The same is true for the simpler form by Valtonen et al. (2008): \begin{equation*} f(\cos i) = \left\lbrace \frac{7}{4} + \frac{1}{2} \cos i - \cos i^{2} \right\rbrace . \end{equation*} Mardling (2008) and Mardling (private communication; a computer code distributed privately) suggested a practically flat function at high eccentricity and a function that varies from flat at positive cos i to gently sloping (towards cos i = −1) at negative cos i for mid-range and low eccentricity. It is an improvement over the flat function of Mardling & Aarseth (1999), but not a good description of the experimental data. Mardling & Aarseth (2001) keep Qst constant with ein, but assume f(cos i) = {1 + 0.3i/π}. This is also a rough approximation to the numerically calculated data. These expressions and our current formula and computer simulations show that retrograde orbits are more stable than direct orbits at the same Q-distance. The amount of difference in stability depends on the eccentricity of the inner orbit. The main exception to this rule occurs at the i = 140° resonance (Fig. 5). In a future paper we will also discuss the apparent resonance feature at i = 140°. At this resonance the general formula fails, even though in a very small volume of the phase space. The end result of this investigation is an analytical formula for determining the stability of a triple system, equation (13). The stability limit depends on the orbital parameters of the inner and outer binary and on the number of outer revolutions N required for stability. Since the latter depends on the problem at hand, its uncertainty may be absorbed in the coefficient A. Thus putting A ∼ 1 in N-body simulations, and A = 2.4 in tests for absolute stability, together with Λ = 1, may constitute a practical stability formula: \begin{equation*} Q_{\text{st}}= 10^{1/3} A [(f\,g)^2 /(1-e_{\rm out})]^{1/6}, \end{equation*} where the functions f and g are given by equations (11) and (12), respectively. At the limit of ein = i = m3 = 0, f g = 1. The factor 101/3 corresponds to N = 104, the standard value used in this study. Acknowledgements The authors thank the anonymous referee for thorough and helpful comments that helped to improve the paper considerably. REFERENCES Aarseth S. J., 1973, Vistas Astron. , 15, 13 https://doi.org/10.1016/0083-6656(73)90003-2 CrossRef Search ADS Aarseth S. J., 2003, Gravitational N-body Simulations: Tools and Algorithms . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Aarseth S. J., Zare K., 1974, Celest. Mech. , 10, 185 CrossRef Search ADS Clairaut A., 1752, Theorie de la Lune . Imperial Academy of Science, St. Petersburg Dvorak R., 1997, Celest. Mech. Dyn. Astron. , 68, 63 https://doi.org/10.1023/A:1008287614810 CrossRef Search ADS Eggleton P., Kiseleva L., 1995, ApJ , 455, 640 https://doi.org/10.1086/176611 CrossRef Search ADS Euler L., 1752, Recherches sur les irrégularités du mouvement de Jupiter et Saturne Ford E. B., Kozinsky B., Rasio F. A., 2000, ApJ , 535, 385 https://doi.org/10.1086/308815 CrossRef Search ADS Funk B., Schwarz R., Pilat-Lohinger E., Suli A., Dvorak R., 2009, Planet. Space Sci. , 57, 434 https://doi.org/10.1016/j.pss.2008.06.017 CrossRef Search ADS Harrington R. S., 1972, Celest. Mech. , 6, 322 CrossRef Search ADS Heggie D. C., 2006, in Flynn C., ed., Few Body Problem . Ann. Univ. Turkuensis, Ser. A, Vol. 358, p. 20 Heggie D. C., Hut P., 2003, The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Huang T. Y., Innanen K. A., 1983, AJ , 88, 1064 https://doi.org/10.1086/113395 CrossRef Search ADS Innanen K. A., Zheng J. Q., Mikkola S., Valtonen M. J., 1997, AJ , 113, 1915 https://doi.org/10.1086/118405 CrossRef Search ADS Katz B., Dong S., Malhotra R., 2011, Phys. Rev. Lett. , 107, 181101 CrossRef Search ADS PubMed Kozai Y., 1962, AJ , 67, 591 https://doi.org/10.1086/108790 CrossRef Search ADS Lagrange J. L., 1766, Miscellanea Taurinensia , 3, 1762 Lagrange J. L., 1778, Hist. de l'acad. des sciences, 1774 = Lagrange 1867–92 , 6, 635 Laplace P. S., 1775, Mem. Acad. Sci. Paris , 1772 Laplace P. S., 1787, Mem. Acad. Sci. Paris, 1784, Oeuvres 11 (Paris, 1895) , 49 Laskar J., 2013, Progress Math. Phys. , 66, 239 CrossRef Search ADS Lithwick Y., Naoz S., 2011, ApJ , 742, 94 https://doi.org/10.1088/0004-637X/742/2/94 CrossRef Search ADS Mardling R. A., 2008, in Vesperini E., Sills A., Giersz M., eds, Proc. IAU Symp. Vol. 246, Dynamical Evolution of Dense Stellar Systems. Cambridge Univ. Press, Cambridge, p. 199 Mardling R., Aarseth S., 1999, in Steves B. A., Roy A. E., eds, The Dynamics of Small Bodies in the Solar System. Kluwer, Dordrecht, p. 385 Google Scholar CrossRef Search ADS Mardling R. A., Aarseth S. J., 2001, MNRAS , 321, 398 https://doi.org/10.1046/j.1365-8711.2001.03974.x CrossRef Search ADS Mikkola S., 1997, Celest. Mech. Dyn. Astron. , 67, 145 https://doi.org/10.1023/A:1008217427749 CrossRef Search ADS Newton I., 1687, Philosophiae Naturalis Principia Mathematica . Royal Society, London Google Scholar CrossRef Search ADS Pilat-Lohinger E., Funk B., Dvorak R., 2003, A&A , 400, 1085 CrossRef Search ADS Roy A., Haddow M., 2003, Celest. Mech. Dyn. Astron. , 87, 411 https://doi.org/10.1023/B:CELE.0000006767.34371.2f CrossRef Search ADS Valtonen M., Karttunen H., 2006, The Three-Body Problem . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Valtonen M., Mylläri A., Orlov V., Rubinov A., 2008, in Vesperini E., Sills A., Giersz M., eds, Proc. IAU Symp. Vol. 246, Dynamical Evolution of Dense Stellar Systems. Cambridge Univ. Press, Cambridge, p. 209 APPENDIX A: SAMPLE ILLUSTRATIONS FOR THE RANGE OF MASSES Here we present results of simulations described in the beginning of Section 4 for different masses and uniformly sampled Ω. Everywhere in what follows eout = 0.50. In addition to the three curves corresponding to A = 1.15, 1.45, and 1.75, one more curve for A = 2.0 was added. Figure A1. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.00. Figure A1. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.00. Figure A2. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.00. Figure A2. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.00. Figure A3. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A3. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A4. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.00. Figure A4. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.00. Figure A5. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.00. Figure A5. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.00. Figure A6. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.00. Figure A6. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.00. Figure A7. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.00. Figure A7. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.00. Figure A8. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.00. Figure A8. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.00. Figure A9. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.00. Figure A9. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.00. Figure A10. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.25. Figure A10. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.25. Figure A11. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.25. Figure A11. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.25. Figure A12. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.25. Figure A12. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.25. Figure A13. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.25. Figure A13. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.25. Figure A14. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.25. Figure A14. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.25. Figure A15. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.25. Figure A15. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.25. Figure A16. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.25. Figure A16. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.25. Figure A17. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.25. Figure A17. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.25. Figure A18. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.25. Figure A18. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.25. Figure A19. View largeDownload slide m1 = 1.00 , m2 = 1.00, m3 = 0.10, and ein = 0.50. Figure A19. View largeDownload slide m1 = 1.00 , m2 = 1.00, m3 = 0.10, and ein = 0.50. Figure A20. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.50. Figure A20. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.50. Figure A21. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.50. Figure A21. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.50. Figure A22. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.50. Figure A22. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.50. Figure A23. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.50. Figure A23. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.50. Figure A24. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.50. Figure A24. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.50. Figure A25. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.50. Figure A25. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.50. Figure A26. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.50. Figure A26. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.50. Figure A27. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.50. Figure A27. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.50. Figure A28. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.75. Figure A28. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.75. Figure A29. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.75. Figure A29. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.75. Figure A30. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.75. Figure A30. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.75. Figure A31. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.75. Figure A31. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.75. Figure A32. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.75. Figure A32. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.75. Figure A33. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.75. Figure A33. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.75. Figure A34. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.75. Figure A34. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.75. Figure A35. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.75. Figure A35. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.75. Figure A36. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.75. Figure A36. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.75. Figure A37. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.99. Figure A37. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.99. Figure A38. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.99. Figure A38. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.99. Figure A39. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A39. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A40. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.99. Figure A40. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.99. Figure A41. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.99. Figure A41. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.99. Figure A42. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.99. Figure A42. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.99. Figure A43. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.99. Figure A43. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.99. Figure A44. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.99. Figure A44. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.99. Figure A45. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.99. Figure A45. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.99. © 2018 The Author(s) Published by Oxford University Press on behalf of the Royal Astronomical Society http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Monthly Notices of the Royal Astronomical Society Oxford University Press
# Stability of hierarchical triples – I. Dependence on inner eccentricity and inclination
, Volume 476 (1) – May 1, 2018
12 pages
/lp/ou_press/stability-of-hierarchical-triples-i-dependence-on-inner-eccentricity-5AzRYWRtVv
Publisher
Oxford University Press
ISSN
0035-8711
eISSN
1365-2966
D.O.I.
10.1093/mnras/sty237
Publisher site
See Article on Publisher Site
### Abstract
Abstract In simulations it is often important to decide if a given hierarchical triple star system is stable over an extended period of time. We introduce a stability criterion, modified from earlier work, where we use the closest approach ratio Q of the third star to the inner binary centre of mass in their initial osculating orbits. We study by numerical integration the orbits of over 1000 000 triple systems of the fixed masses and outer eccentricities eout, but varying inner eccentricities ein and inclinations i. 12 primary combinations of masses have been tried, representing the range encountered in stellar systems. The definition of the instability is either the escape of one of the bodies, or the exchange of the members between the inner and outer systems. An analytical approximation is derived using the energy change in a single close encounter between the inner and outer systems, assuming that the orbital phases in subsequent encounters occur randomly. The theory provides a fairly good description of the typical Qst, the smallest Q value that allows the system to be stable over N = 10 000 revolutions of the initial outer orbit. The final stability limit formula is Qst = 101/3A[( f g)2/(1 − eout)]1/6, where the coefficient A ∼ 1 should be used in N-body experiments, and A = 2.4 when the absolute long-term stability is required. The functions f (ein, cos i) and g(m1, m2, m3) are derived in the paper. At the limit of ein = i = m3 = 0, f g = 1. methods: numerical, celestial mechanics 1 INTRODUCTION A hierarchical triple system consists of a binary and a third body in orbit around the centre of mass of the binary. In order to have a clear separation of the inner and outer orbits, the pericentre of the outer orbit should be greater than the major axis of the binary. A stability criterion of hierarchical triples is required in understanding many systems arising in the Universe, for example in understanding the longevity of the Earth–Moon–Sun system (Newton 1687; Clairaut 1752). The stability of our planetary system falls in the same category of problems, initially with the question of the motion of the giant planets Jupiter and Saturn (Euler 1752; Lagrange 1766, 1778; Laplace 1775, 1787). In recent years similar questions have arisen in connection with multiple exoplanets in other stellar systems (see e.g. Funk et al. 2009). With enough information of the initial data, it is always possible to carry out numerical orbit integrations to determine the degree of stability, at least over some period of time (Laskar 2013). However, it is often the case that only some orbital elements are known, and even they have associated measuring uncertainties. Then the phase space of the unknown elements has so many dimensions that its total coverage may be prohibitive. Another case where a clear-cut stability criterion would be useful is in the computer simulations of star clusters (Aarseth 1973, 2003; Heggie & Hut 2003). In this case all the elements of a triple subsystem are known, but its integration takes up many resources and will slow the overall cluster simulation. It is more efficient to leave stable triples aside from the main calculation until such time that encounters with other stars or other reasons cause interesting orbital evolution in this subsystem. In this work, we are mainly concerned with the latter situation. Harrington (1972) realized that the key quantity in assessing the stability of a three-body system is the ratio of the pericentre distance of the outer orbit over the inner orbit semimajor axis. He found that its value, which we call Q, has to be at least 3.5 for direct orbits and 2.75 for retrograde orbits, the exact value depending on the masses of the three bodies. In this paper our main variable to be determined is the minimum value of Q for stability, called Qst. We start by using a sufficiently large value Q and determine that the system is stable. Then the value of Q is lowered until an unstable system is found. Going to even smaller Q, the system is likely to be unstable, but even if it is not, we use the Q-value first encountered in the search for the unstable system to define Qst. For example, if we find that the orbit with Q = 5.0 is stable and that the next orbit with Q = 4.9 is unstable, we define Qst = 5.0, even if the system is stable again, say, in the range Q = 4.0–4.8. The primary orbital elements to be sampled in this paper are the cosine of the inclination cos i and the eccentricity of the inner orbit ein, and to a lesser extent the masses of the three bodies, and the outer eccentricity eout. The finite intervals of sampling lead to some scatter in Qst, but the scatter is mainly due to the postulate that the remaining orbital elements arise at random. The scatter in Qst means that our result is not an exact surface but a layer of finite thickness in Qst. Dvorak (1997) and Pilat-Lohinger, Funk & Dvorak (2003) call this layer mixed region. The upper surface of the layer is used when we answer the question whether the system is definitely stable under the chosen revolution number N. The lower surface of the layer tells us when the system is definitely unstable, while a surface between them gives a stability criterion that could be useful e.g. in N-body simulations. In later papers we will discuss how the boundary of the layer moves with varying N. In this paper we use N = 10 000 revolutions of the outer orbit. 2 ORBIT INTEGRATIONS For simplicity, we start with fixed mass values of m1 = 1.5, m2 = 0.5, and m3 = 0.5 units (solar mass, for example), where m1 and m2 form the inner binary of unit semimajor axis (e.g. 1 au) and with eccentricity ein, while the body with mass m3 is in an orbit of eccentricity eout = 0.5 about the centre of mass of the binary. These values have no particular significance. In later papers we will study how the ideas of this paper are extended to other values of masses and eout. In Section 4 we start a brief exploration to this direction. Most of the orbit integrations are carried out by a symplectic integrator that has the property of conserving total energy, using the Bulirsch–Stoer method (Mikkola 1997). For comparison, we have integrated a smaller number of cases by a three-body regularization code (Aarseth & Zare 1974) and found that the overall result is not integrator dependent. The Q value, the ratio of the initial pericentre distance of the outer orbit to the inner semimajor axis, is first set so high that the system is stable over the N = 10 000 revolutions. Then for the next orbits the initial parameters are varied, keeping eout constant while the major axis of the outer orbit becomes smaller in steps of equal intervals. Therefore also Q gets smaller values in a stepwise manner, even though the steps are not exactly equal. The typical Q-step is 0.25 units. In later experiments (Figs 5 and 7) it is about 0.1 units. The integrations are continued down to a value of Q where the systems are always unstable. Since this value is not known in advance, in practise the smallest Q is set to about unity. In terms of integration time, the solutions are found very fast once we are below the mixed region. The process is repeated for 50 values (sometimes 100 values that we call the full resolution) of cos i equally spaced between −1 and +1, and a diagram of Q versus cos i is generated. In this diagram the Qst versus cos i line is identified, and is called the stability line. It is always wiggly, not only because of the finite step sizes, but mostly since we sample the mixed region with fixed values of the ‘less-essential’ orbital elements. These are the initial angles of the pericentres, mean anomalies, and the nodes. Unless otherwise stated below, their values are set to zero. The asterisk (*) in the plots refer strictly speaking to these fixed values. Then a new value of ein is selected and a new diagram is generated. In the first part of this paper we report the sample of 13 diagrams that cover the inner eccentricity ein-axis well. The values we use for ein are 0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.75, 0.9, 0.95, and 0.99 (Figs 1– 3 give examples of these diagrams). Figure 1. View largeDownload slide Stability limit Q calculations displayed with Q (y-axis) as a function of cos i (x-axis). The asterisk (*) shows the first unstable system when the Q-value is reduced in steps from high towards low values. Our standard parameter values are m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. In this case the inner eccentricity ein = 0.1. The curves display the model outlined in the text. The lower curve is the always unstable limit (unstable below the line), the uppermost curve outlines the always stable region (higher up). The middle line is a cut of the middle of the mixed region and corresponds to A = 1.45. Upper and lower curves correspond to A = 1.75 and 1.15. Figure 1. View largeDownload slide Stability limit Q calculations displayed with Q (y-axis) as a function of cos i (x-axis). The asterisk (*) shows the first unstable system when the Q-value is reduced in steps from high towards low values. Our standard parameter values are m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. In this case the inner eccentricity ein = 0.1. The curves display the model outlined in the text. The lower curve is the always unstable limit (unstable below the line), the uppermost curve outlines the always stable region (higher up). The middle line is a cut of the middle of the mixed region and corresponds to A = 1.45. Upper and lower curves correspond to A = 1.75 and 1.15. Figure 2. View largeDownload slide As above, but for inner eccentricity ein = 0.5. Figure 2. View largeDownload slide As above, but for inner eccentricity ein = 0.5. Figure 3. View largeDownload slide As above, but for inner eccentricity ein = 0.9. Figure 3. View largeDownload slide As above, but for inner eccentricity ein = 0.9. 3 ANALYTIC MODEL Roy & Haddow (2003), Heggie (2006), and Valtonen & Karttunen (2006) studied the energy change δε/ε of the inner binary in a single three-body encounter when the outer orbit is parabolic, hyperbolic, or elliptic, respectively. Here ε is the binary energy and δε is the change arising from the encounter. As an illustration, let us take the parabolic case. From equation (19) of Roy & Haddow (2003) we find \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& \frac{m_3}{m_1 + m_2} \frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3} K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace \nonumber \\ && \times\, \left\lbrace e_1 \left[-\sin (2 \omega + n t_0) \left(1-\cos 2 i) \right. \right. \right. \nonumber \\ && -\,\sin (2 \omega + n t_0) \cos 2 i \cos 2 \Omega \nonumber \\ && -\,3 \sin (2 \omega + n t_0) \cos 2 \Omega \nonumber\\ && -\,4 \cos (2 \omega + n t_0) \cos i \sin 2 \Omega ] \nonumber \\ && +\,e_2 (1 - e_{\text{in}}^2) [ \sin (2 \omega + n t_0)(1 - \cos 2 i)\nonumber \\ && -\,\sin (2 \omega + n t_0) \cos 2 i \cos 2 \Omega \nonumber \\ && -\,3 \sin (2 \omega + n t_0) \cos 2 \Omega \nonumber \\ && -\,4 \cos (2 \omega + n t_0) \cos i \sin 2 \Omega ] \nonumber \\ && +\,e_4 \sqrt{(}1 - e_{\text{in}}^2) [- 2 \cos (2 \omega + n t_0) \cos 2 i \sin 2 \Omega \nonumber \\ && -\,6 \cos (2 \omega + n t_0) \sin 2 \Omega \nonumber \\ && \left. -\,8 \sin (2 \omega + n t_0) \cos i \cos 2 \Omega ]\right\rbrace . \end{eqnarray} (1) The functions e1, e2, and e4 are defined in terms of the Bessel functions J−1(ein), J0(ein), J2(ein), and J3(ein): \begin{eqnarray*} e_1 &=& J_{-1}(e_{\rm in}) - 2 e_{\rm in} J_0(e_{\rm in}) + 2 e_{\rm in} J_2(e_{\rm in}) - J_3(e_{\rm in}), \\ e_2 &=& J_{-1}(e_{\rm in}) - J_3(e_{\rm in}), \\ e_4 &=& J_{-1}(e_{\rm in}) - e_{\rm in} J_0(e_{\rm in}) - e_{\rm in} J_2(e_{\rm in}) + J_3(e_{\rm in}). \end{eqnarray*} After substituting the first few terms of power series of the Bessel functions we get \begin{eqnarray*} -e_1 &=& \frac{5}{2} e_{\rm in} - \frac{19}{24} e_{\rm in}^3 + \frac{41}{768} e_{\rm in}^5, \\ -e_2 &=& \frac{1}{2} e_{\rm in} - \frac{1}{24} e_{\rm in}^3, \\ -e_4 &=& \frac{3}{2} e_{\rm in} - \frac{5}{24} e_{\rm in}^3, \end{eqnarray*} with the accuracy of better than 1 per cent for all eccentricities up to ein = 1. We use the definition \begin{equation*} K= Q^{1.5} \sqrt{2(m_1 + m_2) / (m_1 + m_2 + m_3)}, \end{equation*} and i, ω, and Ω are the inclination, the argument of the pericentre, and the longitude of the ascending node of the third body orbit relative to the binary centre. The mean motion of the binary is n and t0 is the time measured from the third body pericentre passage. At the limit of small inner eccentricity (i.e. putting $$e_{\rm in}^2 = 0$$) we get \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& \frac{m_3}{m_1 + m_2} \frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3}K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace e_{\rm in}\nonumber\\ \nonumber && \times\, \left\lbrace 6 \sin (2 \omega + n t_0 +2 \Omega )(1 + \cos i)^2 \right. \\ & & \left. +\,4 \sin (2 \omega + n t_0)(1 - \cos ^2 i)\right\rbrace. \end{eqnarray} (2)Here we are interested in the absolute value of the energy change when the sinusoidal factors are essentially random. Their absolute values average to 2/π, and apart from this factor, the inclination dependence is of the form \begin{eqnarray} 6(1+\cos i)^2 + 4(1-\cos ^2i) \approx 3.6\left(\frac{5}{3}+\cos i\right)^2. \end{eqnarray} (3)The latter expression agrees with the former at better than 1 per cent level relative to its maximum value at cos i = 1. Besides being a slightly simpler function than the first one, the latter also makes more physical sense since using it $$\frac{\delta \varepsilon }{\varepsilon }$$ goes to a non-zero value at cos i = −1. The quantity in the first curly brackets is well approximated by a factor proportional to \begin{eqnarray} \left(Q/ Q_1 \right)^{-7}, \end{eqnarray} (4)where \begin{eqnarray} Q_1 = 2. 5 \left( 1 + m_3/ (m_1 + m_2) \right)^{1/3} \end{eqnarray} (5)(Valtonen & Karttunen 2006). This results from a numerical evaluation of the function in the first curly brackets of equation (1), and then finding the best power-law fit over a limited range of Q. Figs 1–3 show what the range of interest of Q is for this particular combination of masses. Thus the power −7.0 is not exact. For example, in our primary study of systems with m3 = 0.5, in the range Q = 3–3.5 the power-law index is −6.7, while in the range Q = 3–4 it is −7.4. The slope is exactly −7.0 at Q = 3.35. For other mass values, the slope of −7.0 occurs at somewhat different Q that scales with Q1. We may then calculate the coefficient $$\frac{\sqrt{\pi }}{4} \left\lbrace Q^{-3}K^{2.5} \text{e}^{-(2/3\,K)}\right\rbrace e_{\rm in}$$ at the representative value Q = 3.35 and put ein = 0.05 to represent a small initial eccentricity. Multiplied by the factor 3.6 from $$3.6(\frac{5}{3}+\cos i)^2$$ we get 2 × 10−3. The same numerical value is given by 0.0092(Q/Q1)−7 for the masses and Q in question. Considering various uncertainties during this derivation, we replace 0.0092 by 0.01 and get \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } \approx 0.01 \frac{m_3}{m_1 + m_2} \left( Q/ Q_1 \right)^{-7} \left(\frac{5}{3} + \cos i \right)^2. \end{eqnarray} (6)This result was based on the study of parabolic encounters, but in fact very similar results arise if the outer orbit elliptic. In particular, we may confirm the numerical factor 0.01 and the splitting of the functional dependence into several factors that each depends only on a smaller number of parameters (Valtonen & Karttunen 2006). Thus in the following we apply this result to multiple elliptic outer-body encounters. The sinusoidal factor in curly brackets is phase dependent. Since we may assume that the repeated encounters do not keep the phase, this factor causes a drift in the energy space. We may model the drift by a random walk, with the step size of the order of the amplitude of the phase factor. There is also a drift in other orbital elements; in this work we are not able to calculate this effect, but refer below to some of the possible consequences of the drift in the ein–cos i plane. By energy conservation, the relative energy change of the binary δε/ε translates into the corresponding change in the outer orbit δE/E: \begin{eqnarray} \frac{\delta \varepsilon }{\varepsilon } &=& - \frac{\delta E}{\varepsilon } = - \frac{E}{\varepsilon }\frac{\delta E}{E} \nonumber\\ \nonumber &=& - \frac{m_3(m_1+m_2)}{m_1 m_2}\frac{a_{\rm in}}{a_{\rm out}}\frac{\delta E}{E}\\ &=& -\frac{m_3}{m_1 + m_2}\frac{(m_1+m_2)^2}{m_1 m_2 Q_1} \frac{1-e_{\rm out}}{Q/Q_1} \frac{\delta E}{E} \\ \nonumber &=& -\frac{m_3}{m_1 + m_2} M \frac{1-e_{\rm out}}{Q/Q_1} \frac{\delta E}{E}, \end{eqnarray} (7)where ain and aout are the inner and out semimajor axes, respectively, and \begin{eqnarray} M =\frac{0.4 (m_1+m_2)^{7/3}}{(m_1+m_2+m_3)^{1/3}m_1 m_2}. \end{eqnarray} (8)The functional dependence on the masses is split in two factors in order to help at the next step. After N encounters we expect the total energy to change by $$\sqrt{N} \delta E.$$ When this amounts to E, we definitely have an unstable situation. We may also use a more strict definition of the instability, saying that the instability arises when $$E = \lambda \sqrt{N} \delta E.$$ This defines a strictness factor λ such that it equals to unity when the instability means the total destruction of the hierarchy of the system (Huang & Innanen 1983), while a greater value of the strictness factor, e.g. λ = 10, is closer to the definition of Mardling & Aarseth (1999). We use λ = 1 throughout in this paper. We equate the amplitude of equation (6) to the absolute value of the right-hand side of equation (7) and solve for Q: \begin{eqnarray} Q &\approx & (\lambda \sqrt{N})^{1/6} M^{-1/6} \left(1 + \frac{m_3}{m_1 + m_2}\right)^{1/3}\nonumber \\ && \times\, (1 - e_{\rm out})^{-1/6} \left(\frac{5}{3} + \cos i\right)^{1/3}. \end{eqnarray} (9)This is called the stability limit Qst. The quantity M−1/6 is set to unity in the following. It is 0.925 if the binary masses are equal and the third body mass is zero. For the range of masses considered in this paper, it is within 24 per cent of this value. Even in other respects this cannot be considered an exact formula, as is obvious after several steps of simplifications. Rather it gives a motivation to search for a particular type of result. In the end we introduce a scaling coefficient A that is determined purely from experiments. Let us now consider arbitrary inner eccentricities ein. Then the form in the second curly brackets in equation (2) becomes, taking terms up to the order $$e_{\rm in}^5$$ in the derivation above (Roy & Haddow 2003), \begin{eqnarray} 12 \big\lbrace\big[ (1 &-& 0.444 e_{\rm in}^2 + 0.032 e_{\rm in}^4) \cos i + 0.5 \sqrt{1- e_{\rm in}^2} \nonumber \\ \nonumber &&\times\,\left( 1 -\, 0.139 e_{\rm in}^2 \right)\left(1 + \cos i^2\right) \big] \cos (2\omega + n t_0) \sin 2 \Omega \\ \nonumber &&+\, \big[ \sqrt{1 - e_{\rm in}^2} (1 - 0.139 e_{\rm in}^2) \cos i \\ \nonumber &&+\, 0.5 \left( 1-\, 0.444 e_{\rm in}^2 + 0.032 e_{\rm in}^4 \right) (1 + \cos i ^2) \big]\\ &&\times\,\sin (2\omega + n t_0) \cos 2 \Omega + 1/3 (1 - 0.12 e_{\rm in}^2)\nonumber\\ &&\times\, (1 - \cos i ^2) \sin (2 \omega + n t_0) \big\rbrace . \end{eqnarray} (10) Now it is apparent that the parameters ein and cos i are not separable, but are contained inside a single factor. In equation (2) the two parameters separated because ein was small and was a common multiplier in all the terms. In general this is not the case. There is one more consideration that we must worry about. During the evolution of the triple system it may wonder widely in the ein–cos i plane due to the eccentric Kozai resonance (Ford, Kozinsky & Rasio 2000; Katz, Dong & Malhotra 2011; Lithwick & Naoz 2011) whereby the stability limit tends to be determined by the region where the limit is highest during this wondering. The process tends to equalize the stability limit within −0.75 ≤ cos i ≤ 0.75, the effective classical (i.e. quadrupole) Kozai cycle range (Kozai 1962; Innanen et al. 1997; Valtonen et al. 2008). Fitting the calculated data points to different functional forms quickly shows that we need to introduce cos 3i terms into our formula that do not appear in the expression (10). It helps to describe the rather high Qst values in retrograde orbits in the range of −0.5 ≤ cos i ≤ 0. We do not find need for order higher than three in cos i in order to describe the results of our data set. In particular, it is clear that the second-order expression above is not useful as such. Also the universal $$(\frac{5}{3}+\cos i)^2$$ term for all ein is obviously not valid. Therefore we look for a form that is of third order in cos i and has coefficients that are polynomials of ein. The simplest form that could be used is of the third order in both cos i and ein. The coefficients of the polynomials have been determined by fitting our experiments (the 13 diagrams mentioned above, m1 = 1.5) to the polynomials of this order, and determining the best values for the coefficients. We used a least-squares fit to all our data points in these diagrams. Subsequently the coefficients of some high-order terms were found to be so small as to be negligible. Then these terms were dropped, and new fits were performed without them. This procedure was continued until all remaining terms were significant. The coefficients were then rounded to nearest simple fractions, making sure that this does not worsen the fit. In this way we find that $$\frac{5}{3} + \cos i$$ in the stability limit formula should be replaced by \begin{eqnarray} f(e_{\rm in}, \cos i) &=& \left\lbrace 1 - \frac{2}{3} e_{\rm in} \left[ 1 - \frac{1}{2} e_{\rm in}^2 \right] - 0.3 \cos i \bigg [ 1 \right.\nonumber \\ && \left. \left. -\,\frac{1}{2} e_{\rm in}+ 2 \cos i \left(1- \frac{5}{2}e_{\rm in}^{3/2} - \cos i\right) \right] \right\rbrace . \end{eqnarray} (11)In the last bracket the terms containing ein and $$e_{\rm in}^2$$ have been combined to a term $$e_{\rm in}^{3/2}$$ for simplicity, and without loss of accuracy. Remember that in our original formula of equation (1) the ein and cos i factors were not separable; their separation in different factors happened only at the low eccentricity limit in equation (2). Here we have returned to a factor containing both variables after making extensive use of numerical experiments. Let us denote the mass factor \begin{eqnarray} g (m_1, m_2, m_3) = \left( 1 + \frac{m_3}{m_1 + m_2} \right). \end{eqnarray} (12)In addition to the other factors of equation (9) there may be a coefficient of the order of unity on the right-hand side of this equation; we call it A. This coefficient has to be obtained by a fit to our data set because our complete formula was not derived from first principles; we used a number of simplifying assumptions. We first find the function f from theory, and then look for suitable values of A. Our complete formula is now \begin{eqnarray} Q_{\text{st}} = A \left( \lambda \sqrt{N} / (1 - e_{\rm out}) \right)^{1/6} (f \, g)^{1/3}. \end{eqnarray} (13) Note that the relative simplicity of this formula is based on equation (1). Even though it describes energy changes only in parabolic encounters, the studies for elliptic encounters give qualitatively similar results. One of the main advantages of this approach is the factorization where each factor depends on rather few parameters. There is a factor for masses, for the normalized pericentre distance, and a mixed inner eccentricity/inclination factor. The latter also has other angular dependences in the form of sinusoidal functions that we have averaged out. It is this feature of equation (1) that carries through the different stages of our derivation, and finally leads to a rather simple end result. The value of A is determined during the fit of the analytical functions to our data set. In our simulations we start with high values of Q and reduce values in finite steps until we hit the first unstable system. These points are plotted in our figures. Consequently, the last stable system is one Q-step higher. So, the stability border lies in between these two points. The value of A that is determined from the first unstable points is too low when we are looking to represent this instability line. From the fit we find A = 1.40. However, instability is one half-a-step higher or at A = 1.45. The step size comes from our initial choice for sampling along the Q-axis. The scatter about the midsurface is roughly Gaussian with standard deviation approximately 0.1 units of Q. Most of the data points should be within the layer of thickness of three standard deviations of the central surface, which is within a layer defined by A = 1.45 ± 0.3. This is true for all except two points. In N-body simulations one may want to use the smaller limit of A = 1.15. Then no computer time is wasted in the calculation of stable subsystems. The drawback of this choice is that many unstable systems from the mixed region will be treated as stable, and the integration of these subsystems is halted. A compromise may be to use the central plane of the mixed region, A = 1.45. One would need to experiment with actual N-body simulations to decide the optimum value of A. In other types of problems we may want to be absolutely sure of the stability of the triple system. Then we will want to use the upper value, A = 1.75. However, our experiments so far may be too sparse to determine this absolute upper limit. In order to make a more careful study of the upper and lower limits of the mixed region, we increased the number of simulations. Instead of using just one value of the longitude of the node Ω (Ω = 0 in practice), we started covering the whole range of its values. We note from expression (10) that sin 2Ω and cos 2Ω are independent factors that should be varied to get the whole range of possible outcomes for a given cos i. This contrasts with the longitude of the pericentre ω that appears only in combination with nt0 and is therefore, at least in theory, a random variable. In order to study the lower limit of the mixed region, the limit for the absolutely unstable systems, we made simulations at a given cos i and starting from a low Q-value. By orbit integration we checked the stability for each Ω-value. Initially experiments are unstable for all Ω-values. Then the Q-value is increased by one step. This is continued until at least one Ω-value results in a stable system. At that point we say that the absolute instability boundary has been found and that it lies between the last two Q-values. Similarly, when we study the upper limit of the mixed region, the limit for absolute stability, we start from a sufficiently high Q-value so that systems with all Ω are stable. Then we lower the Q-value by one step and repeat the process. When we discover first unstable system, for any value of Ω, we have crossed the stability boundary that lies between the last two Q-values. In this way we generate two boundary lines, well separated from each other. Fig. 4 shows an example. Here 50 equally spaced values of cos i and 10 values of randomly generated Ω were used for m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. We see that the line for the upper limit is effectively raised. Typically the A-value is increased by 0.15 units higher by using this method than by using points for single values of Ω. Figure 4. View largeDownload slide Stability limit points for the upper limit (stability, crosses) and lower limit (instability, stars) for m1 = 1.8, m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. Figure 4. View largeDownload slide Stability limit points for the upper limit (stability, crosses) and lower limit (instability, stars) for m1 = 1.8, m2 = 0.2, m3 = 10, ein = 0.25, and eout = 0.5. 10 similar diagrams were generated by varying m2 (either 0.2, 0.5, or 1.0) and ein (either 0.1, 0.25, 0.5, 0.75, 0.9, or 0.99) for fixed values of m3 = eout = 0.5. In all cases the upper boundary points follow the A = 1.45 line rather well, with the highest point at A = 2.3. The inner boundary line is of the same general shape as the upper boundary but typically 0.6 units lower in A. In this paper we are primarily concerned with the upper envelope of the mixed region. The study of the lower boundary is left for future work since it could be very sensitive to islands of stability (Dvorak 1997). In Fig. 5 we give an example of the upper boundary using 100 equally spaced cos i values and 20 randomly picked Ω values. Except for a resonance at i = 140°, the points are well confined between A = 1.45 and 1.75, i.e. the average A value for the upper boundary is A ≈ 1.6. Figure 5. View largeDownload slide The upper (stability) border for the inner eccentricity ein = 0.6 and masses m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. The three lines are drawn for A = 1.15, 1.45, and 1.75. Figure 5. View largeDownload slide The upper (stability) border for the inner eccentricity ein = 0.6 and masses m1 = 1.5, m2 = 0.5, m3 = 0.5, and eout = 0.5. The three lines are drawn for A = 1.15, 1.45, and 1.75. The limit A = 1.75 is good everywhere except at the resonance around i = 140°. No similar feature is seen at i = 40°. A rather safe upper boundary of the instability layer (mixed region) in experiments reported so far is therefore \begin{equation*} Q_{\text{st}} = 2.4 \left( \lambda \sqrt{N} / (1 - e_{\rm out}) \right)^{1/6} (f \, g)^{1/3}. \end{equation*} The coefficient 2.4 (rather than 2.3) covers also the more extensive set of experiments reported in the next section, and illustrated in Appendix A. 4 UNIVERSALITY In order to test the validity of our function f, we have carried out a set of simulations to cover the range of masses at several inner eccentricities and longitudes of the ascending node with the constant value of the outer eccentricity of 0.5. We used a grid composed of five equally spaced values of ein (uniform from 0 to 0.99), 11 equally spaced values of both Ω and cos i (the latter uniform from −1 to +1), three values of m2 (0.1, 0.5, and 1.0), and three values of m3 (0.1, 1.0, and 10). Since m1 = 2 − m2, this creates a grid of nine specific combinations of mass. We generate 45 diagrams similar to Fig. 1, and for each point in these diagrams (5445 in all) we calculate the A-value using equation (13). The distribution of A-values is shown in Fig. 6. The distribution is well described by a Gaussian with the mean of A = 1.6 and the standard deviation σ = 0.26. Therefore all points are below A = 2.4 at the 3σ level. We may use this as the limit of absolute stability, with the exception of resonances that occur in a few diagrams, similar to the i = 140° resonance in our previous Fig. 5. The exceptionally high values of A arise only at a relatively narrow range of Ω usually close to 0° and 180°; a further study of these regions will appear in Paper II. Figure 6. View largeDownload slide Distribution of A-values (stability limit points) for all 5445 points in the experiments with varying masses and eccentricities. Figure 6. View largeDownload slide Distribution of A-values (stability limit points) for all 5445 points in the experiments with varying masses and eccentricities. At this point we went back to equation (11) to find out if the third-order dependence on ein and cos i is visible in the 45 new diagrams (Appendix A). In particular, we checked if another choice of coefficients 2/3, 1/2, …, etc. would improve the fit to this larger data set, by reducing the standard deviation. The lowest value found was practically equal to 0.26; thus in this way we confirm the validity of the coefficients in equation (11) at the level of ± 4 per cent. In addition to this grid, we have studied some individual cases, for instance m1 = m2 = 1, m3 = 0.1, and ein = eout = 0, that are quite different from our standard case. The result of the study with the full resolution, using 3 × 105 three-body solutions, is displayed in Fig. 7. Another case has the same initial values, except eout = 0.3 and the inclination resolution is lower (Fig. 8). These cases show that the general formula is applicable even though the outer orbit starts by being circular or at a small value. In an earlier paper (Valtonen et al. 2008) we have covered a wider range of eout. We also checked the dependence on the mass of the external body over a wider range in our initial system, i.e. when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. The results at two definite values of the inclination cos i = ±0.8 are shown in Fig. 9. Figure 7. View largeDownload slide As in Fig. 5, but the 20 Ω simulations for m1 = m2 = 1, m3 = 0.1, and ein = eout = 0. Figure 7. View largeDownload slide As in Fig. 5, but the 20 Ω simulations for m1 = m2 = 1, m3 = 0.1, and ein = eout = 0. Figure 8. View largeDownload slide As above, but for the eccentricity eout = 0.3. Figure 8. View largeDownload slide As above, but for the eccentricity eout = 0.3. Figure 9. View largeDownload slide Upper (stability) limit Q (y-axis) as a function of m3 (mass of the external body) when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. Curves for cos i = 0.8 (higher) and −0.8 (lower) are shown. The curves are displayed for A = 2. Figure 9. View largeDownload slide Upper (stability) limit Q (y-axis) as a function of m3 (mass of the external body) when m1 = 1.5, m2 = 0.5, ein = 0.6, and eout = 0.5. Curves for cos i = 0.8 (higher) and −0.8 (lower) are shown. The curves are displayed for A = 2. 5 DISCUSSION The inner eccentricity–inclination term has not been published previously. Considering only the ein dependence, and taking Qst to be a constant as a function of cos i, Eggleton & Kiseleva (1995) and Mardling & Aarseth (1999) give \begin{equation*} Q_{\text{st}} \propto (1 + e_{\rm in}), \end{equation*} while Bailyn (private communication) suggests \begin{equation*} Q_{\text{st}} \propto \left( 1 + \frac{1}{2} e_{\rm in}^2 \right). \end{equation*} Our data for cos i = 1 agree with an approximate formula (simpler than equation 11 with cos i = 1) \begin{equation*} Q_{\text{st}} \propto \left( 1 - \frac{2}{3} e_{\rm in} + 1.2 e_{\rm in}^2 \right). \end{equation*} Therefore the formulae given by earlier studies are not very satisfactory. Moreover, we note that the cos i dependence is clearly a function of ein in the experimental data set (Figs 1–3). The cos i dependence given in Valtonen & Karttunen (2006), \begin{equation*} f(\cos i) = \left\lbrace \frac{1}{3} +\left[ (1 + \cos i)(1.97 - \cos i) \right]^{0.8} \right\rbrace , \end{equation*} is a fair representation of the f(cos i) function at low eccentricity but not applicable to mid-range or high eccentricities. The same is true for the simpler form by Valtonen et al. (2008): \begin{equation*} f(\cos i) = \left\lbrace \frac{7}{4} + \frac{1}{2} \cos i - \cos i^{2} \right\rbrace . \end{equation*} Mardling (2008) and Mardling (private communication; a computer code distributed privately) suggested a practically flat function at high eccentricity and a function that varies from flat at positive cos i to gently sloping (towards cos i = −1) at negative cos i for mid-range and low eccentricity. It is an improvement over the flat function of Mardling & Aarseth (1999), but not a good description of the experimental data. Mardling & Aarseth (2001) keep Qst constant with ein, but assume f(cos i) = {1 + 0.3i/π}. This is also a rough approximation to the numerically calculated data. These expressions and our current formula and computer simulations show that retrograde orbits are more stable than direct orbits at the same Q-distance. The amount of difference in stability depends on the eccentricity of the inner orbit. The main exception to this rule occurs at the i = 140° resonance (Fig. 5). In a future paper we will also discuss the apparent resonance feature at i = 140°. At this resonance the general formula fails, even though in a very small volume of the phase space. The end result of this investigation is an analytical formula for determining the stability of a triple system, equation (13). The stability limit depends on the orbital parameters of the inner and outer binary and on the number of outer revolutions N required for stability. Since the latter depends on the problem at hand, its uncertainty may be absorbed in the coefficient A. Thus putting A ∼ 1 in N-body simulations, and A = 2.4 in tests for absolute stability, together with Λ = 1, may constitute a practical stability formula: \begin{equation*} Q_{\text{st}}= 10^{1/3} A [(f\,g)^2 /(1-e_{\rm out})]^{1/6}, \end{equation*} where the functions f and g are given by equations (11) and (12), respectively. At the limit of ein = i = m3 = 0, f g = 1. The factor 101/3 corresponds to N = 104, the standard value used in this study. Acknowledgements The authors thank the anonymous referee for thorough and helpful comments that helped to improve the paper considerably. REFERENCES Aarseth S. J., 1973, Vistas Astron. , 15, 13 https://doi.org/10.1016/0083-6656(73)90003-2 CrossRef Search ADS Aarseth S. J., 2003, Gravitational N-body Simulations: Tools and Algorithms . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Aarseth S. J., Zare K., 1974, Celest. Mech. , 10, 185 CrossRef Search ADS Clairaut A., 1752, Theorie de la Lune . Imperial Academy of Science, St. Petersburg Dvorak R., 1997, Celest. Mech. Dyn. Astron. , 68, 63 https://doi.org/10.1023/A:1008287614810 CrossRef Search ADS Eggleton P., Kiseleva L., 1995, ApJ , 455, 640 https://doi.org/10.1086/176611 CrossRef Search ADS Euler L., 1752, Recherches sur les irrégularités du mouvement de Jupiter et Saturne Ford E. B., Kozinsky B., Rasio F. A., 2000, ApJ , 535, 385 https://doi.org/10.1086/308815 CrossRef Search ADS Funk B., Schwarz R., Pilat-Lohinger E., Suli A., Dvorak R., 2009, Planet. Space Sci. , 57, 434 https://doi.org/10.1016/j.pss.2008.06.017 CrossRef Search ADS Harrington R. S., 1972, Celest. Mech. , 6, 322 CrossRef Search ADS Heggie D. C., 2006, in Flynn C., ed., Few Body Problem . Ann. Univ. Turkuensis, Ser. A, Vol. 358, p. 20 Heggie D. C., Hut P., 2003, The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Huang T. Y., Innanen K. A., 1983, AJ , 88, 1064 https://doi.org/10.1086/113395 CrossRef Search ADS Innanen K. A., Zheng J. Q., Mikkola S., Valtonen M. J., 1997, AJ , 113, 1915 https://doi.org/10.1086/118405 CrossRef Search ADS Katz B., Dong S., Malhotra R., 2011, Phys. Rev. Lett. , 107, 181101 CrossRef Search ADS PubMed Kozai Y., 1962, AJ , 67, 591 https://doi.org/10.1086/108790 CrossRef Search ADS Lagrange J. L., 1766, Miscellanea Taurinensia , 3, 1762 Lagrange J. L., 1778, Hist. de l'acad. des sciences, 1774 = Lagrange 1867–92 , 6, 635 Laplace P. S., 1775, Mem. Acad. Sci. Paris , 1772 Laplace P. S., 1787, Mem. Acad. Sci. Paris, 1784, Oeuvres 11 (Paris, 1895) , 49 Laskar J., 2013, Progress Math. Phys. , 66, 239 CrossRef Search ADS Lithwick Y., Naoz S., 2011, ApJ , 742, 94 https://doi.org/10.1088/0004-637X/742/2/94 CrossRef Search ADS Mardling R. A., 2008, in Vesperini E., Sills A., Giersz M., eds, Proc. IAU Symp. Vol. 246, Dynamical Evolution of Dense Stellar Systems. Cambridge Univ. Press, Cambridge, p. 199 Mardling R., Aarseth S., 1999, in Steves B. A., Roy A. E., eds, The Dynamics of Small Bodies in the Solar System. Kluwer, Dordrecht, p. 385 Google Scholar CrossRef Search ADS Mardling R. A., Aarseth S. J., 2001, MNRAS , 321, 398 https://doi.org/10.1046/j.1365-8711.2001.03974.x CrossRef Search ADS Mikkola S., 1997, Celest. Mech. Dyn. Astron. , 67, 145 https://doi.org/10.1023/A:1008217427749 CrossRef Search ADS Newton I., 1687, Philosophiae Naturalis Principia Mathematica . Royal Society, London Google Scholar CrossRef Search ADS Pilat-Lohinger E., Funk B., Dvorak R., 2003, A&A , 400, 1085 CrossRef Search ADS Roy A., Haddow M., 2003, Celest. Mech. Dyn. Astron. , 87, 411 https://doi.org/10.1023/B:CELE.0000006767.34371.2f CrossRef Search ADS Valtonen M., Karttunen H., 2006, The Three-Body Problem . Cambridge Univ. Press, Cambridge Google Scholar CrossRef Search ADS Valtonen M., Mylläri A., Orlov V., Rubinov A., 2008, in Vesperini E., Sills A., Giersz M., eds, Proc. IAU Symp. Vol. 246, Dynamical Evolution of Dense Stellar Systems. Cambridge Univ. Press, Cambridge, p. 209 APPENDIX A: SAMPLE ILLUSTRATIONS FOR THE RANGE OF MASSES Here we present results of simulations described in the beginning of Section 4 for different masses and uniformly sampled Ω. Everywhere in what follows eout = 0.50. In addition to the three curves corresponding to A = 1.15, 1.45, and 1.75, one more curve for A = 2.0 was added. Figure A1. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.00. Figure A1. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.00. Figure A2. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.00. Figure A2. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.00. Figure A3. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A3. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A4. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.00. Figure A4. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.00. Figure A5. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.00. Figure A5. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.00. Figure A6. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.00. Figure A6. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.00. Figure A7. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.00. Figure A7. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.00. Figure A8. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.00. Figure A8. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.00. Figure A9. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.00. Figure A9. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.00. Figure A10. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.25. Figure A10. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.25. Figure A11. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.25. Figure A11. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.25. Figure A12. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.25. Figure A12. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.25. Figure A13. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.25. Figure A13. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.25. Figure A14. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.25. Figure A14. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.25. Figure A15. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.25. Figure A15. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.25. Figure A16. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.25. Figure A16. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.25. Figure A17. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.25. Figure A17. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.25. Figure A18. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.25. Figure A18. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.25. Figure A19. View largeDownload slide m1 = 1.00 , m2 = 1.00, m3 = 0.10, and ein = 0.50. Figure A19. View largeDownload slide m1 = 1.00 , m2 = 1.00, m3 = 0.10, and ein = 0.50. Figure A20. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.50. Figure A20. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.50. Figure A21. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.50. Figure A21. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.50. Figure A22. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.50. Figure A22. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.50. Figure A23. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.50. Figure A23. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.50. Figure A24. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.50. Figure A24. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.50. Figure A25. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.50. Figure A25. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.50. Figure A26. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.50. Figure A26. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.50. Figure A27. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.50. Figure A27. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.50. Figure A28. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.75. Figure A28. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.75. Figure A29. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.75. Figure A29. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.75. Figure A30. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.75. Figure A30. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.75. Figure A31. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.75. Figure A31. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.75. Figure A32. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.75. Figure A32. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.75. Figure A33. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.75. Figure A33. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.75. Figure A34. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.75. Figure A34. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.75. Figure A35. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.75. Figure A35. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.75. Figure A36. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.75. Figure A36. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.75. Figure A37. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.99. Figure A37. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 0.10, and ein = 0.99. Figure A38. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.99. Figure A38. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 1.00, and ein = 0.99. Figure A39. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A39. View largeDownload slide m1 = 1.00, m2 = 1.00, m3 = 10.00, and ein = 0.00. Figure A40. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.99. Figure A40. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 0.10, and ein = 0.99. Figure A41. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.99. Figure A41. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 1.00, and ein = 0.99. Figure A42. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.99. Figure A42. View largeDownload slide m1 = 1.50, m2 = 0.50, m3 = 10.00, and ein = 0.99. Figure A43. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.99. Figure A43. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 0.10, and ein = 0.99. Figure A44. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.99. Figure A44. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 1.00, and ein = 0.99. Figure A45. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.99. Figure A45. View largeDownload slide m1 = 1.90, m2 = 0.10, m3 = 10.00, and ein = 0.99. © 2018 The Author(s) Published by Oxford University Press on behalf of the Royal Astronomical Society
### Journal
Monthly Notices of the Royal Astronomical SocietyOxford University Press
Published: May 1, 2018
## You’re reading a free preview. Subscribe to read the entire article.
### DeepDyve is your personal research library
It’s your single place to instantly
that matters to you.
over 18 million articles from more than
15,000 peer-reviewed journals.
All for just $49/month ### Explore the DeepDyve Library ### Search Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly ### Organize Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place. ### Access Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals. ### Your journals are on DeepDyve Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more. All the latest content is available, no embargo periods. DeepDyve ### Freelancer DeepDyve ### Pro Price FREE$49/month
\$360/year
Save searches from
PubMed
Create lists to
Export lists, citations | 2018-12-16 07:15:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400827884674072, "perplexity": 1296.4626995860242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00525.warc.gz"} |