| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
https://solvedlib.com/n/let-f-x-y-2-z-tan-y2-i-23-in-x2-8-j-is-oriented-upward-jjs,15822136
|
# Find the flux of F(x, y, z) = z tan(y²) i + z³ ln(x² + 8) j across an upward-oriented surface S
###### Question:
Let F(x, y, z) = z tan(y²) i + z³ ln(x² + 8) j. Find the flux ∬_S F · dS of F across S, the part of the paraboloid x² + y² + z = 29 that lies above the plane … and is oriented upward.
|
2022-08-15 13:46:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.449271559715271, "perplexity": 13368.131697019695}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572174.8/warc/CC-MAIN-20220815115129-20220815145129-00137.warc.gz"}
|
https://plainmath.net/8976/20-plus-6y-equal-18-%E2%88%92-10y-plus-14y
|
# 20 + 6y = –18 − 10y + 14y
Question
Functions
20 + 6y = –18 − 10y + 14y
2021-03-08
20 + 6y = -18 - 10y +14y
20 + 6y = -18 +4y
6y -4y = -18 -20
2y = -38
y = -19
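As a quick check (a sketch in Python with SymPy; not part of the original solution), the same equation can be solved symbolically:

```python
from sympy import symbols, Eq, solve

y = symbols("y")

# 20 + 6y = -18 - 10y + 14y
equation = Eq(20 + 6*y, -18 - 10*y + 14*y)

print(solve(equation, y))  # expected: [-19]
```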
### Relevant Questions
Find the function f given that the slope of the tangent line at any point (x, f(x)) is f '(x) and that the graph of f passes through the given point. $$f '(x)=5(2x − 1)^{4}, (1, 9)$$
Explain the steps you would take to find the inverse of f(x) = 3x − 4. Then find the inverse.
the function that has an x-intercept of -2 and a y-intercept of $$\displaystyle−{\left(\frac{{2}}{{3}}\right)}$$
Let $$f(x) = x + 8$$ and $$g(x) = x^{2} − 6x − 7$$.
Find f(g(2))
What is the slope of a line perpendicular to the line whose equation is x - 3y = -18. Fully reduce your answer.
Which relation does not represent a function?
$$A. (0,8), (3,8), (1,6)$$
$$B. (4,2), (6,1), (8,9)$$
$$C. (1,20), (2,23), (9,26)$$
$$D. (0,3), (2,3), (2,0)$$
Let $$P(t)=100+20 \cos 6t,0\leq t\leq \frac{\pi}{2}$$. Find the maximum and minimum values for P, if any.
|
2021-05-11 11:20:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.635832667350769, "perplexity": 417.21669158266934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991982.8/warc/CC-MAIN-20210511092245-20210511122245-00272.warc.gz"}
|
http://www.ques10.com/p/22195/short-note-on-lc-oscillator/
|
Question: short note on LC oscillator
Subject: Electronic Devices and Circuits II
Topic: Oscillators
Difficulty: Medium
• In an LC oscillator, the feedback network consists of an inductor and a capacitor. These LC components determine the frequency of oscillation.
• LC oscillators are also known as tuned oscillators. They can operate at high frequencies, from about 200 kHz up to a few GHz.
• They are not suitable for low-frequency operation, because at low frequencies the required inductor and capacitor values become very large, making the circuit bulky and expensive.
• Analysis of these circuits shows that the following types of oscillators are obtained, depending on how the reactance elements are arranged:
| Oscillator type | Reactance elements |
|---|---|
| Colpitts oscillator | C, C, L |
| Hartley oscillator | L, L, C |
| Clapp oscillator | C, C, L and an extra capacitor C |
| Tuned input and tuned output | LC, LC |
The basic LC oscillator tank circuit is as shown in the figure:
The frequency of oscillation of an LC oscillator is given by f = 1/(2π√(LC)).
In the case of the Colpitts oscillator, C is the equivalent capacitance calculated by the formula C_eq = (C1·C2)/(C1 + C2).
In the case of the Hartley oscillator, L is the equivalent inductance calculated by the formula L_eq = L1 + L2 + 2M,
where M is the mutual inductance between the two inductors.
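As a quick numerical illustration of these formulas (a sketch in Python; the component values below are made-up examples, not taken from the answer):

```python
import math

def lc_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def colpitts_frequency(L, C1, C2):
    """Colpitts: two capacitors in series across one inductor."""
    C_eq = (C1 * C2) / (C1 + C2)
    return lc_frequency(L, C_eq)

def hartley_frequency(L1, L2, M, C):
    """Hartley: two (possibly coupled) inductors with one capacitor."""
    L_eq = L1 + L2 + 2.0 * M
    return lc_frequency(L_eq, C)

# Hypothetical values: 10 uH with two 100 pF capacitors
print(f"Colpitts: {colpitts_frequency(10e-6, 100e-12, 100e-12) / 1e6:.2f} MHz")
# Hypothetical values: 5 uH + 5 uH, no mutual coupling, 100 pF
print(f"Hartley:  {hartley_frequency(5e-6, 5e-6, 0.0, 100e-12) / 1e6:.2f} MHz")
```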
Advantages:
• Easy to tune.
• Simple to construct and easy to operate at high frequency.
Disadvantages:
• Poor frequency stability.
Applications
• Used as a local oscillator.
• Used as an RF source and in high-frequency function generators.
|
2018-09-24 06:36:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817523121833801, "perplexity": 3840.5249616799642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160145.76/warc/CC-MAIN-20180924050917-20180924071317-00321.warc.gz"}
|
https://math.stackexchange.com/questions/912507/alternative-monty-hall-problem
|
# Alternative Monty Hall Problem
So in the typical setup for the Monty Hall problem, there are 3 doors, where 2 have goats and 1 has a car. I, the contestant, get to randomly guess a door, hoping to get the one with the car; after this, the host opens a door that always reveals a goat. Out of the two doors that are left, I have to choose whether to stay with the door I originally chose or switch to the other door. As many analyses of this problem have shown, switching my choice gives me a higher probability of winning. This largely has to do with the fact that, since the host always reveals a goat, asking whether to stay or not is the same as asking whether my original guess was right, and my guess is wrong with probability $\frac{2}{3}$, so I should switch.
Now it seems, this "strange" result largely has to do with the fact that the host always reveals a goat. But what if alternatively you had this situation
You are given 3 doors, 2 with a goat and 1 with a car. You randomly choose a door (looking to get one with the car). The host will randomly choose to reveal what is behind one of the 2 doors you haven't chosen. Given that he reveals goat, what is the probability of getting car if you chose to stay with choice?
My analysis of this problem goes as follows:
Let $D$ be the event that the door I guessed has the car and let $G$ represent the event that the host reveals a goat; thus what I want to calculate is $P(D|G)$. With this I have $$P(D|G)=\frac{P(D\cap G)}{P(G)}=\frac{P(G|D)P(D)}{P(G|D)P(D)+P(G|D^{c})P(D^{c})}=\frac{1\left(\frac{1}{3}\right)}{1\left(\frac{1}{3}\right)+\frac{1}{2}\left(\frac{2}{3}\right)}=\frac{1}{2}$$
So it seems it doesn't matter if I choose to switch or not, and this is the result most people come up with when first thinking of problem.
Question: First, is my analysis correct for this problem? Second, is it true in general that if you guess one of $n$ doors and the host randomly reveals $k$ doors that all have goats, the probability that the car is behind the door you chose is just $\dfrac{1}{n-k}$?
UPDATE
So I ended up asking my statistics/probability professor about this question and he said the result I got was correct. He explained that the Monty Hall problem inherently causes confusion because many people don't notice that the only randomness in the original problem is in your choice, while the host's choice of door is deterministic. The problem I asked about has two sources of randomness, your original choice of door and the host's choice, so the two problems are inherently different.
Your analysis is correct. Suppose that there are $n$ doors, one of which has a car; the others have goats. The host randomly chooses $k$ of the doors you did not pick and opens them. I will use your notation, so $D$ is the event that you have chosen the car and $G$ is the event that the host reveals $k$ goats.
Then we have $$\mathbb{P}(D|G) = \frac{\mathbb{P}(D \cap G)}{\mathbb{P}(G)} = \frac{\frac{1}{n}}{\frac{n-k}{n}} = \frac{1}{n-k}.$$ This is because
• the probability $\mathbb{P}(G)$ that the host only reveals goats is $\frac{n-k}{n}$ (as it is the probability that the car is behind one of the $n-k$ doors that remain unopened),
• the probability $\mathbb{P}(D \cap G)$ that you have chosen the car and the host only reveals goats is $\frac{1}{n}$, as this is the same as the probability $\mathbb{P}(D)$ that you have chosen the car.
• That seems about right. I was also wondering if this problem is the same as the following: there is a deck of $n$ cards and you choose one card from the deck. The deck is then shuffled, you start taking cards off the top of the deck, and after you have taken $k$ cards none of them is your card. What is the probability that the next card is your card? – Kamster Aug 28 '14 at 21:51
• I think that is indeed an equivalent situation. – user133281 Aug 28 '14 at 21:53
• Ok cool, because I think the question explained this way makes it intuitively obvious that the result should be $\frac{1}{n-k}$, and I always like to have intuitive explanations when I can – Kamster Aug 28 '14 at 21:53
1/2 because there are only 2 choices left at that point and one has to be the car.
• What I essentially tried to do was change the problem so that it gives the result most people mistakenly get. Instead of always revealing a goat as in the original Monty Hall, the host chooses a door at random; if he happens to show us a goat, what is the probability that we have the car if we stay? The fact that the car can now be revealed essentially changes the problem, because the door reveal now actually gives us information, instead of no information as in the original Monty Hall – Kamster Sep 25 '14 at 6:36
• I reworded the problem in the comments to user133281 in terms of a deck of cards, which is essentially the same problem; after the fact I found that phrasing makes the result I hypothesized much easier to see intuitively – Kamster Sep 25 '14 at 6:40
• Ok I respect your opinion and I'll be sure to take that into account and think through problem more – Kamster Sep 25 '14 at 6:57
• Also, if you really want to verify which answer is correct, do a simulation of about 1000 runs of the game; the proportion of runs you win should be close to the actual probability, by the law of large numbers – Kamster Sep 25 '14 at 7:30
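A minimal sketch of such a simulation (in Python; the parameters n and k are mine, to cover the general version asked about above) under the rules of this alternative problem, where the host opens doors at random and we condition on all of them showing goats:

```python
import random

def stay_win_probability(n=3, k=1, trials=100_000):
    """Alternative Monty Hall: the host opens k of the n-1 unchosen doors at
    random; we condition on all opened doors showing goats and return the
    probability that the original pick hides the car (i.e. staying wins)."""
    stay_wins = 0
    valid = 0  # trials where the host happened to reveal only goats
    for _ in range(trials):
        car = random.randrange(n)
        pick = random.randrange(n)
        unchosen = [d for d in range(n) if d != pick]
        opened = random.sample(unchosen, k)
        if car in opened:
            continue  # host revealed the car; this outcome is conditioned away
        valid += 1
        if pick == car:
            stay_wins += 1
    return stay_wins / valid

print(stay_win_probability(3, 1))   # close to 1/2
print(stay_win_probability(10, 4))  # close to 1/(10-4) = 1/6
```

For n = 3, k = 1 the estimate comes out near 1/2, and for general n and k it approaches 1/(n−k), matching the accepted answer.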
Your analysis of the intuition looks good. I think you're right that people think this way. But no. No matter the number of goats revealed, your odds of initially guessing right are always 1 in $n$.
It helps to imagine that there are 100 doors. And that the host reveals 98 goats. It is easier to see that you should switch.
• This does not answer the question. – user133281 Aug 28 '14 at 21:43
• Wow my first down votes. I tried to clarify. – amcalde Aug 28 '14 at 21:46
• But if the host is randomly opening doors and they all happen to be goats, that should give me some information, not just leave me with no information as before he revealed them – Kamster Aug 28 '14 at 21:48
• But you made your choice in advance. That shot is just $1/n$ because there was no information. – amcalde Aug 28 '14 at 21:49
• But the fact that randomly chosen doors reveal goats gives you information about your choice: it becomes more likely that you have chosen the car. Please note that this is an alternative Monty Hall problem. – user133281 Aug 28 '14 at 21:50
|
2019-05-21 15:31:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8061788082122803, "perplexity": 391.2169406149225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256426.13/warc/CC-MAIN-20190521142548-20190521164548-00011.warc.gz"}
|
https://www.gamedev.net/forums/topic/52626-frustum-culling/
|
# frustum culling
## 2 posts in this topic
How do I test if a box is visible or not? I need it for my terrain engine, which uses an octree.
Your frustum is defined by six clipping planes (near/far/left/right/top/bottom).
The dot product of a point with a plane tells you which side of the plane the point is on. If one or more of the box's corners is inside the frustum, the box is at least partially visible (unless the frustum is considerably bigger than your boxes).
Search the forums and you'll find a lot of topics concerning frustum culling and plane equations. There are many great topics; that's how I learned it too.
Edited by - stefu on June 27, 2001 5:02:13 PM
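A minimal sketch of that corner test in Python (my own plane convention: each frustum plane is given as (normal, d) with the normal pointing inward, so a positive signed distance means the point is inside). It implements the common conservative variant, culling the box only when all eight corners lie behind a single plane:

```python
def signed_distance(plane, point):
    """Dot product of the plane normal with the point, plus the plane offset d."""
    (nx, ny, nz), d = plane
    x, y, z = point
    return nx * x + ny * y + nz * z + d

def box_corners(box_min, box_max):
    """The eight corners of an axis-aligned box."""
    (x0, y0, z0), (x1, y1, z1) = box_min, box_max
    return [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]

def box_outside_frustum(planes, box_min, box_max):
    """Conservative test: if all eight corners are behind any single plane,
    the box is definitely outside; otherwise treat it as (possibly) visible."""
    corners = box_corners(box_min, box_max)
    for plane in planes:
        if all(signed_distance(plane, corner) < 0.0 for corner in corners):
            return True
    return False
```

This errs on the side of drawing: a box that straddles the frustum is never culled, which is usually what you want for an octree traversal.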
http://www.cubic.org/~submissive/sourcerer/3dclip.htm
Tells you how to cull lines. To test if a box is completely outside of the frustum you can make 4 lines out of the box from opposite corners.
(ASCII sketch of a box with its eight corners labeled A through H.)
Make these line segments:
A - F
B - E
C - H
D - G
Test these 4 lines against your frustum. If all 4 lines are outside then the entire box is outside.
Seeya
Krippy
|
2017-06-28 15:51:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.255029559135437, "perplexity": 3555.5870534538158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323711.85/warc/CC-MAIN-20170628153051-20170628173051-00419.warc.gz"}
|
https://www.physicsforums.com/threads/calculation-of-casimir-effect.360057/
|
# Homework Help: Calculation of Casimir Effect
1. Dec 3, 2009
### Lancen
Hello I am trying to work out the Casimir force via the Abel-Plana equation. I have been following the derivation in http://arxiv.org/abs/quant-ph/0106045.
Specifically, I can't figure out for the life of me how the author goes from the first part of equation 2.33 to the second part of it. I am trying to figure out whether it is something simple I am just overlooking.
In my calculations the last term of the first equality has a 1/2 that refuses to go away. This prevents regularization using the Abel-Plana equation. The only way I can see around it is to assume the integration over dk3 goes from -infinity to infinity and convert that to twice the integral from 0 to infinity. In which case one has to ask: how do negative wave numbers make any sense?
I am typing this really late as I have spent all day on this, I will try to put up some equations tomorrow if I have time. But everything that is relevant is in the arXiv paper.
2. Dec 3, 2009
### diazona
wow :surprised that's a long paper...
That seems quite reasonable to me. Typically when you see an integral written without limits in a paper or book on quantum theory, it's implied that the integral is over the whole applicable region, which for a 1-dimensional integral is usually $-\infty$ to $+\infty$.
The negative wavenumbers represent waves that are traveling in the negative direction. You're familiar with the expression
$$e^{i(k x - \omega t)}$$
for a complex wave, right? For $k > 0$ (and $\omega > 0$), the wave's velocity is positive, as you can see if you use the stationary phase condition,
$$k x - \omega t = k(x - v t) = \phi_0$$
(As t increases, x must also increase to keep the combination (x - vt) a constant) But if you change to $k < 0$, it's the same kind of wave just moving in the negative direction. I seem to remember seeing a picture somewhere on Wikipedia that would show this rather well, but I can't find it now.
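A small numerical illustration of that point (a sketch in Python; the values of $k$ and $\omega$ are arbitrary): track the position of a constant-phase point $kx - \omega t = \phi_0$ over time and note that it moves in the $-x$ direction when $k < 0$.

```python
import math

def crest_position(k, omega, t, phi0=0.0):
    """Position x of the constant-phase point k*x - omega*t = phi0."""
    return (phi0 + omega * t) / k

omega = 2.0 * math.pi        # arbitrary angular frequency
for k in (+2.0, -2.0):       # positive vs. negative wavenumber
    x0 = crest_position(k, omega, t=0.0)
    x1 = crest_position(k, omega, t=0.1)
    print(f"k = {k:+.1f}: crest moves from x = {x0:.3f} to x = {x1:.3f}")
```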
3. Dec 4, 2009
### Lancen
Yes, I realized that too earlier today, but then why is the same thing not also done with the dk1 and dk2 integrals? Also, if you look a bit further up at the example of the simple massless scalar field in 1D, the integral over dk (no subscript) in equation 2.17 is explicitly from 0 to infinity. Of course, this whole thing just keeps getting more confusing, because a bit further up, in equation 2.14, k does go from -infinity to infinity!
As of today, the best explanation I can come up with is this: since in equation 2.29 the discrete sum is taken from -infinity to infinity to account for the two polarizations of photons (this being the 3D parallel-plates example now, rather than the 1D scalar field), it is equivalent to multiplying by 2 a discrete sum from 0 to infinity, which is the 1D scalar-field energy between the plates, equation 2.11 (the fact that the latter starts from 1 and the former starts from 0 can be rectified by subtracting out the n=0 term).
Therefore, by the same logic, comparing equations 2.32 and 2.16, which are the free-space energies of the vacuum without boundary conditions in 3D and 1D respectively, one should also multiply 2.32 by 2 in order to account for photon polarization. This would resolve the issue. But then why would they not say that explicitly, instead of sneaking it into the middle of a damn derivation, where anyone who didn't sit down with a pencil and work out the math themselves could easily have missed it?
|
2018-05-28 01:42:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8615183234214783, "perplexity": 311.12145262032766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870604.65/warc/CC-MAIN-20180528004814-20180528024814-00090.warc.gz"}
|
http://machineawakening.blogspot.com/2007/09/hacking-my-way-to-parallels-tools-for.html
|
## Wednesday, September 12, 2007
### Hacking my way to Parallels Tools for Linux
So it turns out both the Windows and Linux machines formerly used in the Artificial Vision project are dead, yet I am supposed to hack together a demonstration version of our software for a fair the company is to attend. Since there is no hope in waiting for replacements, I decided to take my own MacBook and continue development on a pair of virtualized environments created with Parallels Desktop – which is the Mac world's equivalent of VMware.
For the Linux environment, I picked Kurumin Light, a lightweight version of the Knoppix-based Kurumin Linux distro. Installation went almost without incident, save for one thing: in order for the virtual environment's graphical resolution to match the dimensions of the enclosing window (and thus make the best possible use of the display), a set of extensions bundled with Parallels Desktop, called Parallels Tools, needs to be installed. The problem is, when I tried to install it, the procedure ended abruptly with an Unable to get layout name error message. A quick Google search led to two forum threads discussing the error, but neither reached a solution; so I decided to try and solve it myself.
Peeking into the installation file, I found it to be a self-extracting GZip archive, which gets extracted to a temporary directory called /tmp/selfgzXXXXXXXXX -- where XXXXXXXXX is an arbitrary number -- at the beginning of the procedure. Among the extracted files there is a script called xserver-config.py, which rewrites the X Server configuration file: it is this script that fails to find the X Server layout name, causing the installation to fail.
With this in mind, I devised the following workaround for the Unable to get layout name error. All these steps must be executed from a root account:
1. Run sh parallels-tools.run, but don't confirm the installation just yet;
2. Open the /tmp/selfgzXXXXXXXXX/xserver-config.py script file in a plain-text editor. Change line 273 from
layoutId = None
to
layoutId = "Default Layout"
or any other name to your liking; don't forget to save (a scripted version of this edit is sketched after these steps);
3. Confirm installation. Hopefully the script will now execute without errors;
4. Open /etc/X11/xorg.conf (or the equivalent X Server configuration file on your distro) and locate the "ServerLayout" Section. You'll notice there are two Identifier entries, a consequence of the configuration script not telling apart its elbow from its ass... I mean, not noticing the Identifier entry already present in the original config file. This situation is bound to crash your X Server upon restart, so comment the original entry off. Also, if you'd like to make sure your pointer device will work properly, locate all "InputDevice" Sections with an Identifier value of "Parallels Mouse" and comment them off, save for one -- which one will depend on your system settings, but the entry with an Option value of "Device" "/dev/input/mice" is probably your best guess if you aren't X Server savvy;
5. Restart your X Server, for example by hitting Ctrl + Alt + Backspace.
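For convenience, step 2 can be scripted (a rough sketch in Python, run as root; it assumes the temporary directory pattern and the exact layoutId = None line described above, both of which may differ across Parallels versions):

```python
import glob

# Locate the script extracted by parallels-tools.run (hypothetical path pattern).
candidates = glob.glob("/tmp/selfgz*/xserver-config.py")
if not candidates:
    raise SystemExit("xserver-config.py not found; start the installer first")

path = candidates[0]
with open(path) as f:
    text = f.read()

# Replace the line that leaves the layout name undefined (see step 2).
patched = text.replace("layoutId = None", 'layoutId = "Default Layout"')

with open(path, "w") as f:
    f.write(patched)

print("Patched", path)
```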
After executing these steps, I found that pointer integration as well as dynamic resolution resizing were working – that is, until I restarted the virtual machine and learned I would still have to add the "prluserd" service to the list of automatically started services. (Alternatively, I could start it by hand each time, by calling "prluserd" from a root terminal window, and then restart X. But I am not that crazy.)
I hope these steps work for others who are facing problems with the Parallels Tools installation on Linux, though as usual I make no guarantees. Parallels Desktop is a wonderfully useful piece of software, and with interface integration on, it enables a very pleasant virtualization experience.
|
2017-05-27 21:25:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2873682379722595, "perplexity": 2775.923385365165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609061.61/warc/CC-MAIN-20170527210417-20170527230417-00107.warc.gz"}
|
https://amjadmahayri.wordpress.com/2014/04/
|
## Mixture Density Networks
### Motivation

One of the coolest ideas I learned in this course is the probabilistic interpretation of neural networks. Instead of using the NN to predict the targets directly, use it to predict the parameters of a conditional distribution over the targets (given the input). So for each input you learn a conditional distribution over the target; each has the same form, but with different parameters. For example, we can assume that the target has a Gaussian conditional distribution, and the network predicts the mean of this distribution (we assume here that the variance is independent of the input, but this assumption can be relaxed). We saw in class that if we train the network with the mean-squared error loss function we get the same solution as when we train it with the negative log-likelihood of this Gaussian. One might hope to learn a more complex (multi-modal) conditional distribution for the target. This is actually the goal of mixture density networks! We want to model the conditional distribution as a mixture of Gaussians, where each Gaussian component's parameters depend on the input, i.e.: $P(y_n \mid x_n) = \sum_{k=1}^K \pi_k(x_n) \mathcal{N}_k(y_n \mid \mu_k(x_n), \sigma_k^2(x_n))$ The network here has 3 types of outputs: the mixing coefficients $\pi_k$, the mean of each Gaussian component $\mu_k$, and its variance $\sigma^2_k$. One might think: we don't have ground truths for those outputs, how could we make the network learn them?! The answer is that we don't need ground truths, because the loss function we're going to use is the negative log-likelihood of the data, so we just update the parameters of the model to minimize this loss function. For more discussion of this model you can check Bishop's book, chapter 5.

### Implementation

I have written a Theano implementation of mixture density networks (MDN) which you can find here. I wrote it so that it supports multiple samples at once, so the Gaussian components are multivariate, and it also supports mini-batches of data. This made the implementation a little more interesting, since I have to deal with a 3D tensor for $\mu$. Instead of having one matrix for the output layer as in a standard MLP, you have a tensor for $\mu$ and two matrices for $\sigma^2$ and $\pi$. The activation function for $\mu$ is the same as for the desired output, and for $\sigma^2$ and $\pi$ it's a softplus and a softmax, respectively. Similar to what David observed, a straightforward implementation of an MDN produces a lot of NaNs. A very important issue when implementing an MDN is that the log-likelihood contains a log-sum-exp expression, which can be numerically unstable. This can be fixed using the log-sum-exp trick. I also had to use a smaller initial learning rate than the one I used in my previous MLP; otherwise I would get NaN in the likelihood. With these two tricks, I don't get any more NaNs. For the RNADE paper trick, I tried multiplying the mean with the variance in the cost function, but this changes the gradients of the variance and makes the performance worse; in addition, I didn't find it helping at all. Multiplying the gradient of $\mu$ directly with $\sigma$ is a little tricky when you're using Theano's automatic differentiation, and that's probably why, when I checked the RNADE code, I found that they compute the gradients without using Theano's T.grad.
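The numerical core described above (the mixture negative log-likelihood with the log-sum-exp trick) can be sketched as follows; this is a minimal NumPy version for isotropic multivariate components, not the author's Theano code, and the shapes and names are my own:

```python
import numpy as np

def mdn_nll(y, pi, mu, sigma):
    """Mean negative log-likelihood of targets under a Gaussian mixture.

    y:     (N, D)    targets
    pi:    (N, K)    mixing coefficients (each row sums to 1)
    mu:    (N, K, D) component means
    sigma: (N, K)    isotropic component standard deviations
    """
    N, D = y.shape
    diff = y[:, None, :] - mu                      # (N, K, D)
    sq_dist = np.sum(diff ** 2, axis=-1)           # (N, K)
    # log density of an isotropic D-dimensional Gaussian, per component
    log_gauss = (-0.5 * sq_dist / sigma ** 2
                 - D * np.log(sigma)
                 - 0.5 * D * np.log(2.0 * np.pi))  # (N, K)
    log_weighted = np.log(pi) + log_gauss          # (N, K)
    # log-sum-exp trick: subtract the per-row maximum before exponentiating
    m = np.max(log_weighted, axis=1, keepdims=True)
    log_lik = m[:, 0] + np.log(np.sum(np.exp(log_weighted - m), axis=1))
    return -np.mean(log_lik)
```

In the post's Theano implementation, the analogous symbolic expression is what gets differentiated with T.grad.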
### Experiments

We would like to compare the MDN model with a similar MLP; we can compare them in terms of the mean negative log-likelihood (mean NLL) and the MSE on the same validation set. Computing the log-likelihood of the MLP is easy: it's just the log of a Gaussian, with the output of the network as the mean and, as its variance, the maximum-likelihood estimate from the data, which turns out to be the MSE. On the other hand, to compute the MSE for the MDN model, we need to sample from the target conditional distribution. We do that as follows for each input data point: we sample the component from the multinomial distribution over the components (parametrized by the mixing coefficients), which gives us a selected component, and then sample the prediction from the selected Gaussian component. I ran the first set of experiments on the AA phone dataset. I took 100 sequences for training and 10 for validation. I trained two models. Both the MLP and the MDN take as input a frame of 240 samples and output one sample. The dataset used to train the models has 162,891 training examples and 14,739 validation examples. The following plot shows training and validation mean NLL for the MDN for the following hyper-parameter configuration:
• 2 hidden layers each has 300 units with tanh activations
• initial learning rate MDN: 0.001
• linear annealing of learning rate to 0 starting after 50 epochs
• 128 samples per mini-batch
• 3 components
We can see the mean NLL decreasing, which means the model is learning. The validation mean NLL stabilizes after almost 100 epochs.

What I am mainly interested in, though, is comparing the same MLP architecture with the MDN. Therefore, I used pretty much the same hyper-parameters for both networks to see whether we gain an advantage just by having the mixture of Gaussians at the output layer. The following plot shows results on the same validation set, using the following hyper-parameters: I was expecting the MDN to perform better than the MLP. However, we can see that the MLP is better than the MDN both in terms of MSE and mean NLL. The minimum MSE for the MLP is 0.0222 and for the MDN is 0.0324, and the minimum mean NLL for the MLP is -1.29 and for the MDN is -0.77. This is actually the typical performance pattern in pretty much all the experiments I did on this dataset. To investigate further, I tried varying the number of components, and found that performance improves only a little as the number of components increases (for 10 components the minimum mean NLL reaches -0.91).

With both models I was not able to generate something that sounds like \aa\, but the following generated waveform from the MDN model shows that it was able to capture the periodicity of the \aa\ sound, though it is still more peaky than a natural signal:

We saw that the MDN doesn't do better than the MLP on the \aa\ dataset, so it turns out we're not benefiting from having a multi-modal predictive distribution. To verify this further, I performed another set of experiments on a more complicated task, where I used full utterances of one speaker (FCLT0) with the phoneme information (the current and next phonemes, as in the previous experiment). I trained on 9 utterances and validated on 1. The dataset has 402,939 training examples and 70,621 validation examples. Using the same hyper-parameter settings, I got the following results:

Here we see that the MDN beats the MLP in terms of mean NLL, but it still doesn't perform better on MSE. This is somewhat surprising, as you might think that the MDN has a better model of the data, but it's probably the variance of sampling from the MDN that is increasing the error. This is still something interesting to investigate further in the future.
|
2017-01-23 16:45:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6976431608200073, "perplexity": 442.71434448572035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00342-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.federalreserve.gov/econresdata/notes/ifdp-notes/2016/low-for-long-interest-rates-and-net-interest-margins-of-banks-in-advanced-foreign-economies-20160411.html
|
## IFDP Notes
### "Low-for-long" interest rates and net interest margins of banks in Advanced Foreign Economies
Stijn Claessens (FRB), Nicholas Coleman (FRB), and Michael Donnelly (FRB)1
1. Introduction
Since the global financial crisis (GFC), interest rates in many advanced economies have been low and in many cases are expected to remain low for some time. Low interest rates help economies recover and can enhance banks' balance sheets and performance by supporting asset prices and reducing non-performing loans. But persistently low interest rates may also erode the profitability of banks as low rates are typically associated with lower net interest margins -- NIMs, typically measured as net interest income divided by interest earning assets. While overall advanced economies' bank profitability, measured by return on assets, has recovered from the worst of the GFC, it remains low and many advanced economies' banks are facing profitability challenges, related to low net interest margins as well as weak loan and non-interest income growth. And while NIMs across many advanced economy banks have been trending down on a longer-term basis, they have fallen more sharply since the GFC, in part, as appears so, on account of lower interest rates.
But how strong is the link between interest rates and NIMs, and is this relationship different in low interest rate environments? This note explores the empirical evidence between changes in interest rates and NIMs for different interest rate environments to discover the potential adverse effects of a low interest rate on bank NIMs. Using cross-country evidence can be insightful to assess a situation that is not so common in any individual country. Overall, the new empirical analysis shows that low rates are contributing to weaker NIMs and identifies an adverse effect that is materially larger when interest rates are low. It suggests that these effects can be material for banks in some key advanced foreign economies (AFEs).2
2. Literature on the effects of low interest rates on banks' NIMs
In many ways, banks, of course, may benefit from low interest rates, directly (e.g., through valuation gains on securities they hold) and indirectly (e.g., as non-performing loans will be lower as borrowers' debt service will be less burdensome). The focus here is on the narrower question of the effects of low interest rates on banks' NIMs. Analytics and existing empirical findings suggest that, controlling for other factors, banks' NIMs are lower when interest rates are low. We briefly review the reasoning and this literature.
Low short-term interest rates can depress bank margins because for many types of deposits, banks are reluctant to lower deposit rates, especially below zero (while there is anecdotal evidence that some banks are passing negative rates onto corporate customers in select cases, banks have been reluctant to pass on negative policy rates to retail depositors). As a result, when interest rates decline, bank margins compress, since banks must pass on lower rates on assets based on contractual repricing terms (e.g., floating rate loans) and have an incentive to do so to those borrowers that have other financing choices (e.g., from corporate bond markets). Moreover, low long-term interest rates can depress NIMs by flattening the yield curve. Because banks transform short-dated liabilities into longer-dated assets, their NIMs are negatively affected by shallower yield curves. Modeling (see Appendix I) further suggests that, ceteris paribus, the effects are larger in a low-yield environment since, besides the reluctance to lower deposit rates below zero, spreads on loans over deposit rates can be expected to be lower.
Cross-country empirical evidence and studies for various individual countries support the negative effects of lower interest rates on net interest margins, with effects often found to be greater in low interest rate environments. Analyzing a sample of 108 relatively large international banks, many from Europe and Japan, and 16 from the United States, Borio, Gambacorta and Hofmann (2015) document the non-linear relationships between the interest rate level and the slope of the yield curve on one hand and banks' NIMs and profitability, i.e., return on assets, on the other hand. They confirm that effects on NIMs are stronger at lower levels of interest rates (50 basis points for a 1 percentage point change at a rate of 1% vs. 20 basis points at a rate of 6%) and when there is an unusually flat term structure.
Evidence for the United States (e.g., Genay and Podjasek, 2014) also finds that banks are adversely affected by interest rates that are low for an extended period of time through a narrower NIM. They also note, however, that the direct effects of low rates are small relative to the economic benefits, including through better support for asset quality.3 Analysis for Germany (Busch and Memmel, 2015) suggests that in normal interest rate environments the long-run effect of a 100 basis points change in the interest rate on NIMs is very small, close to 7 basis points. In the recent, low-interest rate environment, however, they find that banks' interest margins for retail deposits, especially for term deposits, have declined by up to 97 basis points. The Bundesbank's Financial Stability Review of September 2015, analyzing 1,500 banks, also finds that a persistently low interest rate is one of the main risk factors weighting on German banks' profitability. For Japan, analysis (Deutsche Bank, 2013) shows that the low-for-long interest rate there also contributed to the declining NIMs of Japanese banks. Over time, however, portfolio shifts towards investment in securities, a greater reliance on non-interest income, and a holding down of costs allowed Japanese banks' profitability to remain mostly positive. Evidence for other countries on the effects of (low) interest rates on NIMs and profitability is scarcer.
The literature has found that the direct effects of changes in interest rates on margins and profitability can vary by bank size. Analysis for U.S. banks suggests that in general rate changes have greater, short-run impact on small banks as they depend more on traditional intermediation of retail deposits, which are stickier in price, into loans, many of which are priced off floating rates. Large banks typically have greater ability to manage interest rate risks through the use of derivatives and the repricing of managed liabilities, and are thus less affected by low interest rates. Also, large banks, with their greater international reach, have more potential to increase lending abroad, and their more diversified business models can allow them to more easily expand non-interest income to offset lower margins. Since the GFC, however, large U.S. banks have seen their funding cost advantage erode and NIMs decline more than small banks, but this seems at least in part due to recent regulatory changes (Covas, Rezende and Voitech, 2015).
Capital markets seem to acknowledge some of the effects of low interest rate on banks' profitability. In their analysis, English, Van den Heuvel, and Zakrajsek (2012) find that while equity prices of U.S. banks fall following unanticipated increases in interest rates or a steepening of the yield curve, a large maturity gap weakens this effect, suggesting that on account of their maturity transformation function, banks gain relatively from a higher interest rate or a steeper yield curve. This shows that, conversely, a lower interest rate or a shallower yield curve hurts those banks that are more engaged in maturity transformation, at least relative to other banks.
3. New analysis
New cross-country analysis we conduct confirms and expands on these findings. We first describe below the data and sample of commercial banks we use, as well as some raw statistics. We then provide the methodology and empirical findings.
Data sample and raw comparisons. To investigate the impact of the short-term interest rate on banks' NIMs, a database was assembled with (yearly averages of) three-month and ten-year sovereign yields from Bloomberg and of bank balance sheet and income statement data from Bankscope at an annual frequency.4 The final sample contains 3,418 banks from 47 countries for 2005-2013.5 Unconsolidated data are used, where available, to isolate the effect of a country's interest rate on only the bank's operations in that country. Observations are only trimmed in cases where the data is logically inconsistent, for example when assets are below zero or when deposits are greater than liabilities. We additionally ignore observations where the NIM for a bank changes by more than ten percentage points from one year to the next.
To explore differential impacts, countries were classified each year as being in a low- or high-rate environment based on whether the interest rate on their 3-month sovereign bond was below or above 1.25 percent (other cutoffs were also tested and yielded similar results). Figure 1 shows the sample of countries covered and the range and median of the short-term yields in each. The variations in rates are large for many countries, and many countries are both in the high- and low-yield environment for some time (the median provides a sense of how long each country has been in each environment). Appendix Figure 1 shows the exact classification of countries in the low- and high-yield environments for 2005, 2009, and 2013. It is notable that many more countries, especially advanced economies, are in the low-yield environment post-GFC: 19 in 2009 vs. only two in 2005. These shifts help to estimate the differential impact of low interest rates on banks' NIM and profitability, and whether effects are greater the longer banks are in a low interest rate environment.
Figure 2 compares the broad composition of bank balance sheets in low- and high-rate environments. Overall, there do not appear to be major shifts in asset compositions or liability structures in low- versus high-rate environments that could be expected to drive differences in how net interest margins respond to interest rates. Banks have roughly the same loan-to-deposit and loan-to-asset ratios in the high-rate environment (the orange bars) as in the low-rate environment (the blue bars) at about 125 and 60 percent, respectively. Banks in the high-rate environment have slightly higher leverage ratios, while deposits-to-total liabilities and securities-over-assets are slightly higher in the low-rate environment, possibly because low rates are associated with lower economic and loan growth, less non-deposit borrowing, greater investment in safer securities, and lower profits and capital.
Figure 1: Range of 3-Month Sovereign Yield by Country (2005-2013)
Note: The figure shows the range of the three-month sovereign yield for each country from 2005-2013. Values used are yearly averages of the implied three-month rate published daily by Bloomberg.
Sources: Bloomberg, staff calculations.
Figure 2: Balance Sheet Composition
Source: Bankscope, staff analysis.
Figure 3 shows that average NIMs are higher in the high-rate environment (the orange bars) than in the low-rate environment (the blue bars). Profitability, measured by return on assets, is higher too in the high-rate environment, likely reflecting both higher NIMs and concurrent better overall economic and financial environments.
Figure 3: Banks' NIM and Profitability
Source: Bankscope, staff analysis.
Methodology and findings. To isolate the direct effects of changes in interest rates on NIMs in low- and high-rate environments, we perform an econometric analysis that holds other factors constant, including any correlations that interest rates might have with economic growth, demand for loans, or supply of deposits. We regress a bank's NIM for each year on the average level of the three-month sovereign rate in that year, a common proxy for banks' marginal funding costs, controlling for the bank's own lagged NIM, other time-varying bank characteristics, and a bank fixed effect, as well as GDP growth and the spread between the three-month and ten-year sovereign rates. The sample is then split into banks in low- and high-interest-rate environments. Specifically, the regressions use the following empirical specification:
$$y_{ijt}=\beta_{0}+\beta_{1}y_{ijt-1}+\theta_{1}\,\mathit{3MonthRate}_{jt}+\theta_{2}\,\mathit{RateSpread}_{jt}+\theta_{3}\,\mathit{Low}_{jt}+\gamma_{1}\,\mathit{GDPgrowth}_{jt}+\gamma_{2}X_{it}+\delta_{i}+\varepsilon_{ijt}$$
Where:
• $y_{ijt}$ is the NIM of bank $i$ in country $j$ in year $t$,
• $\mathit{3MonthRate}_{jt}$ is the yearly average 3-month government bond yield,
• $\mathit{RateSpread}_{jt}$ is the spread between the 10-year government bond yield and the 3-month government bond yield,
• $\mathit{Low}_{jt}$ is a dummy equal to 1 if the country is in a "low rate environment," which we define to be under 1.25% on the 3-month rate,
• $\mathit{GDPgrowth}_{jt}$ controls for the country's economic growth,
• $X_{it}$ are bank-level controls, specifically total securities over total assets, deposits over total liabilities, and total equity capital over total assets,
• $\delta_{i}$ is a bank fixed effect and $\varepsilon_{ijt}$ is an error term.
Because the regressions control for each bank's average NIM and its country's general economic conditions, results can be interpreted as the direct effects of a change in the short-term interest rate on banks' NIMs.7
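A rough sketch of this type of bank-fixed-effects regression in Python with pandas and statsmodels (my own tooling choice; the note does not say what software was used, and the data generated below are purely synthetic placeholders for the Bankscope/Bloomberg panel):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the bank-year panel; column names are illustrative only.
rng = np.random.default_rng(0)
n_banks, n_years = 50, 9
n_obs = n_banks * n_years
df = pd.DataFrame({
    "bank_id": np.repeat(np.arange(n_banks), n_years),
    "nim_lag": rng.normal(2.5, 0.5, n_obs),
    "rate_3m": rng.uniform(0.0, 5.0, n_obs),
    "rate_spread": rng.uniform(0.0, 3.0, n_obs),
    "gdp_growth": rng.normal(1.5, 2.0, n_obs),
    "securities_to_assets": rng.uniform(0.05, 0.40, n_obs),
    "deposits_to_liabilities": rng.uniform(0.40, 0.95, n_obs),
    "equity_to_assets": rng.uniform(0.03, 0.15, n_obs),
})
df["low_rate"] = (df["rate_3m"] < 1.25).astype(int)
df["nim"] = 2.0 + 0.3 * df["nim_lag"] + 0.1 * df["rate_3m"] + rng.normal(0, 0.3, n_obs)

# Bank fixed effects entered as C(bank_id) dummies; the sample is split by rate
# environment, so the Low dummy in the equation is absorbed by the split.
formula = ("nim ~ nim_lag + rate_3m + rate_spread + gdp_growth"
           " + securities_to_assets + deposits_to_liabilities"
           " + equity_to_assets + C(bank_id)")

for label, sub in df.groupby("low_rate"):
    fit = smf.ols(formula, data=sub).fit()
    print(f"low_rate={label}: coefficient on rate_3m = {fit.params['rate_3m']:.3f}")
```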
The baseline regression results show that a decrease in the short-term interest rate lowers NIMs in both a low- and a high-rate environment, with effects symmetric for an interest rate increase. But other things equal, effects are statistically significantly larger in a low-rate environment. Figure 4 summarizes the regression results. For a representative bank, a one percentage-point decrease in the short-term rate is associated with a 9 basis-point decrease in NIM in the high-rate environment (the orange bars) versus a 17 basis-point decrease in NIM in the low-rate environment (the blue bars). Similar magnitudes and comparisons are found when using overall samples composed of different banks and countries.8 Even so, when conducting the analysis for individual countries, there is significant heterogeneity in effects. For example, a one percentage point decline in the 3-month sovereign rate leads to a 6 basis-point decline in NIMs in Austria, compared to a 27 basis-point decrease in Italy.
Figure 4: Effect of 1 p.p. Decrease in 3-Month Yield
Note. The figure above reflects average differences among banks and estimated effects of a decrease in the three-month sovereign yield, respectively, for banks in a "low" rate environment and a "high" rate environment.
We next run regressions analyzing separately the effects of changes in interest rate on changes in interest expenses and on changes in interest income. The greater effects on NIMs in the low-rate environment is largely driven by the greater pass-through of low interest rates on interest income than on interest expense. Specifically, a one percentage point decrease in the short-term rate is associated with a 63 basis-point decrease in the ratio of interest income to earning assets in the low-rate environment and only a 35 basis-point decrease in the high-rate environment, a 28 basis-point difference. The equivalent difference is about 20 basis points for the ratio of interest expense to liabilities. In other words, at low rates, banks have greater difficulty reducing their funding rates, while they have to pass the lower rates to a greater degree on to their borrowers, likely due to greater competition, including from non-bank lenders, and lower demand for loans -- as economic activity is less in times of low interest rates, leading NIMs to decline more.
We also analyzed whether effects differ by banks' maturity mismatches. We defined a bank as having a "long" maturity (for assets and liabilities separately) if its average balance sheet maturity over the sample period is greater than the median maturity for banks in its country, and as having a "short" maturity otherwise. We then analyze, using the same methodology and a smaller sample of banks, the impact of a one percentage point change in the short-term interest rate on the same-period interest income ratio (the interest expense ratio), differentiating by the maturity of the banks' assets (liabilities).9 Consistent with a priori expectations, the analysis shows that the highest contemporaneous pass-through from a decrease in interest rates to interest income is for banks with short asset maturities in the low interest rate environment. Banks with longer maturity assets see statistically significantly less pass-through, 64 basis points vs. 92 basis points (Figure 5). Similarly, although somewhat lower, the pass-through to interest expenses is significantly higher in the low interest rate environment than in the high-rate environment for banks with short liability maturities.
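The maturity classification described above amounts to a within-country median split of banks' sample-average maturities; a small pandas sketch (with hypothetical column names) makes the rule explicit.

```python
# Hypothetical sketch of the "long" vs. "short" maturity classification.
# Assumed columns: bank_id, country, asset_maturity (one row per bank-year).
import pandas as pd

panel = pd.read_csv("bank_panel.csv")

bank_avg = (panel.groupby(["country", "bank_id"])["asset_maturity"]
                 .mean()                                   # average maturity per bank over the sample
                 .rename("avg_maturity")
                 .reset_index())
country_median = bank_avg.groupby("country")["avg_maturity"].transform("median")
bank_avg["maturity_class"] = (bank_avg["avg_maturity"] > country_median).map(
    {True: "long", False: "short"})                        # "long" if above the country median
```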
Figure 5: Effect of 1 p.p. Decrease in 3-Month Sovereign Yield, by Duration
Note. The figure shows the estimated effect of a 1 percentage point decrease in the three-month sovereign yield on a bank's net interest income margin and interest expense margin adjusted for interest rate environment and a bank's balance sheet maturity. A bank is classified as having a "long" maturity if it has an average balance sheet maturity over the sample period greater than the median maturity for banks in its country, and is classified as having a "short" maturity otherwise. Assets are used to determine maturity for the interest income margin, while liabilities are used for the interest expense margin. This figure only includes those countries with over 100 banks reporting maturity information.
Sources: Bankscope, staff calculations.
4. Overall Effects for AFE Banking Systems and Conclusions
The cross-country analysis conducted suggests that low interest rates negatively affect many AFE banks' NIMs, which is consistent with several studies on individual countries. We can summarize our results by considering the situations of banking systems in four key AFEs, the euro area, Canada, Japan, and the UK, and comparing these to U.S. banks (Table 1). Wide differences remain between the relatively strong profitability reported by Canadian, U.K. and U.S. banks, and the weaker profitability reported by euro area and Japanese banks.10 These differences in profitability partially reflect differences in NIMs, which are typically lower in the AFEs than in the United States, as many AFE banks have higher shares of typically lower-yielding mortgages and sovereign debt. And the lower NIMs in the later period in turn likely reflect the large declines in sovereign yields in these economies.
Table 1: Advanced Economy Bank Profitability
|   | Median Return on Assets - 2007 | Median Return on Assets - 2013 | Median Net Interest Margin - 2007 | Median Net Interest Margin - 2013 | 3-Month Sovereign Yield - 2007 | 3-Month Sovereign Yield - 2014 |
| --- | --- | --- | --- | --- | --- | --- |
| Euro Area | 33 | 24 | 253 | 235 | 398 | 14 |
| Canada | 54 | 56 | 229 | 214 | 425 | 92 |
| Japan | 23 | 18 | 181 | 139 | 45 | 3 |
| United Kingdom | 84 | 66 | 195 | 128 | 535 | 31 |
| Advanced Foreign Economies | 30 | 22 | 236 | 213 | 277 | 10 |
| United States | 96 | 81 | 391 | 382 | 450 | 5 |
Sources: Bankscope, staff analysis.
Using our regression results, we estimate that NIMs in these four banking systems declined by roughly 26 basis points due to the actual decreases in interest rates between 2007 and 2013 (Table 2), or roughly 82 percent of the median decline in NIMs observed over this period, which was 32 basis points. These impacts vary with the size of the interest rate declines and range from 3 basis points for Japan to 46 basis points for the U.K. For countries already in a low-rate environment, e.g., Japan and the euro area today, the estimates suggest a NIM contraction of 17 basis points for every 1 percentage point further decline in the 3-month rate.
Table 2: Predicted and Observed Changes in Net Interest Margins from 2007-2013
|   | Change in 3-Month Rate (b.p.) | Predicted Change in Net Interest Margin (b.p.) | Observed Change in Net Interest Margin (b.p.) | Percent of Net Interest Margin Change Explained |
| --- | --- | --- | --- | --- |
| Euro Area | -373 | -33 | -34 | 97% |
| Japan | -38 | -3 | -39 | 8% |
| United Kingdom | -526 | -46 | -28 | 168% |
| Advanced Foreign Economies | -297 | -26 | -32 | 82% |
Note. Change in the 3-month rate is the change in the 3-month sovereign yield from 2007-2013. We predict a change of 8.8 basis points in a bank's net interest margin for every 100 basis point change in the sovereign's 3-month yield. Observed change is shown as the median change in net interest margins for banks in that sample.
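The predicted column in Table 2 can be reproduced directly from the 8.8 basis-point sensitivity stated in the note; a short Python check:

```python
# Reproducing the "Predicted Change" column of Table 2 from the stated
# sensitivity of 8.8 bp of NIM per 100 bp change in the 3-month rate.
sensitivity = 8.8 / 100
rate_changes_bp = {"Euro Area": -373, "Japan": -38,
                   "United Kingdom": -526, "Advanced Foreign Economies": -297}
for region, d_rate in rate_changes_bp.items():
    print(region, round(sensitivity * d_rate))   # -33, -3, -46, -26
```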
There are caveats to this analysis, related to appropriate lags and other potential non-linearities between changes in interest rates and NIMs. First, there may be important non-linearities in the impact of interest rate changes in a low-yield environment compared to a high-yield environment that are not captured by our specification. Second, while we included one lag of the dependent variable, which was statistically significant with a coefficient of about 0.5, there are likely additional lags in the relationship between changes in interest rates and NIMs that are not captured (for example, as long-term loans are repriced over time at higher or lower interest rates, affecting NIMs many years after an interest rate change). Analysis using German banks (Memmel, 2011) suggests, for example, that the full effects of repricing take place over a period of 1-1.5 years. This is consistent with the pattern in NIMs, which seem to decline progressively the longer banks remain in a low interest rate environment (Figure 6). Third, the analysis only looks at the effect on current margins assuming no shifts in behavior, while the full effect of changes in interest rates on banks' NIMs (and on profitability, capital adequacy, and franchise values over time) may vary as banks adjust their funding structures, lending and investment portfolios, and their non-interest activities. These adjustments have been found to be important in the case of Japan (see Deutsche Bank, 2013). Lastly, there have been many regulatory changes since the GFC that could also have affected banks' NIMs, as has been found for large U.S. banks (see Covas, Rezende, and Vojtech, 2015).
Figure 6: Change in NIM in a Low Rate Environment
Note. The figure above reflects the average change in net interest margins between the year prior to entering the low interest rate environment and each successive year after entering a low rate environment, t=1 through t=4.
Sources: Bankscope, staff analysis.
Notwithstanding these and other caveats, the findings strongly suggest that when NIMs are low -- due to persistently low rates or otherwise -- the important issue is how banks can adjust their activities and cost structures to offset the adverse effects of low rates on profitability and capital. Similar regressions of the effects of low interest rates on bank ROA show no consistent results, however. This is likely because the direct effects of changes in interest rates on NIMs are confounded by the volatility in other sources of income and costs, including gains on security holdings and provisioning, especially since the GFC, and possibly because banks are taking actions to offset the effects on ROAs of interest-rate-driven changes in NIMs (as well as responding to recent regulatory changes, heightened market pressures, and changed opportunities). Although institutions are making adjustments, such efforts take time, as Japan's experience shows, with limited immediate payoffs when facing weak cyclical conditions and deleveraging pressures. As such, banking systems in many low interest rate countries will face challenges. Until lost income can be offset through other actions, lower profitability will reduce financial institutions' ability to build and attract capital, increasing their vulnerability to shocks and declines in market confidence and undermining their ability to support the real economy.
Appendix I: Effects of low interest rates: Analytics and Modelling
Analytics
At the individual institution level, exposures to fluctuations in interest rates can vary significantly. Relevant factors determining this variation include the amount held of fixed income assets (e.g., (government) bonds), the maturity and repricing nature of liabilities and assets, and the related degree of maturity mismatches. The degree to which fluctuations in short-term interest rates impact banks' NIMs and profitability (and bank's equity market valuation) depends importantly on the maturity and repricing structure of banks' assets and liabilities, accounting for the use of hedging tools, and the degree to which banks can and do alter their balance sheets and activities in response to the changes in interest rates. Banks with shorter-term and frequently repriced assets (or liabilities) will experience a larger decrease in interest income (or interest expense) as the short-term interest rate falls compared to banks with longer-term and infrequently repriced assets (or liabilities). In addition to asset-liability mismatches, the impact of interest rate changes on banks' NIMs depends on banks' relative ability to pass on changes in the interest rate to depositors and borrowers.
Typically, though, because they transform short-dated liabilities into longer-dated assets, banks are negatively affected by shallower yield curves, which act to lower their NIMs and overall profitability. The NIM is effectively the mark-up that banks charge on their liabilities to fund their assets and reflects the liquidity transformation that banks perform: borrowing liquid deposits and funding themselves more generally with short-term liabilities, while making illiquid loans and investing in longer-dated securities. Ceteris paribus, bank margins increase as the yield curve steepens, since the difference between banks' (short-term) borrowing rates and (long-term) lending rates then widens. Controlling for the steepness of the yield curve, the level of the short-term interest rate may be important for bank margins as well (see the modelling below). Especially when interest rates are close to zero, the de facto lower bound for at least retail deposits, banks may see their margins compress as they have greater difficulty adjusting deposit rates down, while they still have to pass on the lower rate to their borrowers.
Effects of changes in interest rates on NIMs and profitability may vary by bank size. Large banks may be able to hedge interest rate risk more effectively, so a change in the short-term interest rate could have a smaller short-run effect on their interest income than for small banks. At the same time, borrowers from large banks have greater opportunities to switch banks, forcing large banks to pass low interest rates on to their borrowers to a greater degree. Smaller banks may rely more on retail deposits, so low interest rates could have a relatively less beneficial impact on their expenses.
While lower interest rates need not adversely affect banks, and the overall literature finds ambiguous results, pass-through of low interest rates to deposit rates can be expected to be even weaker at very low rates or at the zero lower bound. As a consequence, NIMs can decline. Institutions with more long-dated and fixed-rate liabilities (i.e., insurance companies and pension funds) will typically see their net worth fall. Unlike banks, however, these contractual savings institutions have some time and scope to adjust premiums and benefits. Distributional and adverse effects arise when some types of banks and other institutions (small, large, other) are more affected than others.
The impact of interest rate changes on overall bank profitability and capital adequacy is more ambiguous, as it will vary with the state of the economy. For example, if low interest rate environments tend to coincide with periods in which demand for loans is low, or in which banks are constrained (by capital or otherwise) and deleveraging after a financial crisis, this may further suppress NIMs and overall profitability. The net impact on asset quality and non-performing loans, which feed into profitability, is also ambiguous, as low rates on the one hand make loan payments easier for borrowers but on the other hand may be associated with poorer quality borrowers getting loans. More generally, the state of the economy importantly influences both the scope for profitable banking business and the level of interest rates. Relatedly, effects of decreases in interest rates on the equity valuation of banks are often found to be positive, reflecting the net effect of capital gains on longer-term assets, lower discount rates on future earnings, and expectations of higher future profits, the latter because changes in interest rates (in part due to monetary policy actions) relate to expected economic growth and thereby loan demand and asset quality. When interest rates adversely affect banks' profitability and valuations, they can also impede banks' ability to raise new capital.
Besides the direct effects on banks' NIMs and profitability, banks may also alter their activities in response to changes in interest rates. If short-term interest rates or spreads are not high enough to allow banks to reach their profit goals, banks may switch out of lending toward opportunities to earn non-interest income, such as fee income from underwriting or asset management. This ability to engage in other activities will vary across countries depending on financial system structures, e.g., bank- vs. market-based, and regulations. It may also vary across banks in ways related to bank size. Large banks, for example, are more likely to be able to engage in non-interest-earning activities, such as investment banking or wealth management.
Theoretical Interpretation of NIM
Empirically, we find that NIMs are higher when interest rates are higher, but NIMs are more sensitive to interest rate fluctuations when the level of interest rates is lower. This empirical result is consistent with a theoretical breakdown of the components of the NIM.
First, consider the definition of the NIM:
$$Net\ Interest\ Margin=\frac{Interest\ Income-Interest\ Expense}{Average\ Earning\ Assets}$$
By definition, interest income can be written as the product of the bank's earning assets and the interest rate on lending, r, while interest expense can be written as the product of the bank's interest bearing liabilities and the interest rate on borrowing, r'.
$$Net\ Interest\ Margin=\frac{Avg.\ Earning\ Assets*r-Int.\ Bearing\ Liab.\ *r'}{Avg.\ Earning\ Assets}$$
We can write interest bearing liabilities as a ratio of average earning assets, so
$$Int.\ Bearing\ Liab.=\lambda *Avg.\ Earning\ Assets,\ where\ \lambda =\frac{Int.\ Bearing\ Liabilities}{Avg.\ Earning\ Assets}$$
Likewise, we can define the ratio of the borrowing rate to the lending rate as: $r^{\prime }=\phi \ast r$, where $\phi =\frac{\text{borrowing rate}}{\text{lending rate}}$
Therefore,
$$Net\ Interest\ Margin=\frac{Avg.\ Earning\ Assets*r-Avg.\ Earning\ Assets*\ \lambda *\ \phi *r}{Average\ Earning\ Assets}$$
Taken together, this becomes $NIM=r(1-\lambda \phi )$. To model how NIMs change as the level of the interest rate changes, it is easiest to assume a static balance sheet, i.e., that $\lambda$ is a fixed constant, which leaves three cases to consider for how the borrowing rate relates to the lending rate.
Case 1: $\phi$ is static (e.g., the deposit rate is an unchanging proportion of the loan rate).
If this were the case then the rate of change of NIM in relation to r would be $\frac{dNIM}{dr}=1-\ \lambda \phi$
Because $\lambda$ and $\phi$ are both constants, the change in NIM would also be constant as r changes.
Case 2: $\phi =\frac{r-a}{r}$ (e.g., the deposit rate is a fixed spread below the loan rate).
If this were the case then we can substitute in $\frac{r-a}{r}$ for $\phi$ and we find that
$$NIM=\ r\left(1-\ \lambda \phi \right)=r\left[1-\ \lambda \left(\frac{r-a}{r}\right)\right]=r\left(1-\ \lambda \right)+\lambda a$$
Then the rate of change of NIM in relation to r would be $\frac{dNIM}{dr}=1-\ \lambda$
Because we assume $\lambda$ to be a constant, the change in NIM would also be constant as r changes.
Case 3: $\phi =r^{\alpha -1}$, for $\alpha >1$ and $r>0$ (e.g., the deposit rate grows as a proportion of the loan rate).
If this were the case then we can substitute in $r^{\alpha -1}$ for $\phi$ and we find that
$$NIM=\ r\left(1-\ \lambda \phi \right)=r\left(1-\lambda r^{\alpha -1}\right)=r-\lambda r^{\alpha }$$
Then the rate of change of NIM in relation to r would be $\frac{dNIM}{dr}=1-\ \lambda \alpha r^{\alpha -1}$
Because we assume $\lambda$ and $\alpha$ to be constant, the change in NIM would shrink as r increases, and would be greater when r is lower. In other words, the NIM would be more sensitive to changes in the interest rate when the interest rate is closer to zero.
Conclusion: Given our empirical results, we reject cases 1 and 2 as our tests are consistent with case 3. This result implies that there is an important non-linearity in the mapping of the short-term interest rate to the lending and borrowing interest rates.
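The three derivatives above can be verified symbolically; the short SymPy sketch below reproduces each case.

```python
# Symbolic check of the three cases for dNIM/dr.
import sympy as sp

r, lam, a, alpha, phi = sp.symbols("r lambda a alpha phi", positive=True)

# Case 1: phi constant -> dNIM/dr = 1 - lambda*phi, independent of r
print(sp.diff(r * (1 - lam * phi), r))

# Case 2: phi = (r - a)/r -> NIM = r*(1 - lambda) + lambda*a, dNIM/dr = 1 - lambda
print(sp.simplify(sp.diff(r * (1 - lam * (r - a) / r), r)))

# Case 3: phi = r**(alpha - 1) -> dNIM/dr = 1 - lambda*alpha*r**(alpha - 1);
# for alpha > 1 this derivative is larger when r is closer to zero,
# so the NIM is more sensitive to rate changes at low rates.
print(sp.diff(r * (1 - lam * r**(alpha - 1)), r))
```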
Appendix Figure 1: Country Assignments by "Low" vs "High" 3-Month Sovereign Rate
Note: The figure shows how countries were classified for three years in the sample from 2005-2013. A country was classified as being in the "low" rate environment if its average three-month implied sovereign yield for that year was less than or equal to 1.25 percent and was classified as being in a "high" rate environment otherwise.
Sources: Bloomberg, staff calculations.
References
Adrian, Tobias, and Nellie Liang, 2014, "Monetary Policy, Financial Conditions, and Financial Stability," Federal Reserve Bank of New York Staff Reports, no. 690 September.
Borio, Claudio E. V., Leonardo Gambacorta, and Boris Hofmann, 2015, "The Influence of Monetary Policy on Bank Profitability," BIS Working Paper No. 514, October.
Busch, Ramona and Christoph Memmel, 2015, "Banks' net interest margin and the level of interest rates," Discussion Papers 16/2015, Deutsche Bundesbank, Research Centre.
Bundesbank, 2015, "Financial Stability Review," September.
Covas, Francisco B., Marcelo Rezende, and Cindy M. Vojtech, 2015, "Why Are Net Interest Margins of Large Banks So Compressed?", FEDS Notes, October.
Dell'Ariccia, Giovanni, and Robert Marquez, 2013, "Interest Rates and the Bank Risk taking Channel," Annual Review of Financial Economics 5(1), 123--141.
Deutsche Bank, 2013, "Ultra-low interest rate: How Japanese banks have coped," June.
English, William B., Skander J. Van den Heuvel, and Egon Zakrajsek, 2012, "Interest Rate Risk and Bank Equity Valuations," FEDS Working Paper 2012-26.
ECB 2015, "Financial Stability Review," May, Frankfurt.
Genay, Hesna and Rich Podjasek, 2014, "What is the impact of a low interest rate environment on bank profitability?," Chicago Fed Letter, 324, July.
Memmel, Christoph, 2011, "Banks' exposure to interest rate risk, their earnings from term transformation, and the dynamics of the term structure," Journal of Banking and Finance, 35, 282--289.
1. We would like to thank William English and other Federal Reserve System colleagues for extensive comments. The views expressed in this note are those of the authors and should not be attributed to the Board of Governors of the Federal Reserve System. Return to text
2. An overall assessment of how low interest rates may affect banks is beyond the scope of this note. For example, low interest rates can lead to valuation gains on securities, affect the quality of loans and related changes in loan-loss provisioning, etc. We also do not review whether low interest rates may lead to unhealthy reach for yield by banks (see Adrian and Liang (2014) and Dell'Ariccia and Marquez (2013) for reviews of the literature on the links between risk taking and interest rates). Return to text
3. Similarly, a study of 98 EU banks (ECB 2015) finds that macroeconomic factors, and not interest rates, have had the most importance for bank health since the global financial crisis. Return to text
4. Implied yields on currently outstanding three-month and ten-year bonds are used since not every country has at all points in time bonds maturing exactly three months or ten years later. These daily rates are then averaged over each year. Return to text
5. A limitation of Bankscope is that it focuses on relatively large banks within countries so results may be biased. That said, many smaller (and unlisted) banks are still included. Return to text
6. We also included in regressions commercial real estate prices, house prices, unemployment rates, and stock market performances. Because these data are not available for many countries and longer periods, it decreased coverage to such a degree that we preferred to use a more parsimonious specification. For those countries where we had these data, however, regression results were robust. Return to text
7. We additionally studied the impact of changes in the interest rate on banks' return on assets and of changes in the slope of the yield curve on NIMs and on return on assets. Here less consistent patterns emerged, as was also the case for Genay and Podjasek (2014). We suspect that over this period, which includes the global financial crisis, non-interest income and expense items, such as provisioning for non-performing loans, and large valuation gains and losses led to (even) greater volatility in banks' profitability, obscuring the direct effects of changes in interest rates. Return to text
8. Results hold for unbalanced or balanced samples, samples with or without U.S. banks, and trimming observations differently. Return to text
9. The sample was split into "long" versus "short" maturity of bank by first taking the average maturity for each bank over the sample period and then calculating the median average duration by country. Banks above the median average duration within each country were then classified to have a "long" and below a "short" duration. Besides differences in maturity structure, there can also be differences in the frequency of repricing of claims with the same final maturity, as for example, in fixed vs. variable rate mortgages. Consistent data on such differences across banks and countries is not available, however. Return to text
10. Accounting differences across countries can limit direct comparability of U.S. and AFE financial statements. For example, derivatives are reported to some extent at net values for U.S. banks but largely at gross values under International Financial Reporting Standards, which has the effect of inflating assets and reducing return on assets figures for AFE banks. Gross derivatives represent roughly 15 percent of assets for a representative European bank sample on a weighted basis, a material share, but not one that would substantially affect the profitability and NIM comparisons. Return to text
Conceptually understanding RL circuits
I'm struggling to conceptually understand the current-time profile of an RL circuit. Specifically, what causes the rate of change of current, $$\frac{\partial i}{\partial t}$$, to start off high when you first connect the battery and decrease with time?
I understand that the current through the inductor induces a magnetic field, $$\vec{B}$$, around the inductor which itself induces an emf, $$\epsilon_L$$, which acts to oppose the change in magnetic flux, $$\phi_B$$. This is a statement of Faraday's law of induction and Lenz's law.
To me, as soon as the battery is connected, the rate of change of current is at its maximum which means the induced emf across the inductor is at its maximum. is that right?
If it is, is it also correct to state that this reduces the rate of change in current and in so doing the induced emf decreases?
Would this not allow the current in the circuit to increase again?
• AH! I cracked it for myself. The rate of change of current is $\frac{\partial i}{\partial t} = \frac{\epsilon - iR}{L}$ and so at $i=0$ (also $t=0$) the rate of change of current is maximum at $\frac{\epsilon}{L}$ but as $i$ increases, this rate of change necessarily decreases leading to your standard logarithmic growth. It may seem trivial but this was genuinely what I was conceptually stuck on. – Jamie Smith Oct 11 '19 at 10:38
3 Answers
Assuming you are talking about a series RL circuit with an ideal inductor, it is correct that $$\frac{di}{dt}$$ is maximum and the voltage across the inductor is a maximum equal to the battery voltage when the battery is first connected to the circuit. The current is initially zero.
It is also correct that the rate of change in current then decreases. However, the amount of current is at the same time increasing until eventually it reaches a maximum of V/R where V is the battery voltage. Then the voltage across the inductor is zero and all the battery voltage is across the resistor.
Here are the relevant equations for the inductor current and inductor voltage as a function of time (assuming no initial current in the inductor):
$$i(t)=\frac{V}{R}(1-e^{-Rt/L})$$
$$v(t)=Ve^{-Rt/L}$$
Where $$V$$ is the battery voltage.
Hope this helps
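A quick numerical illustration of these expressions, using made-up values V = 10 V, R = 100 Ω and L = 0.5 H (so the time constant is L/R = 5 ms):

```python
# Numerical illustration of the series RL formulas above,
# with illustrative values (not from the question): V = 10 V, R = 100 ohm, L = 0.5 H.
import numpy as np

V, R, L = 10.0, 100.0, 0.5
tau = L / R                                   # time constant = 5 ms
t = np.array([0.0, tau, 2 * tau, 5 * tau])    # sample times

i = (V / R) * (1 - np.exp(-t / tau))          # current, approaches V/R = 0.1 A
didt = (V - i * R) / L                        # rate of change, starts at V/L = 20 A/s
v_L = V * np.exp(-t / tau)                    # inductor voltage, equals L*di/dt

print(i)
print(didt)
print(v_L)
```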
When you say the $$di/dt$$ is initially "high" you are comparing it with the wrong thing, to understand what is going on.
If there was no inductor and just a resistor, the initial value of $$di/dt$$ would be (in theory) "infinite" as the current changed instantaneously from $$i = 0$$ to $$i = V/R$$.
The "back EMF" $$e_L$$ is initially equal and opposite to the applied voltage and reduces the rate of change of current to $$V/L$$. As the current increases, the voltage across the resistor increases and therefore the rate of change of current reduces even more, to $$(V-iR)/L$$. In the steady state condition, $$V = iR$$ and the rate of change of current is $$0$$.
In the circuit $$\mathcal E_{\rm battery}-\mathcal E_{\rm inductor} = V_{\rm resistor} \Rightarrow \mathcal E_{\rm battery} - L\dfrac{dI}{dt} =IR$$
On switch on $$I=0$$ as the current cannot change instantaneously.
At this time $$\mathcal E_{\rm battery} = L\left [\dfrac{dI}{dt}\right]_{\rm maximum}$$ as $$\mathcal E_{\rm inductor}$$ cannot be larger than $$\mathcal E_{\rm battery}$$.
As time progresses and the current increases, hence $$IR$$ increases and so as $$\mathcal E_{\rm battery} -IR =\mathcal E_{\rm inductor}=L\dfrac{dI}{dt}$$ this means that $$\dfrac{dI}{dt}$$ decreases.
The rate of increase in current will decrease with time and the current will tend towards a steady value of $$\dfrac{\mathcal E_{\rm battery}}{R}$$.
# Send an email when an Appointment reminder fires
Last reviewed on December 30, 2013 — 100 comments
Another entry in my Lazy Programmer Series, this time I have a macro that sends an email message when a reminder fires. This macro was the result of a request for the ability to send messages to the sales team each morning with the day's agenda.
If you prefer to use an add-in, I have a list of reminder tools at Calendar Tools for Outlook
You can use the macro to send yourself reminders or even to compose an email message ahead of time (in the body of an appointment form) and send it later. Outlook will need to be running and be able to connect to the mail server for the message to be generated and sent.
Because the message is composed when the reminder fires, the message time stamp will be the reminder time. Please don't abuse the trust others have in you: use this macro for legitimate purposes, not to convince someone you were working when you weren't!
Outlook needs to be running for these macros to work. Note, this will trigger the email security alert in older versions of Outlook. Use one of the tools listed at the end to dismiss the dialogs.
To use, press Alt+F11 to open the VBA editor then copy the code and paste it into ThisOutlookSession.
## Send a message to someone when a reminder fires
This macro checks for Appointment reminders and sends a message to the value in the location field. For this to be useful, you need to use a category, otherwise Outlook will attempt to send a message with every appointment reminder.
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass <> "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories <> "Send Message" Then
Exit Sub
End If
objMsg.To = Item.Location
objMsg.Subject = Item.Subject
objMsg.Body = Item.Body
objMsg.Send
Set objMsg = Nothing
End Sub
To use a template instead of the default message form, replace Set objMsg = Application.CreateItem(olMailItem) with Set objMsg = Application.CreateItemFromTemplate("C:\path\to\test-rule.oft")
## Send a message to yourself when a reminder fires
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
objMsg.To = "alias@domain.com"
objMsg.Subject = "Reminder: " & Item.Subject
' Code to handle the 4 types of items that can generate reminders
Select Case Item.Class
Case olAppointment '26
objMsg.Body = _
"Start: " & Item.Start & vbCrLf & _
"End: " & Item.End & vbCrLf & _
"Location: " & Item.Location & vbCrLf & _
"Details: " & vbCrLf & Item.Body
Case olContact '40
objMsg.Body = _
"Contact: " & Item.FullName & vbCrLf & _
"Phone: " & Item.BusinessTelephoneNumber & vbCrLf & _
"Contact Details: " & vbCrLf & Item.Body
Case olMail '43
objMsg.Body = _
"Due: " & Item.FlagDueBy & vbCrLf & _
"Details: " & vbCrLf & Item.Body
objMsg.Body = _
"Start: " & Item.StartDate & vbCrLf & _
"End: " & Item.DueDate & vbCrLf & _
"Details: " & vbCrLf & Item.Body
End Select
objMsg.Send
Set objMsg = Nothing
End Sub
## Select an appointment and send a message
With a few tweaks, the macro above can be used to send a message by selecting the appointment then running the macro.
1. Press Alt+F11 to open the VBA editor.
2. Right click on Project1 and choose Insert > Module.
3. Paste the code below into the Module.
4. Get the GetCurrentItem function from Outlook VBA: work with open item or selected item and paste it into the module.
Public Sub App_Reminder()
Dim Item As AppointmentItem
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
Set Item = GetCurrentItem()
With objMsg
' .To = Item.Location
.Subject = Item.Subject
.Body = Item.Body
.Display ' use .Send to send it instead
End With
Set objMsg = Nothing
Set Item = Nothing
End Sub
## Pop up a dialog
You can use the code on this page to do pretty much anything VBA can do when the reminder fires.
This simple code sample displays a dialog box to remind you.
Private Sub Application_Reminder(ByVal Item As Object)
If Item.MessageClass <> "IPM.Appointment" Then
Exit Sub
End If
MsgBox "You have an appointment for " & vbCrLf _
& Item.Subject & vbCrLf _
& "on " & Format(Item.Start, "mmm dd") & vbCrLf _
& "Time: " & Format(Item.Start, "hh:mm AM/PM") _
& vbCrLf & "Location: " & Item.Location
End Sub
### Tools
Open a webpage when a Task reminder fires
To send an email daily using a Task, see E-Mail: Send daily (vboffice.net)
#### Written by Diane Poremsky
A Microsoft Outlook Most Valuable Professional (MVP) since 1999, Diane is the author of several books, including Outlook 2013 Absolute Beginners Book. She also created video training CDs and online training classes for Microsoft Outlook. You can find her helping people online in Outlook Forums as well as in the Microsoft Answers and TechNet forums.
Please post long or more complicated questions at Outlookforums.
### 100 responses to “Send an email when an Appointment reminder fires”
1. I have a question: how does this run?
Once it is copied into the OUTLOOK SESSION as indicated, is every REMINDER going to generate an EMail? For example I have various AGENDA ITEMS as I program all my activities in OUTLOOK and they all have a reminder that pops up in the REMINDER WINDOW, will I receive a REMINDER Email for all of these?
Additionally, would it be possible to make it AGENDA ITEM specific, ie. for AGENDA ITEM with POP UP REMINDER one could elect to ALSO get an Email REMINDING of the AGENDA ITEM.
Roger Bertrand, P. Eng.
2. Ok. Thanks. Is there a way to assign the REMINDER FIRING specifically to a particular agenda item?I have all teh AGENDA ITEMS with notification so they pop up in the notification window and there I track them. Now some of these I would like to receive a REMINDER EMAIL so I could assign it a FOLLOW UP and PRIORITY. Is this feasible?
As I understand this VBA code, it would send me an EMAIL for all the AGENDA ITEMS I have setup since they all have a NOTIFICATION REMINDER in my case, correct or wrong?
3. Please let me rephrase my question: all my agenda items have a category and all have a reminder so they pop up in the notification window. Therefore under this scheme would I reeceive an Email reminder?
Secondly, fi I wanted to make this process selcetive, that is for a specific agenda Item I would like that in addition to the Pop Up Reminder I would receive a Reminder Email, can that be done?
Thanks,
Roger
4. Please let me ask you if this would work: if I want only some messages to have an Email fired to me, would it work if I created a new category, say "Send EMail", and then set this category along with any other category to items I would like to have an Email sent to me?
Where would I insert that code so only those I want a reminder Email are fileterd through that catefory:
If Item.Categories = "Send Email" Then
I assume I could do the same on the first VBA to ahve only Items flagged with the "SEND EMAIL" category fire the EMail Reminder?
Thanks ,
Roger
PS: excuse my low level knowledge of VBA, I am not that trained to using VBA with OUTLOOK. I use a lot more with EXCEL.
5. I created this macro and it seemed to work the first week, but on subsequent weeks, the email is not going out. I checked and the code is still there. The Category is set to Send Message and is the only category. But I do have multiple email addresses separated by semicolons in the Location field.
I am using Windows 7 and Office 2007 if that makes a difference.
Can you suggest a place to begin further troubleshooting?
Thank you.
6. The following is a code i got working to send me an email when a reminder fires however it does not include any details about the reminder....Can you help?? thanks so much
Private Sub Application_Reminder(ByVal EventItem As Object)
Dim MailMsg As MailItem
Set MailMsg = Application.CreateItem(olMailItem)
If EventItem.MessageClass = "IPM.Appointment" Then
Call SendApptMail(MailMsg, EventItem)
End If
Set MailMsg = Nothing
End Sub
Sub SendApptMail(Mail_Item As Object, Appt_Item As Object)
Mail_Item.To = "email@email.com"
Mail_Item.Subject = "Reminder Time Off"
Mail_Item.HTMLBody = "Just a reminder that in the near future someone has requested a day off. Thank You"
Mail_Item.Send
End Sub
7. Hello I want to start off by saying by no means am I knowledgable with VB. I am trying to set it up as described, but one of the things I find confusing is why would one want to send it back to your own e-mail account. I tried to alter the code with my text pager e-mail address and I get a complie error everytime the reminder pops up. I appreaciate any help you can offer. Thanks Howard
8. Ok it works. I was copying and pasting inproperly. One problem I do have though is I get the following message- "A program is trying to access E-Mail addresses you have stored in Outlook. Do you want to allow this? If this is unexpected, it may be a virus and you should choose "No" Allow access for 1 thru 10 mimutes. I want to say thanks for getting back to me so fast and also say thanks for your code. Howard
9. Thank you for your help, but its not going to work for me . IT gives me no control over the computer whatsoever. I am even surprised that I was able to execute the vb code without permission from god himself. It's ashame cause they wont get me a smart phone, and I am simply too busy out in the field to remember every meeting or appointment. I was so pysched when it ran that I would have actually been able to do something to make myself more productive. Thanks for the help. Howard
10. I have implimented your send an email and open web page task VBA and have the page opening and the task emailed. What I would like to have happen is to have the opened page sent via email with the opened page in the body of the email. Can this be done?
11. Hi Diane,
Would it be possible to edit the code so that when every reminder fires up, and email alert will be sent also to the attendees of the meeting for example?
12. Hi Dianne,
A bit off topic, but is there a way that I could create an email containing similar information to your example but attach the macro to a button in the toolbar so that I can select the appointment, push the button and it will generate the email?
13. Thanks Dianne,
I have that working fantastically. Just one other thing... Everything works now as long as I have the appointment either open or selected. But at the moment it only works if the appointment is selected in the main calendar window. Is there a way to have it work if the required appointment is selected in the to-do bar?
14. Thanks Dianne,
Much appreciated.
15. Hello Diane
I am trying to use your macro but it didn't work, so I'd like to check what I have done wrong! I copied and pasted your macro into Outlook ThisSessionOutlook and saved it. I then created an Appointment in Outlook Calander with the location as my own email address. The reminder pops up but there is no email sent to my address. Any idea what I did wrong?
Cheers
Stewart
16. Did you have any luck with this Diane? I have hit a wall with this one.
17. If the Calendar is the active window and an appointment is selected in the To-Do bar the following error is received:-
Run-time error '91':
Object variable or With block variable not set
If any other window is active (Mail, Contacts, Tasks etc) and an appointment is selected in the To-Do bar the following error is received...
Run-time error '13':
Type mismatch
Hopefully that will give some insight into where I am going wrong.
18. Hello Diane. I wouid like to use an Add-in that checks for Appointment reminders and sends a message to the address in the location field or another field. So not to my own email address. After looking at the Add-Ins using the link, I can't see one that seems to do this. They seem to only send emails reminders to the event organiser. Can you suggest an Add-In? Thanks, Stewart.
19. Hi - I'm sort of a newbie at this stuff. Your code is very helpful, I'm sure, and thank you - but I'm not sure exactly where in the code I enter the pertinent details? Let me give you an example. Let's say I want to write an email to akivaf@gmail.com every day with the subject "Test" and the body, "If this works, then the macro worked." The appointment on my calendar will be every day at 8:00 AM, and I made the category "Test" which I have placed it in. Given those instructions, can you please tell me EXACTLY what to do, maybe paste the code with my pertinent details in it so I can know what to do? THANK YOU SO MUCH in advance!
20. Also, I have Outlook 2013, does that change anything? Also, what if I want to send to multiple email addresses?
21. Sorry Diane, you lost me. How would I add the macro to the Calendar Tools/Appointment Ribbon?
It is essential that the script sent a reminder of the meeting to all those listed in the "To" field and how to do it I do not know (
23. Works like a charm. You are amazing! Thank you.
24. This worked perfectly on the first day but hasn't since (4 days). OL14.06 x32 on W7 SP1 x64. Any suggestions?
25. You got it. I never thought to check the security as it worked fine the first day without touching the macro security. Restarting Outlook enforced the security. Kinda odd. Thanks again.
26. And... I even figured out how to easily digitally sign this code so I could return the security setting to the highest possible. Thanks again!
27. THANK YOU! You're wonderful! Also, an easy way to send to multiple addresses ... you can simply create a group list in your contacts and type the group name in the location field. Works perfectly.
28. Thanks to this site for the idea of using the Reminder event for an appointment, etc. and for using the fields of the appointment to populate the email data. Perhaps the edits/additions I have done will be helpful to those who want to append a signature to the email.
' Run when a reminder fires
Private Sub Application_Reminder(ByVal Item As Object)
Dim oMsg As MailItem
Set oMsg = Application.CreateItem(olMailItem)
' Only handle appointment reminders (may change later)
If Item.MessageClass "IPM.Appointment" Then
Exit Sub
End If
Dim oAppt As Outlook.AppointmentItem
Set oAppt = Item
' Only do for Category "Send Reminder Message"
If InStr(oAppt.Categories, "Send Reminder Message") = 0 Then
Exit Sub
End If
oMsg.To = Item.Location
oMsg.Subject = Item.Subject
oMsg.Body = Item.Body
Dim sig As String
oMsg.HTMLBody = Item.Body & "" & sig ' oMsg.HTMLBody
' Try to specify sending account
Dim oAccount As Outlook.Account
For Each oAccount In Application.Session.Accounts
If oAccount.DisplayName = "alias@domains.com" Then
oMsg.SendUsingAccount = oAccount
Exit For
End If
Next
oMsg.Send
Set oMsg = Nothing
End Sub
Private Function ReadSignature(sigName As String) As String
Dim oFSO, oTextStream, oSig As Object
Dim appDataDir, sig, sigPath, fileName As String
sigPath = appDataDir & "" & sigName
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oTextStream = oFSO.OpenTextFile(sigPath)
' fix up relative path references to images in the sig file
fileName = Replace(sigName, ".htm", "") & "_files/"
sig = Replace(sig, fileName, appDataDir & "" & fileName)
End Function
29. Hi Diane,
Tried the above code and it worked for the first day and stopped working after that. I've also changed the security level but it hasn't worked since. Here's my code:
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass <> "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories <> "Test" Then
Exit Sub
End If
objMsg.To = "abc@hotmail.com"
objMsg.CC = "abc@hotmail.com"
objMsg.Subject = "Test"
objMsg.Body = "Testing Testing 1 2 3"
objMsg.Send
Set objMsg = Nothing
End Sub
30. Hi Diane,
Tried that too.. Still doesn't work. Macro security is set at "No security checks for macros"
31. Hi Diane,
I'm trying to send the email from a template. From what I understand from your other threads, I just need to edit the "Set objMsg" row. So the code should look like:
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories "Test" Then
Exit Sub
End If
objMsg.To = "Doe, John"
objMsg.CC = "Doe, John"
objMsg.Subject = "Template"
objMsg.Send
Set objMsg = ("C:\templates\template.oft")
End Sub
Is this correct?
32. Hi Diane,
I have attempted to run the code for "Send a message to someone when a reminder fires"
1. The person I need to remind when an appointment reminder fires is me.
2. If I use an email address that is outside our exchange network... menaing i have to type in an email address such as a hotmail , and or live account it works fine.
3. If I am just using my internal email address that is in the GAL. then it gives me an error: "Outlook does not Recognize One or More Names" and highlights the objMsg.Send. Im not sure what is happening as does this only work with typed in email addresses? when I type in my corporate email address it reverts to my named refered in the GAL. I hope this makes sense, thanks
33. Hi Diane,
The code work but will not send the email it only goes to the outbox. How can I fix that?
Here is the code I am using:
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItemFromTemplate("C:\Users\MCWIED\Documents\Cal.oft")
If Item.MessageClass "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories "Send Message" Then
Exit Sub
End If
objMsg.To = Item.Location
objMsg.Subject = Item.Subject
objMsg.Body = Item.Body
objMsg.Send
Set objMsg = Nothing
End Sub
34. Hi Diane,
This is exactly what I need for some bills. We have a billing agreement in place and I need to send a reminder every 2 weeks at 10:00 to the business to have them process the agreement. I also need to copy our accounts email in on this as well.
1) Do I copy & paste this new code (send email on reminder) under the Auto BCC code?
2) What is the string to include a CC/To for the other email? (process@company & accounts@company)
It's not a meeting so I don't want to use the "attendees" function, it's just an email to process a payment.
Lastly, when this triggers will the Auto BCC also trigger? (I want this in case of dispute of email being sent).
Karen
35. Genius! Thank you Diane.
This is my finished code :) A little bit of everything in here :D I'm thrilled I managed to piece it together and it works.
-----
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories "Bus - ACCs FK" Then
Exit Sub
End If
objMsg.To = Item.Location
objMsg.Subject = Format(Now, "YYYYMMDD - HH:mm:ss") & " | " & Item.Subject
objMsg.Body = Item.Body
objMsg.BCC = "deviceemail@mydevice"
objMsg.Send
Set objMsg = Nothing
End Sub
-----
I don't check my regular BCC addresses on my phone so I need to hardcode my phone email so I can check it when I'm out at an appointment so I know that this email goes through. It's a time sensitive one so I need to make sure I have the date in the subject (or body, i'm going to try both :D) so the receiver knows it's a new reminder.
Using multiple categories for things isn't a problem, i'll just keep creating categories as my variable emails are required. In this case the receiver, the CC & BCC will never change so it will have it's own category.
Thank you :D
36. Hi Diane,
Sorry to bug you :) I've learnt a lot the last few days and your site has helped me tremendously.
I now want to expand my code a little further.
I'm trying to automate this new email schedule as much as possible, with a set & forget ideal.
-> Appointment Reminder triggers email
-> Email is sent (with attachment, coded)
-> Email is saved to HDD folder
-> EMail (sent) is moved from the Outlook Sent Box to Accounts/Sent Outlook folder
I've managed to get the email to send as HTML (excluding the inserted time), and I've been able to attach 1 file (I haven't tried to include 2), so now I need to file & tidy my sent box & HD emails. :)
So, these are my 2 questions:
1) I have managed to get HTML to work in the email:
objMsg.HTMLBody = Format(Now, "Long Date") & Item.Body
The HTML is edited in my appointment screen & it works great.
I can not however work out how to also get the automated time to become HTML. I have tried all sorts of variations in the VB code.
eg: objMsg.HTMLBody = Format(Now, "Long Date") & Item.Body
However nothing seems to work to get the date to be in the HTML format, it throws a syntax error no matter how it's coded.
2) Also, I'd like to be able to save the email to my HD (specifically my backup folder that syncs to dropbox & box), nothing fancy in the saveas title, just the email as it is: Subject = Format(Now, "YYYYMMDD - HH:mm:ss") & " | " & Item.Subject
so it would be: 20131208 - 0054 | Accounts Reminder.msg
folder: e:/blah/blah/accounts/company/notices/.msg
I know I can do a bulk saveas using VBA (which I'd need to look into to schedule it monthly), but I'd like to be able to do it at the time the email sends.
2.b) Can I add to this script a move to folder (outlook) at the end of the "generate new mail" script? re: -> EMail (sent) is moved from the Outlook Sent Box to Accounts/Sent Outlook folder
Thank you for help & support in advance.
Karen
37. Hi,
Many thanks for this. I was hoping somebody would be able to help me with categories, I want to be specific about what I send to who so I've created separate categories. I was just wondering how to incorporate this into the code? What I need to ask the code to do is: if it is for category A send an email, if it's for category B send an email. I'm getting very confused with IF statements!
Thanks :D
38. Hi Diane,
Sorry, I'm really struggling with this. I've spent a few days playing around but can't get it to do what I want.
For an examples sake say that I have 2 categories:
Category A - reminder to send to cata@example.co.uk
Category B - reminder to send to catb@example.co.uk
I can only ever get Category A to send. I've tried a few different ways but it keeps failing, I would be very grateful for any help.
Thanks,
Emily :D
39. It works perfectly! Thank you for your help. :)
40. Diane, I used the Send a message to someone when a reminder fires code and copied it in and it fires as expected, problem is I get a runtime error Outlook does not recognize one or more name. When I debug, it points to the objMsg.Send line. Any clues?
41. hi,
Can you help me with code that trigger email if I have not sent email to particular address before particular time..... thanks in advance
42. ReRead description and was able to see what I was doing wrong. Code works great. Thanks so much Diane.
43. Hi, I'm missing something, I added the code you developed into the ThisOutlookSession. Then you keep mentioning macro's. Do I copy this same code and place it in a macro or reference it? I haven't done anything like this before and it looks like exactly what I need. Any further instructions would be appreciated.
Thanks
44. Hi Diane,
Thanks for the great information. I'm new to this and I'm trying to implement the "Select an appointment and send a message" code and have two questions. Is the code you have in this step all that is needed. The text reads "With a few tweaks, the macro above can be used to send a message by selecting the appointment then running the macro." Does that mean you need the previous text and what you have listed in that section?
Additionally, I don't really understand the step: "Get the GetCurrentItem function from..." I followed the link and tried a couple things but it kept resulting in compile errors so I'm missing something. Ideally this would work if I ran the macro with the appointment opened or if I have it selected.
45. Hi Diane,
I'm trying to have a reminder email only for appointments that I run the macro on manually. I tried working with "Select an appointment and send a message" but I don't understand the part pertaining to the GetCurrentItem function. Ideally I could run the macro whether I had an item open or if I had it selected in the calendar. If I had to choose between the two I wold prefer to select the appointment and run the macro. I followed the link and tried a couple options but I kept getting run-time errors. I'm new to VBA so I'm probably missing something.
The way I read it, do I paste the "Send a message to yourself when a reminder fires" in the ThisOutlookSession and then use the "Select an Appointment code in a module?
Thanks,
46. Hi Diane,
This is working great for me apart from when I create recurring appointments, the reminder comes up but the email doesn't fire. Should it work?
Thanks,
Emma
47. Thanks for the reply and sorry for spamming you with the same question. That worked but I think I misunderstood what the macro was doing. I am trying to accomplish a combination of the Send a message to yourself and Send a message when a reminder fires. I want to receive an email notification for some items but not all. My thought is that I would select the appointment I need the reminder to occur on, run the macro and then receive the email when the reminder fires.
Thanks again!
48. I have more people I need to send the e-mail too then can fit in the location field. About 40 people in all. Any idea how to set this up? When I create a contact group, the macro errors out when it try's to send.
49. Run-time error'-2147467259 (80004005)':
Outlook does not recognize one or more of the names.
Current the group name is Test. I have (2) e-mails in the group and both are correct. In the location field, I have entered Test.
50. Hello, I am using your script (the 1st on), and it works great.
But I need additional function - to send a link with that e-mail.
I've used:
ObjMsg.Body = strLink & " " & " This e-mail was generated automatically"
But it sends the link as a plain text only. Do you have any advice for me?
Thank you
51. Hello,
I solved it by this:
ObjMsg.HTMLBody = "" & _
"My Excel" & _
"This email was generated automatically" & ""
52. Hi Diane,
I am brand new to this and most of it is over my head. I have tried a few of the codes but not having much luck because i don't know what to fill in where in the code. I successfully used the "send reminder to yourself" code but put my coworkers email in instead of mine.
Here is what i'm trying to do. I have a shared calendar for myself and 2 coworkers. I am the only one that receives reminders, so i was hoping to add the code so that when i receive the reminder they get an email about it. I want this to only happen when reminder is for the calendar "Pharmacy Schedule" and not for my personal email calendar. I this possible?
Can you please help by pasting code here and showing me where to put their emails: ie coworker1@email.com and coworker2@email.com. and Pharmacy Schedule. Thanks in advance!
53. Thank you Diane! It's working! This solved a big problem for me, greatly appreciated!
54. Hi Diane, i was wondering if the above code work if lets say i have alot appointment and i wish to send a reminder to attendee based on different appointment. Is it possible to do that?
Thanks!
55. Hi Diane,
I have no problem getting this working but unfortunately when it does fire it just keeps spilling out messages until I turn the reminders to none or delete/dismiss the message.
Looks good but at the minute too effective :)
Thanks for your time, you must be irritated by supporting this a couple of years after you posted it!
Cheers,
Matt
56. Thanks Diane. So something like this?
Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass <> "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories <> "Appraisal Reminders" Then
Exit Sub
End If
objMsg.To = Item.Location
objMsg.Subject = Item.Subject
objMsg.Body = Item.Body
objMsg.Send
Set objMsg = Nothing
Item.Categories = "Message Sent"
Item.Save
End Sub
57. Private Sub Application_Reminder(ByVal Item As Object)
Dim objMsg As MailItem
Set objMsg = Application.CreateItem(olMailItem)
If Item.MessageClass <> "IPM.Appointment" Then
Exit Sub
End If
If Item.Categories <> "Send Message" Then
Exit Sub
End If
objMsg.To = Item.Location
objMsg.Subject = Item.Subject
objMsg.Body = Item.Body
objMsg.Send
Set objMsg = Nothing
End Sub
It works well for me, but I would like to have an additional CC field. How can this be done?
http://docs.ocean.dwavesys.com/en/latest/examples/scheduling.html
# Constrained Scheduling¶
This example solves a binary constraint satisfaction problem (CSP). CSPs require that all of a problem's variables be assigned values that satisfy all of the constraints. Here, the constraints are a company's policy for scheduling meetings:
• Constraint 1: During business hours, all meetings must be attended in person at the office.
• Constraint 2: During business hours, participation in meetings is mandatory.
• Constraint 3: Outside business hours, meetings must be teleconferenced.
• Constraint 4: Outside business hours, meetings must not exceed 30 minutes.
Solving such a CSP means finding meetings that meet all the constraints.
The purpose of this example is to help a new user to formulate a constraint satisfaction problem using Ocean tools and solve it on a D-Wave system. Other examples demonstrate more advanced steps that might be needed for complex problems.
## Example Requirements¶
To run the code in this example, the following is required.
If you installed dwave-ocean-sdk and ran dwave config create, your installation should meet these requirements.
## Solution Steps¶
Section Solving Problems on a D-Wave System describes the process of solving problems on the quantum computer in two steps: (1) formulate the problem as a binary quadratic model (BQM) and (2) solve the BQM with a D-Wave system or a classical sampler. In this example, Ocean's dwavebinarycsp tool builds the BQM based on the constraints we formulate.
## Formulate the Problem¶
D-Wave systems solve binary quadratic models, so the first step is to express the problem with binary variables.
• Time of day is represented by binary variable time with value $$1$$ for business hours and $$0$$ for hours outside the business day.
• Venue is represented by binary variable location with value $$1$$ for office and $$0$$ for teleconference.
• Meeting duration is represented by variable length with value $$1$$ for short meetings (under 30 minutes) and $$0$$ for meetings of longer duration.
• Participation is represented by variable mandatory with value $$1$$ for mandatory participation and $$0$$ for optional participation.
For large numbers of variables and constraints, such problems can be hard. This example has four binary variables, so there are only $$2^4=16$$ possible meeting arrangements. As shown in the table below, it is a simple matter to work out all the combinations by hand (or in code, as sketched after the table) and find the solutions that meet all the constraints.
All Possible Meeting Options.
Time of Day Venue Duration Participation Valid?
Business hours Office Short Mandatory Yes
Business hours Office Short Optional No (violates 2)
Business hours Office Long Mandatory Yes
Business hours Office Long Optional No (violates 2)
Business hours Teleconference Short Mandatory No (violates 1)
Business hours Teleconference Short Optional No (violates 1, 2)
Business hours Teleconference Long Mandatory No (violates 1)
Business hours Teleconference Long Optional No (violates 1, 2)
Non-business hours Office Short Mandatory No (violates 3)
Non-business hours Office Short Optional No (violates 3)
Non-business hours Office Long Mandatory No (violates 3, 4)
Non-business hours Office Long Optional No (violates 3, 4)
Non-business hours Teleconference Short Mandatory Yes
Non-business hours Teleconference Short Optional Yes
Non-business hours Teleconference Long Mandatory No (violates 4)
Non-business hours Teleconference Long Optional No (violates 4)
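Since the variables are binary, the table above can also be generated mechanically. The following is a minimal sketch (plain Python, not part of the original example; the function name satisfies_all is ours) that checks each of the 16 assignments against the four constraints directly:
from itertools import product
def satisfies_all(time, location, length, mandatory):
    # Encoding as above: 1 = business hours / office / short / mandatory, 0 otherwise.
    if time and not location:              # Constraint 1: business hours -> in office
        return False
    if time and not mandatory:             # Constraint 2: business hours -> mandatory
        return False
    if (not time) and location:            # Constraint 3: outside hours -> teleconference
        return False
    if (not time) and (not length):        # Constraint 4: outside hours -> 30 minutes or less
        return False
    return True
valid = [v for v in product((0, 1), repeat=4) if satisfies_all(*v)]
print(len(valid))   # prints 4, matching the rows marked "Yes" above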
Ocean’s dwavebinarycsp enables the definition of constraints in different ways, including by defining functions that evaluate True when the constraint is met. The code below defines a function that returns True when all this example’s constraints are met.
def scheduling(time, location, length, mandatory):
    if time:                                  # Business hours
        return (location and mandatory)       # In office and mandatory participation
    else:                                     # Outside business hours
        return ((not location) and length)    # Teleconference for a short duration
The next code lines create a constraint from this function and add it to a CSP instance, csp, instantiated with binary variables.
>>> import dwavebinarycsp
>>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)
>>> csp.add_constraint(scheduling, ['time', 'location', 'length', 'mandatory'])
This tool, dwavebinarycsp, can also convert the binary CSP to a BQM. The following code does so and displays the BQM’s linear and quadratic coefficients, $$q_i$$ and $$q_{i,j}$$ respectively in $$\sum_i^N q_ix_i + \sum_{i<j}^N q_{i,j}x_i x_j$$, which are the inputs for programming the quantum computer.
>>> bqm = dwavebinarycsp.stitch(csp)
>>> bqm.linear
{'length': -2.0, 'location': 2.0, 'mandatory': 0.0, 'time': 2.0}
>>> bqm.quadratic
{('location', 'length'): 2.0,
('mandatory', 'length'): 0.0,
('mandatory', 'location'): -2.0,
('time', 'length'): 0.0,
('time', 'location'): -4.0,
('time', 'mandatory'): 0.0}
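As a quick consistency check (our own sketch, not part of the original example), you can evaluate the stitched BQM on all 16 assignments and compare with the CSP itself: assignments accepted by csp.check() should be exactly those with the lowest energy. The csp and bqm objects are the ones defined above; the names list is just an illustrative helper.
>>> from itertools import product
>>> names = ['time', 'location', 'length', 'mandatory']
>>> for bits in product((0, 1), repeat=4):
...     sample = dict(zip(names, bits))
...     print(csp.check(sample), bqm.energy(sample), sample)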
## Solve the Problem by Sampling¶
For small numbers of variables, even your computer’s CPU can solve CSPs quickly. Here we solve both classically on your CPU and on the quantum computer.
### Solving Classically on a CPU¶
Before using the D-Wave system, it can sometimes be helpful to test code locally. Here we select one of Ocean software’s test samplers to solve classically on a CPU. Ocean’s dimod provides a sampler that simply returns the BQM’s value (energy) for every possible assignment of variable values.
>>> from dimod.reference.samplers import ExactSolver
>>> sampler = ExactSolver()
>>> solution = sampler.sample(bqm)
Valid solutions—assignments of variables that do not violate any constraint—should have the lowest value of the BQM, and solution.data() yields samples in order of increasing energy, so the first solution has the lowest value (lowest energy state). The code below sets variable min_energy to the BQM's lowest value, which is in the first record of the returned result.
>>> min_energy = next(solution.data(['energy']))[0]
>>> print(min_energy)
-2.0
The code below prints all those solutions (assignments of variables) for which the BQM has its minimum value.
>>> for sample, energy in solution.data(['sample', 'energy']):
... if energy == min_energy:
... time = 'business hours' if sample['time'] else 'evenings'
... location = 'office' if sample['location'] else 'home'
... length = 'short' if sample['length'] else 'long'
... mandatory = 'mandatory' if sample['mandatory'] else 'optional'
... print("During {} at {}, you can schedule a {} meeting that is {}".format(time, location, length, mandatory))
...
During evenings at home, you can schedule a short meeting that is optional
During evenings at home, you can schedule a short meeting that is mandatory
During business hours at office, you can schedule a short meeting that is mandatory
During business hours at office, you can schedule a long meeting that is mandatory
### Solving on a D-Wave System¶
We now solve on a D-Wave system using sampler DWaveSampler() from Ocean software’s dwave-system. We also use its EmbeddingComposite() composite to map our unstructured problem (variables such as time etc.) to the sampler’s graph structure (the QPU’s numerically indexed qubits) in a process known as minor-embedding. The next code sets up a D-Wave system as the sampler.
Note
In the code below, replace sampler parameters in the third line. If you configured a default solver, as described in Using a D-Wave System, you should be able to set the sampler without parameters as sampler = EmbeddingComposite(DWaveSampler()). You can see this information by running dwave config inspect in your terminal.
>>> from dwave.system.samplers import DWaveSampler
>>> from dwave.system.composites import EmbeddingComposite
>>> sampler = EmbeddingComposite(DWaveSampler(endpoint='https://URL_to_my_D-Wave_system/', token='ABC-123456789012345678901234567890', solver='My_D-Wave_Solver'))
Because the sampled solution is probabilistic, returned solutions may differ between runs. Typically, when submitting a problem to the system, we ask for many samples, not just one. This way, we see multiple “best” answers and reduce the probability of settling on a suboptimal answer. Below, we ask for 5000 samples.
>>> response = sampler.sample(bqm, num_reads=5000)
The code below prints all those solutions (assignments of variables) for which the BQM has its minimum value and the number of times it was found.
>>> total = 0
... for sample, energy, occurrences in response.data(['sample', 'energy', 'num_occurrences']):
... total = total + occurrences
... if energy == min_energy:
... time = 'business hours' if sample['time'] else 'evenings'
... location = 'office' if sample['location'] else 'home'
... length = 'short' if sample['length'] else 'long'
... mandatory = 'mandatory' if sample['mandatory'] else 'optional'
... print("{}: During {} at {}, you can schedule a {} meeting that is {}".format(occurrences, time, location, length, mandatory))
... print("Total occurrences: ", total)
...
1676: During business hours at office, you can schedule a long meeting that is mandatory
1229: During business hours at office, you can schedule a short meeting that is mandatory
1194: During evenings at home, you can schedule a short meeting that is optional
898: During evenings at home, you can schedule a short meeting that is mandatory
Total occurrences: 5000
## Summary¶
In the terminology of the Ocean Software Stack, Ocean tools moved the original problem through the following layers:
• Application: scheduling under constraints. There exist many CSPs that are computationally hard problems; for example, the map-coloring problem is to color all regions of a map such that any two regions sharing a border have different colors. The job-shop scheduling problem is to schedule multiple jobs done on several machines with constraints on the machines’ execution of tasks.
• Method: constraint compilation.
• Sampler API: the Ocean tool builds a BQM with lowest values (“ground states”) that correspond to assignments of variables that satisfy all constraints.
• Sampler: classical ExactSolver() and then DWaveSampler().
• Compute resource: first a local CPU then a D-Wave system.
https://en.wikipedia.org/wiki/Planar_separator_theorem
Planar separator theorem
In graph theory, the planar separator theorem is a form of isoperimetric inequality for planar graphs, that states that any planar graph can be split into smaller pieces by removing a small number of vertices. Specifically, the removal of O(√n) vertices from an n-vertex graph (where the O invokes big O notation) can partition the graph into disjoint subgraphs each of which has at most 2n/3 vertices.
A weaker form of the separator theorem with O(√n log n) vertices in the separator instead of O(√n) was originally proven by Ungar (1951), and the form with the tight asymptotic bound on the separator size was first proven by Lipton & Tarjan (1979). Since their work, the separator theorem has been reproven in several different ways, the constant in the O(√n) term of the theorem has been improved, and it has been extended to certain classes of nonplanar graphs.
Repeated application of the separator theorem produces a separator hierarchy which may take the form of either a tree decomposition or a branch-decomposition of the graph. Separator hierarchies may be used to devise efficient divide and conquer algorithms for planar graphs, and dynamic programming on these hierarchies can be used to devise exponential time and fixed-parameter tractable algorithms for solving NP-hard optimization problems on these graphs. Separator hierarchies may also be used in nested dissection, an efficient variant of Gaussian elimination for solving sparse systems of linear equations arising from finite element methods.
Bidimensionality theory of Demaine, Fomin, Hajiaghayi, and Thilikos generalizes and greatly expands the applicability of the separator theorem for a vast set of minimization problems on planar graphs and more generally graphs excluding a fixed minor.
Statement of the theorem
As it is usually stated, the separator theorem states that, in any n-vertex planar graph G = (V,E), there exists a partition of the vertices of G into three sets A, S, and B, such that each of A and B has at most 2n/3 vertices, S has O(√n) vertices, and there are no edges with one endpoint in A and one endpoint in B. It is not required that A or B form connected subgraphs of G. S is called the separator for this partition.
An equivalent formulation is that the edges of any n-vertex planar graph G may be subdivided into two edge-disjoint subgraphs G1 and G2 in such a way that both subgraphs have at least n/3 vertices and such that the intersection of the vertex sets of the two subgraphs has O(√n) vertices in it. Such a partition is known as a separation.[1] If a separation is given, then the intersection of the vertex sets forms a separator, and the vertices that belong to one subgraph but not the other form the separated subsets of at most 2n/3 vertices. In the other direction, if one is given a partition into three sets A, S, and B that meet the conditions of the planar separator theorem, then one may form a separation in which the edges with an endpoint in A belong to G1, the edges with an endpoint in B belong to G2, and the remaining edges (with both endpoints in S) are partitioned arbitrarily.
The constant 2/3 in the statement of the separator theorem is arbitrary and may be replaced by any other number in the open interval (1/2,1) without changing the form of the theorem: a partition into more equal subsets may be obtained from a less-even partition by repeatedly splitting the larger sets in the uneven partition and regrouping the resulting connected components.[2]
Example
A planar separator for a grid graph.
Consider a grid graph with r rows and c columns; the number n of vertices equals rc. For instance, in the illustration, r = 5, c = 8, and n = 40. If r is odd, there is a single central row, and otherwise there are two rows equally close to the center; similarly, if c is odd, there is a single central column, and otherwise there are two columns equally close to the center. Choosing S to be any of these central rows or columns, and removing S from the graph, partitions the graph into two smaller connected subgraphs A and B, each of which has at most n/2 vertices. If r ≤ c (as in the illustration), then choosing a central column will give a separator S with r ≤ √n vertices, and similarly if c ≤ r then choosing a central row will give a separator with at most √n vertices. Thus, every grid graph has a separator S of size at most √n, the removal of which partitions it into two connected components, each of size at most n/2.[3]
The planar separator theorem states that a similar partition can be constructed in any planar graph. The case of arbitrary planar graphs differs from the case of grid graphs in that the separator has size O(√n) but may be larger than √n, the bound on the size of the two subsets A and B (in the most common versions of the theorem) is 2n/3 rather than n/2, and the two subsets A and B need not themselves form connected subgraphs.
Constructions
Lipton & Tarjan (1979) augment the given planar graph by additional edges, if necessary, so that it becomes maximal planar (every face in a planar embedding is a triangle). They then perform a breadth-first search, rooted at an arbitrary vertex v, and partition the vertices into levels by their distance from v. If l1 is the median level (the level such that the numbers of vertices at higher and lower levels are both at most n/2) then there must be levels l0 and l2 that are O(√n) steps above and below l1 respectively and that each contain O(√n) vertices, for otherwise there would be more than n vertices in the levels near l1. They show that there must be a separator S formed by the union of l0 and l2, the endpoints of an edge e of G that does not belong to the breadth-first search tree and that lies between the two levels, and the vertices on the two breadth-first search tree paths from the endpoints of e back up to level l0. The size of the separator S constructed in this way is at most √8√n, or approximately 2.83√n. The vertices of the separator and the two disjoint subgraphs can be found in linear time.
This proof of the separator theorem applies as well to weighted planar graphs, in which each vertex has a non-negative cost. The graph may be partitioned into three sets A, S, and B such that A and B each have at most 2/3 of the total cost and S has O(√n) vertices, with no edges from A to B.[4] By analysing a similar separator construction more carefully, Djidjev (1982) shows that the bound on the size of S can be reduced to √6√n, or approximately 2.45√n.
Holzer et al. (2009) suggest a simplified version of this approach: they augment the graph to be maximal planar and construct a breadth first search tree as before. Then, for each edge e that is not part of the tree, they form a cycle by combining e with the tree path that connects its endpoints. They then use as a separator the vertices of one of these cycles. Although this approach cannot be guaranteed to find a small separator for planar graphs of high diameter, their experiments indicate that it outperforms the Lipton–Tarjan and Djidjev breadth-first layering methods on many types of planar graph.
Simple cycle separators
For a graph that is already maximal planar it is possible to show a stronger construction of a simple cycle separator, a cycle of small length such that the inside and the outside of the cycle (in the unique planar embedding of the graph) each have at most 2n/3 vertices. Miller (1986) proves this (with a separator size of √8√n) by using the Lipton–Tarjan technique for a modified version of breadth first search in which the levels of the search form simple cycles.
Alon, Seymour & Thomas (1994) prove the existence of simple cycle separators more directly: they let C be a cycle of at most √8√n vertices, with at most 2n/3 vertices outside C, that forms as even a partition of inside and outside as possible, and they show that these assumptions force C to be a separator. For otherwise, the distances within C must equal the distances in the disk enclosed by C (a shorter path through the interior of the disk would form part of the boundary of a better cycle). Additionally, C must have length exactly √8√n, as otherwise it could be improved by replacing one of its edges by the other two sides of a triangle. If the vertices in C are numbered (in clockwise order) from 1 to √8√n, and vertex i is matched up with vertex √8√n − i + 1, then these matched pairs can be connected by vertex-disjoint paths within the disk, by a form of Menger's theorem for planar graphs. However, the total length of these paths would necessarily exceed n, a contradiction. With some additional work they show by a similar method that there exists a simple cycle separator of size at most (3/√2)√n, approximately 2.12√n.
Djidjev & Venkatesan (1997) further improved the constant factor in the simple cycle separator theorem to 1.97√n. Their method can also find simple cycle separators for graphs with non-negative vertex weights, with separator size at most 2√n, and can generate separators with smaller size at the expense of a more uneven partition of the graph. In 2-connected planar graphs that are not maximal, there exist simple cycle separators with size proportional to the Euclidean norm of the vector of face lengths that can be found in near-linear time.[5]
Circle separators
According to the Koebe–Andreev–Thurston circle-packing theorem, any planar graph may be represented by a packing of circular disks in the plane with disjoint interiors, such that two vertices in the graph are adjacent if and only if the corresponding pair of disks are mutually tangent. As Miller et al. (1997) show, for such a packing, there exists a circle that has at most 3n/4 disks touching or inside it, at most 3n/4 disks touching or outside it, and that crosses O(√n) disks.
To prove this, Miller et al. use stereographic projection to map the packing onto the surface of a unit sphere in three dimensions. By choosing the projection carefully, the center of the sphere can be made into a centerpoint of the disk centers on its surface, so that any plane through the center of the sphere partitions it into two halfspaces that each contain or cross at most 3n/4 of the disks. If a plane through the center is chosen uniformly at random, a disk will be crossed with probability proportional to its radius. Therefore, the expected number of disks that are crossed is proportional to the sum of the radii of the disks. However, the sum of the squares of the radii is proportional to the total area of the disks, which is at most the total surface area of the unit sphere, a constant. An argument involving Jensen's inequality shows that, when the sum of squares of n non-negative real numbers is bounded by a constant, the sum of the numbers themselves is O(√n). Therefore, the expected number of disks crossed by a random plane is O(√n) and there exists a plane that crosses at most that many disks. This plane intersects the sphere in a great circle, which projects back down to a circle in the plane with the desired properties. The O(√n) disks crossed by this circle correspond to the vertices of a planar graph separator that separates the vertices whose disks are inside the circle from the vertices whose disks are outside the circle, with at most 3n/4 vertices in each of these two subsets.[6][7]
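The step from a bounded sum of squares to an O(√n) sum can be written out explicitly (a sketch; here r_1, …, r_n denote the radii of the projected disks, scaled so that the sum of their squares is bounded by a constant):
$\sum_{i=1}^{n} r_i \;\le\; \sqrt{\,n\sum_{i=1}^{n} r_i^{2}\,} \;=\; O(\sqrt{n}),$
which is the Cauchy–Schwarz inequality, or equivalently Jensen's inequality applied to the convex function x ↦ x².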
This method leads to a randomized algorithm that finds such a separator in linear time,[6] and a less-practical deterministic algorithm with the same linear time bound.[8] By analyzing this algorithm carefully using known bounds on the packing density of circle packings, it can be shown to find separators of size at most
$\sqrt{\frac{2\pi}{\sqrt{3}}}\left(\frac{1+\sqrt{3}}{2\sqrt{2}}+o(1)\right)\sqrt{n}\approx 1.84\sqrt{n}.$[9]
Although this improved separator size bound comes at the expense of a more-uneven partition of the graph, Spielman & Teng (1996) argue that it provides an improved constant factor in the time bounds for nested dissection compared to the separators of Alon, Seymour & Thomas (1990). The size of the separators it produces can be further improved, in practice, by using a nonuniform distribution for the random cutting planes.[10]
The stereographic projection in the Miller et al. argument can be avoided by considering the smallest circle containing a constant fraction of the centers of the disks and then expanding it by a constant picked uniformly in the range [1,2]. It is easy to argue, as in Miller et al., that the disks intersecting the expanded circle form a valid separator, and that, in expectation, the separator is of the right size. The resulting constants are somewhat worse.[11]
Spectral partitioning
Spectral clustering methods, in which the vertices of a graph are grouped by the coordinates of the eigenvectors of matrices derived from the graph, have long been used as a heuristic for graph partitioning problems for nonplanar graphs.[12] As Spielman & Teng (2007) show, spectral clustering can also be used to derive an alternative proof for a weakened form of the planar separator theorem that applies to planar graphs with bounded degree. In their method, the vertices of a given planar graph are sorted by their coordinates in the second eigenvector of the Laplacian matrix of the graph, and this sorted order is partitioned at the point that minimizes the ratio of the number of edges cut by the partition to the number of vertices on the smaller side of the partition. As they show, every planar graph of bounded degree has a partition of this type in which the ratio is O(1/√n). Although this partition may not be balanced, repeating the partition within the larger of the two sides and taking the union of the cuts formed at each repetition will eventually lead to a balanced partition with O(√n) edges. The endpoints of these edges form a separator of size O(√n).
Edge separators
A variation of the planar separator theorem involves edge separators, small sets of edges forming a cut between two subsets A and B of the vertices of the graph. The two sets A and B must each have size at most a constant fraction of the number n of vertices of the graph (conventionally, both sets have size at most 2n/3), and each vertex of the graph belongs to exactly one of A and B. The separator consists of the edges that have one endpoint in A and one endpoint in B. Bounds on the size of an edge separator involve the degree of the vertices as well as the number of vertices in the graph: the planar graphs in which one vertex has degree n − 1, including the wheel graphs and star graphs, have no edge separator with a sublinear number of edges, because any edge separator would have to include all the edges connecting the high degree vertex to the vertices on the other side of the cut. However, every planar graph with maximum degree Δ has an edge separator of size O(√(Δn)).[13]
A simple cycle separator in the dual graph of a planar graph forms an edge separator in the original graph.[14] Applying the simple cycle separator theorem of Gazit & Miller (1990) to the dual graph of a given planar graph strengthens the O(√(Δn)) bound for the size of an edge separator by showing that every planar graph has an edge separator whose size is proportional to the Euclidean norm of its vector of vertex degrees.
Papadimitriou & Sideri (1996) describe a polynomial time algorithm for finding the smallest edge separator that partitions a graph G into two subgraphs of equal size, when G is an induced subgraph of a grid graph with no holes or with a constant number of holes. However, they conjecture that the problem is NP-complete for arbitrary planar graphs, and they show that the complexity of the problem is the same for grid graphs with arbitrarily many holes as it is for arbitrary planar graphs.
Lower bounds
A polyhedron formed by replacing each of the faces of an icosahedron by a mesh of 100 triangles, an example of the lower bound construction of Djidjev (1982).
In a √n × √n grid graph, a set S of s < √n points can enclose a subset of at most s(s − 1)/2 grid points, where the maximum is achieved by arranging S in a diagonal line near a corner of the grid. Therefore, in order to form a separator that separates at least n/3 of the points from the remaining grid, s needs to be at least √(2n/3), approximately 0.82√n.
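The stated bound on s follows from the enclosure estimate (a sketch, ignoring lower-order terms):
$\frac{s(s-1)}{2} \;\ge\; \frac{n}{3} \quad\Longrightarrow\quad s^{2} \;\gtrsim\; \frac{2n}{3} \quad\Longrightarrow\quad s \;\ge\; (1 - o(1))\sqrt{2n/3} \;\approx\; 0.82\sqrt{n}.$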
There exist n-vertex planar graphs (for arbitrarily large values of n) such that, for every separator S that partitions the remaining graph into subgraphs of at most 2n/3 vertices, S has at least √(4π√3)√n/3 vertices, approximately 1.56√n.[2] The construction involves approximating a sphere by a convex polyhedron, replacing each of the faces of the polyhedron by a triangular mesh, and applying isoperimetric theorems for the surface of the sphere.
Separator hierarchies
Separators may be combined into a separator hierarchy of a planar graph, a recursive decomposition into smaller graphs. A separator hierarchy may be represented by a binary tree in which the root node represents the given graph itself, and the two children of the root are the roots of recursively constructed separator hierarchies for the induced subgraphs formed from the two subsets A and B of a separator.
A separator hierarchy of this type forms the basis for a tree decomposition of the given graph, in which the set of vertices associated with each tree node is the union of the separators on the path from that node to the root of the tree. Since the sizes of the graphs go down by a constant factor at each level of the tree, the upper bounds on the sizes of the separators also go down by a constant factor at each level, so the sizes of the separators on these paths add in a geometric series to O(√n). That is, a separator formed in this way has width O(√n), and can be used to show that every planar graph has treewidth O(√n).
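The geometric series mentioned above can be made explicit (a sketch, with c the constant from the separator theorem, so that a subgraph of size m contributes a separator of at most c√m vertices, and subgraph sizes at depth k are at most (2/3)^k n):
$\sum_{k \ge 0} c\sqrt{(2/3)^{k} n} \;=\; c\sqrt{n} \sum_{k \ge 0} \left(\sqrt{2/3}\right)^{k} \;=\; \frac{c\sqrt{n}}{1 - \sqrt{2/3}} \;=\; O(\sqrt{n}).$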
Constructing a separator hierarchy directly, by traversing the binary tree top down and applying a linear-time planar separator algorithm to each of the induced subgraphs associated with each node of the binary tree, would take a total of O(n log n) time. However, it is possible to construct an entire separator hierarchy in linear time, by using the Lipton–Tarjan breadth-first layering approach and by using appropriate data structures to perform each partition step in sublinear time.[15]
If one forms a related type of hierarchy based on separations instead of separators, in which the two children of the root node are roots of recursively constructed hierarchies for the two subgraphs G1 and G2 of a separation of the given graph, then the overall structure forms a branch-decomposition instead of a tree decomposition. The width of any separation in this decomposition is, again, bounded by the sum of the sizes of the separators on a path from any node to the root of the hierarchy, so any branch-decomposition formed in this way has width O(√n) and any planar graph has branchwidth O(√n). Although many other related graph partitioning problems are NP-complete, even for planar graphs, it is possible to find a minimum-width branch-decomposition of a planar graph in polynomial time.[16]
By applying methods of Alon, Seymour & Thomas (1994) more directly in the construction of branch-decompositions, Fomin & Thilikos (2006a) show that every planar graph has branchwidth at most 2.12√n, with the same constant as the one in the simple cycle separator theorem of Alon et al. Since the treewidth of any graph is at most 3/2 its branchwidth, this also shows that planar graphs have treewidth at most 3.18√n.
Other classes of graphs
Some sparse graphs do not have separators of sublinear size: in an expander graph, deleting up to a constant fraction of the vertices still leaves only one connected component.[17]
Possibly the earliest known separator theorem is a result of Jordan (1869) that any tree can be partitioned into subtrees of at most 2n/3 vertices each by the removal of a single vertex.[6] In particular, the vertex that minimizes the maximum component size has this property, for if it did not then its neighbor in the unique large subtree would form an even better partition. By applying the same technique to a tree decomposition of an arbitrary graph, it is possible to show that any graph has a separator of size at most equal to its treewidth.
If a graph G is not planar, but can be embedded on a surface of genus g, then it has a separator with O(√(gn)) vertices. Gilbert, Hutchinson & Tarjan (1984) prove this by using a similar approach to that of Lipton & Tarjan (1979). They group the vertices of the graph into breadth-first levels and find two levels the removal of which leaves at most one large component consisting of a small number of levels. This remaining component can be made planar by removing a number of breadth-first paths proportional to the genus, after which the Lipton–Tarjan method can be applied to the remaining planar graph. The result follows from a careful balancing of the size of the removed two levels against the number of levels between them. If the graph embedding is given as part of the input, its separator can be found in linear time. Graphs of genus g also have edge separators of size O(√(gΔn)).[18]
Graphs of bounded genus form an example of a family of graphs closed under the operation of taking minors, and separator theorems also apply to arbitrary minor-closed graph families. In particular, if a graph family has a forbidden minor with h vertices, then it has a separator with O(h√n) vertices, and such a separator can be found in time O(n^{1 + ε}) for any ε > 0.[19]
An intersection graph of disks, with at most k = 5 disks covering any point of the plane.
The circle separator method of Miller et al. (1997) generalizes to the intersection graphs of any system of d-dimensional balls with the property that any point in space is covered by at most some constant number k of balls, to k-nearest-neighbor graphs in d dimensions,[6] and to the graphs arising from finite element meshes.[20] The sphere separators constructed in this way partition the input graph into subgraphs of at most n(d + 1)/(d + 2) vertices. The size of the separators for k-ply ball intersection graphs and for k-nearest-neighbor graphs is O(k^{1/d} n^{1 − 1/d}).[6]
Applications
Divide and conquer algorithms
Separator decompositions can be of use in designing efficient divide and conquer algorithms for solving problems on planar graphs. As an example, one problem that can be solved in this way is to find the shortest cycle in a weighted planar digraph. This may be solved by the following steps:
• Partition the given graph G into three subsets S, A, B according to the planar separator theorem
• Recursively search for the shortest cycles in A and B
• Use Dijkstra's algorithm to find, for each s in S, the shortest cycle through s in G.
• Return the shortest of the cycles found by the above steps.
The time for the two recursive calls to A and B in this algorithm is dominated by the time to perform the O(√n) calls to Dijkstra's algorithm, so this algorithm finds the shortest cycle in O(n^{3/2} log n) time.
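The time bound can be read off from the resulting recurrence (a sketch; constants omitted, with n_A and n_B the sizes of the two subproblems):
$T(n) \;=\; T(n_A) + T(n_B) + O\!\left(n^{3/2} \log n\right), \qquad n_A, n_B \le \tfrac{2n}{3},$
which solves to T(n) = O(n^{3/2} log n), since the per-level n^{3/2} log n work decreases geometrically with the depth of the recursion.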
A faster algorithm for the same shortest cycle problem, running in time O(n log^3 n), was given by Wulff-Nilsen (2009). His algorithm uses the same separator-based divide and conquer structure, but uses simple cycle separators rather than arbitrary separators, so that the vertices of S belong to a single face of the graphs inside and outside the cycle separator. He then replaces the O(√n) separate calls to Dijkstra's algorithm with more sophisticated algorithms to find shortest paths from all vertices on a single face of a planar graph and to combine the distances from the two subgraphs. For weighted but undirected planar graphs, the shortest cycle is equivalent to the minimum cut in the dual graph and can be found in O(n log log n) time,[21] and the shortest cycle in an unweighted undirected planar graph (its girth) may be found in time O(n).[22] (However, the faster algorithm for unweighted graphs is not based on the separator theorem.)
Frederickson proposed another fast algorithm for single-source shortest paths, based on the separator theorem for planar graphs, in 1986.[23] It is an improvement of Dijkstra's algorithm with iterative search on a carefully selected subset of the vertices, and it takes O(n √(log n)) time in an n-vertex graph. Separators are used to find a division of a graph, that is, a partition of the edge set into two or more subsets, called regions. A node is said to be contained in a region if some edge of the region is incident to the node. A node contained in more than one region is called a boundary node of the regions containing it. The method uses the notion of an r-division of an n-node graph: a division into O(n/r) regions, each containing O(r) nodes, including O(√r) boundary nodes. Frederickson showed that an r-division can be found in O(n log n) time by recursive application of the separator theorem.
The sketch of his algorithm to solve the problem is as follows.
1. Preprocessing Phase: Partition the graph into carefully selected subsets of vertices and determine the shortest paths between all pairs of vertices in these subsets, where intermediate vertices on the path are not in the subset. This phase requires the planar graph G0 to be transformed into a graph G in which no vertex has degree greater than 3. By a corollary of Euler's formula, the number of vertices in the resulting graph will be n ≤ 6n0 − 12, where n0 is the number of vertices in G0. This phase also ensures the following properties of a suitable r-division. A suitable r-division of a planar graph is an r-division such that:
• each boundary vertex is contained in at most three regions, and
• any region that is not connected consists of connected components, all of which share boundary vertices with exactly the same set of either one or two connected regions.
2. Search Phase:
• Main Thrust: Find shortest distances from the source to each vertex in the subset. When a vertex v in the subset is closed, d(w) must be updated for all vertices w in the subset such that a path exists from v to w.
• Mop-up: Determine shortest distances to every remaining vertex.
Henzinger et al. extended Frederickson's r-division technique for the single-source shortest path algorithm in planar graphs with nonnegative edge lengths and proposed a linear-time algorithm.[24] Their method generalizes Frederickson's notion of graph divisions, so that an (r, s)-division of an n-node graph is a division into O(n/r) regions, each containing r^{O(1)} nodes and each having at most s boundary nodes. If an (r, s)-division is repeatedly divided into smaller regions, the result is called a recursive division. This algorithm uses approximately log* n levels of divisions. The recursive division is represented by a rooted tree whose leaves are labeled by distinct edges of G. The root of the tree represents the region consisting of all of G, the children of the root represent the subregions into which that region is divided, and so on. Each leaf (atomic region) represents a region containing exactly one edge.
Nested dissection is a separator based divide and conquer variation of Gaussian elimination for solving sparse symmetric systems of linear equations with a planar graph structure, such as the ones arising from the finite element method. It involves finding a separator for the graph describing the system of equations, recursively eliminating the variables in the two subproblems separated from each other by the separator, and then eliminating the variables in the separator.[3] The fill-in of this method (the number of nonzero coefficients of the resulting Cholesky decomposition of the matrix) is O(n log n),[25] allowing this method to be competitive with iterative methods for the same problems.[3]
Klein, Mozes and Weimann [26] gave an O(n log^2 n)-time, linear-space algorithm to find the shortest path distances from s to all nodes for a directed planar graph with positive and negative arc-lengths containing no negative cycles. Their algorithm uses planar graph separators to find a Jordan curve C that passes through O(√n) nodes (and no arcs) such that between n/3 and 2n/3 nodes are enclosed by C. Nodes through which C passes are boundary nodes. The original graph G is separated into two subgraphs G0 and G1 by cutting the planar embedding along C and duplicating the boundary nodes. For i = 0 and 1, in Gi the boundary nodes lie on the boundary of a single face Fi.
The overview of their approach is given below.
• Recursive call: The first stage recursively computes the distances from r within Gi for i = 0, 1.
• Intra-part boundary-distances: For each graph Gi compute all distances in Gi between boundary nodes. This takes O(n log n) time.
• Single-source inter-part boundary distances: A shortest path in G passes back and forth between G0 and G1 to compute the distances in G from r to all the boundary nodes. Alternating iterations use the all-boundary-distances in G0 and G1. The number of iterations is O(√n), so the overall time for this stage is O(n α(n)), where α(n) is the inverse Ackermann function.
• Single-source inter-part distances: The distances computed in the previous stages are used, together with a Dijkstra computation within a modified version of each Gi , to compute the distances in G from r to all the nodes. This stage takes O(n log n) time.
• Rerooting single-source distances: The distances from r in G are transformed into nonnegative lengths, and again Dijkstra’s algorithm is used to compute distances from s. This stage requires O(n log n) time.
An important part of this algorithm is the use of Price Functions and Reduced Lengths. For a directed graph G with arc-lengths ι(·), a price function is a function φ from the nodes of G to the real numbers. For an arc uv, the reduced length with respect to φ is ιφ(uv) = ι(uv) + φ(u) − φ(v). A feasible price function is a price function that induces nonnegative reduced lengths on all arcs of G. It is useful in transforming a shortest-path problem involving positive and negative lengths into one involving only nonnegative lengths, which can then be solved using Dijkstra’s algorithm.
The separator based divide and conquer paradigm has also been used to design data structures for dynamic graph algorithms[27] and point location,[28] algorithms for polygon triangulation,[15] shortest paths,[29] and the construction of nearest neighbor graphs,[30] and approximation algorithms for the maximum independent set of a planar graph.[28]
Exact solution of NP-hard optimization problems
By using dynamic programming on a tree decomposition or branch-decomposition of a planar graph, many NP-hard optimization problems may be solved in time exponential in √n or √n log n. For instance, bounds of this form are known for finding maximum independent sets, Steiner trees, and Hamiltonian cycles, and for solving the travelling salesman problem on planar graphs.[31] Similar methods involving separator theorems for geometric graphs may be used to solve Euclidean travelling salesman problem and Steiner tree construction problems in time bounds of the same form.[32]
For parameterized problems that admit a kernelization that preserves planarity and reduces the input graph to a kernel of size linear in the input parameter, this approach can be used to design fixed-parameter tractable algorithms the running time of which depends polynomially on the size of the input graph and exponentially on √k, where k is the parameter of the algorithm. For instance, time bounds of this form are known for finding vertex covers and dominating sets of size k.[33]
Approximation algorithms
Lipton & Tarjan (1980) observed that the separator theorem may be used to obtain polynomial time approximation schemes for NP-hard optimization problems on planar graphs such as finding the maximum independent set. Specifically, by truncating a separator hierarchy at an appropriate level, one may find a separator of size O(n/√log n) the removal of which partitions the graph into subgraphs of size c log n, for any constant c. By the four-color theorem, there exists an independent set of size at least n/4, so the removed nodes form a negligible fraction of the maximum independent set, and the maximum independent sets in the remaining subgraphs can be found independently in time exponential in their size. By combining this approach with later linear-time methods for separator hierarchy construction[15] and with table lookup to share the computation of independent sets between isomorphic subgraphs, it can be made to construct independent sets of size within a factor of 1 − O(1/√log n) of optimal, in linear time. However, for approximation ratios even closer to 1 than this factor, a later approach of Baker (1994) (based on tree-decomposition but not on planar separators) provides better tradeoffs of time versus approximation quality.
Similar separator-based approximation schemes have also been used to approximate other hard problems such as vertex cover.[34] Arora et al. (1998) use separators in a different way to approximate the travelling salesman problem for the shortest path metric on weighted planar graphs; their algorithm uses dynamic programming to find the shortest tour that, at each level of a separator hierarchy, crosses the separator a bounded number of times, and they show that as the crossing bound increases the tours constructed in this way have lengths that approximate the optimal tour.
Graph compression
Separators have been used as part of data compression algorithms for representing planar graphs and other separable graphs using a small number of bits. The basic principle of these algorithms is to choose a number k and repeatedly subdivide the given planar graph using separators into O(n/k) subgraphs of size at most k, with O(n/√k) vertices in the separators. With an appropriate choice of k (at most proportional to the logarithm of n) the number of non-isomorphic k-vertex planar subgraphs is significantly less than the number of subgraphs in the decomposition, so the graph can be compressed by constructing a table of all the possible non-isomorphic subgraphs and representing each subgraph in the separator decomposition by its index into the table. The remainder of the graph, formed by the separator vertices, may be represented explicitly or by using a recursive version of the same data structure. Using this method, planar graphs and many more restricted families of planar graphs may be encoded using a number of bits that is information-theoretically optimal: if there are Pn n-vertex graphs in the family of graphs to be represented, then an individual graph in the family can be represented using only (1 + o(1)) log2 Pn bits.[35] It is also possible to construct representations of this type in which one may test adjacency between vertices, determine the degree of a vertex, and list neighbors of vertices in constant time per query, by augmenting the table of subgraphs with additional tabular information representing the answers to the queries.[36][37]
Universal graphs
A universal graph for a family F of graphs is a graph that contains every member of F as a subgraph. Separators can be used to show that the n-vertex planar graphs have universal graphs with n vertices and O(n^{3/2}) edges.[38]
The construction involves a strengthened form of the separator theorem in which the sizes of the three subsets of vertices in the separator do not depend on the graph structure: there exists a number c, the magnitude of which is at most a constant times √n, such that the vertices of every n-vertex planar graph can be separated into subsets A, S, and B, with no edges from A to B, with |S| = c, and with |A| = |B| = (n − c)/2. This may be shown by using the usual form of the separator theorem repeatedly to partition the graph until all the components of the partition can be arranged into two subsets of fewer than n/2 vertices, and then moving vertices from these subsets into the separator as necessary until it has the given size.
Once a separator theorem of this type is shown, it can be used to produce a separator hierarchy for n-vertex planar graphs that again does not depend on the graph structure: the tree-decomposition formed from this hierarchy has width O(√n) and can be used for any planar graph. The set of all pairs of vertices in this tree-decomposition that both belong to a common node of the tree-decomposition forms a trivially perfect graph with O(n^{3/2}) edges that contains every n-vertex planar graph as a subgraph. A similar construction shows that bounded-degree planar graphs have universal graphs with O(n log n) edges, where the constant hidden in the O notation depends on the degree bound. Any universal graph for planar graphs (or even for trees of unbounded degree) must have Ω(n log n) edges, but it remains unknown whether this lower bound or the O(n^{3/2}) upper bound is tight for universal graphs for arbitrary planar graphs.[38]
Notes
3. ^ a b c George (1973). Instead of using a row or column of a grid graph, George partitions the graph into four pieces by using the union of a row and a column as a separator.
12. ^ Miller (1986) proved this result for 2-connected planar graphs, and Diks et al. (1993) extended it to all planar graphs.
18. ^ Kawarabayashi & Reed (2010). For earlier work on separators in minor-closed families see Alon, Seymour & Thomas (1990), Plotkin, Rao & Smith (1994), and Reed & Wood (2009).
22. ^ Greg N. Frederickson, "Fast algorithms for shortest paths in planar graphs, with applications", SIAM Journal on Computing, pp. 1004–1022, 1987.
23. ^ Monika R. Henzinger, Philip Klein, Satish Rao, Sairam Subramanian, "Faster shortest-path algorithms for planar graphs", Journal of Computer and System Sciences, Vol. 55, Issue 1, August 1997.
25. ^ Philip N. Klein, Shay Mozes, and Oren Weimann, "Shortest Paths in Directed Planar Graphs with Negative Lengths: a Linear-Space O(n log^2 n)-Time Algorithm", Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2009.
https://acius.co.uk/projects/planetary-mining-rovers/
## Planetary Mining Rovers
Logic programming is an excellent method for creating and solving multi-agent system problems. This project focuses on one type of problem: planetary rover mining, where agents collaborate by collecting and depositing resources.
## What is the Project?
This project focuses on three scenarios that use multiple agents cooperatively to mine resources and deposit them back at their starting point (base). Each environment increases in difficulty where the first contains one agent, the second uses two agents, and the third uses four agents.
## Scenario 1
Scenario 1 is the simplest of the scenarios, requiring the collection of one resource using one agent. It uses the basic_agent.asl agent to scan and mine a single gold node in a 10x10 environment. Being a solo agent, it combines scanning and mining, with a scan radius of 3 and a capacity of 3. The agent moves around the environment in a pattern, scanning for the resource. Once the resource is found, the agent uses the A* algorithm to find the shortest path to it and mines the ore until it reaches maximum capacity. Next, it uses the A* algorithm again to return home and deposit the ore. The agent moves back and forth between the resource and the base until the node is depleted.
## Scenario 2
Scenario 2 requires the collection of four resource nodes using two agents. It uses a combination of a dedicated scanner, scanner_s2.asl, that communicates resource node locations and A* paths to a dedicated miner, miner_s2.asl. The miner waits for the scanner to finish scanning the map (finding all resource nodes) before being signalled to begin mining operations. The miner then mines the nodes in sequence, fully depleting a node before moving on to the next one.
## Scenario 3
Scenario 3 requires the collection of eight resource nodes, split into two types (gold and diamond), using four agents. The implementation is identical to scenario 2 but with minor additions to accommodate the additional agents. It uses two dedicated scanners, both following the scanner_s3.asl agent format, which are assigned separate quadrants of the map (half each). The agents move to their quadrants and then begin scanning, sending the found resources to the other scanner (preventing duplicate resource locations) and to the respective miner. When the scanners get low on energy, they return to base, concluding the map scanning, and signal the miners to begin mining. The miners cannot start mining until both scanners have returned to base. The remaining two agents are miners, one for each resource type, both following the miner_s3.asl agent format. As with scenario 2, the agents deplete a resource and deposit it back at base before moving to the next one.
https://math.stackexchange.com/questions/2658564/when-is-hk-a-subgroup/2670084
# When is $HK$ a subgroup?
Let $H$ and $K$ be subgroups of a group $G$. My question is: when is $$HK = \{hk: h\in H, k\in K\}$$ a subgroup of $G$?
If $G$ is abelian, then this is clearly the case. I believe I have shown that if $H$ and $K$ are both normal subgroups of $G$, then $HK$ is a subgroup.
Are there any more general results where this is the case?
I do see things like this giving a list of equivalent statements, but I was wondering what statements imply that $HK$ is a subgroup.
• It is enough for one of $H,K$ to be normal. But that is not a necessary condition. You can have $HK=G$ for example. – almagest Feb 20 '18 at 13:08
• In general, $HK$ is a subgroup if and only if $HK=KH$. – Derek Holt Feb 20 '18 at 13:09
• @almagest: and it doesn't matter which one? – John Doe Feb 20 '18 at 13:09
• @DerekHolt the OP knows that, he is after sufficient conditions that are not in the Wikipedia article. – Arnaud Mortier Feb 20 '18 at 13:09
• @almagest: Please write this as an answer so that I can accept something. – John Doe Feb 26 '18 at 13:30
We use the result provided by Goldy. We can show the following corollary.
Corollary. Let $H$ and $K$ be subgroups of a group $G$. (a) If $H\leq N_{G}(K)$, then $HK$ is a subgroup of $G$. (b) If $K\unlhd G$ then $HK\leq G$.
Proof. (a) We show that $HK=KH$.
$HK \subset KH$: let $h\in H$ and $k\in K$. Then $h\in N_{G}(K)$. So $hKh^{-1}=K$. Hence $hkh^{-1}=l$ for some $l\in K$. So $hk=lh\in KH$.
$KH\subset HK$: let $k\in K$ and $h\in H$. Then $h^{-1}\in N_{G}(K)$. So $h^{-1}kh=l$ for some $l\in K$. Hence $kh=hl\in HK$.
So $HK=KH$. By the result provided by Goldy, $HK\leq G$.
(b) Since $K\unlhd G$, we have $gKg^{-1}=K$ for all $g\in G$. So $N_{G}(K)=G$. Hence $H\leq N_{G}(K)$. By part (a), $HK\leq G$.
Corollary. Let $H$ and $K$ be subgroups of a group $G$. (a) If $H\leq N_{G}(K)$ or $K\leq N_{G}(H)$, then $HK\leq G$. (b) If $H\unlhd G$ or $K\unlhd G$, then $HK\leq G$.
Proof. (a) If $H\leq N_{G}(K)$, then by the first corollary, $HK\leq G$. If $K\leq N_{G}(H)$, then by the first corollary, $KH\leq G$. By the previous result, $HK=KH$. So $HK\leq G$.
(b) If $K\unlhd G$, then $HK\leq G$. If $H\unlhd G$, then $KH\leq G$. So $HK=KH\leq G$.
Edit: one more case.
Corollary. Let $H$ and $K$ be subgroups of a group $G$. If $H\subset K$ or $K\subset H$, then $HK\leq G$.
Proof. If $H\subset K$, then $K\subset HK\subset K$. So $HK=K\leq G$. If $K\subset H$, then $H\subset HK\subset H$. So $HK=H\leq G$.
$HK$ is a subgroup of $G$ iff $HK=KH$.
Let $HK$ be a subgroup of $G$. If $x\in HK$ is any element, then $x^{-1}\in HK$. This implies, $x^{-1}=hk \implies x=k^{-1}h^{-1}\in KH$. Thus, $HK\subseteq KH$. Similarly, $KH\subseteq HK$. Hence, $HK=KH$.
Conversely, let $KH=HK$. Let $x,~y\in HK$. Then $x=h_1k_1$ and $y=h_2k_2$ for some $h_1,h_2\in H$, $k_1,k_2\in K$. This implies, $xy^{-1}=h_1(k_1k_2^{-1})h_2^{-1}$. Now, $(k_1k_2^{-1})h_2^{-1}\in KH=HK$, thus $(k_1k_2^{-1})h_2^{-1}=hk$ for some $h\in H$, $k\in K$. Thus, $xy^{-1}=h_1(hk)=(h_1h)k\in HK$. Hence, $HK$ is a subgroup, as $x, y\in HK\implies xy^{-1}\in HK$.
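As an editorial aside (not part of the original answers), the "iff" can be spot-checked on a small example; the permutation encoding below is an arbitrary choice, with $G=S_3$, $H=\langle(0\,1)\rangle$ and $K=\langle(0\,2)\rangle$:

```python
# Check that for H = <(0 1)>, K = <(0 2)> in S_3, HK != KH and HK is not a subgroup.
from itertools import permutations

def compose(p, q):                       # (p*q)(i) = p(q(i)), permutations as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

e = (0, 1, 2)

def is_subgroup(S):                      # closed under a*b^{-1} and contains identity
    return e in S and all(compose(a, inverse(b)) in S for a in S for b in S)

H = {e, (1, 0, 2)}                       # identity and the transposition (0 1)
K = {e, (2, 1, 0)}                       # identity and the transposition (0 2)
HK = {compose(h, k) for h in H for k in K}
KH = {compose(k, h) for k in K for h in H}
print(HK == KH, is_subgroup(HK))         # False False: HK has 4 elements, not a subgroup of S_3
```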
|
2020-05-25 21:12:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752381443977356, "perplexity": 62.660186743568644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389355.2/warc/CC-MAIN-20200525192537-20200525222537-00001.warc.gz"}
|
https://mersenneforum.org/showthread.php?s=2bb382471b366f6a9efcb9239eef2627&p=549710
|
mersenneforum.org Leyland Primes (x^y+y^x primes)
2020-07-01, 23:24 #375
pxp
Sep 2010
Weston, Ontario
11·13 Posts
Quote:
Originally Posted by rogue The ABC line needs to have "// Sieved to xxx" on the first line when used as input. This is probably the issue you are seeing. Nevertheless I can see if anything else is wrong.
I noticed that the help file had sieve start "3" as the default, suggesting perhaps that -p2 might be an out of range argument. As a result, I reworked my shared pages to exclude even L(x,y). That brings the number of terms per page down from ~11000 to ~7300. The first three lines of my first page are now:
Code:
ABC $a^$b$c*$b^$a // Sieved to 3
102503 5888 +1
80340 64561 +1
Last fiddled with by pxp on 2020-07-01 at 23:27 Reason: added link
2020-07-02, 15:43 #376
chris2be8
Sep 2009
3·613 Posts
rogue, can you post a sample file that works for you. Then pxp can run that to check his binary is OK, then change 1 thing at a time until it's doing what he wants. I've found that starting from a working example makes problem solving much easier.
Chris
2020-07-02, 16:46 #377
rogue
"Mark"
Apr 2003
Between here and the
1011010011010₂ Posts
Apparently I did not do a "make clean" before I did the make. The attached should work with the proper ABC file. I also found an issue in avx_powmod that only impacts non-Windows builds. That is also fixed.
Code:
./xyyxsieve -ixyyx.in -p3 -P1e6
xyyxsieve v1.5, a program to find factors numbers of the form x^y+y^x
Sieve started: 3 < p < 1e6 with 1324 terms (78917 <= x <= 283782, 23 <= y <= 78832) (expecting 1219 factors)
Sieve completed at p=1000193.
Processor time: 13.87 sec. (0.00 sieving) (0.98 cores)
503 terms written to xyyx.pfgw
Primes tested: 78512. Factors found: 821. Remaining terms: 503. Time: 14.16 seconds.
Attached Files xyyxsieve.7z (57.8 KB, 2 views)
2020-07-02, 18:08 #378
pxp
Sep 2010
Weston, Ontario
11×13 Posts
Quote:
Originally Posted by rogue The attached should work with the proper ABC file.
Code:
mm5:~ pxp$ cd /Users/pxp/Desktop/rogue
mm5:rogue pxp$ ./xyyxsieve -i386434.txt -p3 -P2e9
xyyxsieve v1.5, a program to find factors numbers of the form x^y+y^x
Sieve started: 3 < p < 2e9 with 7203 terms (78911 <= x <= 1283705, 2 <= y <= 78900) (expecting 6834 factors)
p=5957291, 4.586K p/sec, 6757 factors found at 1.52 sec per factor, 0.3% done. ETC 2020-07-03 06:54
p=10411189, 4.588K p/sec, 6768 factors found at 5.53 sec per factor, 0.5% done. ETC 2020-07-03 02:52
p=14994013, 4.595K p/sec, 6775 factors found at 8.68 sec per factor, 0.7% done. ETC 2020-07-03 01:09
p=24406297, 4.597K p/sec, 6785 factors found at 12.15 sec per factor, 1.2% done. ETC 2020-07-02 23:34
p=29201503, 4.594K p/sec, 6789 factors found at 15.19 sec per factor, 1.5% done. ETC 2020-07-02 23:08
p=34041223, 4.597K p/sec, 6794 factors found at 12.15 sec per factor, 1.7% done. ETC 2020-07-02 22:48
p=38899507, 4.573K p/sec, 6798 factors found at 15.18 sec per factor, 1.9% done. ETC 2020-07-02 22:33
p=43805231, 4.583K p/sec, 6800 factors found at 30.36 sec per factor, 2.2% done. ETC 2020-07-02 22:21
This is on one of my Mac minis. I'll let it run to the end. Thank you!
2020-07-02, 19:43 #379
rogue
"Mark"
Apr 2003
Between here and the
2×11×263 Posts
Quote:
Originally Posted by pxp This is on one of my Mac minis. I'll let it run to the end. Thank you!
You're welcome. Glad to be of service. You can use ^C to stop at any time. The program will finish processing the current chunk, then save and exit. If 2e9 isn't deep enough you can start sieving again using -ixyyx.pfgw. You will not need to use -p as it will grab the initial prime from the input file.
2020-07-03, 17:20 #380
pxp
Sep 2010
Weston, Ontario
11×13 Posts
Code:
Processor time: 22549.57 sec. (1.62 sieving) (0.99 cores)
333 terms written to xyyx.pfgw
Primes tested: 98222288. Factors found: 6870. Remaining terms: 333. Time: 22668.14 seconds.
I took this up to 5e9 which required an additional 8.5 hours and found 16 new factors, slightly less than 2 new factors per hour. On my Mac mini, a PRP-test of a number this size required half an hour, so roughly equivalent to what subsequent sieving might accomplish. I have replaced my three previously shared pre-sieved Leyland number files with their post-sieved outputs:
http://chesswanks.com/num/LLPHbdl/386434.txt
http://chesswanks.com/num/LLPHbdl/386435.txt
http://chesswanks.com/num/LLPHbdl/386436.txt
They contain 317, 325, and 303 terms, respectively. I think I can PRP-test any one of these in under a week and I intend to try in the near future. But first I will generate more sieved pages.
I ran multiple terminal windows to generate the three files. My initial attempt at this ran off the same xyyxsieve, little realizing that the ongoing xyyx.pfgw files overwrote each other. So I ended up cloning the folder containing xyyxsieve and re-ran each terminal window off its own folder. I suppose a future version of xyyxsieve could output a .pfgw file with a name that matches the input file name.
2020-07-03, 17:35 #381
rogue
"Mark"
Apr 2003
Between here and the
169A₁₆ Posts
Quote:
Originally Posted by pxp I ran multiple terminal windows to generate the three files. My initial attempt at this ran off the same xyyxsieve, little realizing that the ongoing xyyx.pfgw files overwrote each other. So I ended up cloning the folder containing xyyxsieve and re-ran each terminal window off its own folder. I suppose a future version of xyyxsieve could output a .pfgw file with a name that matches the input file name.
You can have multiple tabs in a Terminal window or multiple Terminal windows.
You can override the name of the output file by using -o and specifying the file name.
2020-07-04, 02:57 #382
LaurV
Romulan Interpreter
Jun 2011
Thailand
2·7·613 Posts
Quote:
Originally Posted by kar_bon So your file is like
Code:
(85085,34812)
(92856,14509)
Do this with a text editor:
- remove the "("
- remove the ")"
- replace the "," with " +1 " (notice the spaces)
Now the file should look like this
Code:
85085 +1 34812
92856 +1 14509
I didn't keep track of the development you talk about here, but what is described by Karsten can be achieved with a single regex command using perl or any text editor that accepts regex search and replace. For example, to do this with pn2 or with notepad++, open a search/replace box by pressing ctrl+f or ctrl+h or alt+r (depending on the program or OS or settings you have for shortcuts) or just take it from the menu, then go to the replace tab, be careful to check the "regular expressions" box, then in the "find what" box type "^\((\d*),(\d*)\)$" (without quotes, grrrr! how do I tell the forum's mathjax not to mess with my expression? what a crackpot haha, he thinks this is math ... Should I use "code" tags?) and in the "replace with" box type "\1 +1 \2" (without quotes, and mind the spaces around the "+1"). Click "replace all".
To have the plus at the end, just use "\1 \2 +1" in the "replace with" box.
For those who haven't heard about regular expressions, this translates to "if you find two groups of any number of digits each, alone on the row (i.e. no other text, the ^ and $ signs represent the beginning and end of the row), which are between parentheses and separated by a comma, extract the two groups into two different strings (called \1 and \2, this is what the internal parentheses do, in the "find what" string) and rearrange them according to the "replace with" box, possibly adding some text (the +1 and spaces) around them". That's all. No magic.
Or, actually, Magic!
You still need to add the header line "ABC blah blah" by hand (i.e. by typing it).
Edit: picture (Notepad++ used for exemplification) because Mathjax messed my expression, I know there was a way to block this, which Serge wrote here in the past, but we forgot the tag... was it "noeval", or what?
One more observation, in Notepad++ the regex search/replace is undo-able (if you mess the expression, just press undo and retry till you learn the right way)
Last fiddled with by LaurV on 2020-07-04 at 04:20
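For what it's worth, the same one-shot transformation can be sketched in Python instead of an editor (an editorial example, not from the thread; the file names are placeholders):

```python
# Turn lines like "(85085,34812)" into "85085 +1 34812", writing the ABC header first.
import re

line_re = re.compile(r"^\((\d*),(\d*)\)$")

with open("pairs.txt") as src, open("terms.abc", "w") as dst:
    dst.write("ABC $a^$b$c*$b^$a // Sieved to 3\n")   # header added by hand in the post
    for line in src:
        dst.write(line_re.sub(r"\1 +1 \2", line.strip()) + "\n")
```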
|
2020-07-04 18:22:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834224820137024, "perplexity": 8255.686531270303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886516.43/warc/CC-MAIN-20200704170556-20200704200556-00052.warc.gz"}
|
https://socratic.org/questions/59367da97c01495f5e3e3b82
|
# How are chemical formulae of different compounds represented?
Jun 12, 2017
#### Answer:
Is that all you want? We could be here for days!
#### Explanation:
Chemical symbols, $H$, $He$, $C$, $N$, $O$, etc., describe the 100 or so known elements, which, as far as chemists know, constitute all matter. The elements are systematically grouped in the Periodic Table.
Chemical formulae represent the combinations of atoms to give molecules and compounds of different elements, e.g. $H_2$, $CH_4$, $CO_2$, $C_6H_{12}O_6$.
And chemical equations are a simple shorthand means to represent how elements and compounds chemically interact to form new compounds and materials. A typical chemical reaction is the combustion of methane gas with dioxygen, which underlies our industrial civilization:
$CH_4(g) + 2O_2(g) \rightarrow CO_2(g) + 2H_2O(l) + \Delta$
Using the known masses of given quantities of elements we can thus use the given equation to represent mass and energy transfer: i.e. $16\cdot g$ of $\text{methane}$ is combusted by $64\cdot g$ of $\text{dioxygen}$ to give $44\cdot g$ of $\text{carbon dioxide}$ and $36\cdot g$ of water. Such a combustion also results in a measurable energy transfer, and such energy (here represented by the symbol $\Delta$) can be used to do useful work. Both mass and energy may thus be quantitatively assessed by such chemical equations.
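As a quick arithmetic check of the quoted masses (an editorial sketch, not part of the original answer), the masses balance:

$$\underbrace{16\ \text{g}}_{CH_4} + \underbrace{64\ \text{g}}_{2\,O_2} = 80\ \text{g} = \underbrace{44\ \text{g}}_{CO_2} + \underbrace{36\ \text{g}}_{2\,H_2O}$$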
All known chemical reactions CONSERVE mass and CONSERVE energy. What does this mean in the given context?
|
2019-08-23 10:50:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329216480255127, "perplexity": 1032.965431986181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00509.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1029.62042
|
# zbMATH — the first resource for mathematics
Generalized likelihood ratio statistics and Wilks phenomenon. (English) Zbl 1029.62042
Summary: Likelihood ratio theory has had tremendous success in parametric inference, due to the fundamental theory of Wilks. Yet, there is no generally applicable approach for nonparametric inference based on function estimation. Maximum likelihood ratio test statistics in general may not exist in nonparametric function estimation settings. Even if they exist, they are hard to find and cannot be optimal, as shown in this paper.
We introduce generalized likelihood statistics to overcome the drawbacks of nonparametric maximum likelihood ratio statistics. A new Wilks phenomenon is unveiled. We demonstrate that a class of the generalized likelihood statistics based on some appropriate nonparametric estimators are asymptotically distribution free and follow $$\chi^2$$-distributions under null hypotheses for a number of useful hypotheses and a variety of useful models including Gaussian white noise models, nonparametric regression models, varying coefficient models and generalized varying coefficient models.
We further demonstrate that generalized likelihood ratio statistics are asymptotically optimal in the sense that they achieve optimal rates of convergence. They can even be adaptively optimal by using a simple choice of adaptive smoothing parameters. Our work indicates that the generalized likelihood ratio statistics are indeed general and powerful for nonparametric testing problems based on function estimation.
##### MSC:
62G10 Nonparametric hypothesis testing
62G07 Density estimation
62G20 Asymptotic properties of nonparametric inference
62J12 Generalized linear models (logistic models)
|
2021-04-17 15:37:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47986963391304016, "perplexity": 704.9229695771212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00263.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/197940-finite-reflection-groups-two-dimensions-grove-benson-section-2-1-a.html
|
# Thread: Finite Reflection Groups in Two Dimensions - Grove & Benson Section 2.1
1. ## Finite Reflection Groups in Two Dimensions - Grove & Benson Section 2.1
I am seeking to understand Finite Reflection Groups and am reading Grove and Benson (G&B) : Finite Reflection Groups
Grove and Benson, in Section 2.1 Orthogonal Transformations in Two Dimensions, define T as a linear transformation belonging to the group of all orthogonal transformations $\displaystyle O( {\mathbb{R}}^2 )$.
G&B point out that the vector $\displaystyle x_1 = (cos \ \theta /2 , sin \ \theta /2)$ is an eigenvector having eigenvalue 1 for T and that similarly $\displaystyle x_2 = ( - sin \ \theta /2 , cos \ \theta /2 )$ is an eigenvector with eigenvalue -1 and $\displaystyle x_1 \ \bot \ x_2$ [see attachment]
G&B then state that if $\displaystyle x = {\lambda}_1 x_1 + {\lambda}_2 x_2$ , then $\displaystyle Tx = {\lambda}_1 x_1 - {\lambda}_2 x_2$ and T sends x to its mirror image with respect to the line L (see Figure 2.2(b) in attachement)
The transformation T is called the refection through L or the reflection along $\displaystyle x_2$.
G&B then say "observe that $\displaystyle Tx = x - 2( x , x_2) x_2$ for all $\displaystyle x \in {\mathbb{R}}^2$"
Can someone help me show (explicitly and formally) that $\displaystyle Tx = x - 2( x , x_2) x_2$ for all $\displaystyle x \in {\mathbb{R}}^2$ ?
And further (and possibly more important) can someone help me get a geometric sense of what the formula above means? ie why do G&B highlight this particular relationship?
Would very much appreciate such help
Peter
2. ## Re: Finite Reflection Groups in Two Dimensions - Grove & Benson Section 2.1
$\displaystyle Tx = T(\lambda_1 x_1 + \lambda_2 x_2) = \lambda_1 T(x_1) + \lambda_2 T(x_2)$
$\displaystyle = \lambda_1 x_1 - \lambda_2 x_2 = \lambda_1 x_1 + \lambda_2 x_2 - 2\lambda_2 x_2$
$\displaystyle = x - 2\lambda_2 x_2$ (this is just simple algebra, to this point).
now $\displaystyle (x,x_2) = (\lambda_1 x_1 + \lambda_2 x_2,\,x_2) = \lambda_1(x_1,x_2) + \lambda_2(x_2,x_2)$
and since $x_1,x_2$ are orthogonal, $(x_1,x_2) = 0$, therefore:
$\displaystyle (x,x_2) = \lambda_2(x_2,x_2) = \lambda_2|x_2|^2.$
but $x_2$ lies on the unit circle, therefore it has length (and thus length squared) of 1.
so $(x,x_2) = \lambda_2$, and we have:
$\displaystyle Tx = x - 2\lambda_2 x_2 = x - 2(x,x_2)x_2.$
so why?
the quantity $(x,x_2)x_2$ is a little misleading (because we are dealing with UNIT vectors $x_1,x_2$).
normally, this is written as: $\displaystyle \frac{(x,x_2)}{(x_2,x_2)}\,x_2$ which is known as:
the projection of the vector $x$ in the direction of the vector $x_2$. geometrically this is: "the part of $x$ that lies on the line generated by $x_2$".
normally, we project a vector onto the unit vectors $e_1 = (1,0)$ and $e_2 = (0,1)$.
that is: if $x = (x_1,x_2) = x_1e_1 + x_2e_2$, then:
the projection of $x$ in the direction of $e_1$ is $\displaystyle \frac{(x\cdot e_1)}{(e_1\cdot e_1)}\,e_1$
$\displaystyle = \frac{x_1\cdot 1 + x_2\cdot 0}{1\cdot 1 + 0\cdot 0}\,e_1 = x_1e_1 = (x_1,0).$
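As an editorial aside, the identity can also be spot-checked numerically; the angle and test vector below are arbitrary, and the matrix is the standard 2D reflection through the line at angle θ/2 (assumed to match the transformation T above):

```python
# Numeric spot check of Tx = x - 2(x, x2)x2 against the 2x2 reflection matrix.
import numpy as np

t = 0.7                                            # the angle theta of the reflection
x2 = np.array([-np.sin(t / 2), np.cos(t / 2)])     # unit eigenvector with eigenvalue -1
T = np.array([[np.cos(t),  np.sin(t)],
              [np.sin(t), -np.cos(t)]])            # reflection through the line L

x = np.array([1.3, -0.4])                          # arbitrary test vector
lhs = T @ x
rhs = x - 2 * np.dot(x, x2) * x2
print(np.allclose(lhs, rhs))                       # True
```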
3. ## Re: Finite Reflection Groups in Two Dimensions - Grove & Benson Section 2.1
Thanks ... that post was most helpful, particularly the bit about the misleading nature of the formula as stated ... that clarified a few issues for me.
Peter
|
2018-04-27 05:23:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791441321372986, "perplexity": 1823.3826631486277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00303.warc.gz"}
|
https://findatwiki.com/Dark_diversity
|
# Dark diversity
Dark diversity is the set of species that are absent from a study site but present in the surrounding region and potentially able to inhabit particular ecological conditions. It can be determined based on species distribution, dispersal potential and ecological needs.[1] The term was introduced in 2011 by three researchers from the University of Tartu and was inspired by the idea of dark matter in physics since dark diversity too cannot be directly observed.[2][3][4]
## Overview
Dark diversity is part of the species pool: the set of species that are able to inhabit a particular site and that are present in the surrounding region or landscape.[5] Dark diversity comprises species that belong to a particular species pool but that are not currently present at a site.[2] Dark diversity is related to the "habitat-specific" or "filtered" species pool, which only includes species that can both disperse to and potentially inhabit the study site.[4][5] For example, if fish diversity in a coral reef site has been sampled, dark diversity includes all fish species from the surrounding region that are currently absent but can potentially disperse to and colonize the study site. Because all sampling will also miss some species actually present at a site, we also have the related idea of 'phantom species' – those species present at a site but not detected within the sampling units used to sample the community at that site.[6]
The existence of these phantom species means that routine measures of colonization and extinction at a site will always overestimate true rates because of "pseudo-turnover."
The name dark diversity is borrowed from dark matter: matter which cannot be seen and directly measured, but whose existence and properties are inferred from its gravitational effects on visible matter. Similarly, dark diversity cannot be seen directly when only the sample is observed, but it is present if a broader scale is considered, and its existence and properties can be estimated when proper data are available. With dark matter we can better understand the distribution and dynamics of galaxies; with dark diversity we can understand the composition and dynamics of ecological communities.
## Habitat specificity and scale
Dark diversity is the counterpart of observed diversity: both are habitat-specific, where the habitat can be defined narrowly (e.g. a microhabitat in an old-growth forest) or broadly (e.g. terrestrial habitat). Thus, habitat specificity does not mean that all species in dark diversity can inhabit all localities within the study sample, but there must be ecologically suitable parts.
Habitat specificity is what distinguishes dark diversity from beta diversity. If beta diversity is the association between alpha and gamma diversity, dark diversity connects alpha diversity and the habitat-specific (filtered) species pool. The habitat-specific species pool includes only those species which can potentially inhabit the focal study site.[2] Observed diversity can be studied at any scale, and at sites with varying heterogeneity. This is also true for dark diversity. Consequently, as local observed diversity can be linked to very different sample sizes, dark diversity can be applied at any study scale (a 1x1 m sample in vegetation, a bird count transect in a landscape, a 50x50 km UTM grid cell).
## Methods to estimate dark diversity
Region size determines the likelihood of dispersal to the study site, and selecting an appropriate scale depends on the research question. For a more general study, a scale comparable to a biogeographic region can be used (e.g. a small country, a state, or a radius of a few hundred km). If we want to know which species can potentially inhabit the study site in the near future (for example 10 years), the landscape scale is appropriate.
To separate ecologically suitable species, different methods can be used.
Environmental niche modelling can be applied for a large number of species. Expert opinion can be used.[7] Data on species' habitat preferences is available in books, e.g. bird nesting habitats. This can also be quantitative, for example plant species indicator values, according to Ellenberg. A recently developed method estimates dark diversity from species co-occurrence matrices.[8] An online tool is available for the co-occurrence method.[9]
## Usage
Dark diversity allows meaningful comparisons of biodiversity. The community completeness index can be used:
${\displaystyle \log \left({\frac {\text{observed diversity}}{\text{dark diversity}}}\right)}$.[10]
This expresses the local diversity on a relative scale, filtering out the effect of the regional species pool. For example, when completeness of plant diversity was studied at the European scale, it did not exhibit the latitudinal pattern seen with observed richness and species pool values. Instead, high completeness was characteristic of regions with lower human impact, indicating that anthropogenic factors are among the most important local-scale biodiversity determinants in Europe.[11]
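As an illustration, here is a minimal sketch of this index in Python (the species counts are made-up example values):

```python
# Community completeness index: log(observed diversity / dark diversity).
import math

def completeness(observed_diversity: int, dark_diversity: int) -> float:
    return math.log(observed_diversity / dark_diversity)

# Example: 30 species observed at a site, 10 suitable species absent (dark diversity).
print(round(completeness(30, 10), 2))   # 1.1
```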
Dark diversity studies can be combined with functional ecology to understand why a species pool is poorly realized in a locality. For example, when functional traits were compared between grassland species in observed diversity and dark diversity, it became evident that species in dark diversity generally have poorer dispersal abilities.[12]
Dark diversity can be useful in prioritizing nature conservation,[13] for example to identify the most complete sites in different regions. Dark diversity of alien species, weeds and pathogens can be useful to prepare in time for future invasions.
Recently, the dark diversity concept was used to explain mechanisms behind the plant diversity–productivity relationship.[14]
|
2023-03-27 19:25:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33688709139823914, "perplexity": 3154.7016486177376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00480.warc.gz"}
|
https://rigidity.readthedocs.io/en/v1.4.1/code/errors.html
|
# Errors
This submodule contains exception classes that are used by Rigidity to handle different actions from the rule classes.
exception rigidity.errors.DropRow
Bases: rigidity.errors.RigidityException
When a rule raises this error, the row that is being processed is dropped from the output.
with_traceback()
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
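A hypothetical usage sketch follows; only the DropRow exception itself is taken from the documentation above, while the drop_negative helper and the calling loop are illustrative assumptions rather than Rigidity's actual rule API:

```python
# Sketch: a rule-style callable that discards a whole row by raising DropRow.
from rigidity.errors import DropRow

def drop_negative(value):
    """Made-up rule hook: drop the row when the value is negative."""
    if float(value) < 0:
        raise DropRow()
    return value

rows, kept = [["1.5"], ["-2.0"], ["3.0"]], []
for row in rows:
    try:
        kept.append([drop_negative(cell) for cell in row])
    except DropRow:
        pass          # the row is skipped, mirroring the behaviour described above
print(kept)           # [['1.5'], ['3.0']]
```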
|
2021-10-24 06:49:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2545335292816162, "perplexity": 5238.648900052532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585911.17/warc/CC-MAIN-20211024050128-20211024080128-00279.warc.gz"}
|
https://www.physicsforums.com/threads/1st-order-expansion-problem.755321/
|
# 1st order expansion problem!
1. May 26, 2014
### ks_wann
I'm a bit frustrated at the moment, as this minor problem should be fairly easy. But I seem to go wrong at some point...
So I've got to do a 1st order expansion of the function
$$f=\frac{\cos(\theta)}{\sin(\theta)}\ln\left(\frac{L\sin(\theta)}{d\cos(\theta)}+1\right)$$
and my steps are:
$$f(0)=0$$
$$f^{\prime}(\theta)=-\frac{1}{\sin^2(\theta)}\ln\left(\frac{L\sin(\theta)}{d\cos(\theta)}+1\right)+\frac{\cos(\theta)}{\sin(\theta)}\cdot\frac{d\cos(\theta)}{L\sin(\theta)+d\cos(\theta)}\cdot\left(\frac{\cos^2(\theta)+\sin^2(\theta)}{\cos^2(\theta)}\cdot\frac{L}{d}\right).$$
When I insert theta=0 I end up dividing by 0...
Furthermore, when I make my computer do the expansion, I get the correct result from the assignment I'm working on.
If anyone could help me out, I'd be grateful!
2. May 26, 2014
### ehild
f(0) is not defined, you need the limit at theta=0. Better expand the logarithm. You know that ln(1+x) ≈ x - x²/2 if |x|<<1.
ehild
Last edited: May 26, 2014
3. May 26, 2014
### LCKurtz
Or, if you let $x = \tan\theta$ and $k=\frac L d$ you have $\frac{\ln(1+kx)}{x}$. You can find the limit as $x\to 0$ using L'Hospital's rule.
4. May 27, 2014
### benorin
If you're not sure about the substitution, you would have:
$$\lim_{\theta\to 0}f(\theta )=\lim_{\theta\to 0}\frac{\log\left(1+\frac{L}{d}\tan\theta\right)}{\tan\theta} = \lim_{\theta\to 0}\frac{\frac{d}{d\theta}\log\left(1+\frac{L}{d}\tan\theta\right)}{ \frac{d}{d\theta} \tan\theta}= \cdots$$
5. May 28, 2014
### ehild
The function can be written in simpler form as benorin has shown:
$$f(\theta )=\frac{\log\left(1+\frac{L}{d}\tan\theta\right)}{\tan\theta}$$
Use the Taylor expansion of log(1+x) ≈ x - x²/2. Let x = tan(theta).
$$f(\theta )≈\frac{\frac{L}{d}\tan\theta-\left(\frac{L}{d}\tan\theta\right)^2/2}{\tan\theta}$$
Simplify by tan(θ): You get an expression linear in tan(θ). You can expand tan(θ) with respect to θ...
ehild
6. May 30, 2014
### ks_wann
Thanks for all of your answers, it really helped me out. I've reread the series chapter of my calculus book, and I've come to a much better understanding of that subject in general.
I basically expand the logarithm in the function, and then I expand the function, which gave the correct answer.
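As an editorial footnote, the expansion can also be checked symbolically; k stands for L/d, and the sketch below assumes SymPy is available:

```python
# Expand f(theta) = log(1 + k*tan(theta)) / tan(theta) around theta = 0.
import sympy as sp

theta, k = sp.symbols('theta k', positive=True)
f = sp.log(1 + k * sp.tan(theta)) / sp.tan(theta)
print(sp.series(f, theta, 0, 2))   # k - k**2*theta/2 + O(theta**2), up to term ordering
```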
|
2017-08-23 01:58:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8369301557540894, "perplexity": 1383.9031010485173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00653.warc.gz"}
|
http://www.ck12.org/arithmetic/Division-of-Fractions-by-Whole-Numbers/lesson/Division-of-Fractions-by-Whole-Numbers-MSM6/
|
# Division of Fractions by Whole Numbers
## Understand how to divide a fraction by a whole number.
Source: https://pixabay.com/en/diary-pen-calculator-case-work-582976/
Ray spends of his day at work. He needs to divide his time equally between 3 different projects. How much time should Ray spend on each project? Find the answer in terms of hours.
In this concept, you will learn how to divide fractions by whole numbers.
### Dividing Fractions by Whole Numbers
Think about what is happening when you divide a fraction by a whole number. You are taking a part of something and splitting it up into more parts. Here is a division problem: one-half divided by 3.
This problem is asking you to take one-half and divide it into three parts. Here is a picture of one-half.
Divide each half into three parts.
Each section is 1/6 of the whole. One-half divided by 3 is 1/6.
There are two things to remember when dividing fractions. The first is that you can solve the problem by using the inverse operation. The inverse or opposite of division is multiplication. The second is that you will multiply by the reciprocal of the divisor. Remember that the reciprocal of a fraction is the fraction with its numerator and denominator switched.
To divide a fraction, multiply by the reciprocal of the divisor. Here is the division problem again.
First, change the operation to multiplication and change 3 to its reciprocal. 3 can be written as the fraction 3/1. The reciprocal of 3/1 is 1/3.
Then, multiply the fractions to solve: 1/2 × 1/3 = 1/6.
The answer is the same as the diagram above. Dividing a fraction by a whole number is the same as multiplying the fraction by the reciprocal of the divisor.
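For readers following along with code, here is a quick check of the rule using Python's fractions module (an editorial aside, not part of the lesson):

```python
# Dividing one-half by 3 is the same as multiplying by the reciprocal 1/3.
from fractions import Fraction

half = Fraction(1, 2)
print(half / 3)                  # 1/6
print(half * Fraction(1, 3))     # 1/6, the same result via the reciprocal
```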
### Examples
#### Example 1
Earlier, you were given a problem about Ray's day at work.
Ray needs to evenly divide of his day between 3 projects. Divide by 3 to find how much time he should spend on each project.
First, set up a division problem.
Then, change the division problem. Multiply by the reciprocal of the divisor.
Next, multiply the fractions.
Now find that fraction of a day in terms of hours. One day is 24 hours. The word "of" tells you to multiply by 24.
Ray should spend hours on each project.
#### Example 2
Divide the fraction: . Answer in simplest form.
First, change the operation to multiplication and change 4 to its reciprocal.
Then, multiply the fractions.
Next, simplify the fraction. The greatest common factor of 6 and 32 is 2.
#### Example 3
Divide the fraction: . Answer in simplest form.
First, change the expression. Multiply by the inverse of the divisor.
Then, multiply the fractions.
The fraction is in simplest form.
#### Example 4
Divide the fraction: . Answer in simplest form.
First, change the expression. Multiply by the inverse of the divisor.
Then, multiply the fractions. You can simplify the 3s before multiplying.
The fraction is in simplest form.
#### Example 5
Divide the fraction: . Answer in simplest form.
First, change the expression. Multiply by the inverse of the divisor.
Then, multiply the fractions.
Next, simplify the fraction by the greatest common factor of 2.
### Review
Divide each fraction and whole number.
1.
2.
3.
4.
5.
6.
### Vocabulary Language: English
Inverse Operation
Inverse operations are operations that "undo" each other. Multiplication is the inverse operation of division. Addition is the inverse operation of subtraction.
reciprocal
The reciprocal of a number is the number you can multiply it by to get one. The reciprocal of 2 is 1/2. It is also called the multiplicative inverse, or just inverse.
|
2016-12-10 01:35:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8786048889160156, "perplexity": 1205.4504003475745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542932.99/warc/CC-MAIN-20161202170902-00120-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://archive.lib.msu.edu/crcmath/math/math/c/c867.htm
|
## Cusp Form
A cusp form is a Modular Form for which the coefficient $c(0)=0$ in the Fourier Series $f(\tau)=\sum_{n=0}^{\infty} c(n)e^{2\pi i n\tau}$
(Apostol 1997, p. 114). The only entire cusp form of weight $k<12$ is the zero function (Apostol 1997, p. 116). The set of all cusp forms in $M_k$ (all Modular Forms of weight $k$) is a linear subspace of $M_k$ which is denoted $M_{k,0}$.
|
2021-12-05 01:40:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9575613141059875, "perplexity": 570.4816083230411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363134.25/warc/CC-MAIN-20211205005314-20211205035314-00517.warc.gz"}
|
https://injuryprevention.bmj.com/content/25/2/98
|
Cost-effectiveness of neighbourhood slow zones in New York City
1. Boshen Jiao1,
2. Sooyoung Kim2,
3. Jonas Hagen3,
4. Peter Alexander Muennig1
1. 1 Global Research Analytics for Population Health, Columbia University Mailman School of Public Health, New York, NY, USA
2. 2 Ecole des Hautes Etudes en Santé Publique, Rennes, France
3. 3 Columbia University Graduate School of Architecture, Planning and Preservation, New York, NY, USA
1. Correspondence to Boshen Jiao, Global Research Analytics for Population Health, Columbia University Mailman School of Public Health , New York, NY, USA; bj2361{at}cumc.columbia.edu
## Abstract
Background Neighbourhood slow zones (NSZs) are areas that attempt to slow traffic via speed limits coupled with other measures (eg, speed humps). They appear to reduce traffic crashes and encourage active transportation. We evaluate the cost-effectiveness of NSZs in New York City (NYC), which implemented them in 2011.
Methods We examined the effectiveness of NSZs in NYC using data from the city’s Department of Transportation in an interrupted time series analysis. We then conducted a cost-effectiveness analysis using a Markov model. One-way sensitivity analyses and Monte Carlo analyses were conducted to test error in the model.
Results After 2011, road casualties in NYC fell by 8.74% (95% CI 1.02% to 16.47%) in the NSZs but increased by 0.31% (95% CI −3.64% to 4.27%) in the control neighbourhoods. Because injury costs outweigh intervention costs, NSZs resulted in a net savings of US$15 (95% credible interval: US$2 to US$43) and a gain of 0.002 of a quality-adjusted life year (QALY, 95% credible interval: 0.001 to 0.006) over the lifetime of the average NSZ resident relative to no intervention. Based on the results of Monte Carlo analyses, there was a 97.7% chance that the NSZs fall under US$50 000 per QALY gained.
Conclusion While additional causal models are needed, NSZs appeared to be an effective and cost-effective means of reducing road casualties. Our models also suggest that NSZs may save more money than they cost.
• neighbourhood slow zones
• traffic injury
• cost-effectiveness
• New York City
## Introduction
As more people around the world drive, the rates of fatalities from automobile crashes are climbing sharply.1 2 Automobiles may also contribute to the global obesity epidemic, pulmonary disease, heart disease and other health problems related to passive transport and air pollution.3 As a result, urban planners and public health policy-makers from Sweden to Indonesia are teaming up in an attempt to find new ways to mitigate the public health threats associated with driving and ensure that pedestrians and bicycles can safely venture out on our roads.4
However, the speed limits, traffic cameras and road modifications that are needed to improve the safety of our roads produce regulatory and time costs for society while irking drivers. At the same time, residents do not want fast traffic in their neighbourhoods because it is a safety hazard for their children and a noise nuisance in their homes.5 Neighbourhood slow zones (NSZs) with a speed limit of 20 mph (32 km/h) therefore provide allies for policy-makers (drivers who are also home owners) in their efforts to calm traffic.6
Earlier work suggests that a 1 mph reduction in speed will reduce traffic injuries by 5%.7 Evidence from quasi-experimental studies has shown that 20 mph zones can significantly slow traffic, thereby preventing both fatal and non-fatal traffic injuries.7–9 However, the impact of speed changes on societal costs is complex because a reduction in speed can produce shifts from fatalities to an increased incidence of debilitating injury.10 Moreover, investments are required to implement NSZs, including signs, pavement markings, speed bumps and increased enforcement.11 While NYC claims that NSZs have reduced crashes with injuries within these areas by over 14%, questions remain regarding the causal impacts of NSZs on mean traffic speeds, injury rates and exercise activity.11 Given that they are both politically palatable and life-saving, 20 mph zones serve as a potentially powerful public health tool, but it is not known whether they are cost-effective.
In 2011, New York City (NYC) started establishing NSZs, in which traffic speed limit was reduced from 25 to 20 mph.11 We ask whether it is plausible that NSZs are cost-effective, even when excluding potentially important benefits, such as their impacts on obesity and diabetes—two widely recognised health risk factors associated with neighbourhood walkability.3 12–14
## Methods
### Overview and definitions
We built a Markov model using TreeAge Pro 2016 to evaluate the cost-effectiveness of NSZs for road casualties when compared with no NSZ (no intervention). We used NYC as a hypothetical case study and then provided sensitivity analyses on model inputs so that users can extrapolate our findings onto other contexts. Our model estimated the costs and health outcomes for a hypothetical cohort of 36-year-old New Yorkers (the median age in NYC).15 They were followed until age 90 years or death, whichever came first. From a societal perspective, we included all costs, including construction, maintenance and reconstruction costs of NSZs, the medical costs of fatal and non-fatal traffic injuries and productivity losses due to traffic injuries. The quality-adjusted life year (QALY) was used as a health outcome measure. One QALY is roughly equal to 1 year of life spent in perfect health. To calculate the incremental cost-effectiveness ratio (ICER), we divided the change in costs associated with NSZs (including the cost of implementation, as well as savings from lower medical and productivity costs) by the additional gains in QALYs. A 3% discount rate was used following recommendations of the Panel on Cost-effectiveness in Health and Medicine.16
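For reference, the ratio described above is the standard definition (the notation here is ours, not reproduced from the paper):

$$\text{ICER}=\frac{C_{\text{NSZ}}-C_{\text{no intervention}}}{\text{QALY}_{\text{NSZ}}-\text{QALY}_{\text{no intervention}}}$$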
### Intervention effect
To quantify the impact of NSZs on traffic injury reduction, we conducted a controlled interrupted time series analysis. The outcome measure of the analysis was the annual number of road casualties, including both fatal and non-fatal traffic injuries. We used 2009–2016 crash data published by the New York City Department of Transportation (NYCDOT).17 The NYCDOT crash data record the date and geographic locations of fatal and non-fatal traffic injuries in NYC.
To generate the annual counts of road casualties outside of and within NSZs, we combined the shapefiles of crash data and NSZs (available on the NYCDOT website), using geographic information system software, QGIS V.2.14.10.17
The study time period was classified as either the preintervention period (the years before NSZ implementation) or the postintervention period (the years after NSZ implementation). The traffic injuries that occurred during the year of implementation were not included in the analysis for two reasons: (1) the specific start and completion dates for NSZ construction were not publicly available and (2) it could take time for drivers to adapt to the newly established NSZs.
To account for the potential time trend of traffic injuries in NYC, we selected one corresponding control neighbourhood for each of the NSZs. The selection criteria for matching were the following: (1) it is in the same borough as the NSZ and (2) it had a similar preimplementation time trend of traffic injuries as the NSZ. We then generated the annual count of road casualties in control neighbourhoods using the shapefiles published by the New York City Department of City Planning.18
We performed conditional fixed effects Poisson regressions for the NSZs and the control neighbourhoods using Stata V.13. In doing so, it is possible to adjust for autocorrelation within time series.19 We used Grundy et al’s approach.8 The calendar year and NSZ were set as time and panel identification variables, respectively. We ran the regressions using Stata’s command xtpoisson with the number of road casualties as the dependent variable and a dichotomous variable identifying the preintervention (coded 0) or the postintervention period (coded 1) as the independent variables. The coefficients of the independent variables represent the effect of NSZs or control neighbourhoods on road casualties.
If drivers attempt to avoid NSZs, they might increase the risk of crashes in the areas adjacent to the NSZs.8 We conducted an additional controlled time series analysis to test this hypothesis. We randomly selected eight adjacent neighbourhoods. Then, as above, we selected a neighbourhood as control zone for each of these eight neighbourhoods and performed conditional fixed effects Poisson regressions.
### Probabilities
The probabilities used as model inputs are listed in table 1. Our hypothetical cohort was exposed to the probability of traffic injury (fatal, serious and minor) and death by other causes. The age-specific mortality rate for other causes was derived from a US life table.20 We used the traffic injury rate, the proportion of serious traffic injuries and the case fatality ratio in 2011 for NYC as the ‘status quo’ (no NSZs), since NSZs included in our study were implemented and completed between 2012 and 2015.11 21 The NYCDOT reports the locations of road casualties in NYC, of which 1.11% occurred in NSZs (data are available in online supplementary file 1).17 We assumed that any missing crash data were randomly distributed across roads within the city. To estimate the background traffic injury rate in NSZs, we multiplied the total number of traffic injuries in NYC by 1.11% and then divided it by the population of NSZs.22
### Supplementary file 1
Table 1. Values used in the Markov model evaluating NSZs relative to the no intervention.
Table 3.
## Discussion
We find that NSZs are an effective and cost-effective means of reducing road casualties. Our effectiveness results are in line with previous quasi-experimental studies conducted in London.7–9 However, our effect size was much smaller than that claimed by the NYCDOT. This is likely because the data for that study came from 2012, when only four NSZs had been implemented and only 1-year postintervention data were available for analysis.11
In the USA, the cost of medical care and productivity losses linked to traffic injuries exceeded US$80 billion every year.26 Our models suggest that NSZs appear to be a cost-effective—possibly even cost saving—way to improve population health. Two previous economic evaluations conducted in London also showed that 20 mph zones can yield net benefits.27 28 Additionally, they suggest that the net benefits are larger in high-casualty areas relative to low-casualty areas. Our one-way sensitivity analysis implies the same; we find that an increase in the probability of traffic injury within a given NSZ would save more money. Very few public health interventions and only a handful of medical interventions actually save both money and lives.29 Moreover, when multiple sources of parameter uncertainty are included, there is only a 2.3% chance of observing an ICER as high as US$50 000/QALY gained. Even at this cost, NSZs fall well within the range of investments that Americans find acceptable.30 While the cost savings and gains in healthy life are small (about equal to one life saved every 2–3 years in NYC), the loss of healthy lives to preventable causes is a priority under NYC's Vision Zero initiative.31
While traffic-calming measures are broadly accepted in many European countries and Japan as necessary inconveniences to combat global warming, obesity, diabetes and injury prevention, they are quite difficult to implement in many other places.32 Even NSZs that are limited to residential neighbourhoods can be challenging to implement due to driver complaints. Our study highlights the need for larger public education campaigns about the health and economic threats posed by automobiles in a world that is both rapidly urbanising and has one billion (and counting) vehicles on the road.33
Our study suffers from a number of limitations. Foremost, given the lack of causal estimates specific to NSZs, we rely on estimates from a single interrupted time series analysis in NYC. However, there is a large literature, including causal studies, supporting various components of NSZs as impacting mean traffic speeds and crash rates. For example, speed humps and posted speed limits have been shown to reduce traffic speeds,32 34–38 and these are core components of the NSZs we study. Likewise, traffic speed is associated with crash risk and is causally linked to one’s risk of injury or death.10 32 Another consideration is that we did not model the complex systems dynamics of implementing NSZs. It is plausible that NSZs can produce virtuous or harmful cycles in which drivers either slowly adapt to slower speeds or lash out against them, thereby jeopardising other traffic calming measures. Our model was also limited by a lack of secondary outcomes data. Because it is difficult to estimate the psychological well-being, exercise impacts and pollution impacts associated with slower traffic, we included only the costs and benefits of injury reduction. On the other hand, while traffic calming has been shown to increase cycling and walking,39 40 it can also potentially increase driving time and therefore time sitting along with automobile pollution. Since we find that NSZs save money and lives, adding these additional savings would strengthen our already robust findings.
Our analysis suggests that NSZs save money and lives in NYC. This is encouraging news, especially considering the effects that slow-speed zones can have in terms of improving traffic safety. There may be additional benefits to these zones, such as increasing the comfort of residents and the safety of pedestrians and bicyclists. These possible cobenefits of slow zones should be explored in future research, and such variables could be included in future analyses. Road safety changes such as NSZs could have a huge positive impact on population health globally, as well as the environment and human settlements. Our analysis indicates that the health improvements of such interventions could come at a very reasonable cost, perhaps ranking among vaccines in terms of their cost-effectiveness.
### What is already known on this subject
• Neighbourhood slow zones (NSZs) with a speed limit of 20 mph have been implemented in New York City (NYC) to prevent traffic crashes.
• NYC claims that NSZs have reduced crashes with injuries within these areas by 14%.
### What this study adds
• We demonstrate that NSZs save money and lives.
• Road casualties did not increase in the areas adjacent to NSZs.
## Acknowledgments
The authors thank Dr Zafar Zafari and Dr Zohn Rosen for their input and contributions to the study.
## Footnotes
• Contributors BJ: substantial contributions to the conception or design of the work, analysis or interpretation of data for the work and drafting the work and revising it critically for important intellectual content; SK: substantial contributions to interpretation of data for the work and drafting the work; JH: substantial contributions to the conception or design of the work and revising it critically for important intellectual content; PAM: substantial contributions to the conception or design of the work and drafting the work and revising it critically for important intellectual content. All authors: final approval of the version to be published.
• Funding This study was funded by Global Research Analytics for Population Health at the Mailman School of Public Health, Columbia University. All the authors have approved the final manuscript for submission.
• Competing interests None declared.
• Provenance and peer review Not commissioned; externally peer reviewed.
http://www.physicsforums.com/showthread.php?t=487138
## Trig right triangle solving question How do you know if you should pick tan40 degrees
Trig right triangle solving question How do you know if you should pick tan, cot, etc for the last part?
if b = 2 , A =40 , find a,c, and B
I found all of them except for c , how do you get c
There's a part almost at the end that goes like tan40 degrees = a/2 and Cos 40 degrees = 2/c
Why do you know to pick Cos 40 , why not tan40 or cot of 40 for that last part there?
What "last part there"? You haven't included whatever you're talking about in your post. A guess about the use of cosine: To find an unknown with a trig function, you must have an equation that has the unknown in it along with other known information. If you want to find c and cosine involves c, use cosine. There is often more than one way to solve for an unknown. If c is the hypotenuse you can also use $$c^2 = a^2 + b^2$$ to solve for it after you find a.
Quote by land_of_ice Trig right triangle solving question How do you know if you should pick tan, cot, etc for the last part? if b = 2 , A =40 , find a,c, and B I found all of them except for c , how do you get c There's a part almost at the end that goes like tan40 degrees = a/2 and Cos 40 degrees = 2/c Why do you know to pick Cos 40 , why not tan40 or cot of 40 for that last part there?
Can you upload a picture that explains this?
My guess is, although Stephen Tashi is right, it's usually best to only use values given in a question if you can, just in case your values calculated in other parts of the question are wrong, which could cause confusion if you get odd answers later on.
The convention is that, in a triangle, sides labeled a, b, c are opposite angles labeled A, B, C respectively.
And, in right triangles, by convention, C is the right angle and c is the hypotenuse.
Back when you first learned about trig functions you were supposed to have learned something like:
sine= opposite side/hypotenuse,
cosine= near side/hypotenuse
tangent = opposite side/near side
Given angle A and side b, you think- a is the side opposite angle A and b is the other leg, the side "near" angle A so the appropriate formula is "tangent= opposite side/near side": tan(A)= a/b or tan(40)= a/2. Then solve for a.
To find c, given only angle A and side b, you think, c is the hypotenuse and b is the "near" side to A so the appropriate formula is "cosine= near side over hypotenuse":
cos(A)= b/c or cos(40)= 2/c. Then solve for c.
Of course, if, at this point, you have already solved for a, you could think "sine= opposite side over hypotenuse": sin(40)= a/c. Or, as Stephen Tashi said, you could use $c^2= a^2+ b^2$. However, I agree with sjb-2812 that it is better to use the initially given values- you will not propagate arthmetic errors.
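For anyone who wants to check the arithmetic, here is a small Python sketch (not from the thread itself) that follows the steps described above, with b = 2 and A = 40°, and cross-checks c with the Pythagorean theorem:

```python
import math

# Right triangle with the right angle at C, so c is the hypotenuse.
A = math.radians(40)      # angle A
b = 2.0                   # leg adjacent to A (given)

a = b * math.tan(A)       # tan(A) = opposite/near = a/b
c = b / math.cos(A)       # cos(A) = near/hypotenuse = b/c
B = 180 - 90 - 40         # remaining angle, in degrees

print(f"a = {a:.4f}, c = {c:.4f}, B = {B} degrees")
print("Pythagorean check:", math.isclose(c**2, a**2 + b**2))
```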
https://mathoverflow.net/questions/275029/hodge-realisation-of-mixed-tate-motives
# Hodge Realisation of Mixed Tate Motives
For a field $k$ which satisfies the Beilinson–Soulé vanishing conjecture, the following holds by Levine's paper,
https://www.uni-due.de/~bm0032/publ/TateMotives.pdf
There exists an abelian category of mixed Tate motives $\text{TM}_k$. If $\sigma:k\rightarrow \mathbb{C}$ is an embedding, we have a Hodge realisation functor, $$R_{\sigma}:\text{TM}_k \rightarrow \text{TH}_{\mathbb{Q}}$$ where $\text{TH}_{\mathbb{Q}}$ is the abelian subcategory of $\mathbb{Q}$-mixed Hodge structures generated by Tate objects $\mathbb{Q}$ and is closed under extensions. From Hodge conjecture and conservative conjecture, $R_{\sigma}$ is exact and fully faithful. But since $\text{TM}_k$ is so nice and simple (compared to the category of pure motives or even the conjectured category of mixed motives ), I am wondering whether it has been proved that $R_{\sigma}$ is exact and fully faithful. Any references?
• Proving exactness should be quite easy. On the other hand, I don't think that $R_{\sigma}$ can be fully faithful unless $\sigma$ is an isomorphism. In the latter case the answer to your question depends on the motivic cohomology of complex numbers (with rational coefficients) and probably not much is known about it; cf. mathoverflow.net/questions/269354/…. – Mikhail Bondarko Jul 9 '17 at 15:54
https://leanprover-community.github.io/archive/stream/217875-Is-there-code-for-X%3F/topic/Linear.20combinations.20stay.20in.20a.20submodule.html
## Stream: Is there code for X?
### Topic: Linear combinations stay in a submodule
#### Sophie Morel (Jul 18 2020 at 15:39):
Is there a lemma in the library that says that a linear combination of elements of a submodule is still in the submodule? I wanted to use that result when doing one of the lftcm2020 linear algebra exercises, but I only found submodule.sum_mem that works for a sum, and submodule.smul_mem that works for a scalar product. (There's a related result called finsupp.mem_span_iff_total that says that the span of a family is the set of its linear combinations, but you'd need to combine it with the fact that a submodule is the span of the family of all its elements. Also, it assumes that the base semiring is a ring for some reason, which I don't think is necessary.) Anyway, I ended up adding this in my local copy of module.lean:
lemma linear_combination_mem {t : finset ι} {f : ι → M} (r : ι → R) :
(∀ c ∈ t, f c ∈ p) → (∑ i in t, r i • f i) ∈ p :=
λ hyp, submodule.sum_mem _ (λ i hi, submodule.smul_mem _ _ (hyp i hi))
(Just after smul_mem_iff' on line 504. Here R is a semiring, M is a semimodule over R, ι is a type, p is a submodule of M.)
#### Anne Baanen (Jul 18 2020 at 16:00):
I don't believe this lemma existed yet - the model solution uses a combination of sum_mem and smul_mem as in your proof for linear_combination_mem.
#### Kevin Buzzard (Jul 18 2020 at 16:18):
@Sophie Morel this would be a helpful PR and if you golf it a bit by changing : (∀ c ∈ t, f c ∈ p) → to (hyp : ∀ c ∈ t, f c ∈ p) : and deleting λ hyp, ("move as much as possible to the left of the colon") then it looks mathlib-ready. Do you have push rights to non-master branches of mathlib? If you don't know, then you probably don't -- what is your github login?
#### Johan Commelin (Jul 18 2020 at 17:03):
@Sophie Morel Also, small tip: if you want syntax highlighting in your Zulip posts, then you can use #backticks like so
lean code goes here
#### Sophie Morel (Jul 18 2020 at 17:21):
@Johan Commelin Thanks !
#### Sophie Morel (Jul 18 2020 at 17:21):
@Kevin Buzzard, do you mean something like this :
lemma linear_combination_mem {t : finset ι} {f : ι → M} (r : ι → R)
(hyp : ∀ c ∈ t, f c ∈ p) : (∑ i in t, r i • f i) ∈ p :=
submodule.sum_mem _ (λ i hi, submodule.smul_mem _ _ (hyp i hi))
My github login is smorel394, and I am pretty confident that I don't have push rights to anything.
#### Kevin Buzzard (Jul 18 2020 at 17:22):
@maintainers could Sophie have push rights to non-master branches of mathlib?
#### Kevin Buzzard (Jul 18 2020 at 17:23):
Yes that looks great. A three line PR adding a missing lemma in an appropriate place is the best kind of PR. The maintainers have a better eye than I do however
#### Chris Hughes (Jul 18 2020 at 17:24):
Kevin Buzzard said:
@maintainers could Sophie have push rights to non-master branches of mathlib?
Done
#### Sophie Morel (Jul 18 2020 at 17:32):
@Kevin Buzzard @Chris Hughes Thanks! I'll try to figure out the git stuff after dinner. (Scott told me about this page https://leanprover-community.github.io/contribute/index.html, so I should be ok.)
http://www.maths.lth.se/matstat/staff/umberto/miscellanea.html
Hi, that's what the plot on the left says. You can reproduce it by means of the following MATLAB code (thanks to Mike Croucher's blog):
% 50x50 grid over [-3,3] x [-5,5]
[x y] = meshgrid( linspace(-3,3,50), linspace(-5,5,50) );
% surface to plot: two Gaussian bumps, one modulated by cos(4x)
z = exp(-x.^2-0.5*y.^2).*cos(4*x) + exp(-3*((x+0.5).^2+0.5*y.^2));
% clip |z| at 0.001 to flatten the peaks into plateaus
idx = ( abs(z)>0.001 );
z(idx) = 0.001 * sign(z(idx));
figure('renderer','opengl')
% convert the surface to patches so the faces can be colour-interpolated
patch(surf2patch(surf(x,y,z)), 'FaceColor','interp');
set(gca, 'Box','on', ...
'XColor',[.3 .3 .3], 'YColor',[.3 .3 .3], 'ZColor',[.3 .3 .3], 'FontSize',8)
title('$e^{-x^2 - \frac{y^2}{2}}\cos(4x) + e^{-3((x+0.5)^2+\frac{y^2}{2})}$', ...
'Interpreter','latex', 'FontSize',12)
view(35,65)                      % camera azimuth and elevation
colormap( [flipud(cool);cool] )  % mirrored colormap, symmetric about z = 0
World Community Grid
The World Community Grid (WCG) mission is to create the largest public computing grid benefiting humanity. I donate the time my computer is turned on, but is idle, to WCG's projects, that is to public and not-for-profit organizations to use in humanitarian research that might otherwise not be completed due to the high cost of the computer infrastructure required in the absence of a public grid. Anybody can join, and I suggest to do so. It takes seconds to register and download the required software. Click here to know more.
Below is a widget displaying in real time my WCG username, the time my pc has been devoted to WCG's projects computations and the related projects.
Comics
1. Ah, what would be life without Non Sequitur, "the Wiley Miller's wry look at the absurdities of everyday life"?!
2. PhD, Piled Higher & Deeper, a grad student comic strip by Jorge Cham. If you are into research you cannot miss this! Procrastinate with purpose and pride!!
(first of all some self-celebration)
SDE Toolbox: by myself, simulates and estimates stochastic differential equations. Warning: implemented inferential methods are rather outdated and the toolbox is no more developed.
Lightspeed Toolbox: this library by T. Minka provides highly optimized versions of primitive functions such as repmat.
MATLAB tips and tricks: a useful collection of articles on good programming practice & computational tricks.
MATLAB utilities: an impressive collection of functions by P.J. Acklam.
Statistics Toolbox: this toolbox by A. Holtsberg provides several functions for statistical computations.
Good programming practice: an instructive thread from the newsgroup comp.soft-sys.matlab.
Another link on good programming practice: avoiding the use of global variables.
Optimization 1: a collection of iterative optimization methods by C.T. Kelley.
Optimization 2: CONDOR by F. Vanden Berghen, a very nice direct optimizator using trust regions.
Optimization 3: SolvOpt by A. Kuntsevich and F. Kappel, an algorithm for smooth and non-smooth optimization problems.
Optimization 4: another collection of optimizers by H. Bruun Nielsen.
Manuals 1: "Numerical Computing with MATLAB" by the creator of MATLAB C. Moler. The individual chapters of this book are downloadable in pdf format.
Manuals 2: "MATLAB array manipulation tips and tricks" by P.J. Acklam. A fundamental reference to exploit the MATLAB calculus capabilities, write fast programs and vectorize code. Highly recommended. A companion toolbox implementing the ideas suggested in the manual is available.
Manuals 3: "Writing fast MATLAB code" by P. Getreuer. Another recommended reference.
Manuals 4: "MATLAB tips and tricks" by G. Peyré. A list of useful tips and tricks with concise pieces of code and comments.
Manuals 5: "Writing MATLAB/CMEX Code" by P. Getreuer. To combine the power of MATLAB and C.
Markov Chain Monte Carlo
The following description and the beautiful picture below are from Jürgen Brauer's website: the picture shows 3 MCMC chains used for searching the minimum of the 3D Rosenbrock optimizer test function (which is non-convex). Note that only the red points are used at the end. The other points are discarded, since they stem from the "burn-in" period, where we start at a random point in the 3D state space, which does not have to do something with the underlying probability density function.
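The figure itself is not reproduced here, but the idea it illustrates — a random-walk Metropolis sampler targeting a density built from the Rosenbrock function, with the burn-in portion of each chain discarded — can be sketched in a few lines of Python. This is only an illustrative sketch; the temperature, step size and chain length below are arbitrary choices, not those used for the original picture:

```python
import numpy as np

rng = np.random.default_rng(0)

def rosenbrock(p):
    # 2-D Rosenbrock function; its global minimum is at (1, 1)
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

def log_target(p, temperature=10.0):
    # treat exp(-f/T) as an unnormalised target density
    return -rosenbrock(p) / temperature

def metropolis(n_steps=20000, step=0.2, burn_in=5000):
    chain = np.empty((n_steps, 2))
    current = rng.uniform(-2, 2, size=2)       # random starting point
    current_lp = log_target(current)
    for i in range(n_steps):
        proposal = current + step * rng.standard_normal(2)
        proposal_lp = log_target(proposal)
        # accept with probability min(1, target(proposal)/target(current))
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        chain[i] = current
    return chain[burn_in:]                      # discard the burn-in samples

samples = metropolis()
print("sample mean:", samples.mean(axis=0))     # drifts towards the valley near (1, 1)
```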
https://www.physicsforums.com/threads/at-what-rate-is-this-function-increasing.395092/
At what rate is this function increasing?
1. Apr 13, 2010
IntegrateMe
When x = 16, the rate at which $$\sqrt x$$ is increasing is $$\frac {1}{k}$$ times the rate at which x is increasing. What is the value of k?
2. Apr 13, 2010
IntegrateMe
I thought it would be 4 but the answer is 8.
3. Apr 13, 2010
jrosen13
It's the ratio of the derivatives evaluated at x = 16; the derivative of $$\sqrt x$$ is
$$(2\sqrt{x})^{-1}$$
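Spelling out the arithmetic behind that reply:
$$\frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}}, \qquad \left.\frac{1}{2\sqrt{x}}\right|_{x=16} = \frac{1}{2\cdot 4} = \frac{1}{8},$$
so $$\sqrt x$$ increases at one-eighth the rate at which x increases, giving k = 8.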
http://www.mathynomial.com/problem/2070
# Problem #2070
2070 Betsy designed a flag using blue triangles, small white squares, and a red center square, as shown. Let $B$ be the total area of the blue triangles, $W$ the total area of the white squares, and $R$ the area of the red square. Which of the following is correct? $[asy] unitsize(3mm); fill((-4,-4)--(-4,4)--(4,4)--(4,-4)--cycle,blue); fill((-2,-2)--(-2,2)--(2,2)--(2,-2)--cycle,red); path onewhite=(-3,3)--(-2,4)--(-1,3)--(-2,2)--(-3,3)--(-1,3)--(0,4)--(1,3)--(0,2)--(-1,3)--(1,3)--(2,4)--(3,3)--(2,2)--(1,3)--cycle; path divider=(-2,2)--(-3,3)--cycle; fill(onewhite,white); fill(rotate(90)*onewhite,white); fill(rotate(180)*onewhite,white); fill(rotate(270)*onewhite,white); [/asy]$ $\text{(A)}\ B = W \qquad \text{(B)}\ W = R \qquad \text{(C)}\ B = R \qquad \text{(D)}\ 3B = 2R \qquad \text{(E)}\ 2R = W$ This problem is copyrighted by the American Mathematics Competitions.
https://brainly.in/question/306978
# If a cone of radius 10 cm is divided into two parts by drawing a plane through the midpoint of its axis, parallel to its base, find the ratio of the volumes of the two parts
by KPSINGH1
2016-03-19T17:59:23+05:30
Even I want the same answer
Yes, the reasoning works because it is a ratio: the 1/3 from the cone-volume formula appears in both volumes and cancels out. The answer comes out to 1:7.
thanks
wlcm
2016-03-19T18:35:23+05:30
Let the cone have radius r = 10 cm and height h.
Volume of the complete cone = πr²h/3
The cutting plane passes through the midpoint of the axis and is parallel to the base, so part (i), the small cone above the plane, is similar to the whole cone with scale factor 1/2:
r' = r/2 and h' = h/2
Now,
Volume of part (i) = π(r')²h'/3 = π(r/2)²(h/2)/3 = πr²h/24
Volume of part (ii) = total volume − volume of part (i)
= πr²h/3 − πr²h/24
= 7πr²h/24
Ratio of the volumes of the two parts = volume of part (i) ÷ volume of part (ii)
= (πr²h/24) ÷ (7πr²h/24)
= 1/7 = 1:7
Note that the 1/3 in the cone-volume formula cancels in the ratio, so the answer is 1:7.
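As a quick sanity check of that ratio (not part of the original answers), the arithmetic can be verified with Python's exact fractions; the common factor πr²h is left out since it cancels:

```python
from fractions import Fraction

# Work in units of pi * r^2 * h, which cancels in the ratio.
V_total   = Fraction(1, 3)                                    # (1/3) pi r^2 h
V_top     = Fraction(1, 3) * Fraction(1, 4) * Fraction(1, 2)  # (1/3) pi (r/2)^2 (h/2)
V_frustum = V_total - V_top

print(V_top, V_frustum, V_top / V_frustum)  # 1/24  7/24  1/7  ->  ratio 1:7
```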
http://www.gradesaver.com/textbooks/science/physics/conceptual-physics-12th-edition/chapter-35-reading-check-questions-comprehension-page-682/25
## Conceptual Physics (12th Edition)
A meterstick moving at $99.5\%$ the speed of light would appear to be one-tenth its original length.
Let $L_{o}$ represent the original, proper length, and L the measured length of the moving object. $$L = L_{o} \sqrt{1-\frac{ v^{2} } { c^{2} }}$$ $$L = L_{o} \sqrt{1-0.995^{2}} \approx \frac{1}{10} L_{o}$$ This is discussed on page 676.
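A quick numerical check (not part of the textbook solution):

```python
import math

v_over_c = 0.995
factor = math.sqrt(1 - v_over_c**2)  # contraction factor sqrt(1 - v^2/c^2)
print(factor)                         # about 0.0999, i.e. roughly one tenth
```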
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-2-solving-equations-2-2-solving-two-step-equations-lesson-check-page-91/10
Chapter 2 - Solving Equations - 2-2 Solving Two-Step Equations - Lesson Check - Page 91: 10
No, you cannot.
Work Step by Step
The 3 is being divided by 5. If we were to add 3 first, we would be acting like the three was not being divided by 5, which would mean the equation would have to be $\frac{d}{5}-3.$ However, because the equation is actually $\frac{d-3}{5}$, we have to multiply by 5 first so that the 3 is no longer being divided by 5.
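As a concrete illustration (the right-hand side here is invented for the example, since the exercise's actual equation is not quoted above), suppose the equation were $\frac{d-3}{5}=4$. Then
$$\frac{d-3}{5} = 4 \;\Rightarrow\; d-3 = 20 \;\Rightarrow\; d = 23,$$
whereas adding 3 first would amount to solving $\frac{d}{5}-3=4$, which gives the different (and wrong) value $d=35$.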
https://www.rosettacommons.org/docs/latest/scripting_documentation/RosettaScripts/Filters/filter_pages/ScoreTypeFilter
## ScoreType
Computes the energy of a particular score type for the entire pose and if that energy is lower than threshold, returns true. If no score_type is set, it filters on the entire scorefxn.
<ScoreType name="(score_type_filter &string)" scorefxn="(score12 &string)" score_type="(total_score &string)" threshold="(&float)"/>
https://www.nat-hazards-earth-syst-sci.net/18/3383/2018/
Natural Hazards and Earth System Sciences – an interactive open-access journal of the European Geosciences Union
Nat. Hazards Earth Syst. Sci., 18, 3383-3402, 2018
https://doi.org/10.5194/nhess-18-3383-2018
Research article | 21 Dec 2018
# Analysis of the risk associated with coastal flooding hazards: a new historical extreme storm surges dataset for Dunkirk, France
Yasser Hamdi1, Emmanuel Garnier2, Nathalie Giloy1, Claire-Marie Duluc1, and Vincent Rebour1
• 1Institute for Radiation Protection and Nuclear Safety, BP17, 92 262 Fontenay-aux-Roses CEDEX, France
• 2UMR 6249 CNRS Chrono-Environnement, University of Besançon, Besançon, France
Abstract
This paper aims to demonstrate the technical feasibility of a historical study devoted to French nuclear power plants (NPPs) which can be prone to extreme coastal flooding events. It has been shown in the literature that the use of historical information (HI) can significantly improve the probabilistic and statistical modeling of extreme events. There is a significant lack of historical data on coastal flooding (storms and storm surges) compared to river flooding events. To address this data scarcity and to improve the estimation of the risk associated with coastal flooding hazards, a dataset of historical storms and storm surges that hit the Nord-Pas-de-Calais region during the past five centuries was created from archival sources, examined and used in a frequency analysis (FA) in order to assess its impact on frequency estimations. This work on the Dunkirk site (representative of the Gravelines NPP) is a continuation of previous work performed on the La Rochelle site in France. Indeed, the frequency model (FM) used in the present paper had some success in the field of coastal hazards and it has been applied in previous studies to surge datasets to prevent coastal flooding in the La Rochelle region in France.
In a first step, only information collected from the literature (published reports, journal papers and PhD theses) is considered. Although this first historical dataset has extended the gauged record back in time to 1897, serious questions related to the exhaustiveness of the information and about the validity of the developed FM have remained unanswered. Additional qualitative and quantitative HI was extracted in a second step from many older archival sources. This work has led to the construction of storm and coastal flooding sheets summarizing key data on each identified event. The quality control and the cross-validation of the collected information, which have been carried out systematically, indicate that it is valid and complete in regard to extreme storms and storm surges. Most of the HI collected is in good agreement with other archival sources and documentary climate reconstructions. The probabilistic and statistical analysis of a dataset containing an exceptional observation considered as an outlier (i.e., the 1953 storm surge) is significantly improved when the additional HI collected in both literature and archives is used. As the historical data tend to be extreme, the right tail of the distribution has been reinforced and the 1953 “exceptional” event does not appear as an outlier any more. This new dataset provides a valuable source of information on storm surges for future characterization of coastal hazards.
1 Introduction
As the coastal zone of the Nord-Pas-de-Calais region in northern France is densely populated, coastal flooding represents a natural hazard threatening the costal populations and facilities in several areas along the shore. The Gravelines nuclear power plant (NPP) is one of those coastal facilities. It is located near the community of Gravelines in northern France, approximately 20 km from Dunkirk and Calais. The Gravelines NPP is the sixth largest nuclear power station in the world, the second largest in Europe and the largest in Western Europe.
Figure 1. Map of the location (a) and an old plan of the city of Dunkirk with the measurement point at the Bergues sluice (b).
Extreme weather conditions could induce strong surges that could cause coastal flooding. The 1953 North Sea flood was a major flood caused by a heavy storm that occurred on the night of Saturday, 31 January and morning of Sunday, 1 February. The floods struck many European countries and France was no exception. It hit particularly hard along the northern coast of France, from Dunkirk to the Belgian border. Indeed, it has been shown in an unpublished study that Dunkirk is fairly representative of the Gravelines NPP in terms of extreme sea levels. In addition, the harbor of Dunkirk is an important military base containing a lot of archives. The site of Dunkirk has therefore been selected as site of interest in the present paper (Fig. 1). An old map of Dunkirk city is presented in Fig. 1b (we shall return to this map at a later stage in this paper). It is a common belief today that the Dunkirk region is vulnerable and subject to several climate risks (e.g., Maspataud et al., 2013). More severe coastal flooding events, such as the November 2007 North Sea and the March 2008 Atlantic storms, could have had much more severe consequences especially if they had occurred at high tide (Maspataud et al., 2013; Idier et al., 2012). It is important for us to take into account the return periods of such events (especially in the current context of global climate change and projected sea-level rise) in order to manage and reduce coastal hazards, implement risk prevention policies and enhance and strengthen coastal defence against coastal flooding.
The storm surge frequency analysis (FA) represents a key step in the evaluation of the risk associated with coastal hazards. The frequency estimation of extreme events (induced by natural hazards) using probability functions has been extensively studied for more than a century (e.g., Gumbel, 1935; Chow, 1953; Dalrymple, 1960; Hosking and Wallis, 1986, 1993, 1997; Hamdi et al., 2014, 2015). We generally need to estimate the risk associated with an extreme event in a given return period. Most extreme value models are based on available at-site recorded observations only. A common problem in FA and estimation of the risk associated with extreme events is the estimation from a relatively short gauged record of the flood corresponding to 100–1000 year return periods. The problem is even more complicated when this short record contains an outlier (an observation much higher than any others in the dataset). This is the case with several sea-level time series in France and characterizes the Dunkirk surge time series as well.
The 1953 storm surge was considered as an outlier in our previous work (Hamdi et al., 2014) and in previous research (e.g., Bardet et al., 2011). Indeed, although the Gravelines NPP is designed to sustain very low probabilities of failure and despite the fact that no damage was reported at the French NPPs, the 1953 coastal flooding had shown that the extreme sea levels estimated with the current statistical approaches could be underestimated. It seems that the local FA is not really suitable for a relatively short dataset containing an outlier.
Indeed, a poor estimation of the distribution parameters may be related to the presence of an outlier in the sample (Hamdi et al., 2015), and must be properly addressed in the FA. One would expect that one or more additional extreme events in a long period (500 years for instance) would, if properly included in the frequency model (FM), improve the estimation of a quantile at the given high-return period. The use of other sources of information with more appropriate FMs is required in the frequency estimation of extremes. Worth noting is that this recommendation is not new and dates back several years. The value of using other sources of data in the FA of extreme events has been recognized by several authors (e.g., Hosking and Wallis, 1986; Stedinger and Cohn, 1986). Through other sources of information, we are able to refer here to events that occurred not only before the systematic period (gauging period) but also during gaps of the recorded time series. Water marks left by extreme floods, damage reports and newspapers are reliable sources of historical information (HI). It can also be found in the literature, archives, unpublished written records, etc. It may also arise from verbal communications from the general public. Paleoflood and dendrohydrology records (the analysis and application of tree-ring records) can be useful as well. A literature review on the use of HI in flood FAs with an inventory of methods for its modeling has been published by Ouarda et al. (1998). Attempts to evaluate the usefulness of HI for the frequency estimation of extreme events are numerous in the literature (e.g., Guo and Cunnane, 1991; Ouarda et al., 1998; Gaal et al., 2010; Payrastre et al., 2011; Hamdi, 2011; Hamdi et al., 2015). Hosking and Wallis (1986) have assessed the value of HI using simulated flood series and historical events generated from an extreme value distribution and quantiles are estimated by the maximum likelihood method with and without the historical event. The accuracy of the quantile estimates was then assessed and it was concluded that HI is of great value provided either that the flood frequency distribution has at least three unknown parameters or that gauged records are short. It was also stated that the inclusion of HI is unlikely to be useful in practice when a large number of sites are used in a regional context. Data reconstructed using HI are often imprecise, and we should consider their inaccuracy in the analysis (by using thresholds of perception, range and lower bound data, etc.). However, as it was shown in the literature, even with important uncertainty, the use of HI is a viable means of decreasing the influence of outliers by increasing their representativeness in the sample (Hosking and Wallis, 1986; Wang, 1990; Salas et al., 1994; Payrastre et al., 2011). A frequency estimation of extreme storm surges based on the use of HI has rarely been studied explicitly in the literature (Bulteau et al., 2015; Hamdi et al., 2015) despite its significant impact on social and economic activities and on NPPs' safety. Bulteau et al. (2015) have estimated extreme sea levels by applying a Bayesian model to the La Rochelle site in France. This same site was used as a case study by Hamdi et al. (2015) to characterize the coastal flooding hazard. The use of a skew surge series containing an outlier in local frequency estimation is limited in the literature as well. 
For convenience, we would like to recall here the definition of a skew surge: it is the difference between the maximum observed water level and the maximum predicted tidal level regardless of their timing during the tidal cycle (a tidal cycle contains one skew surge).
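As an illustration of that definition (this is only a sketch with made-up variable names, not the processing chain actually used for the Dunkirk series, which relies on harmonic predictions and a Demerliac filter), a skew surge series can be extracted from paired observed and predicted water-level records as follows:

```python
import numpy as np

def skew_surges(observed, predicted):
    """Skew surge per tidal cycle: max observed level minus max predicted level.

    `observed` and `predicted` are water levels sampled on the same regular
    time grid.  Cycles are delimited here by predicted low waters, so each
    window contains exactly one predicted high water.
    """
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    # indices of predicted low waters (local minima of the predicted tide)
    low = np.flatnonzero((p[1:-1] < p[:-2]) & (p[1:-1] <= p[2:])) + 1
    surges = []
    for a, b in zip(low[:-1], low[1:]):
        surges.append(o[a:b].max() - p[a:b].max())
    return np.array(surges)
```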
It is often possible to augment the storm surges record with those that occurred before and after gauging began. Before embarking on a thorough and exhaustive research of any HI related to coastal flooding that hit the area of interest, potential sources of historical coastal flooding data for the French coast (Atlantic and English Channel) and more specifically for the Charente-Maritime region were identified in the literature (e.g., Garnier and Surville, 2010). The HI collected has been very helpful in the estimation of extreme surges at La Rochelle, which was heavily affected by the storm Xynthia in 2010 that generated a water level that has so far been considered as an outlier (Hamdi et al., 2015). Indeed, these results for the La Rochelle site have encouraged us to build a more complete historical database covering all the extreme coastal flooding that occurred over the past five centuries on the French coast (Atlantic and English Channel). This database has been completed and is currently the subject of a working group involving several French organizations for maintenance. However, only the historical storm surges that hit the Nord-Pas-de-Calais region during this period are presented herein.
The main objective of the present work is the collection of HI on storms and storm surges that occurred in the last five centuries and to examine its impact on the frequency estimation of extreme storm surges. The paper is organized as follows: HI collected in the literature and its impact on the FA results is presented in Sects. 2 and 3. Section 4 presents the HI recovered from archival sources, the quality control thereof and its validation. In Sect. 5, the FM is applied using both literature and archival sources. The results are discussed in the same section before concluding and presenting some perspectives in Sect. 6.
2 Use of HI to improve the frequency estimation of extreme storm surges
The systematic storm surge series is obtained from the corrected observations and predicted tide levels. The tide gauge data is managed by the French Oceanographic Service (SHOM – Service Hydrographique et Océanographique de la Marine) and measurements are available since 1956. The R package TideHarmonics (Stephenson, 2017) is used to calculate the tidal predictions. In order to remove the effect of sea level rise, the initial mean sea level (obtained by tidal analysis) is corrected for each year by using an annual linear regression, before calculating the predictions. The regression is obtained by calculating daily means using a Demerliac filter (Simon, 2007). Monthly and annual means are calculated with respect to the Permanent Service for Mean Sea Level (PSMSL) criteria (Holgate et al., 2013). This method is inspired by the method used by SHOM for its analysis of high water levels during extreme events (SHOM, 2015). The available systematic surge dataset was obtained for the period from 1956 to 2015.
The effective design of coastal defense is dependent on how high a design quantile (1000-year storm surge for instance) will be. But this is always estimated with uncertainty and not precisely known. Indeed, any frequency estimation is given with a confidence interval (CI) of which the width depends mainly on the size of the sample used in the estimation. Some other sources of uncertainties (such as the use of trends related to climate change) can be considered in the frequency estimation (Katz et al., 2002). As mentioned in the introductory section, samples are often short and characterized by the presence of outliers. The CIs are rather large and in some cases more than 2 or 3 times (and even more) the value of the quantile. Using the upper limit of this CI would likely lead to a more expensive design of the defensive structure. One could just use the most likely estimate and neglect the CI but it is more interesting to consider the uncertainty as often estimated in frequency analyses. The width of the CI (i.e., inversely related to the sample size) can be reduced by increasing the sample size. In the present work, we focus on increasing the number of observations by adding information about storm surges induced by historical events. Additional storm surges can be subdivided into two groups:
1. HI during gaps in systematic records;
2. HI before the gauging period (can be found in the literature and/or collected by historians in archives).
3 HI during and before the gauging period
A historical research devoted to the French NPPs located on the Atlantic and English Channel coast was a genuine scientific challenge due to the time factor and the geographic dispersion of the nuclear sites. To be considered in the FA, a historical storm surge must be well documented; its date must be known and some information on its magnitude must be available. Mostly, available information concerns the impact and the societal disruption caused at the time of the event (Baart et al., 2011).
## 3.1 HI collected in the literature
As mentioned above, a common issue in frequency estimations is the presence of gaps within the datasets. Failure of the measuring devices and damage, mainly caused by natural hazards (storms, for instance), are often the origin of these gaps. Human errors, strikes, wars, etc., can also give rise to these gaps. Nevertheless, these gaps are themselves considered as dependent events. It is therefore necessary to ensure that the occurrence of the gaps and the observed variable are independent. Whatever the origin and characteristics of the missing period, the use of the full set of extreme storm surges that occurred during the gaps is strongly recommended to ensure the exhaustiveness of the information. This will make the estimates more robust and reduce associated uncertainties. Indeed, by delving into the literature and the web, one can obtain more information about this kind of events. Maspataud (2011) was able to collect sea-level measurements that were taken by regional maritime services during a storm event in the beginning of 1995, a time where the Dunkirk tide gauge was not working. This allowed the calculation of the skew surge, which was estimated by the author at 1.15 m on 2 January 1995. This storm surge is high enough to be considered as an extreme event. In fact, it was exceeded only twice during the systematic period (5 January 2012 and 6 December 2013).
Table 1. Date, location, water and surge levels (m) of collected storms within the Nord-Pas-de-Calais area.
1 No reference leveling given. 2 NGF: the French Ordnance Datum (Nivellement Général Français). 3 TAW: Tweede Algemeene Waterpassing (a reference level used in Belgium for water levels).
In the relatively short-term pre-gauging period, a literature review was conducted in order to get an overview of the storm events and associated surges that hit the Nord-Pas-de-Calais region in France during the last two centuries. The following documents and storm databases on local, regional or national scales are available.
• The “Plan de Prévention de Risques Littoraux (PPRL)”: refers to documents made by the French state on a communal scale, describing the risks a coastal zone is subject to, e.g., coastal flooding and erosion, and preventive measures in case of a hazard happening. To highlight the vulnerability of a zone, an inventory of storms and coastal inundation within the considered area is attached to this document.
• Deboudt (1997) and Maspataud (2011) describe the impact of storms on coastal areas for the study region.
• The VIMERS Project gives information on the evolutions of the Atlantic depressions that hit Brittany (DREAL Bretagne, 2017).
• The NIVEXT Project presents historical tide gauge data and the corresponding extreme water and surge levels for storm events (SHOM, 2015).
• Lamb (1991) provides synoptic reconstructions of the major storms that hit the British Isles from the 16th century up until today.
According to the literature, the storm of 31 January to 1 February 1953 caused the greatest surge and was the most damaging within the study area. This event has been well analyzed and documented (Sneyers, 1953; Rossiter, 1953; Gerritsen, 2005; Wolf and Flather, 2005): a depression formed over the northern Atlantic Ocean close to Iceland moving eastward over Scotland and then changing its direction to southeastwards over the North Sea, accompanied by strong northerly winds. An important surge was generated by this storm that, in conjunction with a high spring tide, resulted in particularly high sea levels. Around the southern parts of the North Sea the maximum surges exceeded 2.25 m, reaching 3.90 m at Harlingen, Netherlands. Large areas were flooded in Great Britain, northern parts of France, Belgium, the Netherlands and the German Bight, causing the death of more than 2000 people. Le Gorgeu and Guitonneau (1954) indicate that during this event, the water level exceeded the predicted water level at the eastern dyke of Dunkirk by more than 2.40 m (Table 1). Bardet et al. (2011) included a storm surge equal to 2.13 m in their regional frequency analysis. Both authors indicate the same observed water level, i.e., 7.90 m, but the predicted water level differs: while in 1954 the predicted water level was estimated at 5.50 m, the predictions were reevaluated to 5.77 m by the SHOM using the harmonic method (SHOM, 2016). A storm surge of 2.13 m is therefore used in the present study. Nevertheless, as also shown in Table 1, some other storms (1897, 1949 and 1995) that induced important storm surges and coastal floods occurred within the area of interest. Appendix A presents a description of these events which are quite well documented in the literature. In the Appendix, a description of some other historical events (of which the information provided did not allow the estimation of a storm surge value) is included as well.
## 3.2 HI collected in the archives
For the longer term, the HI collection process involves the exploration and consultation, in a context of a permanent multi-scalar approach, of HI which can be seen as a real documentary puzzle with a large number of historical sources and archives. Indeed, NPPs are generally located, for obvious safety reasons, in sparsely populated and isolated areas which is why these sites were subject to little anthropogenic influence in the past. However, this difficulty does not forfeit a historical perspective due to the rich documentary resources for studying an extreme event on different scales, ranging from the site itself to that of the region (Garnier, 2015, 2017, 2018). In addition, this may be an opportunity for researchers and a part of the solution because it also allows a risk assessment at ungauged sites.
First, it is important to distinguish between “direct data” (also referred to as “direct evidence”) and “indirect data” (also referred to as “proxy data”). The first refers to all information from the archives that describes an extreme event (a storm surge event for instance) that occurred at a known date. If their content is mostly instrumental, such as meteorological records presented in certain ordinary books or by the Paris Observatory (since the 17th century), sometimes accurate descriptions of extreme climatic events are likewise found. The “proxy data” rather indicate the influence of certain storm initiators and triggers such as wind and pressure. Concretely, they provide information indirectly on coastal flooding for example.
Private documents or “ego-documents” (accounts and ordinary books, private diaries, etc.) are used in many ways during 16th to 19th centuries. Authors recorded local facts, short news and latest events, and amongst them, weather incidents. These misidentified historical objects may contain a lot of valuable meteorological data. These private documents most often take the form of a register or a journal in which the authors record various events (economic, social and political) as well as weather information. Other authors use a more integrated approach to describe a weather event by combining observations of extreme events, instrumental information, phenology (impact on harvests), prices in local markets and possibly its social expression (scarcity, emotions, riots, etc.). All these misidentified sources are another opportunity for risk and climate historians to better understand the natural and coastal hazards (coastal flooding, earthquakes, tsunamis, landslides, etc.) of the past. Some of these private documents may be limited to weather tables completely disconnected from their socio-economic and climatic contexts. Most of the consulted documents and archives describe the history of coastal flooding in the area of interest. Indeed, the historical inventory identifies and describes damaging coastal flooding that occurred on the northern coast of France (Nord-Pas-de-Calais and Dunkirk) over the past five centuries. It presents a selection of remarkable coastal floods that occurred in this area and integrates not only old events but also those occurring after the gauging period began. The information is structured around storms and coastal flooding summary sheets. Accompanied and supported by a historian, several research and field missions were carried out and a large number of archival sources explored and, whenever possible, exploited. The historical analysis began with the consultation of the documentary information stored in the rich library of the communal archive of Dunkirk, Gravelines, Calais and Saint-Omer. The most consulted documents were obtained directly from the municipal archives because the Municipal Acts guarantee a chronological continuity at least from the end of the 16th century up to the French Revolution (1789). Very useful for spotting extreme events, they unfortunately provide poor instrumental information. We therefore also considered data from local chronicles of annals of the city of Dunkirk, as well as reports written by scientists or naturalists to describe tides at Calais, Gravelines, Dunkirk, Nieuwpoort and Ostend. Most of them contain old maps, technical reports, sketches or plans of dykes, sluices and docks designed by engineers of the 18th to 20th centuries and from which it may be possible to estimate water levels reached during extreme events. Bibliographical documents are mostly chronicles, annals and memoirs written after the disaster. Finally, for the more recent period, available local newspapers were consulted.
Multiplying the sources and cross-checking events allowed us to build a database of 73 events. We focused the research on the period between 1500 and 1950, since tide gauge observations are generally available after 1950. The first event took place in 1507 and the last in 1995. Depending on how an event is described in the archives, and as shown in Fig. 2a, the collated events were split into two groups: storm surge events are those for which the sources clearly mention flooding, while events for which only information about strong winds and gales is available are classified as storms. Except for the 19th century, there are many more storm surge events than storm events. All the collected events are summarized in Table 2.
## 3.3 Data quality control
First of all, it is appropriate to remember that the storm surge is the variable of interest in our historical research. It should, however, be stressed here that the total sea level, as it is a more operational information, is likely to be available most often. The conversion to the storm surge is performed afterwards by subtracting the predicted levels (which are calculated using the tide coefficients).
As mentioned earlier, archival documents are of different natures and qualities. We therefore decided to classify them by their degree of reliability according to a scale ranging between 1 and 4.
• Degree 1. Not very reliable historical source (it is impossible to indicate the exact documentary origin). It is particularly the case for HI found on the web.
• Degree 2. Information found in scientific books talking about storms without clearly mentioning the sources.
• Degree 3. Books, newspapers, reports and eyewitness statements citing historical events and clearly specifying its archival sources.
• Degree 4. The highest level of reliability. Information is taken from a primary source (e.g., an original archival report talking about a storm written by an engineer in the days following the event).
Although information classified as degree 1 is not very reliable, it still indicates that something happened on a given date and is therefore not immediately discarded; typically this type of document needs to be cross-checked against other documents. As shown in Fig. 2b, the classification of the data reveals good reliability of the collected information, as no sources are classified in category 1 and less than 10 % of the sources are in category 2. It is worth noting that, paradoxically, the older the information, the more reliable the archival document tends to be.
Figure 2. Distribution in time of the type of events in the database (a); quality of the data (b).
Some other data quality issues must also be dealt with, especially when using old data and merging it with recent data in the same inference: how should the uncertainties of old data be handled? How should the evolution of some physiographic parameters around the site of interest (bathymetry, topography, land cover, etc.) be dealt with? To what extent can we be sure that events which occurred hundreds of years ago are representative of the actual risk level?
All types of data indeed require quality control and, if necessary, need to be corrected and homogenized to ensure that they reflect real and natural variations of the studied phenomena rather than the influence of other factors. This is particularly the case for historical data that were taken under different site conditions and without modern standards and techniques (Brázdil et al., 2010). Finally, as mentioned in the introductory section, the use of old data significantly improves the frequency estimation of extreme events even if the old data are inaccurate. The objective of the present paper is therefore to collect the information and to quantify it in order to obtain approximate values of the variable of interest, without seeking accurate reconstructions.
Table 2. Details of the sources for the 1500–2015 Nord-Pas-de-Calais historical storms and storm surges.
MAS-O: Saint-Omer Municipal Archives – Historical collection of Jean Hendricq bourgeois of Saint Omer; MAD: Dunkirk Municipal Archives; MAC: Calais Municipal Archives – thematic sheets.
## 3.4 The historical surge dataset
The concern is that it is not always possible to estimate a storm surge or a sea level from the information collected for each event. We focus herein on the reconstruction of some events of the 18th century (1720–1767) for which certain HI makes it possible to estimate water levels. As depicted in Fig. 2a, out of the 73 events, 40 are identified as events causing coastal floods, but not all the sources contain quantitative data or at least some information about the water level reached. We selected herein the events with the most information about the characteristics of the event (the water level reached, wind speed and direction, and in some cases measured information). Table 3 shows a synthesis of the six events which we analyze in more detail, giving the tide coefficient (obtained from the SHOM website), some wind characteristics and the water levels reached in Dunkirk and other cities. The tide coefficient, introduced by Laplace in the 19th century and commonly used in France since then, is the ratio of the semi-diurnal tidal amplitude to the mean equinoctial spring tide amplitude. Today, the coefficient 100 is attributed by definition to the semi-diurnal amplitude of the equinoctial spring tides at Brest. The coefficient therefore ranges between 20 and 120, i.e., from the lowest to the highest astronomical tides. Calculated for each tide at Brest harbor, it is applied to the entire metropolitan French Atlantic and Channel coastal zone (Simon, 2007). As with the short-term HI, a description of these events, which are quite well documented in the literature, is presented in Appendix B, together with a description of some other historical events (for which the available information did not allow a storm surge value to be estimated). Some other HI about extreme storms occurring in the period 1767–1897 was collected in the archives and identified as referring to events causing coastal floods; a description of these events is also presented in Appendix B. To be able to reduce the CI of the high return levels (RLs) (the 1000-year one, for instance), the time window (the historical period) alone is insufficient if the observations or estimates of the high surges are unknown: both a fixed time window and the magnitudes of the available high storm surges are required to improve the estimates of the probabilities of failure. The exhaustiveness assumption of the HI over this time window would therefore be too crude and would make no sense; the historical period 1770–1897 was consequently eliminated from the inference. Fortunately, such discontinuities in the historical period can be managed in the FM (Hamdi et al., 2015). Two non-successive time windows, 1720–1770 and 1897–2015, will therefore be used as historical periods in the inference.
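As a rough illustration of the tide coefficient definition, the short sketch below implements the standard SHOM/Laplace formulation as commonly stated; the mean level N0 and the unit of height U at Brest are treated as inputs, since their numerical values are not given in the text and are therefore assumptions here.

```python
# Sketch of the French tide coefficient (standard SHOM/Laplace formulation):
# coefficient = 100 * (high-water height - mean level at Brest) / unit of height at Brest.
# N0 and U are assumptions (inputs), not values taken from the paper.
def tide_coefficient(high_water_m: float, mean_level_n0_m: float, unit_of_height_u_m: float) -> float:
    return 100.0 * (high_water_m - mean_level_n0_m) / unit_of_height_u_m

# By construction, the coefficient is close to 100 for equinoctial spring tides at Brest
# and typically ranges between about 20 and 120 (lowest/highest astronomical tides).
```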
Table 3. HI about water levels in Dunkirk and other cities (unless otherwise stated, heights are given in French royal feet; 1 royal foot corresponds to 0.325 m).
a Source: SHOM. b Reconstructed water levels. c Foot of Brussels (1 ft = 0.273 m).
The extreme storm surges that occurred during the 1720–1767 time window are then analyzed, and the development of a methodology to estimate the surges induced by the events of the late 18th and the 19th century is ongoing. Table 3 shows the estimated water levels (for Dunkirk, Gravelines, Calais, Ostend and Nieuwpoort) compared to the associated Mean High-Water Springs (MHWS), which is the highest level reached by spring tides (on average over a period of time, often equal to 19 years). De Fourcroy de Ramecourt (1780) presented the water levels in the “royal foot” of Paris, where 1 foot corresponds to 0.325 m and is divided into 12 inches (1 inch = 0.027 m), except for the Ostend levels, which are given in the Flemish–Austrian foot (corresponding to 0.272 m and divided into 11 inches). As a first approach, the height of the surge above the MHWS level was estimated, which has the advantage that the local reference level does not need to be transposed into the French leveling system; moreover, as the historic sea level is considered, there is no need to account for the sea level rise due to climate change. De Fourcroy de Ramecourt (1780) gave water levels for five cities in their respective leveling systems: in Calais, zero corresponds to a fixed point on the Citadelle sluice; in Gravelines, zero corresponds to a fixed point on the sluice of the river Aa. For Dunkirk, the “likely low tide of mean spring tides” is taken as the zero point and is marked on the docks of the Bergues sluice; we subsequently refer to this zero as the “Bergues Zero”. The location of the measuring point at the Bergues sluice is shown in Fig. 1b on an old map of the city of Dunkirk. The difference between the observed water levels and the MHWS is the surge above MHWS. The three levels are about the same height, ranging from 1.46 to 1.62 m. We calculated the surge above MHWS for Calais, Gravelines, Nieuwpoort and Ostend; they are shown in the second-to-last column of Table 3. It is interesting to note that, for the 1763 and 1767 events, the highest levels were reconstructed in Ostend and the lowest in Calais.
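For illustration, the unit conversions and the surge above MHWS described here reduce to a few lines. The helper names below are hypothetical; only the conversion factors quoted in the text are used.

```python
# Sketch (hypothetical helpers, not from the paper): converting historical levels given
# in old length units to metres and computing the surge above MHWS.
ROYAL_FOOT_M = 0.325              # French "pied du roi", divided into 12 inches
ROYAL_INCH_M = ROYAL_FOOT_M / 12  # ~0.027 m, as quoted in the text
FLEMISH_AUSTRIAN_FOOT_M = 0.272   # used for the Ostend and Nieuwpoort levels

def royal_feet_to_m(feet: float, inches: float = 0.0) -> float:
    return feet * ROYAL_FOOT_M + inches * ROYAL_INCH_M

def surge_above_mhws(observed_level_m: float, mhws_m: float) -> float:
    """Height of the observed water level above Mean High-Water Springs."""
    return observed_level_m - mhws_m
```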
For the sake of convenience and for more precision, we needed to transform the surges above MHWS presented in the second-to-last column of Table 3 into skew surges. This refinement required the development of a tide-coefficient-based methodology. Indeed, the tide coefficient for each storm event indicates whether the surge above MHWS is over- or underestimated or approximately correct. As this coefficient is calculated for the Brest site and applied to the whole coastal zone, a table showing the expected mean levels in Dunkirk for each tide coefficient was established: a given tide coefficient estimated at Brest can correspond to different high water levels at Dunkirk. For this study, it was assumed that the historic MHWS corresponds to the tide coefficient 95. In the developed methodology, all the high tides of 2016 are grouped by tide coefficient and the water levels for each tide coefficient are averaged. The difference ΔWL between this averaged level and the water level corresponding to the tide coefficient 95 (the actual MHWS) is then calculated and added to (or subtracted from) the historic surge above MHWS. Where two surges are available for an event, the mean of the two values is taken. Results for the Dunkirk surges are shown in the last column of Table 4.
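The correction described above can be sketched as follows. The data structures are assumed (the 2016 tide predictions are not reproduced here), and the sign convention is inferred from "skew surge = observed minus predicted level", so this is an illustration of the idea rather than the authors' exact implementation.

```python
# Sketch of the tide-coefficient correction described above (assumed data structures).
from collections import defaultdict
from statistics import mean

def delta_wl_table(high_tides_2016, reference_coefficient=95):
    """high_tides_2016: iterable of (tide_coefficient, predicted_high_water_m) pairs.
    Returns {coefficient: mean level - mean level at coefficient 95 (the MHWS proxy)}."""
    by_coeff = defaultdict(list)
    for coeff, level in high_tides_2016:
        by_coeff[coeff].append(level)
    mean_levels = {c: mean(levels) for c, levels in by_coeff.items()}
    ref_level = mean_levels[reference_coefficient]
    return {c: lvl - ref_level for c, lvl in mean_levels.items()}

def skew_surge_from_mhws_surge(surge_above_mhws_m, event_coefficient, delta_wl):
    # Predicted level ~ MHWS + ΔWL(coefficient), so the ΔWL is removed from the
    # surge above MHWS (equivalently added when ΔWL is negative).
    return surge_above_mhws_m - delta_wl[event_coefficient]
```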
Table 4. Historical skew surges induced by the 1720–1767 events. Heights are given in metres.
Figure 3. Two examples of HI as presented in the archives. Top panel: the 1767 extreme storm surge event in Dunkirk (De Fourcroy de Ramecourt, 1780); bottom panel: a profile of the Dunkirk harbor dock from the municipal archives of Dunkirk (ref. 1Fi42, 1740). Translation of the text highlighted in red is as follows: surcote de Pleine mer: skew surge; La marée extraordinairement haute, du 2 janvier de cette année: the tide was extraordinarily high this year on 2 January; Point fixe de l'Echelle de Dunkerque: fixed point of the Dunkirk scale; Niveau probable de la basse-Mer moyenne: probable level of the mean low water.
In addition to the water levels reached during events and in specific years, other types of HI (lower bounds and ranges) can be collected. For instance, De Fourcroy de Ramecourt (1780) stated that the highest water level measured during the period 1720–1767 was the one induced by the extraordinary storm of 1767. Paradoxical though it may seem at first sight, the skew surge caused by the 1763 storm is greater than that of 1767. A plausible explanation is that the 1767 event occurred when the tide was higher than in 1763. Figure 3 shows two examples of HI collected in the archives.
For the Dunkirk series, it is interesting to see that it is easier to estimate the storm surges induced by events of the 18th century, as the water levels were either measured or reconstructed only a few years after the events took place. During the research for his thesis, Pouvreau (2008) started an inventory of the existing tide gauge data available in different archive services in France. According to him, the first observations of the sea level in Dunkirk were made in 1701 and 1702, when time and height were reported. Observations were also made in 1802, and another observation campaign was held in 1835. The first longer series dates from 1865 to 1875. For the 20th century, only sparse data are available for the first half of the century. Pouvreau (2008) only listed the data found in the archives of the National Geographic Institute (Institut Géographique National, IGN), the Marine Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM) and the Historical Service of Defense (Service Historique de la Défense, SHD). During the present study, we found evidence that sea levels were measured at the Bergues sluice during the 18th century and that various hydrographic campaigns were carried out during the 19th century (De Fourcroy de Ramecourt, 1780). This research and initial analysis of the historical data show the potential of the data collected, as we were able to quantify some historical skew surges, but they also show how difficult and time-consuming the transformation of descriptive information into skew surge values is, and that a more detailed analysis will be necessary to estimate the other historical surges. It was concluded that all the historic surges appear to be at least roughly as high as the highest systematic surge. In response to the specific question “what could impact the variable of interest throughout the whole historical period?”, old and recent data were then compared: for example, the reconstructed skew surges were compared to the systematic ones. The skew surge heights obtained from the tide gauge data, the quantified surges from the literature and the reconstructed values from this study were also compared, under the hypothesis that the water levels measured at the tide gauge and at the different locations of Dunkirk harbor are comparable. At this point we are not able to draw conclusions on the evolution of the tides throughout the centuries. Historic tide gauge data from cities in the north of France are currently being digitized and reconstructed by SHOM and the University of the Côte d'Opale (Latapy et al., 2017). Further, it is worth noting that the current tide gauge is situated at the entrance of the harbor. The predicted water levels may differ within the inner harbor area, where the reconstructed surges were estimated. Hydrodynamic modeling could help estimate the difference between the water levels at the entrance of the harbor and those in the inner harbor area (Bulteau et al., 2015).
4 Frequency estimation of extreme storm surges using HI
In this work, we use the method for incorporating HI developed by Hamdi et al. (2015). The proposed FM (POTH) is based on the Peaks-Over-Threshold model using HI. The POTH method uses two types of HI: over-threshold supplementary (OTS) data and historical maxima (HMax) data, both structured in historical periods. Both kinds of historical data can only be complementary to the main systematic sample. The POTH FM was applied to the Dunkirk site to assess the value of historical data in characterizing the coastal flooding hazard and, more particularly, in improving the frequency estimation of extreme storm surges.
## 4.1 Settings of the POT frequency model
To prepare the systematic POT sample and exploit all the available data separated by gaps, the surges recorded since 1956 were concatenated to form one systematic series. The choice of a reasonable threshold for the POT frequency model, however, involves some subjectivity. Indeed, a threshold that is too low can bias the estimation by including observations that are not truly extreme, which violates the principles of extreme value theory; on the other hand, a threshold that is too high reduces the sample of extreme data. Coles (2001) has shown that stability plots constitute a graphical tool for selecting the optimal value of the threshold. The stability plots are the estimates of the GPD parameters and the mean residual life plot as a function of the threshold when using the POT approach. It was concluded that a POT threshold equal to 0.75 m (corresponding to a rate of 1.4 events per year) is an adequate choice. The POT sample, with an effective duration $w_\mathrm{s}$ of 46.5 years (from 1956 to 2015), is represented by the grey bars in Fig. 4a, c and e. As homogeneity, stationarity and randomness of the time series are prerequisites in a FA (Rao and Hamed, 2000), non-parametric tests were applied: the Wilcoxon test for homogeneity (Wilcoxon, 1945), the Mann–Kendall test for stationarity (Mann, 1945) and the Wald–Wolfowitz test for randomness (Wald and Wolfowitz, 1943). These tests were passed by the Dunkirk station at the 5 % significance level.
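As an illustration of the threshold-selection diagnostics mentioned above, a mean residual life plot can be sketched as follows. This is illustrative only: the `surges` series below is a random placeholder, not the Dunkirk data.

```python
# Illustrative mean residual life plot for threshold selection (a standard POT diagnostic).
import numpy as np
import matplotlib.pyplot as plt

def mean_residual_life(surges, thresholds):
    """For each threshold u, the mean exceedance E[X - u | X > u]."""
    surges = np.asarray(surges)
    return [np.mean(surges[surges > u] - u) for u in thresholds]

surges = np.random.default_rng(0).gumbel(0.3, 0.2, 2000)  # placeholder data
thresholds = np.linspace(0.2, 1.2, 50)
plt.plot(thresholds, mean_residual_life(surges, thresholds))
plt.xlabel("threshold u (m)")
plt.ylabel("mean excess (m)")
plt.show()  # an approximately linear region suggests a workable threshold
```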
Figure 4. The GPD fitted to the POTH surges in Dunkirk: (a, b) with the 1953 event as historical data; (c, d) with historical data from the literature; and (e, f) with historical data from the literature and archives. The 1995 event is considered as systematic.
## 4.2 The POTH frequency model
The HI is used in the present paper as HMax data. An HMax period corresponds to a time interval of known duration $w_{H_{\mathrm{Max}}}$ during which the $n_k$ largest historical values are available. These periods are assumed to be potentially disjoint from the systematic period. The distribution of the HMax exceedances is assumed to be a Generalized Pareto distribution (GPD). The observed distribution functions of the HMax and systematic data are constructed in the same way with the Weibull rule. To estimate the distribution parameters with the maximum likelihood technique in the POTH model, let us consider a set of POT systematic observations $X_{\mathrm{sys},i}$ and a set of historical HMax surges $X_{H_{\mathrm{Max}},i}$, and assume that the systematic and historical storm surges share the density function $f_X(.)$. Under the assumption that the surges are independent and identically distributed, the global likelihood function of the whole data sample is any function $L(G|\theta)$ proportional to the joint probability density function $f_X(.)$ evaluated at the observed sample; it is the product of the likelihood functions of the particular types of events and information. The global log-likelihood can be expressed as
$\ell(G|\theta) = \overbrace{\ell(X_{\mathrm{sys},i}|\theta)}^{\text{systematic data}} + \overbrace{\ell(X_{H_{\mathrm{Max}},i}|\theta)}^{H_{\mathrm{Max}}\ \text{data}}. \qquad \text{(1)}$
Let us consider a set of $n$ POT systematic observations $X_i$ above a selected threshold $u_\mathrm{s}$, and let $w_\mathrm{s}$ be the total duration. For a homogeneous Poisson process with rate $\lambda$, the log-likelihood $\ell(X_{\mathrm{sys},i}|\theta)$ is
$\ell(X_{\mathrm{sys},i}|\theta) = n\log(\lambda w_{\mathrm{s}}) - \log(n!) - \lambda w_{\mathrm{s}} + \sum_{i=1}^{n}\log f(X_{\mathrm{sys},i},\theta). \qquad \text{(2)}$
For the HMax data, it takes the form
$\ell(X_{H_{\mathrm{Max}},i}|\theta) = n_k\log(\lambda w_{H_{\mathrm{Max}}}) - \lambda w_{H_{\mathrm{Max}}}\left[1-F(X_k,\theta)\right] + \sum_{i=1}^{n_k}\log f(X_{H_{\mathrm{Max}},i},\theta). \qquad \text{(3)}$
The reader is referred to Hamdi et al. (2015) for more details about each term of these expressions.
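To make Eqs. (1)–(3) concrete, the sketch below evaluates the global log-likelihood for a GPD with shape ξ and scale σ above the threshold u, with a Poisson occurrence rate λ. It is an illustrative re-implementation under these assumptions, not the Renext code used by the authors.

```python
# Illustrative evaluation of the global log-likelihood of Eqs. (1)-(3).
import numpy as np
from scipy.stats import genpareto
from scipy.special import gammaln

def log_lik_systematic(x_sys, lam, xi, sigma, u, w_s):
    n = len(x_sys)
    return (n * np.log(lam * w_s) - gammaln(n + 1) - lam * w_s
            + np.sum(genpareto.logpdf(x_sys, c=xi, loc=u, scale=sigma)))

def log_lik_hmax(x_hmax, lam, xi, sigma, u, w_h):
    """x_hmax: the n_k largest surges known over a historical period of duration w_h."""
    n_k = len(x_hmax)
    x_k = np.min(x_hmax)  # the smallest of the n_k largest values
    return (n_k * np.log(lam * w_h)
            - lam * w_h * genpareto.sf(x_k, c=xi, loc=u, scale=sigma)  # 1 - F(X_k)
            + np.sum(genpareto.logpdf(x_hmax, c=xi, loc=u, scale=sigma)))

def log_lik_global(x_sys, hmax_periods, lam, xi, sigma, u, w_s):
    """hmax_periods: list of (x_hmax_array, w_h) pairs, one per historical window."""
    ll = log_lik_systematic(np.asarray(x_sys), lam, xi, sigma, u, w_s)
    for x_hmax, w_h in hmax_periods:
        ll += log_lik_hmax(np.asarray(x_hmax), lam, xi, sigma, u, w_h)
    return ll
```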
## 4.3 Settings of the frequency model with HI (POTH)
An important question arises regarding the exhaustiveness of the HI collected in a well-defined time window (called herein the historical period). In order to properly perform the FA, this criterion must be fulfilled. Indeed, we have good evidence to believe that, other than the 1995 storm surge, the surges induced by the 1897, 1949 and 1953 storms are the biggest of the period 1897–2015. The POTH FM was first applied with a single historical datum, that of 1953, represented by the red bar in Fig. 4a. It is not complicated to demonstrate that this event is undoubtedly an outlier. Indeed, in order to detect outliers, the Grubbs–Beck test was used (Grubbs and Beck, 1972). As mentioned in the previous section, some historical extreme events experienced by the city of Dunkirk are reported in the literature. Only this information (including the 1953 event) is considered in this first part of the case study.
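For illustration, a generic one-sided Grubbs statistic for a single high outlier can be computed as follows. This is a sketch only: the Grubbs–Beck variant cited in the paper works on a similar principle but uses its own critical values, so the critical-value formula below is an assumption rather than the authors' exact test.

```python
# Generic one-sided Grubbs test for a single high outlier (illustration only).
import numpy as np
from scipy.stats import t

def grubbs_high_outlier(sample, alpha=0.05):
    x = np.asarray(sample, dtype=float)
    n = len(x)
    g = (x.max() - x.mean()) / x.std(ddof=1)             # test statistic
    t_crit = t.ppf(1 - alpha / n, n - 2)                 # one-sided critical quantile
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return g, g_crit, g > g_crit                         # True -> largest value flagged
```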
Table 5. The HI dataset (from literature and archives). Surges are given in m; $w_{H_{\mathrm{Max}}}$ and $w_\mathrm{s}$ are given in years.
Table 6. The T-year quantiles and the relative widths of their 70 % CIs (all durations are given in years).
Usually, HI is considered in FA models only for pre-gauging data; little or no attention has been given to non-recorded extreme events that occurred during missing periods of the systematic record. As mentioned earlier in this paper, the sea level measurement for the 1995 storm was missed, and a skew surge value (1.15 m) was reconstructed from information found in the literature (Maspataud, 2011). As this event is of ordinary intensity and took place very recently, it is considered as systematic data, even though this type of data can also be handled by the POTH FM as HI (Hamdi et al., 2015). The HI collected from both the literature and the archives, together with some model settings, is summarized in Table 5, and the POTH sample with a historical period of 72.51 years is presented in Fig. 4b. Parameters characterizing datasets that include both systematic data and HI were introduced in Hamdi et al. (2015). The HI is used herein as HMax data that complements the systematic record (with an effective duration $D_{\mathrm{eff}}$ equal to $w_\mathrm{s}$) over one historical period (1897–2015) of known duration $w_{\mathrm{h}}=w_{H_{\mathrm{Max}}}=2015-1897+1-D_{\mathrm{eff}}$ ($w_\mathrm{h}=72.51$ years), with three historical data points ($n_k=3$). Other features of the POTH FM have also been used. A parametric method (based on maximum likelihood) for estimating the Generalized Pareto Distribution (GPD) parameters from both systematic and historical data has been developed and used. The maximum likelihood method was selected for its statistical properties, especially for large series, and for the ease with which any additional information (i.e., the HI) can be incorporated. In addition, the plotting position exceedance formula based on both systematic observations and HI (Hirsch, 1987; Hirsch and Stedinger, 1987; Guo, 1990) is used to calculate the observed probabilities and has been incorporated into the POTH FM considered herein. For systematic data, several formulas can be used to calculate the observed probabilities; based on several studies (e.g., Alam and Matin, 2005; Makkonen, 2006), the Weibull plotting position rule was used herein ($p_{\mathrm{emp}}=i/(n+1)$). The reader is referred to Hamdi et al. (2015) for more theoretical details on the POTH model and on the Renext package used to perform all the estimations and fits.
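For the systematic part, the Weibull rule quoted above is straightforward to implement; the short sketch below computes the empirical probabilities (it does not cover the more involved Hirsch/Stedinger adjustment for historical data, for which the cited papers should be consulted).

```python
# Weibull plotting positions for the systematic sample.
import numpy as np

def weibull_plotting_positions(sample):
    """Empirical non-exceedance probabilities p_i = i / (n + 1) for the ranked data."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    return x, np.arange(1, n + 1) / (n + 1)
```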
5 Results and discussion
We report herein the results of the FA applied to the Dunkirk tide gauge. As for any sensitive facility, high return levels (RLs) (the 100-, 500- and 1000-year extreme surges, for instance) are needed for the safety of NPPs. The results are presented in the form of probability plots in the right panels of Fig. 4. The theoretical distribution function is represented by the solid line in this figure, while the dashed lines represent the limits of the 70 % CIs. The HI is depicted by the empty red circles, while the full black circles represent the systematic sample. The results (estimates of the desired RLs and uncertainty parameters) are also summarized in Table 6. Fitting the GPD to the sample of extreme POTH storm surges yields the relative widths ΔCIST of the 70 % CIs (the variances of the RL estimates are calculated with the delta method).
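As a rough illustration of how a T-year return level and its delta-method uncertainty can be obtained from a fitted POT/GPD model, a sketch is given below. The parameter values and their covariance matrix would come from the maximum-likelihood fit, which is not shown here; this is not the Renext code used in the paper.

```python
# Sketch of a T-year return level for a POT/GPD model and its delta-method standard deviation.
import numpy as np

def return_level(T, lam, xi, sigma, u):
    """x_T = u + (sigma/xi) * ((lam*T)**xi - 1); for xi ~ 0, u + sigma*log(lam*T)."""
    if abs(xi) < 1e-8:
        return u + sigma * np.log(lam * T)
    return u + sigma / xi * ((lam * T) ** xi - 1.0)

def delta_method_sd(T, lam, xi, sigma, u, cov, eps=1e-6):
    """Std. dev. of x_T from a numerical gradient wrt (lam, xi, sigma) and their covariance."""
    theta = np.array([lam, xi, sigma], dtype=float)
    grad = np.zeros(3)
    for j in range(3):
        step = np.zeros(3)
        step[j] = eps
        grad[j] = (return_level(T, *(theta + step), u) - return_level(T, *(theta - step), u)) / (2 * eps)
    return float(np.sqrt(grad @ cov @ grad))
```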
The FA was first performed considering the systematic surges and the 1953 storm surge as historical data. It can be seen that the fit of the POTH sample including the 1953 historical event (with $w_\mathrm{h}$ equal to 16.5 years), presented in Fig. 4d and called hereafter the initial fitting, is poor at the right tail and, more specifically, at the largest storm surge (the historical value of 2.13 m observed in 1953), which has a much lower observed return period than its estimated one. The estimates of the RLs of interest and the uncertainty parameters (the relative width ΔCIST of the 70 % CIs) are presented in columns 2–3 of Table 6. These initial findings are an important benchmark, as we follow the evolution of the results to evaluate the impact of additional HI. The 100-, 500- and 1000-year quantiles given by the POTH FM with the 1897, 1949 and 1953 historical storm surges included are about 3 %–6 % higher than those obtained with the initial POTH FM. This result was expected, as the additional historical surges are higher than all the systematic ones. The relative widths of the CIs are about 20 %–25 % narrower.
Unlike the 1897 historical event, the 1949 and 1953 events have a lower observed return period than their estimated ones. A plausible explanation for this result is that the body of the distribution is better fitted than the right tail, and this is a shortcoming directly related to the exhaustiveness assumption used in the POTH FM. Indeed, as stated in Hamdi et al. (2015) and as mentioned above, a major limitation of the developed FM arises when the assumption of exhaustiveness of the information is not satisfied. This is obviously worrying for us because the POTH FM is based on this assumption. Overall, using additional data in the local FM improved the variances associated with the estimation of the GPD parameters, but it did not yield more robust estimates or a better fit (particularly at the right tail, the high RLs being very sensitive to the historical values) as long as the exhaustiveness assumption remains too strong. This first conclusion is likewise graphically supported by the CI plots shown in Fig. 4e. Nevertheless, as the impact of the historical data becomes more significant, there is a pressing need to carry out a deeper investigation of all the historical events that occurred in the region of interest (Nord-Pas-de-Calais) over the longest possible historical period. In order to obtain robust estimates and reduced uncertainties, it is absolutely necessary that the collected information be as complete as possible.
The robustness of the POTH FM is one of the more significant issues we must deal with. The main focus of this discussion is the assessment of the impact of the additional HI (collected from the archives) on the frequency estimates of the high RLs. The same FM was run again, but with the long-term additional HI (collected in the archives) and different settings (Table 5). The results of the POTH FM using HI from both the literature and the archives (called hereafter the full FM) are likewise summarized in the last two columns of Table 6. The results are also presented in the form of a probability plot (Fig. 4f). Two subplots of Fig. 4 relate to this FA of the Dunkirk extreme surges. The left side (Fig. 4c) shows the collected data: the systematic surges are represented by the grey bars, the historical surges extracted from the literature by red bars, and those extracted from the archives (estimated and corrected with regard to the tide coefficients) by green bars. We can also see the two time windows (the blue background areas in the graph), 1720–1770 and 1897–2015, used in the POTH FM as historical periods. The right side shows the results of the full FM. As mentioned earlier in this paper, to build the full POTH FM, six historical storm surges distributed equally ($n_k=3$ per window) over two non-successive time windows, 1720–1770 ($w_{H_{\mathrm{Max}},1}=50$ years) and 1897–2015 ($w_{H_{\mathrm{Max}},2}=72.5$ years, with $w_\mathrm{s}=46.5$ years), are used as historical data. In the plotting positions, the archival historical surges are represented by green squares, while those found in the literature are depicted by red circles. The fit presented in Fig. 4f shows a good adequacy between the plotting positions and the theoretical distribution function (calculated probabilities of failure). Indeed, all the points of the observed distribution are not only inside the CI but, even better, almost on the theoretical distribution curve. Table 6 shows the following results.
• The RLs of interest increased by only 10 to 20 cm. This is an important element of robustness. Indeed, adding or removing one or more extreme values from the dataset does not significantly affect the desired RLs. In other words, it is important that the developed model is not very sensitive (in terms of the RLs used as design bases) to a modification of the data involving very few events. As a matter of fact, the model owes this robustness to the exhaustiveness of the available information.
• The relative widths of the CIs with no archival HI included are 1.5 times larger than those given by the full model. This means that the user of the developed model can be more confident in the estimates when the additional HI collected in the archives is used.
After collecting HI about the most extreme storm surge events of the 18th and 20th centuries, it was first found that the 1953 event is still the most important one in terms of magnitude. The developed POTH FM attributes a 200-year return period to this event. The value of the surge induced by the 1953 storm lies between 1.75 and 2.50 m. It is interesting to note that this CI includes the value of 2.40 m estimated by Le Gorgeu and Guitonneau (1954). This gives reason to think that the continuation of our work on the quantification of the skew surges that occurred in the 19th century may reveal extreme surges similar to that induced by the 1953 storm.
6 Conclusion and perspectives
To improve the estimation of the risk associated with exceptionally high surges, HI about storms and coastal flooding events in the Nord-Pas-de-Calais region was collected by historians for the 1500–1950 period. Qualitative and quantitative information about all the extreme storms that hit the region of interest was extracted from a large number of archival sources. In this paper, we presented the case study of Dunkirk, in which the exceptional surge induced by the violent storm of 1953 appears as an outlier. In a second step, the information collected (in both the literature and the archives) was examined. Quality control and cross-validation of the collected information indicate that our list of historic storms is complete as regards extreme storms. Only events that occurred in the periods 1720–1770 and 1897–2015 were estimated and used in the POTH FM as historical data. To illustrate the challenges and opportunities of using this additional data and analyzing extremes over a longer period than was previously possible, the results of the FA of extreme surges were presented and analyzed. The assessment of the impact of additional HI, carried out by comparing theoretical quantiles and associated confidence intervals with and without archival historical data, constitutes the main result of this paper.
The conclusions drawn in previous studies were examined in greater depth in the present paper. Indeed, on the basis of the results obtained previously (Hamdi et al., 2015) and in the present paper, the following conclusions are reached:
• the use of additional HI over longer periods than the gauging one can significantly improve the probabilistic and statistical treatment of a dataset containing an exceptional observation considered as an outlier (i.e., the 1953 storm surge);
• as the HI collected in both literature and archives tend to be extreme, the right-tail distribution has been reinforced and the 1953 “exceptional” event does not appear as an outlier any more;
• and as this additional information is exhaustive (relative to the corresponding historical periods), the RLs of interest increased very slightly and the confidence intervals were reduced significantly.
An in-depth study could help to further improve the quantification method for the historical surges and to apply the developed model to other sites of interest. Finally, an attempt to estimate the surges induced by the events from 1767 to the end of the 19th century is ongoing at the time of writing.
Data availability
Storm surges and water levels estimated from historical information are presented in this paper. Unfortunately, the data as they appear in the archives and primary sources (the original information) cannot be published here because the sources are confidential.
Appendix A: HI collected in the literature
## A1 1 March 1949
A violent storm, with mean hourly wind speeds reaching almost 30 m s−1 and gusts of up to 38.5 m s−1 (Volker, 1953), was the cause of a storm surge that reached the coast of northern France and Belgium at the beginning of March 1949. The tide gauge of Antwerp in the Escaut estuary measured a water level higher than 7 m Tweede Algemene Waterpassing (TAW, a Belgian chart datum whose zero corresponds to the mean water level during low tide at Oostende Harbor), which classifies this event as a buitengewone stormvloed, an extraordinary storm surge (Codde and De Keyser, 1967). For the Dunkirk area, two sources reporting water levels were found: the first states that a maximum water level of 7.30 m was reached at the eastern dyke in Dunkirk, exceeding the predicted high tide of 5.70 m by 1.60 m (Le Gorgeu and Guitonneau, 1954). A second document relates that the maximum water level reached was about 7.55 m at Malo-les-Bains, which would mean a surge of 1.85 m (DREAL Nord-Pas-de-Calais, 2017). It is worth noting that the use of proxy data (i.e., the descriptions of events in the historical sources summarized in Table 1) to extract sea-level values and to create storm-surge databases is seriously limited. For the 1791 and 1808 storms, there is sufficient evidence that extreme surge events took place (extreme water levels on Walcheren Island), but the sources are not informative enough to estimate the water levels reached in Dunkirk. A surge of 1.25 m is given for the storm of 1921; the problem is that the type of surge (instantaneous or skew), the exact location at which it was recorded and the hydro-meteorological parameters are not reported. For the skew surge of 1949, two different values at two locations are given. There are predicted and observed water levels for the storms of 1905 and 1953 in Calais, which indicate that the difference is a skew surge, but likewise neither the exact location nor information about the reference level is furnished. The need to trace back to “direct data” describing a storm and its consequences becomes clear, as does the need to cross-check the data on a spatial and factual level, as Brazdil (2000) also suggests.
## A2 28 November 1897
What was felt as stormy winds in Ireland on 27 November 1897 became an eastward-moving storm with gale-force winds over Great Britain, Denmark and Norway (Lamb, 1991). This storm caused interruption of telephone communications between the cities of Calais, Dunkirk and Lille and great damage to the coastal areas (Le Stéphanois, 30 November 1897). At Malo-les-Bains, a small town close to Dunkirk, the highest water level reached 7.36 m, although the high tide was predicted at 5.50 m, resulting in a skew surge of 1.86 m that caused huge damage to the port infrastructures (DREAL Nord-Pas-de-Calais, 2017).
## A3 14 January 1808
During the night of 14 to 15 January 1808, “a terrible storm, similar to a storm that hit the region less than a year before on February 18, 1807” hit the coasts from the most northern parts of France up to the Netherlands. This storm caused severe flooding in the Dunkirk area as well as in the Zeeland area in the southwestern part of the Netherlands, where the water rose up to 25 feet (i.e., 7.62 m) on the isle of Walcheren. The journal also reports more than 200 deaths. For the Dunkirk area, the last time the water levels rose as high as in January 1808 was on 2 February 1791. Unfortunately, this source does not provide any information that can be quantified, nor any information on the meteorological and weather conditions that could be used to reconstruct the storm surge value.
Appendix B: HI collected in the archives
## B1 1720–1767
In essays written by a mathematician of the French Royal Academy of Science, De Fourcroy de Ramecourt (1780), who describes the tide phenomenon on the Flemish coast, some extreme water levels observed within the study area are reported and described. The author refers to five events that occurred during the period 1720 to 1767. The same information is confirmed by a Flemish scientist, Dom Mann (1777, 1780). De Fourcroy de Ramecourt (1780) witnessed the water levels induced by the 1763 and 1767 storms and reconstructed the level induced by the 1720 event in Dunkirk. Water levels at that time are given for the cities of Dunkirk, Gravelines and Calais in the pied du roi unit (the “foot of the king” was a French measuring unit corresponding to 0.325 m) above local mean low-water springs. The French water levels are complemented by measurements made in Flemish–Austrian feet (1 Flemish–Austrian foot is equal to 0.272 m) above the highest astronomical tides for the cities of Ostend and Nieuwpoort (De Fourcroy de Ramecourt, 1780; Mann, 1777, 1780). The upper panel of Fig. 3 shows an example of HI as presented in the archives (De Fourcroy de Ramecourt, 1780).
The 1720 event is memorable for the city of Dunkirk: the water level during spring tide, raised by strong gales blowing from a northwesterly direction, destroyed the cofferdam built by the British in 1714, which had cut the old harbor off from sea access, prohibited any maritime trade and thus slowly caused the ruin of the city. The socio-cultural impact of the natural destruction of the cofferdam was huge, as it restarted trading in the city (Chambre de Commerce de Dunkerque, 1895; Plocq, 1873; de Belidor, 1788). For 1736, the only sea level available is given for Gravelines harbor, but extreme water levels are confirmed in the sources, which mention at least 4 feet (the French “foot of the king”, corresponding to 0.325 m) of water in a district of Calais and water levels that overtopped the docks of the harbor in Dunkirk (Municipal Archive of Dunkirk DK291; Demotier, 1856). As mentioned above, communal and municipal archives contain plans of dykes, docks and sluices of Dunkirk harbor designed by engineers with the means available at the time, and such sketches were recovered. A 1740 sketch showing a profile of the Dunkirk harbor dock is presented in the lower panel of Fig. 3 for illustrative purposes only. The use of these plans and sketches in the estimation of some historical storm surges is ongoing. The lower-lying streets of Gravelines were flooded by the high water levels of March 1750; the fact that an extreme water level was also reported in Ostend for the same day confirms the regional character of the event. The surge of 1763 occurred in a period with a mean tidal range, but water levels exceeded the level of mean spring high tide in Dunkirk, Calais and Ostend; unfortunately, no more information about the flooded area is available. Strong west–northwesterly winds caused by a quick drop in pressure produced high water levels from Calais up to the Flemish cities; it is, at least for the period from 1720 to 1767, the highest water level ever seen and known. The 1720 and 1767 events show good evidence of the wind direction and intensity, while, apart from the reported water levels, the events of 1736, 1750 and 1763 are always cited together in the various sources and described as “extraordinary sea levels that are accompanied or caused by strong winds blowing from southwest to north” (De Lalande, 1781; De Fourcroy de Ramecourt, 1780; Mann, 1777, 1780). As with the 1897–2015 historical/systematic periods, the same question arises as to the exhaustiveness of the HI collected in the 1720–1770 historical period. As our historical research on the extreme storm surges that occurred in this time window was very thorough, we have good reason to believe that the surges induced by the 1720, 1763 and 1767 storms are the biggest of that historical period.
## B2 1767–1897
For the 1778, 1791, 1808 and 1825 events, the sources clearly report that winds were blowing from northwesterly directions and that, in Dunkirk, the quays and docks of the harbor were overtopped when the highest water levels were reached. We know that at least 19 storm events occurred after the event of February 1825, and we have good evidence to believe that some of them induced extreme surges, but either the available information is not sufficient to derive an approximate value of the water level, or the quantification of the storm surges induced by these events is complicated and time-consuming.
## B3 1936
The 1936 event can be considered as a lower bound, as the document from the archive testifies that the “water level was at least 1 m higher than the predicted tide” during the storm that occurred on the night of 1 December 1936 (Municipal Archives of Dunkirk 4S 881). The 1936 event, which can be described as a moderately extreme storm, is the only one collected over the roughly 50-year time window 1897–1949. As the lower-bound surge value induced by this event is small (i.e., it was exceeded more than 10 times during the systematic period), it could also have been exceeded several times during the 1897–1949 period. Including it in the statistical inference would therefore have the opposite effect to the one sought: it would not only increase the width of the CI but also degrade the quality of the fit. The 1936 historical event was consequently eliminated from the inference.
Competing interests
The authors declare that they have no conflicts of interest.
Acknowledgements
The authors thank the municipal archives of Dunkirk and Gravelines for their support during the collection of historical information.
Edited by: Ira Didenkulova
Reviewed by: three anonymous referees
References
Alam, M. J. B. and Matin, A.: Study of plotting position formulae for Surma basin in Bangladesh, J. Civ. Eng., 33, 9–17, 2005.
Almanach de Calais: Société d'Agriculture du Commerce Sciences et Arts de Calais, Almanach de la Ville et du Canton de Calais pour 1845, Calais: Imprimerie de D le Roy, 1845.
Baart, F., Bakker, M. A. J., Van Dongeren, A., den Heijer, C., Van Heteren, S., Smit, M. W. J., Van Koningsveld, M., and Pool, A.: Using 18th century storm-surge data from the Dutch Coast to improve the confidence in flood-risk estimates, Nat. Hazards Earth Syst. Sci., 11, 2791–2801, https://doi.org/10.5194/nhess-11-2791-2011, 2011.
Bardet, L., Duluc, C.-M., Rebour, V., and L'Her, J.: Regional frequency analysis of extreme storm surges along the French coast, Nat. Hazards Earth Syst. Sci., 11, 1627–1639, https://doi.org/10.5194/nhess-11-1627-2011, 2011.
Baron C. de Warenghien: Extrait des Actes de la Société, in Bulletin Union Faulconnier, société historique de Dunkerque Tome XXI, Union Faulconnier, Dunkerque, 437–463, 1924.
Barron, B.: Appareillage immédiat, Editions du Camp du Drap d'Or, 2007.
Bossaut, M. A.: Le Portrait de Dunkerque après le Traité d'Utrecht, Mémoires de la Société Dunkerquoise pour l'Encouragement des Sciences, des Lettres et des Arts XXXe Volume, Dunkerque, Imprimerie Dunkerquoise, 1898.
Bouchet, E.: Histoire Populaire de Dunkerque au Moyen-Age, in: Bulletin Union Faulconnier, société historique de Dunkerque Tome XIV, Union Faulconnier, Dunkerque, 243–319, 1911.
Brazdil, R.: Historical Climatology: Definition, Data, Methods, Results, Geografický Časopis, 52, 99–121, 2000.
Brázdil, R., Dobrovolný, P., Luterbacher, J., Moberg, A., Pfister, C., Wheeler, D., and Zorita, E.: European climate of the past 500 years: new challenges for historical climatology, Climatic Change, 101, 7–40, https://doi.org/10.1007/s10584-009-9783-z, 2010.
Bulteau, T., Idier, D., Lambert, J., and Garcin, M.: How historical information can improve estimation and prediction of extreme coastal water levels: application to the Xynthia event at La Rochelle (France), Nat. Hazards Earth Syst. Sci., 15, 1135–1147, https://doi.org/10.5194/nhess-15-1135-2015, 2015.
Chambre de Commerce de Dunkerque: Notice sur la ville et le port de Dunkerque, publiée par les soins de la chambre de commerce, Paul Michel, Dunkerque, 1895.
Chow, V. T.: Frequency analysis of hydrologic data, Eng. Expt. Stn. Bull. 414, University of Illinois, Urbana, Illinois, 80 pp., 1953.
Codde, R. and De Keyser, L.: Atlas de Belgique, Mer du Nord Littoral/Estuaire de l'Escaut-Escaut Maritime, Comité National de Géographie, available at: http://www.atlas-belgique.be/cms/uploads/oldatlas/atlas1/Atlas1-FR-18A-B.PDF (last access: 11 June 2018), 1967.
Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer, Berlin, 2001.
Dalrymple, T.: Flood Frequency Analyses, Manual of Hydrology: Part 3. Water Supply Paper 1543-A, USGS, available at: http://pubs.er.usgs.gov/publication/wsp1543A (last access: 1 December 2017), 1960.
de Belidor, B. F.: Architecture Hydraulique, Seconde Partie: L'art de Diriger les Eaux a L'avantage de la Defense, Du Commerce et de l'Agriculture, Barrois, Hydraulica, 412 pp., 1788.
Deboudt, P.: Etude de géomorphologie historique des littoraux dunaires du Pas-de-Calais et nord-Est de la Manche, PhD thesis, Université de Lille 1, Lille, 269 pp., 1997.
De Fourcroy de Ramecourt: Observations sur les marées à la côte de flandre, in: Mémoires de mathématique et de physique, edited by: Moutard, P., Académie Royale des Sciences par divers Savans, & lûs dans les Assemblées, Paris, 1780.
De Lalande, J. J. L. F.: Traité du flux et du reflux de la mer – d'après la théorie et les observations, Astronomie, Paris, 1781.
Derode, V.: Histoire de Dunkerque, E. Reboux, Lille, 1852.
Demotier, C.: Depuis les temps les plus reculés jusqu'à nos jours, Annales de Calais, Calais, 1856.
DREAL Bretagne: Etude Vimers des événements de tempête en Bretagne, available at: http://www.bretagne.developpement-durable.gouv.fr/etude-vimers-des-evenements-de-tempete-en-bretagne-a2705.html, last access: 14 March 2017.
DREAL Nord: Pas de Calais, Détermination de l'aléa de submersion marine intégrant les conséquences du changement climatique en région Nord – Pas de Calais. Phase 1: Compréhension du fonctionnement du littoral, https://www.hauts-de-france.developpement-durable.gouv.fr/IMG/pdf/50292_-_sub_npc_-_phase_1_-_version_4.pdf, last access: 23 February 2017.
de Bertrand, R.: Notice Historique sur Zuydcoote, in Mémoires de la Société dunkerquoise pour l'encouragement des sciences, des lettres et des arts, Typographie E. Vandalle, Dunkerque, 214–342, 1855.
Faulconnier, P.: Description Historique de Dunkerque, Bruges, Piere Vande Cappelle & Andre Wydts, 1730.
Gaal, L., Szolgay, J., Kohnova, S., Hlavcova, K., and Viglione, A.: Inclusion of historical information in flood frequency analysis using a Bayesian MCMC technique: A case study for the power dam Orlik, Czech Republic, Contrib. Geophys. Geodesy, 40, 121–147, https://doi.org/10.2478/v10126-010-0005-5, 2010.
Garnier, E.: A historic experience for a strenthened resilience. European societies in front of hydro-meteors 16th–20th centuries, in: Prevention of hydrometeorological extreme events – Interfacing sciences and policies, edited by: Quevauviller, P., John Wiley & Sons, Chichester, 3–26, 2015.
Garnier, E.: Xynthia, February 2010. Autopsy of a foreseeable catastrophe, in: Coping with coastal storms, edited by: Quevauviller, P., Garnier, E., and Ciavola, P., John Wiley & Sons, Chichester, 111–148, https://doi.org/10.1002/9781119116103.ch3, 2017.
Garnier, E. and Surville, F.: La tempête Xynthia face à l'histoire, in: Submersions et tsunamis sur les littoraux français du Moyen Âge à nos jours, Le croît Vif., Saintes, France, 2010.
Garnier, E., Ciavola, P., Armaroli, C., Spencer, T., and Ferreira, O.: Historical analysis of storms events: case studies in France, England, Portugal and Italy, Coast. Eng., 134, 10–23, https://doi.org/10.1016/j.coastaleng.2017.06.014, 2018.
Gerritsen, H.: What happened in 1953? The Big Flood in the Netherlands in retrospect, Philos. T. Roy. Soc. A, 363, 1271–1291, 2005.
Gonsseaume, C.: Un épisode de la révolution à Verton, Dossiers archéologiques, historiques et culturels du Nord et du Pas-de-Calais, 27, 21–25, 1988.
Grubbs, F. E. and Beck, G.: Extension of sample sizes and percentage points for significance tests of outlying observations, Technometrics, 14, 847–854, 1972.
Gumbel, E. J.: Les valeurs extrêmes des distributions statistiques, Annales de l'Institut Henri Poincaré, 5, 115–158, 1935.
Guo, S. L.: Unbiased plotting position formulae for historical floods, J. Hydrol., 121, 45–61, https://doi.org/10.1016/0022-1694(90)90224-L, 1990.
Guo, S. L. and Cunnane, C.: Evaluation of the usefulness of historical and palaelogical floods in quantile estimation, J. Hydrol., 129, 245–262, https://doi.org/10.1016/0022-1694(91)90053-K, 1991.
Harrau, L. A.: Rosendael, in Bulletin Union Faulconnier, société historique de Dunkerque Tome I, Union Faulconnier, Dunkerque, 218–280, 1898.
Harrau, L. A.: Histoire de Gravelines, in Bulletin Union Faulconnier, société historique de Dunkerque Tome III, Union Faulconnier, Dunkerque, 357–374, 1901.
Harrau, L. A.: Histoire de Gravelines - Chapitre 5, in Bulletin Union Faulconnier, société historique de Dunkerque Tome VI, Union Faulconnier, Dunkerque, 5–78, 1903.
Hamdi, Y.: Frequency analysis of droughts using historical information – new approach for probability plotting position: deceedance probability, Int. J. Global Warm., 3, 203–218, https://doi.org/10.1504/IJGW.2011.038380, 2011.
Hamdi, Y., Bardet, L., Duluc, C.-M., and Rebour, V.: Extreme storm surges: a comparative study of frequency analysis approaches, Nat. Hazards Earth Syst. Sci., 14, 2053–2067, https://doi.org/10.5194/nhess-14-2053-2014, 2014.
Hamdi, Y., Bardet, L., Duluc, C.-M., and Rebour, V.: Use of historical information in extreme-surge frequency estimation: the case of marine flooding on the La Rochelle site in France. Nat. Hazards Earth Syst. Sci., 15, 1515–1531, https://doi.org/10.5194/nhess-15-1515-2015, 2015.
Hirsch, R. M.: Probability plotting position formulas for flood records with historical information, J. Hydrol., 96, 185–199, https://doi.org/10.1016/0022-1694(87)90152-1, 1987.
Hirsch, R. M. and Stedinger, J. R.: Plotting positions for historical floods and their precision, Water Resour. Res., 23, 715–727, https://doi.org/10.1029/WR023i004p00715, 1987.
Holgate, S. J., Matthews, A., Woodworth, P. L., Rickards, L. J., Tamisiea, M. E., Bradshaw, E., Foden, P. R., Gordon, K. M., Jevrejeva, S., and Pugh, J.: New data systems and products at the permanent service for mean sea level, Coast. Res., 29, 493–504, https://doi.org/10.2112/JCOASTRES-D-12-00175.1, 2013.
Hosking, J. and Wallis, J.: The value of historical data in flood frequency analysis, Water Resour. Res., 22, 1606–1612, https://doi.org/10.1029/WR022i011p01606, 1986.
Hosking, J. and Wallis, J.: Some statistics useful in regional frequency analysis, Water Resour. Res., 29, 271–281, https://doi.org/10.1029/92WR01980, 1993.
Hosking, J. and Wallis, J.: Regional frequency analysis: an approach based on L-moments, Cambridge University Press, Cambridge, 1997.
Idier, D., Dumas, F., and Muller, H.: Tide-surge interaction in the English Channel, Nat. Hazards Earth Syst. Sci., 12, 3709–3718, https://doi.org/10.5194/nhess-12-3709-2012, 2012.
Katz, R. W., Parlange, M. B., and Naveau, P.: Statistics of extremes in hydrology, Adv. Water Resour., 25, 1287–1304, https://doi.org/10.1016/S0309-1708(02)00056-8, 2002.
Lamb, H.: Historic Storms of the North Sea, British Isles and Northwest Europe, Cambridge University Press, Cambridge, 1991.
Landrin, C.: Tablettes historiques du Calaisis, Imprimerie régionale, 1888.
Latapy, A., Arnaud, H., Pouvreau, N., and Weber N.: Reconstruction of sea level changes in Northern France for the past 300 years and their relationship with the evolution of the coastal zone, in: Coast 2017, Bordeaux, https://doi.org/10.13140/RG.2.2.14180.07041, 2017.
La Voix du Nord newspaper: 2–4 Mars 1949, Tempête sur le littoral, 1949.
La Voix du Nord newspaper: 4 Février 1953, Tragiques inondations sur les côtes de l'ouest, 1953.
La Voix du Nord newspaper: 17 September 1966, Dunkerque, Malo-les-Bains, 1966.
Lefebvre: Histoire Générale et Particulière de la ville de Calais et du Calaisis Tome II, Paris, Guillaume François Debure, 1766.
Le Gorgeu, V. and Guitonneau, R.: Reconstruction de la Digue de l'Est à Dunkerque, Coast. Eng., 5, 555–586, available at: https://icce-ojs-tamu.tdl.org/icce/index.php/icce/article/viewFile/2043/1716 (last access: 21 September 2018), 1954.
Le Gravelinois newspaper: 19 March 1898, MAD, 1898.
Le Nord Maritime newspaper: 13 January 1899, MAD, 1899.
Lemaire, A.: Ephémérides dunkerquoises revues, considérablement augmentées, Maillard et Vandenbusche, Dunkerque, 1857.
Le Tellier, J. L.: Abrégé de l'Histoire de Dunkerque, in Bulletin Union Faulconnier, société historique de Dunkerque Tome XXIV, Union Faulconnier, Dunkerque, 143–205, 1927.
Makkonen, L.: Plotting Positions in Extreme Value Analysis, J. Appl. Meteorol. Clim., 45, 334–340, https://doi.org/10.1175/JAM2349.1, 2006.
Mann, D.: Mémoire sur l'ancien état de la flandre maritime, les changements successifs, & les causes qui les ont produits, Mémoires de l'académie impériale et royale des sciences et belles-lettres de bruxelles, Académie Impériale des Sciences de Belles-Lettres de Bruxelles, Bruxelles, 1777.
Mann, D.: Mémoire sur l'histoire-naturelle de la mer du nord, & sur la pêche qui s'y fait, Mémoires de l'académie impériale et royale des sciences et belles-lettres de bruxelles, Académie Impériale des Sciences de Belles-Lettres de Bruxelles, Bruxelles, 1780.
Mann, H. B.: Nonparametric tests against trend, Econometrica, 3, 245–259, 1945.
Maspataud, A.: Impacts des tempêtes sur la morpho-dynamique du profil côtier en milieu macrotidal, PhD thesis, Université du Littoral Côte d'Opale, Dunkerque, 470 pp., 2011.
Maspataud, A., Ruz, M., and Vanhée, S.: Potential impacts of extreme storm surges on a low-lying densely populated coastline: the case of Dunkirk area, Northern France, Nat. Hazards, 66, 1327–1343, https://doi.org/10.1007/s11069-012-0210-9, 2013.
Moreel, L.: Ghyvelde, Bay-Dunes à travers les Ages, in Bulletin Union Faulconnier, société historique de Dunkerque Tome XXVIII, Union Faulconnier, Dunkerque, 125–204, 1931.
Ouarda, T. B. M. J., Rasmussen, P. F., Bobée, B., and Bernier, J.: Utilisation de l'information historique en analyse hydrologique fréquentielle, Rev. Sci. Eau, 11, 41–49, https://doi.org/10.7202/705328ar, 1998.
Payrastre, O., Gaume, E., and Andrieu H.: Usefulness of historical information for flood frequency analyses: Developments based on a case study, Water Resour. Res., 47, W08511, https://doi.org/10.1029/2010WR009812, 2011.
Pouvreau, N.: Trois cents ans de mesures marégraphiques en france: Outils, méthodes et tendances des composantes du niveau de la mer au port de brest, PhD thesis, Université de La Rochelle, La Rochelle, 466 pp., 2008.
Plocq, M. A.: Port et Rade de Dunkerque, Impr. Nationale, Paris, 1873.
Rao, A. R. and Hamed, K. H.: Flood Frequency Analysis, CRC Press, Boca Raton, Florida, USA, 2000.
Rossiter, J. R.: The North Sea surge of 31 January and 1 February 1953, Philos. T. Roy. Soc. A, 246, 371–400, 1953.
Salas, J. D., Wold, E. E., and Jarrett, R. D.: Determination of flood characteristics using systematic, historical and paleoflood data, in: Coping with floods, edited by: Rossi, G., Harmoncioglu, N., and Yevjevich, V., Kluwer, Dordrecht, 111–134, 1994.
SHOM: Rapport technique du Projet NIVEXT: Niveaux marins extrêmes, Camille DAUBORD, Contributeurs associés: André, G. and Goirand, V., Marca Kerneis, France, 444 pp., 2015.
SHOM: Horaires de Marée, available at: http://maree.shom.fr/, last access: 10 November 2016.
Simon, B.: La marée océanique côtière, Institut océanographique, 433 pp., 2007.
Sneyers, R.: La tempête et le débordement de la mer du 1er février 1953, Ciel et Terre, 69, 97–107, 1953.
Stedinger, J. R. and Baker, V. R.: Surface water hydrology – Historical and paleoflood information, Rev. Geophys., 25, 119–124, https://doi.org/10.1029/RG025i002p00119, 1987.
Stedinger, J. R. and Cohn, T.: Flood frequency analysis with historical and paleoflood information, Water Resour. Res., 22, 785–793, https://doi.org/10.1029/WR022i005p00785, 1986.
Stephenson, A.: TideHarmonics R package, available at: https://cran.r-project.org/web/packages/TideHarmonics/TideHarmonics.pdf, last access: 3 November 2017.
Union Faulconnier: Société historique de dunkerque tome XV, Union Faulconnier, Dunkerque, 1912.
Volker, M.: La marée de tempête du 1er février 1953 et ses conséquences pour les Pays-Bas, La Houille Blanche, 797–806, https://doi.org/10.1051/lhb/1953013, 1953.
Wald, A. and Wolfowitz, J.: An exact test for randomness in serial correlation, Ann. Math., 14, 378–388, 1943.
Wang, Q. J.: Unbiased estimation of probability weighted moments and partial probability weighted moments from systematic and historical flood information and their application to estimating the GEV distribution, J. Hydrol., 120, 115–124, https://doi.org/10.1016/0022-1694(90)90145-N, 1990.
Wilcoxon, F.: Individual comparisons by ranking methods, Biometrics Bull., 1, 80–83, 1945.
Wolf, J. and Flather, R. A.: Modelling waves and surges during the 1953 storm, Philos. T. Roy. Soc. A, 363, 1359–1375, https://doi.org/10.1098/rsta.2005.1572, 2005.
Zandyck, H.: Histoire météorologique et Médicale de Dunkerque de 1850 à 1860, Paris, P. Asselin, 1861.
# Gas station without pumps
## 2020 June 5
### Compensation in impedance analyzer
Filed under: Circuits course — gasstationwithoutpumps @ 00:18
The Analog Discovery 2 has an impedance analyzer that includes short-circuit and open-circuit compensation to correct for the impedances of the test fixture, and I’ve been thinking about how that might be computed internally. The open-circuit and short-circuit compensation can be applied independently or together, but each requires making and recording an impedance measurement at each frequency for which impedance analysis is done.
Since there are three impedances that are measured (open-circuit, short-circuit, and device-under-test), I came up with two circuits that could model the test setup:
The measurement is made at the two ports, and Z_DUT is the device being measured—the other two impedances are parasitic ones of the test fixture that we are trying to eliminate.
Let’s look at the short-circuit compensation first. For the first model, if we replace Z_DUT with a short circuit, we measure an impedance of $Z_{sc} = Z_{s1}$, while for the second circuit we measure $Z_{sc} = Z_{p2} || Z_{s2}$. In the first model, we can do short-circuit compensation as $Z_{DUT} = Z_{m} - Z_{sc}$, where $Z_{m}$ is the measured impedance with the DUT in place. For the second circuit, we would need to measure another value to determine the appropriate correction to get $Z_{DUT}$.
For open-circuit compensation, in the first model we get $Z_{oc} = Z_{s1} + Z_{p1}$ and in the second model we get $Z_{oc} = Z_{p2}$. So for the first model we would need another measurement to get $Z_{DUT}$, but for the second model $Z_m=Z_{oc} || Z_{DUT}$, so $Z_{DUT} = \frac{1}{1/Z_m - 1/Z_{oc}} = \frac{Z_m Z_{oc}}{Z_{oc}-Z_m}$.
If we do both compensations, we can use either model, but the corrections we end up with are slightly different.
For the first model, we have $Z_m = Z_{s1} + (Z_{p1} || Z_{DUT}) = Z_{sc} + ((Z_{oc}-Z_{sc}) || Z_{DUT})$. We can rearrange this to $Z_m - Z_{sc} = (Z_{oc}-Z_{sc}) || Z_{DUT}$, or $\frac{1}{Z_m - Z_{sc}} = \frac{1}{Z_{oc}-Z_{sc}} + \frac{1}{Z_{DUT}}$.
We can simplify that to $Z_{DUT}=\frac{1}{1/(Z_m-Z_{sc}) - 1/(Z_{oc}-Z_{sc})} = \frac{(Z_m-Z_{sc})(Z_{oc}-Z_{sc})}{Z_{oc}-Z_m}$. If $Z_{sc}=0$, this simplifies to our open-compensation formula, and if $Z_{oc}\rightarrow\infty$, this approaches our formula for short-circuit compensation.
For the second model, the algebra is a little messier. We have $Z_m = Z_{oc} || (Z_{s2} + Z_{DUT})$, which can be rewritten as $\frac{1}{Z_m} - \frac{1}{Z_{oc}} = \frac{1}{Z_{s2} + Z_{DUT}}$, or $Z_{s2}+Z_{DUT} = \frac{1}{1/Z_m - 1/Z_{oc}} = \frac{Z_m Z_{oc}}{Z_{oc} - Z_m}$.
We also have $1/Z_{sc} = 1/Z_{p2} + 1/Z_{s2}$, so $Z_{s2} = \frac{1}{1/Z_{sc} - 1/Z_{oc}}=\frac{Z_{sc}Z_{oc}}{Z_{oc} - Z_{sc}}$, and so
$Z_{DUT} = \frac{Z_m Z_{oc}}{Z_{oc} - Z_m} - \frac{Z_{sc}Z_{oc}}{Z_{oc} - Z_{sc}}$
$Z_{DUT} = Z_{oc} \left( \frac{Z_m}{Z_{oc}-Z_m} - \frac{Z_{sc}}{Z_{oc}-Z_{sc}}\right)$
$Z_{DUT} = Z_{oc} \left( \frac{Z_mZ_{oc} - Z_{sc}Z_{oc}}{({Z_{oc}-Z_m})(Z_{oc}-Z_{sc})}\right)$
$Z_{DUT} = \frac{Z_{oc}^2(Z_m - Z_{sc})}{({Z_{oc}-Z_m})(Z_{oc}-Z_{sc})}$
Once again, when $Z_{sc}=0$, this formula simplifies to our formula for just open-circuit compensation, and when $Z_{oc}\rightarrow\infty$, this approaches our formula for short-circuit compensation.
We can make the two formulas look more similar, by using the same denominator for both, making the formula for the first model
$Z_{DUT} = \frac{(Z_{oc}-Z_{sc})^2(Z_m - Z_{sc})}{({Z_{oc}-Z_m})(Z_{oc}-Z_{sc})}$
That is, the only difference is whether we scale by $Z_{oc}^2$ or correct the open-circuit measurement to use $(Z_{oc}-Z_{sc})^2$. At low frequencies (with any decent test jig) the open-circuit impedance is several orders of magnitude larger than the short-circuit impedance, so which correction is used hardly matters, but at 10MHz, changing the compensation formula can make a big difference.
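Here is a small Python sketch of the two corrections (a toy check, not the AD2’s actual implementation; the fixture values are made up but representative):

```python
from math import pi

def z_dut_series_model(Z_m, Z_sc, Z_oc):
    """First model: series parasitic, then a shunt parasitic across the DUT.
    Z_DUT = (Z_m - Z_sc)(Z_oc - Z_sc) / (Z_oc - Z_m)"""
    return (Z_m - Z_sc) * (Z_oc - Z_sc) / (Z_oc - Z_m)

def z_dut_shunt_model(Z_m, Z_sc, Z_oc):
    """Second model: shunt parasitic across the port, series parasitic to the DUT.
    Z_DUT = Z_oc^2 (Z_m - Z_sc) / ((Z_oc - Z_m)(Z_oc - Z_sc))"""
    return Z_oc**2 * (Z_m - Z_sc) / ((Z_oc - Z_m) * (Z_oc - Z_sc))

def parallel(a, b):
    return a * b / (a + b)

# Made-up fixture parasitics at 10MHz: ~5pF open-circuit capacitance,
# ~0.5 ohm + 100nH short-circuit residual, and a 1nF capacitor as the DUT.
f = 10e6
w = 2 * pi * f
Z_oc = 1 / (1j * w * 5e-12)
Z_sc = 0.5 + 1j * w * 100e-9
Z_dut = 1 / (1j * w * 1e-9)

# Simulate the measurement the first model predicts, then apply both corrections:
Z_m = Z_sc + parallel(Z_oc - Z_sc, Z_dut)
print(z_dut_series_model(Z_m, Z_sc, Z_oc))  # recovers Z_dut exactly
print(z_dut_shunt_model(Z_m, Z_sc, Z_oc))   # close, but differs at this frequency
```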
For example, for compensation measurements made with a test fixture built from the flywires, a breadboard, and some short leads with alligator clips, the choice of compensation formula would make a 3% difference in the reported impedance at 10MHz. Notice that $Z_{oc}$ is approximately a small capacitor and $Z_{sc}$ is approximately a small resistor in series with a small inductor. Shorter wires and no breadboard can make these parasitic values much smaller, so that the compensation is not so crucial. For example, here are measurements of the impedance analyzer board:
Because the open-circuit impedance here is much higher than the input impedance of the measuring oscilloscope channel, I believe that corrections have already been made for known characteristics of the oscilloscope channels.
The exact values of the $Z_{oc}$ measurements are often limited by the noise in measuring the current through the sense resistor, at least at lower frequencies, where the impedance of the parasitic capacitance is very high.
https://www.semanticscholar.org/paper/Sutured-annular-Khovanov-Rozansky-homology-Queffelec-Rose/1f3d87da95bd288dd6965cbfd853957d4d50b689
# Sutured annular Khovanov-Rozansky homology
@article{Queffelec2015SuturedAK,
title={Sutured annular Khovanov-Rozansky homology},
author={Hoel Queffelec and David E. V. Rose},
journal={arXiv: Quantum Algebra},
year={2015}
}
• Published 26 June 2015
• Mathematics
• arXiv: Quantum Algebra
We introduce an sl(n) homology theory for knots and links in the thickened annulus. To do so, we first give a fresh perspective on sutured annular Khovanov homology, showing that its definition follows naturally from trace decategorifications of enhanced sl(2) foams and categorified quantum gl(m), via classical skew Howe duality. This framework then extends to give our annular sl(n) link homology theory, which we call sutured annular Khovanov-Rozansky homology. We show that the sl(n) sutured…
Evaluations of annular Khovanov-Rozansky homology
• Mathematics
• 2019
Author(s): Gorsky, Eugene; Wedrich, Paul | Abstract: We describe the universal target of annular Khovanov-Rozansky link homology functors as the homotopy category of a free symmetric monoidal
• Mathematics
• 2018
We provide a finite dimensional categorification of the symmetric evaluation of $\mathfrak{sl}_N$-webs using foam technology. As an output we obtain a symmetric link homology theory categorifying the
Computing annular Khovanov homology
• Mathematics
• 2015
We define a third grading on Khovanov homology, which is an invariant of annular links but changes by 1 under stabilization. We illustrate the use of our computer implementation, and give some example
Khovanov homology for links in thickened multipunctured disks
We define a variant of Khovanov homology for links in thickened disks with multiple punctures. This theory is distinct from the one previously defined by Asaeda, Przytycki, and Sikora, but is related
• Mathematics
• 2018
We use categorical annular evaluation to give a uniform construction of both $\mathfrak{sl}_n$ and HOMFLYPT Khovanov-Rozansky link homology, as well as annular versions of these theories. Variations
Khovanov homology and categorification of skein modules
• Mathematics
• 2018
For every oriented surface of finite type, we construct a functorial Khovanov homology for links in a thickening of the surface, which takes values in a categorification of the corresponding gl(2)
Invariants of 4-manifolds from Khovanov-Rozansky link homology
• Mathematics
• 2019
We use Khovanov-Rozansky gl(N) link homology to define invariants of oriented smooth 4-manifolds, as skein modules constructed from certain 4-categories with well-behaved duals. The technical heart
Annular Khovanov homology and knotted Schur–Weyl representations
• Mathematics
Compositio Mathematica
• 2017
Let $\mathbb{L}\subset A\times I$ be a link in a thickened annulus. We show that its sutured annular Khovanov homology carries an action of $\mathfrak{sl}_{2}(\wedge )$ , the exterior current algebra
Extremal weight projectors II.
• Mathematics
• 2018
In previous work, we have constructed diagrammatic idempotents in an affine extension of the Temperley-Lieb category, which describe extremal weight projectors for sl(2), and which categorify
## References
SHOWING 1-10 OF 45 REFERENCES
Khovanov Homology, Sutured Floer Homology, and Annular Links
• Mathematics
• 2010
Lawrence Roberts, extending the work of Ozsvath-Szabo, showed how to associate to a link, L, in the complement of a fixed unknot, B, in S^3, a spectral sequence from the Khovanov homology of a link
On knot Floer homology in double branched covers
Let L be a link in an thickened annulus. We specify the embedding of this annulus in the three sphere, and consider its complement thought of as the axis to L. In the right circumstances this axis
Khovanov's homology for tangles and cobordisms
We give a fresh introduction to the Khovanov Homology theory for knots and links, with special emphasis on its extension to tangles, cobordisms and 2-knots. By staying within a world of topological
Khovanov homology is a skew Howe 2-representation of categorified quantum sl(m)
• Mathematics
• 2012
We show that Khovanov homology (and its sl(3) variant) can be understood in the context of higher representation theory. Specifically, we show that the combinatorially defined foam constructions of
Khovanov homology is a skew Howe 2–representation of categorified quantum sl(m)
• Mathematics
• 2015
We show that Khovanov homology (and its sl3 variant) can be understood in the context of higher representation theory. Specifically, we show that the combinatorially defined foam constructions of
Categorification of the Kauffman bracket skein module of I-bundles over surfaces
• Mathematics
• 2004
Khovanov defined graded homology groups for links LR 3 and showed that their polynomial Euler characteristic is the Jones polyno- mial of L. Khovanov's construction does not extend in a
Refined composite invariants of torus knots via DAHA
• Mathematics
• 2015
We define composite DAHA-superpolynomials of torus knots, depending on pairs of Young diagrams and generalizing the composite HOMFLY-PT polynomials in the theory of the skein of the annulus. We
Clasp technology to knot homology via the affine Grassmannian
We categorify all the Reshetikhin–Turaev tangle invariants of type A. Our main tool is a categorification of the generalized Jones–Wenzl projectors (a.k.a. clasps) as infinite twists. Applying this
sl(N)-link homology (N ≥ 4) using foams and the Kapustin-Li formula
• Mathematics
• 2009
We use foams to give a topological construction of a rational link homology categorifying the slN link invariant, for N>3. To evaluate closed foams we use the Kapustin-Li formula adapted to foams by
https://worldbuilding.stackexchange.com/questions/132539/explaining-a-low-mass-brown-dwarf
# Explaining a Low-Mass Brown Dwarf
Considering the well-established relationships between stellar mass, surface temp, and luminosity, how unusual would it be to find a star (or brown dwarf) that possesses about half the mass of an average member of its spectral category?
Follow-up questions: If such an anomaly did exist, what sort of phenomena could be responsible? Would it be relatively stable over a human lifespan? Or would it necessarily be a transient phenomenon like most variable stars?
The specific example in question is a brown dwarf with a mass ~7110 times Earth, radius ~12 times Earth's, and a spectral black-body temp of 1400K (placing it in the L8 to L9 spectral class). For a sci-fi story, I've constructed a setting with this brown dwarf, Kabina, in orbit of the star Phi2 Ceti (renamed Bahram in everyday parlance), and multiple colonized planets orbiting both the primary star and the brown dwarf.
On reviewing my notes recently and double-checking the numbers, I realized that late L-class brown dwarfs tend to have a mass 1.5 to 3 times the figure I was working with, whereas brown dwarfs of similar mass are liable to be less than half as hot and bright.
I've calculated the orbits and surface temps of the orbiting planets based on these numbers, so if I need to fix the properties of my brown dwarf, I need to also rework the planetary orbits in order to maintain habitability. If, on the other hand, I can find a plausible explanation for the anomaly, it becomes a cool little detail of the system to add scientific curiosity, and I don't have to rewrite the calendars I've already drawn up for the colonies in orbit.
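For reference, the black-body bookkeeping behind those numbers fits in a few lines of Python; this is only a sketch, and the 0.01 AU orbit and zero albedo below are illustrative assumptions rather than actual system parameters:

```python
from math import pi, sqrt

SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_EARTH = 6.371e6     # m
AU = 1.496e11         # m
L_SUN = 3.828e26      # W

R_bd = 12 * R_EARTH   # quoted radius of the brown dwarf
T_bd = 1400.0         # quoted black-body temperature, K

# Stefan-Boltzmann luminosity: L = 4 pi R^2 sigma T^4
L_bd = 4 * pi * R_bd**2 * SIGMA * T_bd**4
print(f"L ~ {L_bd:.2e} W ~ {L_bd / L_SUN:.1e} L_sun")

# Zero-albedo equilibrium temperature of a planet at distance d:
# T_eq = T_bd * sqrt(R_bd / (2 d))
d = 0.01 * AU         # illustrative orbital distance
T_eq = T_bd * sqrt(R_bd / (2 * d))
print(f"T_eq at {d / AU:.2f} AU ~ {T_eq:.0f} K")
```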
This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.
$$\sim$$7000 Earth masses and a surface temperature of 1700 K aren't unreasonable for a brown dwarf. The lower mass limit is thought to be around 13 Jupiter masses (or 4100 Earth masses) (see e.g. Spiegel et al. 2010), and we see temperatures as low as 500 K in certain Y-class brown dwarfs. 12 Earth radii seems a bit small, but not unrealistic. To be honest, you chose decent parameters. This is certainly much too light for a red dwarf, of course; the hydrogen fusing limit is about 75-80 Jupiter masses, which is, give or take, three times as high as your choice. However, I would argue that you've chosen a fairly realistic brown dwarf.
We expect a brown dwarf - or a planet, or an M star - to cool over time, eventually reaching fairly low temperatures. A graph from Burrows et al. 2001 (taken from these slides) should emphasize that point:
A brown dwarf should reach the temperature you desire within a few hundred million years.
Here are examples of brown dwarfs with similar properties:
The latter two are actually pretty similar to yours, in terms of mass and temperature.
If there was a brown dwarf of, say, 10 Jupiter masses, well, it might instead be classified as a rogue planet or sub-brown dwarf, rather than a brown dwarf. Cha 110913-773444 is an object like this (Luhman et al, 2005). We don't know how these bodies form; it's possible that the low mass is simply due to a dearth of material around the body early in its life.
• So tl:dr it is not an anomaly and needs no explanation? – Mołot Dec 10 '18 at 17:14
• The WISE bodies are both in the same temperature range, late-L class brown dwarves. But their masses are from 1.5 to 3.3 times my figure. COROT-3b has similar mass, but as far as I can tell is a late-T class brown dwarf with a temp of ~550K. So I have a body with the mass of a T-class dwarf, but the temp and luminosity of an L-class dwarf of twice the mass. (See additional paragraphs above for added context.) – Rich Durst Dec 10 '18 at 17:40
• The graph you shared brings up an issue that I don't think I ever accounted for: the age of the brown dwarf. If it formed with the primary star, it would be about 1.9Gy old. But if I can justify it being separately formed and then captured, I can tweak the age so as to get the temp and luminosity I want. Then again, if it's a late-captured body with its own satellites, that leads to a whole other set of orbital complications. – Rich Durst Dec 10 '18 at 17:41
• @RichDurst That's a possibility. I've also considered a scenario where the circumstellar disk from which the brown dwarf formed was fairly low-mass - which would explain an even lower mass, if you wanted it - and where perhaps the formation of the brown dwarf was delayed for a significant amount of time. Perhaps the cluster where the system formed was dominated by OB stars, and the strong winds depleted the disk? There are definitely scenarios where this could work - and yes, your capture idea is absolutely possible. – HDE 226868 Dec 10 '18 at 18:05
• @Mołot Basically, yes. The temperature and mass don't quite line up, as Rich said, but they aren't extreme, and it's not too unlikely for a brown dwarf to have both, depending on its age and initial evolution. – HDE 226868 Dec 10 '18 at 18:06
http://publish.uwo.ca/~pzsambok/calc%201000a%20002%2016au/notes/notes/03%20-%20Exponential%20Functions.html
# Exponential functions 2
## Graphs of the exponential functions, and the Laws of exponents
• We have defined the exponents $$b^x$$ for all $$x$$, which implies that the exponential functions $$f(x)=b^x$$ have domain $$(-\infty,\infty)$$
• Notice on the graphs that there are 3 qualitatively different behaviours of the exponential functions $$f(x)=b^x$$, depending on the value of $$b$$:
• If $$b=1$$, then we have $$1^x=1$$, that is the function $$f(x)=1^x$$ has constant value 1, or in other words, its range is $$[1,1]$$.
• If $$b\ne1$$, then the range of $$f(x)=b^x$$ is $$(0,\infty)$$.
• If $$b>1$$, then $$f(x)=b^x$$ is strictly monotonically increasing.
• If $$b<1$$, then $$f(x)=b^x$$ is strictly monotonically decreasing.
• Note also that all the graphs go through the point $$(0,1)$$. This corresponds to the fact that for all $$b>0$$, we have $$f(x=0)=b^0=1$$.
• The Laws of exponents are the following formulas. $b^{x+y}=b^xb^y,\quad b^{x-y}=\tfrac{b^x}{b^y},\quad b^{xy}=(b^x)^y,\quad (ab)^x=a^xb^x$
Exercises: 1.4:4,19,24
## Application: population modelling
• Exponential functions are used frequently in Mathematical models of populations. The textbook has examples about bacteria, humans and viruses, let's check out the human one.
• We are given the population (in millions) of the Earth at the years $$1900+t$$ for $$t=0,10,\dotsc,110$$.
• The method of least squares (which you can learn about in a Linear Algebra class) tells us which function of the form $$f(t)=ab^t$$ approximates the data the best.
• Mathematical models can be used for prediction. This model predicts that by 2020, the population of the Earth will be $$f(t=120)\approx7573.549$$ million.
• The half-life of a radioactive isotope is the period of time during which half of any given quantity disintegrates.
• For example, the half-life of strontium-90, $${}^{90}\mathrm{Sr}$$, is 25 years.
• Let $$m(t)$$ denote the mass of a sample of $${}^{90}\mathrm{Sr}$$, starting from $$m(t=0)=24$$ mg.
• By definition of half-life, we have $$m(t=25)=\frac1224$$, $$m(t=50)=\frac1424$$, etc.
• In general, we have $$m(t)=(\frac12)^{t/25}24=2^{-t/25}24$$.
• For example, after 40 years, the mass of the sample is $$m(t=40)=2^{-40/25}24\approx7.9$$ mg (checked numerically in the short sketch below).
• Exercise: 1.4.34
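A quick numerical check of the strontium-90 example (a short Python sketch, using the 25-year half-life and 24 mg starting mass from above):

```python
def mass(t, m0=24.0, half_life=25.0):
    """Mass (in mg) remaining after t years."""
    return m0 * 2 ** (-t / half_life)

print(mass(25))   # 12.0 -- one half-life
print(mass(50))   # 6.0  -- two half-lives
print(mass(40))   # ~7.9 -- matches the example above
```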
## The number $$e$$
• Let $$b>0$$. Consider the tangent line to the graph of $$f(x)=b^x$$ at $$x=0$$.
• Notice that the tangent line is the graph of the function $$g(x)=1+f'(x=0)\,x$$. We will start talking about derivatives two weeks later.
• You can see that the bigger $$b$$ is, the bigger the slope is.
• The number $$e$$ is defined to be the value $$b=e$$ such that the function $$f(x)=b^x$$ has slope 1 at $$x=0$$.
• The corresponding exponential function $$f(x)=e^x$$ is called the natural exponential function.
## Euler's formula and the Trigonometric addition formulas
• You don't have to know about complex numbers in this class, but I can't resist telling you about the following way of proving the Trigonometric addition formulas
• You can do complex arithmetic by introducing the imaginary number $$i$$. This number has the property $$i^2=-1$$.
• Therefore, a complex number in general is of the form $$a+bi$$, for real numbers $$a,b$$. $$a$$ is called its real part, and $$b$$ is called its imaginary part
• Addition and subtraction is as usual: $$(a_1+b_1i)\pm(a_2+b_2i)=(a_1\pm a_2)+(b_1\pm b_2)i$$.
• Multiplication uses distributivity, and the magic property $$i^2=-1$$: $(a_1+b_1i)(a_2+b_2i)=a_1a_2+b_1b_2i^2+a_1b_2i+a_2b_1i=(a_1a_2-b_1b_2)+(a_1b_2+a_2b_1)i.$
• Two complex numbers $$a_1+b_1i$$ and $$a_2+b_2i$$ are equal precisely when both their real and imaginary parts agree, that is $$a_1=a_2$$ and $$b_1=b_2$$.
## Euler's formula and the Trigonometric addition formulas 2
• Euler's formula tells us how to take natural exponents with complex numbers: $e^{a+bi}=e^a(\cos(b)+\sin(b)i)$
• This means that if $$a=0$$, we get $$e^{bi}=\cos(b)+\sin(b)i$$.
• Let's apply this to the exponent law $$e^{(a+b)i}=e^{ai}e^{bi}$$: $\begin{multline*} \cos(a+b)+\sin(a+b)i=e^{(a+b)i}=e^{ai}e^{bi} \\ =(\cos(a)+\sin(a)i)(\cos(b)+\sin(b)i)=(\cos(a)\cos(b)-\sin(a)\sin(b))+(\cos(a)\sin(b)+\cos(b)\sin(a))i \end{multline*}$
• BAM! Equating the real parts gives the formula for cos: $\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b),$ and equating the imaginary parts gives the formula for sin: $\sin(a+b)=\cos(a)\sin(b)+\cos(b)\sin(a).$
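A quick numerical spot-check of these formulas, using Python's built-in complex numbers (a sketch; the angles are arbitrary):

```python
import cmath, math

a, b = 0.7, 1.9   # arbitrary angles in radians

lhs = cmath.exp(1j * (a + b))                # e^{(a+b)i}
rhs = cmath.exp(1j * a) * cmath.exp(1j * b)  # e^{ai} e^{bi}
print(abs(lhs - rhs))                        # ~0, up to floating-point error

# Real part = cos(a+b), imaginary part = sin(a+b)
print(lhs.real - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)))  # ~0
print(lhs.imag - (math.cos(a) * math.sin(b) + math.cos(b) * math.sin(a)))  # ~0
```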
https://stats.hohoweiya.xyz/2019/06/27/bayesian-cg/
# WeiYa's Work Yard
## Linear Solvers
For the solution of linear systems,
$$Ax^*=b\label{1}$$
where $A\in\IR^{d\times d}$ is an invertible matrix and $b\in \IR^d$ is a vector, while $x^*\in\IR^d$ is to be determined.
• iterative methods: Krylov subspace methods (among the most successful at obtaining an approximate solution at low cost)
• direct methods: LU or Cholesky decomposition
The conjugate gradient (CG) method is a popular iterative method and perhaps the first instance of a Krylov subspace method, but the convergence is slowed when the system is poorly conditioned. Then we can consider solving equivalent preconditioned systems, by solving
• left-preconditioning: $P^{-1}Ax^*=P^{-1}b$
• right-preconditioning: $AP^{-1}Px^*=b$
where $P$ is chosen both so that
1. $P^{-1}A$ (or $AP^{-1}$) has a lower condition number than $A$ itself
2. computing the solution of systems $Py=c$ is computationally inexpensive for arbitrary $y$ and $c$
In situations where numerical error cannot practically be made negligible, an estimate for the error $x_m-x^*$ must accompany the output $x_m$ of any linear solver. In practice, the algorithm is usually terminated when this reaches machine precision, which can require a very large number of iterations and substantial computational effort. This often constitutes the principal bottleneck in contemporary applications.
The contribution of the paper is to demonstrate how Bayesian analysis can be used to develop a richer, probabilistic description for the error in estimating the solution $x^*$ with an iterative method.
## Probabilistic Numerical Methods
Bayesian probabilistic numerical methods posit a prior distribution for the unknown (here $x^*$), and condition on a finite amount of information about $x^*$ to obtain a posterior that reflects the level of uncertainty in $x^*$, given the finite information obtained.
Recent work: Hennig (2015) treated the problem as an inference problem for the matrix $A^{-1}$, and established correspondence with existing iterative methods by selection of different matrix-valued Gaussian priors within a Bayesian framework.
In contrast, the paper places a prior on the solution $x^*$ rather than the matrix $A^{-1}$.
## Probabilistic Linear Solver
The Bayesian method is defined by the choice of prior and the information on which the prior is to be conditioned.
• the information about $x^*$ is linear and is provided by search directions $s_i$, $i=1,\ldots,m \ll d$, through the matrix-vector products
$y_i:=(s_i^TA)x^* = s_i^Tb$
The matrix-vector products on the right-hand side are assumed to be computed without error, which implies a likelihood model in the form of a Dirac distribution:
$p(y\mid x) = \delta(y-S_m^TAx)\,,$
where $S_m$ denotes the matrix whose columns are $s_1,\ldots,s_m$.
Linear information is well-adapted to inference with stable distributions (let $X_1$ and $X_2$ be independent copies of a random variable $X$; then $X$ is said to be stable if, for any constants $\alpha,\beta>0$, the random variable $\alpha X_1+\beta X_2$ has the same distribution as $\gamma X+\delta$ for some constants $\gamma > 0$ and $\delta$). Let $x$ be a random variable, which will be used to model epistemic uncertainty regarding the true solution $x^*$, and endow $x$ with the prior distribution
$p(x) = \calN(x;x_0,\Sigma_0)\,,$
where $x_0$ and $\Sigma_0$ are each assumed to be known a priori.
Now there exists a unique Bayesian probabilistic numerical method which outputs the conditional distribution $p(x\mid y_m)$, where $y_m=[y_1,\ldots,y_m]^T$ satisfies $y_m=S_m^TAx^*=S_m^Tb$.
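For reference, since the observations are exact linear functionals of $x$, standard Gaussian conditioning gives $p(x\mid y_m)=\calN(x;x_m,\Sigma_m)$ with

$x_m = x_0 + \Sigma_0A^TS_m(S_m^TA\Sigma_0A^TS_m)^{-1}S_m^T(b - Ax_0)\,,$

$\Sigma_m = \Sigma_0 - \Sigma_0A^TS_m(S_m^TA\Sigma_0A^TS_m)^{-1}S_m^TA\Sigma_0\,,$

so the posterior mean serves as the estimate of $x^*$ and the posterior covariance quantifies the remaining uncertainty.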
The problem can be seen as
$x_m = \arg\min_{x\in\calK_m}\Vert x-x^*\Vert_A\,,$
where $\calK_m$ is a sequence of $m$-dimensional linear subspaces of $\IR^d$, or
$x_m = \arg\min_{x\in\calK_m}f(x)\,,$
where $f(x)=\frac 12x^TAx-x^Tb$.
Let $S_m\in \IR^{d\times m}$ denote a matrix whose columns are arbitrary linearly independent search directions $s_1,\ldots,s_m$, with $\range(S_m)=\calK_m$. Let $x_0$ denote an arbitrary starting point for the algorithm. Then $x_m =x_0+S_mc$ for some $c\in\IR^m$ which can be computed by solving $\nabla f(x_0+S_mc)=0$.
In CG, the search directions are $A$-conjugate, then
$x_m = x_0 + S_m(S_m)^T(b-Ax_0)\,,$
which lends itself to an iterative numerical method
$x_m = x_{m-1} + s_m(s_m)^T(b-Ax_{m-1})\,.$
## Search Directions
Two choices:
• Optimal Information
• Conjugacy: the Bayesian Conjugate Gradient Method. The name is chosen for the same reason as in CG: search directions are taken to be the direction of gradient descent subject to a conjugacy requirement, albeit a different one than in standard CG
## Krylov Subspace Method
The Krylov subspace $K_m(M, v)$, for $M\in \IR^{d\times d}$ and $v\in \IR^d$, is defined as $$K_m(M, v) := \span(v, Mv, M^2v, \ldots, M^mv).$$ For a vector $w\in\IR^d$, the shifted Krylov subspace is defined as $$w+K_m(M, v) := \{w + u : u \in K_m(M, v)\}\,.$$
It is well-known that CG is a Krylov subspace method for symmetric positive-definite matrices $A$, meaning that
$x_m = \argmin_{x\in x_0+\calK_{m-1}(A,r_0)}\Vert x-x^*\Vert_A\,.$
## Prior Choice
### Covariance Structure
• Natural Prior
• Preconditioner Prior
• Krylov Subspace Prior
### Covariance Scale
Treat the prior scale as an additional parameter to be learned, and can consider $\nu$ in a hierarchical Bayesian framework.
## Code
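A minimal NumPy sketch of the conditioning step above; the search directions here are just random columns, not the optimal or conjugate choices discussed earlier, so this is a sanity check rather than a faithful implementation of the paper's algorithm:

```python
import numpy as np

def bayes_linear_solver(A, b, S, x0=None, Sigma0=None):
    """Condition a Gaussian prior N(x0, Sigma0) on x* on the exact linear
    observations y = S^T A x* = S^T b; return posterior mean and covariance."""
    d = A.shape[0]
    x0 = np.zeros(d) if x0 is None else x0
    Sigma0 = np.eye(d) if Sigma0 is None else Sigma0

    SA = S.T @ A                                   # m x d observation matrix
    Lam = SA @ Sigma0 @ SA.T                       # m x m Gram matrix
    gain = Sigma0 @ SA.T @ np.linalg.inv(Lam)      # d x m
    x_m = x0 + gain @ (S.T @ (b - A @ x0))         # posterior mean
    Sigma_m = Sigma0 - gain @ SA @ Sigma0          # posterior covariance
    return x_m, Sigma_m

# Tiny example: a random SPD system with m = 3 random search directions.
rng = np.random.default_rng(0)
d, m = 6, 3
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)
x_true = rng.standard_normal(d)
b = A @ x_true
S = rng.standard_normal((d, m))

x_m, Sigma_m = bayes_linear_solver(A, b, S)
print(np.linalg.norm(x_m - x_true))   # error after m pieces of information
```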
https://www.physicsforums.com/threads/loop-and-allied-qg-bibliography.7245/page-13
# Loop-and-allied QG bibliography
Gold Member
Dearly Missed
A third new Bojowald
we've been getting a new Bojowald paper every few days. this is the third that has been posted lately, since 14 november actually----so three in just the past week.
http://arxiv.org/abs/gr-qc/0511108
Spherically Symmetric Quantum Geometry: Hamiltonian Constraint
Martin Bojowald, Rafal Swiderski
33 pages
AEI-2005-171, NI05065
"Variables adapted to the quantum dynamics of spherically symmetric models are introduced, which further simplify the spherically symmetric volume operator and allow an explicit computation of all matrix elements of the Euclidean and Lorentzian Hamiltonian constraints. The construction fits completely into the general scheme available in loop quantum gravity for the quantization of the full theory as well as symmetric models. This then presents a further consistency check of the whole scheme in inhomogeneous situations, lending further credence to the physical results obtained so far mainly in homogeneous models. New applications in particular of the spherically symmetric model in the context of black hole physics are discussed."
Ooops, make that FOUR Bojo papers appearing in the past 7 days, here is another, this time in the Astronomy-Astrophysics department:
http://arxiv.org/abs/astro-ph/0511557
Universe scenarios from loop quantum cosmology
Martin Bojowald
16 pages, 8 figures, plenary talk at "Pomeranian Workshop in Fundamental Cosmology", Pobierowo, Sep 2005
AEI-2005-168
"Loop quantum cosmology is an application of recent developments for a non-perturbative and background independent quantization of gravity to a cosmological setting. Characteristic properties of the quantization such as discreteness of spatial geometry entail physical consequences for the structure of classical singularities as well as the evolution of the very early universe. While the singularity issue in general requires one to use difference equations for a wave function of the universe, phenomenological scenarios for the evolution are based on effective equations implementing the main quantum modifications. These equations show generic bounces as well as inflation in diverse models, which have been combined to more complicated scenarios."
Gold Member
Dearly Missed
new Martin Reuter, papers by Garrett, by Torsten and Helge
http://arxiv.org/abs/hep-th/0511260
Asymptotic Safety in Quantum Einstein Gravity: nonperturbative renormalizability and fractal spacetime structure
O. Lauscher, M. Reuter
29 pages, latex, 1 figure, invited paper at the Blaubeuren Workshop 2005 on Mathematical and Physical Aspects of Quantum Gravity
MZ-TH/05-26
"The asymptotic safety scenario of Quantum Einstein Gravity, the quantum field theory of the spacetime metric, is reviewed and it is argued that the theory is likely to be nonperturbatively renormalizable. It is also shown that asymptotic safety implies that spacetime is a fractal in general, with a fractal dimension of 2 on sub-Planckian length scales."
=====================
http://arxiv.org/abs/gr-qc/0511120
Clifford bundle formulation of BF gravity generalized to the standard model
A. Garrett Lisi
24 pages
"The structure and dynamics of the standard model and gravity are described by a Clifford valued connection and its curvature."
congratulations.
===============
http://arxiv.org/abs/gr-qc/0511089
Differential Structures - the Geometrization of Quantum Mechanics
Torsten Asselmeyer-Maluga, Helge Rosé
13 pages, 2 figures
"The usual quantization of a classical space-time field does not touch the non-geometrical character of quantum mechanics. We believe that the deep problems of unification of general relativity and quantum mechanics are rooted in this poor understanding of the geometrical character of quantum mechanics. In Einstein's theory gravitation is expressed by geometry of space-time, and the solutions of the field equation are invariant w.r.t. a certain equivalence class of reference frames. This class can be characterized by the differential structure of space-time. We will show that matter is the transition between reference frames that belong to different differential structures, that the set of transitions of the differential structure is given by a Temperley-Lieb algebra which is extensible to a C*-algebra comprising the field operator algebra of quantum mechanics and that the state space of quantum mechanics is the linear space of the differential structures. Furthermore we are able to explain the appearance of the complex numbers in quantum theory. The strong relation to Loop Quantum Gravity is discussed in conclusion."
Gold Member
Dearly Missed
31 dimensionless physical constants
http://arxiv.org/abs/astro-ph/0511774
Dimensionless constants, cosmology and other dark matters
Max Tegmark (MIT), Anthony Aguirre (UCSC), Martin Rees (Cambridge), Frank Wilczek (MIT)
29 pages, 12 figs
"We identify 31 dimensionless physical constants required by particle physics and cosmology, and emphasize that both microphysical constraints and selection effects might help elucidate their origin. Axion cosmology provides an instructive example, in which these two kinds of arguments must both be taken into account, and work well together. If a Peccei-Quinn phase transition occurred before or during inflation, then the axion dark matter density will vary from place to place with a probability distribution. By calculating the net dark matter halo formation rate as a function of all four relevant cosmological parameters and assessing other constraints, we find that this probability distribution, computed at stable solar systems, is arguably peaked near the observed dark matter density. If cosmologically relevant WIMP dark matter is discovered, then one naturally expects comparable densities of WIMPs and axions, making it important to follow up with precision measurements to determine whether WIMPs account for all of the dark matter or merely part of it."
============
http://arxiv.org/abs/astro-ph/0511780
A Quantitative Occam's Razor
Rafael D. Sorkin (Syracuse University)
16 pages
International Journal of Theoretical Physics, 22:1091-1104 (1983)
"This paper derives an objective Bayesian "prior" based on considerations of entropy/information. By this means, it produces a quantitative measure of goodness of fit (the "H-statistic") that balances higher likelihood against the number of fitting parameters employed. The method is intended for phenomenological applications where the underlying theory is uncertain or unknown.
For example, it can help decide whether the large angle anomalies in the CMB data should be taken seriously.
I am therefore posting it now, even though it was published before the arxiv existed."
================
http://arxiv.org/abs/math.DG/0511710
Higher Gauge Theory
John C. Baez, Urs Schreiber
10 encapsulated Postscript figures
Differential Geometry; Category Theory
"Just as gauge theory describes the parallel transport of point particles using connections on bundles, higher gauge theory describes the parallel transport of 1-dimensional objects (e.g. strings) using 2-connections on 2-bundles. A 2-bundle is a categorified version of a bundle: that is, one where the fiber is not a manifold but a category with a suitable smooth structure. Where gauge theory uses Lie groups and Lie algebras, higher gauge theory uses their categorified analogues: Lie 2-groups and Lie 2-algebras. We describe a theory of 2-connections on principal 2-bundles and explain how this is related to Breen and Messing's theory of connections on nonabelian gerbes. The distinctive feature of our theory is that a 2-connection allows parallel transport along paths and surfaces in a parametrization-independent way. In terms of Breen and Messing's framework, this requires that the "fake curvature" must vanish. In this paper we summarize the main results of our theory without proofs."
Gold Member
Dearly Missed
Abstracts page for the September QG '05 conference
this fall there were TWO major international quantum gravity conferences: Loops '05, which was in October at AEI-Golm outside Berlin, and QG '05, which was held in September on the island of Sardinia.
the Loops '05 program is here
http://loops05.aei.mpg.de/index_files/Programme.html
and the recorded talks (usually with slides as well) are online
here is the homepage
http://loops05.aei.mpg.de/
there were 156 registered participants of which 11 were from US institutions, by my count.
there is no separate page with all the abstracts assembled together,
but by clicking on the speaker's name in the program you can get the title and abstract of the talk.
This conference has been discussed in several PF threads, including one that John Baez started.
=======================
If only for completeness, we should also compare the other conference QG '05.
http://www.phy.olemiss.edu/GR/qg05/ [Broken]
here is a page listing the conference talks with abstracts:
http://www.phy.olemiss.edu/GR/qg05/abstracts.html [Broken]
here is the list of participants---it says there were 101:
http://www.phy.olemiss.edu/cgi-bin/qg05/pr_participants.cgi
At this conference, by my count, 72 people gave talks, of whom 7 were from institutions in the USA. A ten percent showing---roughly comparable to what occurred at the other large Quantum Gravity conference: Loops '05.
Here are some samples of the abstracts, to give a taste:
Daniel Terno (dterno@perimeterinstitute.ca)
Thursday, September 15th, 18:10, Parallel session VI: Black holes
Quantum black holes: entropy and entanglement on the horizon
Abstract: Considering a horizon as a surface beyond which no information is accessible we conclude that the spin network states that are associated with it should be globally SU(2) invariant. We derive the Bekenstein-Hawking entropy and the logarithmic correction with the prefactor 3/2, which is independent from the size of the elementary spin that is used in the calculation. The logarithmic correction turns to be equal to the quantum mutual information (total amount of classical and quantum correlations) between parts of the spin network that describes the horizon. We analyze the relation between the microscopic and the macroscopic surface area, when the elementary patches of the surface are coarse-grained. Joint work with Etera Livine.
Charles Wang (c.wang@abdn.ac.uk)
Monday, September 12th, 18:10, Parallel session II: Quantum gravity
Towards conformal loop quantum gravity
Abstract: In a recent publication [C. H.-T. Wang, Phys. Rev. D 71, 124026 (2005)], the author has presented a new canonical formulation of GR by extending the ADM phase space to that consisting of York's mean extrinsic curvature time, conformal three-metric and their momenta. In addition to the Hamiltonian and diffeomorphism constraints, the resulting theory contains a new first class constraint, called the conformal constraint. The extended algebra of constraints has as subalgebra the Lie algebra for the conformorphisms of the spatial hypersurface. The structure of the new constraints suggests that conformal metric may be used to formulate the unitary functional evolution of quantum gravity with respect to the York time. This talk will outline a further enlarged phase space of GR by incorporating spin gauge as well as conformal symmetries. Remarkably, a new set of gauge variables for canonical GR is found that is shown to be free from a parameter of the Barbero- Immirzi type due to the inherent conformal invariance of the formalism. A discussion is then given of the prospect of constructing a theory of conformal loop quantum gravity to address both the conceptual problem of time and technical problem of functional calculus in quantum gravity.
Ruth Williams (rmw7@damtp.cam.ac.uk)
Monday, September 12th, 12:00, Plenary session
Discrete quantum gravity
Abstract: Discrete approaches to quantum gravity, including Regge calculus, dynamical triangulations and spin foam models, will be reviewed briefly. A fuller account will be given of recent progress in quantum Regge calculus.
James Ryan (jpr25@cam.ac.uk)
Tuesday, September 13th, 18:10, Parallel session III: Quantum gravity
A group field theory for 3d quantum gravity coupled to a scalar field
Abstract: We present a new group field theory model, which incorporates both 3-dimensional gravity and matter coupled to gravity. We show that the Feynman diagram amplitudes of this model are given by Riemannian quantum gravity spin foam amplitudes coupled to a scalar matter field. We briefly discuss the features of this model and its possible generalisations.
Matej Pavsic (matej.pavsic@ijs.si)
Thursday, September 15th, 17:45, Parallel session V: Gauge theories and quantisation
Spin gauge theory of gravity in Clifford space
Abstract: A theory in which a 16-dimensional curved Clifford space (C-space) provides a realization of Kaluza-Klein theory is investigated. No extra dimensions of spacetime are needed: "extra dimensions" are in C-space. We explore the spin gauge theory in C-space and show that the generalized spin connection contains the usual 4-dimensional gravity and Yang-Mills fields of the U(1)xSU(2)xSU(3) gauge group. The representation space for the latter group is provided by 16-component generalized spinors composed of four usual 4-component spinors, defined geometrically as the members of four independent left minimal ideals of Clifford algebra. [my comment: note possible contact with Garrett Lisi work ]
Daniele Oriti (d.oriti@damtp.cam.ac.uk)
Monday, September 12th, 17:20, Parallel session II: Quantum gravity
The group field theory approach to quantum gravity
Abstract: We review the basic ideas of the group field theory approach to non-perturbative quantum gravity, a generalisation of matrix models for 2d gravity, that provides a third quantization of gravity in higher spacetime dimensions. We also discuss several recent developments, including the coupling of matter fields to quantum gravity, the implementation of causality, and the definition of different transition amplitudes for these theories.
Aleksandar Mikovic (amikovic@ulusofona.pt)
Monday, September 12th, 16:55, Parallel session II: Quantum gravity
Quantum gravity as a topological quantum field theory
Abstract: In the discretized approaches to Quantum Gravity, like spin foam models, one needs to perform a sum over the spacetime triangulations, or to define a continious limit, in order to impose the diffeomorphism invariance. If the QG theory was a topological theory, then a single triangulation would suffice. We describe an approach to define quantum gravity theory as a topological quantum field theory by using a BF theory.
Fotini Markopoulou (fmarkopoulou@perimeterinstitute.ca)
Friday, September 16th, 9:15, Plenary session
The low energy problem of background-independent quantum gravity
Abstract: We review the main issue facing background-independent approaches to quantum gravity, the low-energy problem. This is the task of extracting general relativity (and possibly also quantum field theory) from a microscopic Planckian theory. We find that, perhaps not surprisingly, the central issue is dynamics. We then approach this problem from a quantum information theoretic perspective. In any such application, the focus has to be on dynamics. We propose ways to do so.
there were several other interesting titles and abstracts that could have been included in this sample but were dropped because the list was getting too long.
Gold Member
Dearly Missed
marcus said:
this fall there were TWO major international quantum gravity conferences Loops '05, which was in October at AEI-Golm outside Berlin, and QG '05, which was held in September on the island of Sardinia
...
...
...
Daniele Oriti (d.oriti@damtp.cam.ac.uk)
Monday, September 12th, 17:20, Parallel session II: Quantum gravity
The group field theory approach to quantum gravity
Abstract: We review the basic ideas of the group field theory approach to non-perturbative quantum gravity, a generalisation of matrix models for 2d gravity, that provides a third quantization of gravity in higher spacetime dimensions. We also discuss several recent developments, including the coupling of matter fields to quantum gravity, the implementation of causality, and the definition of different transition amplitudes for these theories.
...
One sees from the Sardinia conference that Daniele Oriti was giving the GFT overview---essentially substituting for Laurent Freidel. Today he and Etera Livine posted another GFT paper:
http://arxiv.org/abs/gr-qc/0512002
Coupling of spacetime atoms and spin foam renormalisation from group field theory
Etera R. Livine, Daniele Oriti
18 pages
"We study the issue of coupling among 4-simplices in the context of spin foam models obtained from a group field theory formalism. We construct a generalisation of the Barrett-Crane model in which an additional coupling between the normals to tetrahedra, as defined in different 4-simplices that share them, is present. This is realized through an extension of the usual field over the group manifold to a five argument one. We define a specific model in which this coupling is parametrised by an additional real parameter that allows to tune the degree of locality of the resulting model, interpolating between the usual Barrett-Crane model and a flat BF-type one. Moreover, we define a further extension of the group field theory formalism in which the coupling parameter enters as a new variable of the field, and the action presents derivative terms that lead to modified classical equations of motion. Finally, we discuss the issue of renormalisation of spin foam models, and how the new coupled model can be of help regarding this."
==============================
Dan Christensen has been a co-author with John Baez, computing with spinfoams.
he is at UWO (western ontario), where they have a supercomputer center, and he does both theoretical and computational physics----they developed a fast algorithm for 10j symbols---they can do stuff with spinfoams that is sort of like what Loll does with dynamical triangulations---that is, run them. He also does spinfoam theory. Josh Willis, an Ashtekar Penn State PhD, has gone to postdoc at UWO with Christensen. Dan Cherrington, who gave a paper at Loops '05, is another UWO postdoc.
http://arxiv.org/abs/gr-qc/0512004
Finiteness of Lorentzian 10j symbols and partition functions
J. Daniel Christensen
8 pages
"We give a short and simple proof that the Lorentzian 10j symbol, which forms a key part of the Barrett-Crane model of Lorentzian quantum gravity, is finite. The argument is very general, and applies to other integrals. For example, we show that the Lorentzian and Riemannian causal 10j symbols are finite, despite their singularities. Moreover, we show that integrals that arise in Cherrington's work are finite. Cherrington has shown that this implies that the Lorentzian partition function for a single triangulation is finite, even for degenerate triangulations. Finally, we also show how to use these methods to prove finiteness of integrals based on other graphs and other homogeneous domains."
============================
Here is Charles Wang's paper he referred to in his talk at Sardinia QG '05, and a follow-up by the same author:
http://arxiv.org/abs/gr-qc/0501024
Conformal geometrodynamics: True degrees of freedom in a truly canonical structure
8 pages
Phys.Rev. D71 (2005) 124026
"The standard geometrodynamics is transformed into a theory of conformal geometrodynamics by extending the ADM phase space for canonical general relativity to that consisting of York's mean exterior curvature time, conformal three-metric and their momenta. Accordingly, an additional constraint is introduced, called the conformal constraint. In terms of the new canonical variables, a diffeomorphism constraint is derived from the original momentum constraint. The Hamiltonian constraint then takes a new form. It turns out to be the sum of an expression that previously appeared in the literature and extra terms quadratic in the conformal constraint. The complete set of the conformal, diffeomorphism and Hamiltonian constraints are shown to be of first class through the explicit construction of their Poisson brackets. The extended algebra of constraints has as subalgebras the Dirac algebra for the deformations and Lie algebra for the conformorphism transformations of the spatial hypersurface. This is followed by a discussion of potential implications of the presented theory on the Dirac constraint quantization of general relativity. An argument is made to support the use of the York time in formulating the unitary functional evolution of quantum gravity. Finally, the prospect of future work is briefly outlined."
http://arxiv.org/abs/gr-qc/0507044
Unambiguous spin-gauge formulation of canonical general relativity with conformorphism invariance
4 pages
Phys.Rev. D72 (2005) 087501
"We present a parameter-free gauge formulation of general relativity in terms of a new set of real spin connection variables. The theory is constructed by extending the phase space of the recently formulated conformal geometrodynamics for canonical gravity to accommodate a spin gauge description. This leads to a further enlarged set of first class gravitational constraints consisting of a reduced Hamiltonian constraint and the canonical generators for spin gauge and conformorphism transformations. Owing to the incorporated conformal symmetry, the new theory is shown to be free from an ambiguity of the Barbero-Immirzi type."
here is Charles Wang's homepage---he has a remarkable set of research interests and accomplishments---check this out:
http://www.lancs.ac.uk/depts/physics/staff/chtw.htm
He is now at Aberdeen---the page was from 2004 when he was at Lancaster
Gold Member
Dearly Missed
marcus said:
Here is Charles Wang's paper he referred to in his talk at Sardinia QG '05, and a follow-up by the same author:
...
Today Charles H-T Wang posted another paper:
http://arxiv.org/abs/gr-qc/0512023
Towards conformal loop quantum gravity
Charles H.-T. Wang
6 pages, 1 figure, Talk given at Constrained Dynamics and Quantum Gravity 05, Cala Gonone, Sardinia, Italy, 12-16 September 2005
"A discussion is given of recent developments in canonical gravity that assimilates the conformal analysis of gravitational degrees of freedom. The work is motivated by the problem of time in quantum gravity and is carried out at the metric and the triad levels. At the metric level, it is shown that by extending the Arnowitt-Deser-Misner (ADM) phase space of general relativity (GR), a conformal form of geometrodynamics can be constructed. In addition to the Hamiltonian and diffeomorphism constraints, an extra first class constraint is introduced to generate conformal transformations. This phase space consists of York's mean extrinsic curvature time, conformal three-metric and their momenta. At the triad level, the phase space of GR is further enlarged by incorporating spin-gauge as well as conformal symmetries. This leads to a canonical formulation of GR using a new set of real spin connection variables. The resulting gravitational constraints are first class, consisting of the Hamiltonian constraint and the canonical generators for spin-gauge and conformorphism transformations. The formulation has a remarkable feature of being parameter-free. Indeed, it is shown that a conformal parameter of the Barbero-Immirzi type can be absorbed by the conformal symmetry of the extended phase space. This gives rise to an alternative approach to loop quantum gravity that addresses both the conceptual problem of time and the technical problem of functional calculus in quantum gravity."
this guy is a dark horse. I would appreciate help evaluating this work if anyone has any ideas.
Last edited:
Gold Member
Dearly Missed
http://arxiv.org/abs/hep-th/0512033
Thermal gravity, black holes and cosmological entropy
Stephen D. H. Hsu, Brian M. Murray
5 pages, 2 figures
"Taking seriously the interpretation of black hole entropy as the logarithm of the number of microstates, we argue that thermal gravitons may undergo a phase transition to a kind of black hole condensate. The phase transition proceeds via nucleation of black holes at a rate governed by a saddlepoint configuration whose free energy is of order the inverse temperature in Planck units. Whether the universe remains in a low entropy state as opposed to the high entropy black hole condensate depends sensitively on its thermal history. Our results may clarify an old observation of Penrose regarding the very low entropy state of the universe."
Steve Hsu's blog is Information Processing. It is a really good blog.
He also collaborated with Zee on a fun paper called "A Message in the Sky"
New Witten paper
http://arxiv.org/abs/hep-th/0512039
New Arivero paper
http://arxiv.org/abs/hep-ph/0512065
Gold Member
Dearly Missed
Oriti: intro to the Group Field Theory approach to QG
Oriti presented this at the QG '05 conference
http://arxiv.org/abs/gr-qc/0512048
Quantum gravity as a group field theory: a sketch
Daniele Oriti
8 pages, 9 figures; to appear in the Proceedings of the Fourth Meeting on Constrained Dynamics and Quantum Gravity, Cala Gonone, Italy, September 12-16, 2005
DAMTP-2005-123
"We give a very brief introduction to the group field theory approach to quantum gravity, a generalisation of matrix models for 2-dimensional quantum gravity to higher dimension, that has emerged recently from research in spin foam models."
Gold Member
Dearly Missed
another new paper by Oriti
http://arxiv.org/abs/gr-qc/0512069
Generalised group field theories and quantum gravity transition amplitudes
Daniele Oriti
6 pages, 2 figures
DAMTP-2005-127
"We construct a generalised formalism for group field theories, in which the domain of the field is extended to include additional proper time variables, as well as their conjugate mass variables. This formalism allows for different types of quantum gravity transition amplitudes in perturbative expansion, and we show how both causal spin foam models and the usual a-causal ones can be derived from it, within a sum over triangulations of all topologies. We also highlight the relation of the so-derived causal transition amplitudes with simplicial gravity actions."
Oriti is the editor of a book Cambridge University Press has scheduled to bring out in 2006, and here is one of the chapters (contributed by Gambini and Pullin).
According to Oriti, the title of the new book is:
Towards quantum gravity: different approaches to a new understanding of space and time Cambridge University Press (2006); but Gambini and Pullin mention a trivially different title.
http://arxiv.org/abs/gr-qc/0512065
Consistent discretizations as a road to quantum gravity
Rodolfo Gambini, Jorge Pullin
Comments: 17 Pages, Draft chapter contributed to the book "Approaches to quantum gravity", being prepared by Daniele Oriti for Cambridge University Press
LSU-REL-121105
"We present a brief description of the consistent discretization'' approach to classical and quantum general relativity. We exhibit a classical simple example to illustrate the approach and summarize current classical and quantum applications. We also discuss the implications for the construction of a well defined quantum theory and in particular how to construct a quantum continuum limit."
a new paper by Freidel and Livine appeared today:
http://arxiv.org/abs/hep-th/0512113
Effective 3d Quantum Gravity and Non-Commutative Quantum Field Theory
Laurent Freidel, Etera R. Livine
9 pages, Proceeding of the conference "Quantum Theory and Symmetries 4" 2005 (Varna, Bulgaria)
"We show that the effective dynamics of matter fields coupled to 3d quantum gravity is described after integration over the gravitational degrees of freedom by a braided non-commutative quantum field theory symmetric under a kappa-deformation of the Poincaré group."
a new paper by Jerzy Kowalski-Glikman and others
http://arxiv.org/abs/hep-th/0512107
The Free Particle in Deformed Special Relativity
F. Girelli, T. Konopka, J. Kowalski-Glikman, E.R. Livine
15 pages
"The phase space of a classical particle in DSR contains de Sitter space as the space of momenta. We start from the standard relativistic particle in five dimensions with an extra constraint and reduce it to four dimensional DSR by imposing appropriate gauge fixing. We analyze some physical properties of the resulting theories like the equations of motion, the form of Lorentz transformations and the issue of velocity. We also address the problem of the origin and interpretation of different bases in DSR."
Last edited:
Gold Member
Dearly Missed
http://arxiv.org/abs/gr-qc/0512072
Quantum information in loop quantum gravity
Daniel R. Terno
4 pages. Proceedings of QG'05, Cala Gonone, 2005
"A coarse-graining of spin networks is expressed in terms of partial tracing, thus allowing to use tools of quantum information theory. This is illustrated by the analysis of a simple black hole model, where the logarithmic correction of the Hawking-Bekenstein entropy is shown to be equal to the total amount of correlations on the horizon. Finally other applications of entanglement to quantum gravity are briefly discussed."
Gold Member
Dearly Missed
encyclopedic resource for quantum information and computing
http://arxiv.org/abs/quant-ph/0512125
Quantum information and computation
Jeffrey Bub
103 pages, no figures. Forthcoming as a chapter in Handbook of Philosophy of Physics, edited by John Earman and Jeremy Butterfield (Elsevier/NH)
"This article deals with theoretical developments in the subject of quantum information and quantum computation, and includes an overview of classical information and some relevant quantum mechanics. The discussion covers topics in quantum communication, quantum cryptography, and quantum computation, and concludes by considering whether a perspective in terms of quantum information sheds new light on the conceptual problems of quantum mechanics."
given that Daniel Terno is collaborating on LQG with Etera Livine at Perimeter (where there are a lot of quantum information and computing people as well as QG) we may eventually need reference material in this area. Scott Aaronson, known for his blog among other things, is another quantum information researcher at Waterloo. At first sight this source seems comprehensive and not too hard. Anyone have comments?
of possible interest
http://arxiv.org/abs/hep-th/0512197
Eric D'Hoker, D.H. Phong
http://arxiv.org/abs/hep-th/0512200
Observables in effective gravity
Steven B. Giddings, Donald Marolf, James B. Hartle
http://arxiv.org/abs/hep-th/0512201
Holography and entropy bounds in the plane wave matrix model
Raphael Bousso, Aleksey L. Mints
http://arxiv.org/abs/hep-th/0512210
2D Ising Model with non-local links - a study of non-locality
Yidun Wan
4 pages, 6 figures
"Markopoulou and Smolin have argued that the low energy limit of LQG may suffer from a conflict between locality, as defined by the connectivity of spin networks, and an averaged notion of locality that emerges at low energy from a superposition of spin network states. This raises the issue of how much non-locality, relative to the coarse grained metric, can be tolerated in the spin network graphs that contribute to the ground state. To address this question we have been studying statistical mechanical systems on lattices decorated randomly with non-local links. These turn out to be related to a class of recently studied systems called small world networks. We show, in the case of the 2D Ising model, that one major effect of non-local links is to raise the Curie temperature. We report also on measurements of the spin-spin correlation functions in this model and show, for the first time, the impact of not only the amount of non-local links but also of their configuration on correlation functions."
Yidun Wan's "Perimeter name" is Eaton Wan. He gave a talk at Loops '05. Smolin in his talk cited Eaton's results
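To get a feel for the kind of numerical experiment the abstract describes, here is a minimal sketch in Python (my own, not the authors' code): a 2d Ising model with a handful of randomly added non-local bonds, updated with Metropolis dynamics. At a temperature just above the ordinary critical point T_c ~ 2.27 (in units J = k_B = 1), the decorated lattice should stay noticeably more ordered than the plain one, which is the "raised Curie temperature" effect in miniature. The lattice size, temperature, link count and sweep count below are arbitrary illustrative choices, not values from the paper.

import math, random

def build_neighbours(L, n_extra):
    """Neighbour lists for an LxL periodic square lattice, plus n_extra
    randomly chosen non-local bonds (small-world style decoration)."""
    N = L * L
    nbrs = [[] for _ in range(N)]
    def add(i, j):
        if i != j:
            nbrs[i].append(j)
            nbrs[j].append(i)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            add(i, ((x + 1) % L) * L + y)   # bond to the right
            add(i, x * L + (y + 1) % L)     # bond upward
    for _ in range(n_extra):                # random long-range bonds
        add(random.randrange(N), random.randrange(N))
    return nbrs

def magnetisation(L, T, n_extra, sweeps=800):
    """Metropolis simulation; returns |m| per spin after 'sweeps' lattice sweeps."""
    nbrs = build_neighbours(L, n_extra)
    N = L * L
    spins = [1] * N                         # start fully ordered
    for _ in range(sweeps):
        for _ in range(N):
            i = random.randrange(N)
            dE = 2.0 * spins[i] * sum(spins[j] for j in nbrs[i])
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spins[i] = -spins[i]
    return abs(sum(spins)) / N

for extra in (0, 200):
    print(extra, "non-local links:  |m| =", round(magnetisation(32, 2.4, extra), 3))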
====================
http://arxiv.org/abs/gr-qc/0512103
Quantum Gravity as a quantum field theory of simplicial geometry
Daniele Oriti
23 pages, 13 figures; to be published in 'Mathematical and Physical Aspects of Quantum Gravity', B. Fauser, J. Tolksdorf and E. Zeidler eds, Birkhaeuser, Basel (2006)
"This is an introduction to the group field theory approach to quantum gravity, with emphasis on motivations and basic formalism, more than on recent results; we elaborate on the various ingredients, both conceptual and formal, of the approach, giving some examples, and we discuss some perspectives of future developments."
=======================
http://arxiv.org/abs/gr-qc/0512102
Towards the graviton from spinfoams: the 3d toy model
Simone Speziale
7 pages, 2 figures
"Recently, a proposal has appeared for the extraction of the 2-point function of linearised quantum gravity, within the spinfoam formalism. This relies on the use of a boundary state, which introduces a semi-classical flat geometry on the boundary. In this paper, we investigate this proposal considering a toy model in the (Riemannian) 3d case, where the semi-classical limit is better understood. We show that in this limit the propagation kernel of the model is the one for the harmonic oscillator. This is at the origin of the expected 1/L behaviour of the 2-point function. Furthermore, we numerically study the short scales regime, where deviations from this behaviour occur."
Last edited:
Gold Member
Dearly Missed
I don't know anything of either author. Navarro is at Cambridge. I will flag this and watch for future papers.
http://arxiv.org/abs/gr-qc/0512109
Modified gravity, Dark Energy and MOND
Ignacio Navarro, Karel Van Acoleyen
24 pages, 2 figures
DAMTP-2005-129, DCPT/05/154, IPPP/05/77
"We propose a class of actions for the spacetime metric that introduce corrections to the Einstein-Hilbert Lagrangian depending on the logarithm of some curvature scalars. We show that for some choices of these invariants the models are ghost free and modify Newtonian gravity below a characteristic acceleration scale given by a_0 = c\mu, where c is the speed of light and \mu is a parameter of the model that also determines the late-time Hubble constant: H_0 \sim \mu. In these models, besides the massless spin two graviton, there is a scalar excitation of the spacetime metric whose mass depends on the background curvature. This dependence is such that this scalar, although almost massless in vacuum, becomes massive and effectively decouples when one gets close to any source and we recover an acceptable weak field limit at short distances. There is also a (classical) 'running' of Newton's constant with the distance to the sources and gravity is easily enhanced at large distances by a large ratio. We comment on the possibility of building a model with a MOND-like Newtonian limit that could explain the rotation curves of galaxies without introducing Dark Matter using this kind of actions. We also explore briefly the characteristic gravitational phenomenology that these models imply: besides a long distance modification of gravity they also predict deviations from Newton's law at short distances. This short distance scale depends on the local background curvature of spacetime, and we find that for experiments on the Earth surface it is of order \sim 0.1mm, while this distance would be bigger in space where the local curvature is significantly lower."
They cite REUTER work (renormalizable QG) as their reference [33] in this passage on page 21
"...there is a second effect in these theories: the Planck mass that controls the coupling strength of the massless graviton also undergoes a rescaling or 'running' with the distance to the sources (or the background curvature). This phenomenon, although a purely classical one in our theory, is reminiscent of the quantum renormalisation group running of couplings. So one might wonder if actions of the type (15) could be an effective classical description of strong renormalisation effects in the infrared that might appear in GR (see e.g. [33] and references therein), as happens in QCD. In fact, corrections depending on the logarithm of the renormalisation scale are ubiquitous in quantum field theory,.."
=======================
this Utrecht master's thesis was flagged by John Baez in TWF #224
I like the way it is written---by a person who gets a kick out of writing clearly and finding the simple way to understand something complex. Baez says he's looking forward to this guy's PhD thesis.
http://arxiv.org/abs/math.QA/0512103
Categorical Aspects of Topological Quantum Field Theories
Bruce H. Bartlett
M.Sc Thesis, Utrecht University, 2005. 111 pages, numerous pictures. Supervisors : Dr. S. Vandoren, Prof. I. Moerdijk
"This thesis provides an introduction to the various category theory ideas employed in topological quantum field theory. These theories are viewed as symmetric monoidal functors from topological cobordism categories into the category of vector spaces. In two dimensions, they are classified by Frobenius algebras. In three dimensions, and under certain conditions, they are classified by modular categories. These are special kinds of categories in which topological notions such as braidings and twists play a prominent role. There is a powerful graphical calculus available for working in such categories, which may be regarded as a generalization of the Feynman diagrams method familiar in physics. This method is introduced and the necessary algebraic structure is graphically motivated step by step.
A large subclass of two-dimensional topological field theories can be obtained from a lattice gauge theory construction using triangulations. In these theories, the gauge group is finite. This construction is reviewed, from both the original algebraic perspective as well as using the graphical calculus developed in the earlier chapters.
This finite gauge group toy model can be defined in all dimensions, and has a claim to being the simplest non-trivial quantum field theory. We take the opportunity to show explicitly the calculation of the modular category arising from this model in three dimensions, and compare this algebraic data with the corresponding data in two dimensions, computed both geometrically and from triangulations. We use this as an example to introduce the idea of a quantum field theory as producing a tower of algebraic structures, each dimension related to the previous by the process of categorification."
======================
In the current conversation at Woit's blog concerning Cosmological Natural Selection (CNS), Smolin cited this paper as a marginal aside in response to someone's question:
http://arxiv.org/gr-qc/0510052 [Broken]
Geometry from quantum particles
David W. Kribs, Fotini Markopoulou
17 pages
"We investigate the possibility that a background independent quantum theory of gravity is not a theory of quantum geometry. We provide a way for global spacetime symmetries to emerge from a background independent theory without geometry. In this, we use a quantum information theoretic formulation of quantum gravity and the method of noiseless subsystems in quantum error correction. This is also a method that can extract particles from a quantum geometric theory such as a spin foam model."
the CNS discussion is transcribed here:
Last edited by a moderator:
Gold Member
Dearly Missed
Freidel and Majid
http://arxiv.org/abs/hep-th/0601004
Noncommutative Harmonic Analysis, Sampling Theory and the Duflo Map in 2+1 Quantum Gravity
L. Freidel, S. Majid
"54 pages, 2 figs
We show that the $\star$-product for $U(su_2)$ arising in \cite{EL} in an effective theory for the Ponzano-Regge quantum gravity model is compatible with the noncommutative bicovariant differential calculus previously proposed for 2+1 Euclidean quantum gravity using quantum group methods in \cite{BatMa}. We show that the effective action for this model essentially agrees with the noncommutative scalar field theory coming out of the noncommutative differential geometry. We show that the required Fourier transform essentially agrees with the previous quantum group Fourier transform. In combining these methods we develop practical tools for noncommutative harmonic analysis for the model including radial quantum delta-functions and Gaussians, the Duflo map and elements of noncommutative sampling theory' applicable to the bounded $SU_2,SO_3$ momentum groups. This allows us to understand the bandwidth limitation in 2+1 quantum gravity arising from the bounded momentum. We also argue that the the anomalous extra time' dimension seen in the noncommutative differential geometry should be viewed as the renormalisation group flow visible in the coarse graining in going from $SU_2$ to $SO_3$. Our methods also provide a generalised twist operator for the $\star$-product."
http://arxiv.org/abs/hep-th/0601001
The String Landscape, Black Holes and Gravity as the Weakest Force
Nima Arkani-Hamed, Lubos Motl, Alberto Nicolis, Cumrun Vafa
20 pages, 5 figures
http://arxiv.org/abs/math-ph/0601005
Construction of Generalized Connections
Christian Fleischhack
12 pages
"We present a construction method for mappings between generalized connections, comprising, e.g., the action of gauge transformations, diffeomorphisms and Weyl transformations. Moreover, criteria for continuity and measure preservation are stated."
Last edited:
Gold Member
Dearly Missed
back in early December (post #429) I flagged this
marcus said:
...a new paper by Jerzy Kowalski-Glikman and others
http://arxiv.org/abs/hep-th/0512107
The Free Particle in Deformed Special Relativity
F. Girelli, T. Konopka, J. Kowalski-Glikman, E.R. Livine
15 pages
"The phase space of a classical particle in DSR contains de Sitter space as the space of momenta. We start from the standard relativistic particle in five dimensions with an extra constraint and reduce it to four dimensional DSR by imposing appropriate gauge fixing. We analyze some physical properties of the resulting theories like the equations of motion, the form of Lorentz transformations and the issue of velocity. We also address the problem of the origin and interpretation of different bases in DSR."
now I see that this gives a helpful perspective on the work of Freidel by people who are not Freidel. It is an outside perspective that can begin to sum up how they see his line of research going (and theirs in relation to it.)
---quote from conclusions---
In this paper, we have studied a classical particle in five space-time dimensions subject to two constraints defining two energy scales m and kappa. We have shown that, after gauge fixing, the 5d model can give rise to various DSR models in 4d. The reduction from 5d to 4d selects a set of phase space coordinates (x, p) via the requirement that they should commute with both the kappa-shell constraint H_5d and the gauge fixing function C.
...
...
In three space-time dimensions, the link between DSR and gravity has been clarified in [7]. Indeed, in 3d quantum gravity, particles are identified as conical singularities and their momentum is defined through non-local measurements as (a function of) the holonomy around the particle. This explicit characterization allows to rigorously derive DSR from 3d quantum gravity and unambiguously compute the Feynman diagrams for the resulting non-commutative quantum field theory [7].
There is also a proposal attempting to move the similarity between DSR and GR to the level of an explicit relationship in four dimensions [27]. In that proposal, the choice of coordinates p_mu (and x_mu) correspond to the definition of the measured momenta (and positions) in terms of the tetrad field e^I_mu. The issue then becomes: what are we exactly measuring physically when we talk about the energy-momentum p_mu? The answer to this question will determine the “correct” choice of physical coordinates to use in DSR. Regardless, we expect the physical predictions of DSR to be independent of any gauge fixing choice and propose that the “correctness” of a particular choice of coordinates should be measured by how convenient these coordinates are to express the measurements of a particular observer. For instance, one could try to properly define length measurements using clocks and time-of-flight experiments to define the metric operationally.
At the end of the day, we cannot make concrete predictions using DSR as long as we do not find gauge invariant quantities (commuting with the two constraints of the 5d action) and their physical interpretation, or equivalently an explicit link between the choices of gauge fixing and measurement. This avenue of research seems to be a natural one from the 5d perspective. It is also our view that the 5d perspective should be used when looking at two-particle systems and studying their properties. Other related topics to be investigated are free spinning particles.
Finally, an important unresolved issue regards the physical interpretation of the fifth dimension. Written as a 5d theory, DSR appears as a large extra dimension theory. We have proposed to see the coordinates in the fifth dimension as some effective degree of freedom coming from quantum gravity. The reformulation of GR as a SO(4, 1) BF gauge field theory proposed in [21] may prove to be a guide in this direction. It is also very tempting to interpret P_4 as the energy scale in a renormalisation scheme, as some kind of dynamical cut-off. X_4 would then be the generator of scale transformations. Such a speculation is supported by the fact that X_4 is (more or less) the 4d dilatation operator in the Snyder basis, but this is truly little evidence. One could look at the renormalisation equation of a scalar field and try to interpret them as equations of motion in the DSR framework. The potential link between DSR and quantum gravity and the fact that the renormalisation flow of general relativity can be associated to a fifth dimension (with an AdS metric) [29] also points toward such an interpretation.
---endquote---
What is emerging is an interconnected treatment of spacetime dynamics, matter, and DSR---in this Kowalski-Glikman paper they are dealing with the flat DSR limit, but in the context of the Freidel papers on spacetime dynamics, especially his seemingly successful treatment of the 3D case. This paper is evidently part of a combined initiative by several people to proceed to the 4D case.
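For orientation, the statement that "the space of momenta is de Sitter space" means (schematically, my own summary rather than anything quoted from the paper) that the physical four-momenta live on a curved hypersurface inside a flat 5d momentum space, with the curvature scale kappa playing the role of the deformation (Planck) scale:

-P_0^2 + P_1^2 + P_2^2 + P_3^2 + P_4^2 = \kappa^2 .

The different "bases" of DSR discussed in the abstract then correspond to different coordinate choices on this hypersurface, which is why the gauge-fixing question above matters.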
BTW we should watch for possible observational tests of QG. GLAST has been discussed in this context and is scheduled for orbit next year. Also Auger (OH-ZHAY)
which is now beginning to report
http://arxiv.org/abs/astro-ph/0601035
The First Scientific Results from the Pierre Auger Observatory
T. Yamamoto (for The Pierre Auger Observatory Collaboration)
4 pages, 1 figure, Proceedings of the PANIC 2005 conference
"The southern site of the Pierre Auger Observatory is under the construction near Malargue in Argentina and now more than 60% of the detectors are completed. The observatory has been collecting data for over 1 year and the cumulative exposure is already similar to that of the largest forerunner experiments. The hybrid technique provides model-independent energy measurements from the Fluorescence Detector to calibrate the Surface Detector. Based on this technique, the first estimation of the energy spectrum above 3 EeV has been presented and is discussed in this paper."
Smolin has a paper "Falsifiable..." describing how Auger and GLAST may be able to distinguish between certain approaches to QG, and test some assumptions.
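For a sense of scale (my own arithmetic, not from the Auger paper): the 3 EeV threshold quoted above is 3e18 eV, which works out to about half a joule carried by a single cosmic-ray particle.

eV = 1.602e-19                            # joules per electronvolt
print(3e18 * eV, "J per 3 EeV primary")   # ~0.48 J, a macroscopic energy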
Last edited:
Gold Member
Dearly Missed
Carlip is an important figure. In case of interest:
http://arxiv.org/gr-qc/0601041 [Broken]
Horizons, Constraints, and Black Hole Entropy
S. Carlip
16 pages, talk given at the "Peyresq Physics 10 Meeting on Micro and Macro structures of spacetime"
"Black hole entropy appears to be "universal''--many independent calculations, involving models with very different microscopic degrees of freedom, all yield the same density of states. I discuss the proposal that this universality comes from the behavior of the underlying symmetries of the classical theory. To impose the condition that a black hole be present, we must partially break the classical symmetries of general relativity, and the resulting Goldstone boson-like degrees of freedom may account for the Bekenstein-Hawking entropy. In particular, I demonstrate that the imposition of a "stretched horizon'' constraint modifies the algebra of symmetries at the horizon, allowing the use of standard conformal field theory techniques to determine the asymptotic density of states. The results reproduce the Bekenstein-Hawking entropy without any need for detailed assumptions about the microscopic theory."
Mattingly takes off from Carlip's result
http://arxiv.org/gr-qc/0601044 [Broken]
On horizon constraints and Hawking radiation
David Mattingly
"Questions about black holes in quantum gravity generally presuppose the presence of a horizon. Recently Carlip has shown that enforcing an initial data surface to be a horizon leads to the correct form for the Bekenstein-Hawking entropy of the black hole. Requiring a horizon also constitutes fixed background geometry, which generically leads to non-conservation of the matter stress tensor at the horizon. In this work, I show that the generated matter energy flux for a Schwarzschild black hole is in agreement with the first law of black hole thermodynamics, 8 pi G Delta Q = kappa Delta A."
In case anyone is wondering whether gravitons can be detected (Freeman Dyson said not)
http://arxiv.org/gr-qc/0601043 [Broken]
Can Gravitons Be Detected?
Tony Rothman, Stephen Boughn
21 pages, no figures. To be submitted to AJP
"Freeman Dyson has questioned whether any conceivable experiment in the real universe can detect a single graviton. If not, is it meaningful to talk about gravitons as physical entities? We attempt to answer Dyson's question and find it is possible concoct an idealized thought experiment capable of detecting one graviton; however, when anything remotely resembling realistic physics is taken into account, detection becomes impossible, indicating that Dyson's conjecture is very likely true. We also point out several mistakes in the literature dealing with graviton detection and production."
----------------------as an afterthought-------------------
Two others looked as if they might be interesting as well:
http://arxiv.org/astro-ph/0601219 [Broken]
Constraining Lorentz violations with Gamma Ray Bursts
Maria Rodriguez Martinez, Tsvi Piran
16 pages, 4 figures
"Gamma ray bursts are excellent candidates to constrain physical models which break Lorentz symmetry. We consider deformed dispersion relations which break the boost invariance and lead to an energy-dependent speed of light. In these models, simultaneously emitted photons from cosmological sources reach Earth with a spectral time delay that depends on the symmetry breaking scale. We estimate the possible bounds which can be obtained by comparing the spectral time delays with the time resolution of available telescopes. We discuss the best strategy to reach the strongest bounds. We compute the probability of detecting bursts that improve the current bounds. The results are encouraging. Depending on the model, it is possible to build a detector that within several years will improve the present limits of 0.015 m_pl."
http://arxiv.org/astro-ph/0601247 [Broken]
Alternative proposal to modified Newton dynamics (MOND)
4 pages. Accepted for publication in PRD
"From a study of conserved quantities of the so-called Modified Newtonian Dynamics (MOND) we propose an alternative to this theory. We show that this proposal is consistent with the Tully-Fisher law, has conserved quantities whose Newtonian limit are the energy and angular momentum, and can be useful to explain cosmic acceleration. The dynamics obtained suggests that, when acceleration is very small, time depends on acceleration. This result is analogous to that of special relativity where time depends on velocity."
note that this paper has been accepted for publication in Physical Review D.
Last edited by a moderator:
Gold Member
Dearly Missed
a string theorist assesses Loop and Spinfoam Gravity (for nonspecialists)
http://arxiv.org/abs/hep-th/0601129
Loop and spin foam quantum gravity: a brief guide for beginners
Hermann Nicolai, Kasper Peeters
18 pages, 11 figures; Contributed article to "An assessment of current paradigms in theoretical physics"
Report-no: AEI-2006-004
"We review aspects of loop quantum gravity and spin foam models at an introductory level, with special attention to questions frequently asked by non-specialists."
Nicolai earlier co-authored an "outsider's view" of LQG but did not discuss recent work (e.g. in the past 5 years) and omitted spinfoam.
So the view had some problems---Lee Smolin replied to Nicolai politely and Peter Woit published the letter.
I don't know how this one is going to play out. Basically it is great of Nicolai, as a string theorist, to take an interest in alternatives like Loop and Spinfoam. It is potentially really constructive.
====================
the next is by two authors who are not familiar to me:
http://arxiv.org/abs/hep-th/0601127
Intersecting Connes Noncommutative Geometry with Quantum Gravity
Johannes Aastrup, Jesper M. Grimstrup
19 pages, 4 figures
NORDITA-2006-1
An intersection of Noncommutative Geometry and Loop Quantum Gravity is proposed. Alain Connes' Noncommutative Geometry provides a framework in which the Standard Model of particle physics coupled to general relativity is formulated as a unified, gravitational theory. However, to this day no quantization procedure compatible with this framework is known. In this paper we consider the noncommutative algebra of holonomy loops on a functional space of certain spin-connections. The construction of a spectral triple is outlined and ideas on interpretation and classical limit are presented.
Last edited:
Gold Member
Dearly Missed
http://arxiv.org/abs/gr-qc/0601085
Loop Quantum Cosmology
Martin Bojowald
104 pages, 10 figures; online version, containing 6 movies, available at "Living Reviews":
http://relativity.livingreviews.org/Articles/lrr-2005-11/ [Broken]
AEI-2005-185, IGPG-06/1-6
Journal-ref: Living Rev. Relativity 8 (2005) 11
"Quantum gravity is expected to be necessary in order to understand situations where classical general relativity breaks down. In particular in cosmology one has to deal with initial singularities, i.e. the fact that the backward evolution of a classical space-time inevitably comes to an end after a finite amount of proper time. This presents a breakdown of the classical picture and requires an extended theory for a meaningful description. Since small length scales and high curvatures are involved, quantum effects must play a role. Not only the singularity itself but also the surrounding space-time is then modified. One particular realization is loop quantum cosmology, an application of loop quantum gravity to homogeneous systems, which removes classical singularities. Its implications can be studied at different levels. Main effects are introduced into effective classical equations which allow to avoid interpretational problems of quantum theory. They give rise to new kinds of early universe phenomenology with applications to inflation and cyclic models. To resolve classical singularities and to understand the structure of geometry around them, the quantum description is necessary. Classical evolution is then replaced by a difference equation for a wave function which allows to extend space-time beyond classical singularities. One main question is how these homogeneous scenarios are related to full loop quantum gravity, which can be dealt with at the level of distributional symmetric states. Finally, the new structure of space-time arising in loop quantum gravity and its application to cosmology sheds new light on more general issues such as time."
To get the movies, go to the Living Reviews version
http://relativity.livingreviews.org/Articles/lrr-2005-11/ [Broken]
and scroll down the sidebar menu all the way to the bottom where it says "figures"
========================
Also in today's arxiv postings:
http://arxiv.org/abs/gr-qc/0601082
Quantum Hamiltonian for gravitational collapse
Viqar Husain, Oliver Winkler
17 pages
"Using a Hamiltonian formulation of the spherically symmetric gravity-scalar field theory adapted to flat spatial slicing, we give a construction of the reduced Hamiltonian operator. This Hamiltonian, together with the null expansion operators presented in an earlier work, form a framework for studying gravitational collapse in quantum gravity. We describe a setting for its numerical implementation, and discuss some conceptual issues associated with quantum dynamics in a partial gauge fixing."
============================
Lee Smolin thinks that if MOND is real then it may have an explanation in quantum gravity. We should keep an eye on MOND research, just in case.
Here is an overview for beginners. Good place to start if you want to learn something about MOND.
http://arxiv.org/abs/astro-ph/0601478
Modified Newtonian Dynamics, an Introductory Review
Riccardo Scarpa
"By the time, in 1937, the Swiss astronomer Zwicky measured the velocity dispersion of the Coma cluster of galaxies, astronomers somehow got acquainted with the idea that the universe is filled by some kind of dark matter. After almost a century of investigations, we have learned two things about dark matter, (i) it has to be non-baryonic -- that is, made of something new that interact with normal matter only by gravitation-- and, (ii) that its effects are observed in stellar systems when and only when their internal acceleration of gravity falls below a fix value a0=1.2x10-8 cm s-2. This systematic, more than anything else, tells us we might be facing a failure of the law of gravity in the weak field limit rather then the effects of dark matter. Thus, in an attempt to avoid the need for dark matter, the Modified Newtonian Dynamics. MOND posits a breakdown of Newton's law of gravity (or inertia) below a0, after which the dependence with distance became linear. Despite many attempts, MOND resisted stubbornly to be falsified as an alternative to dark matter and succeeds in explaining the properties of an impressively large number of objects without invoking the presence of non-baryonic dark matter. In this paper, I will review the basics of MOND and its ability to explain observations without the need of dark matter."
=====================
Of possible interest to the category-minded:
http://arxiv.org/abs/math.QA/0601458
Categorified Algebra and Quantum Mechanics
Jeffrey Morton (University of California, Riverside)
67 pages, 25 figures
Jeffrey Morton has studied Quantum Gravity with John Baez. Here is what he says in the acknowledgments section
"This work grew out of the regular Quantum Gravity seminar taught by John Baez at UCR, notes for which are available online as [2]. I would like to acknowledge his work on this subject (some published as [1]), excellent teaching, and helpful advice and discussions in preparing this paper. Other students in the seminar, especially Toby Bartels, Miguel Carrion-Alvarez, Alissa Crans, and Derek Wise also provided many useful discussions."
We know Miguel and Derek as Baez QG students. Miguel recently finished his thesis and Derek gave a paper at Loops '05. So this comes out of that group, although it is not specifically about gravity.
Last edited by a moderator:
Gold Member
Dearly Missed
Perez on Spin Foam, a chapter for Oriti's book
http://arxiv.org/abs/gr-qc/0601095
The spin-foam-representation of loop quantum gravity
Alejandro Perez
Draft chapter contributed to the book "Towards quantum gravity", being prepared by Daniele Oriti for Cambridge University Press. 19 pages
"The problem of background independent quantum gravity is the problem of defining a quantum field theory of matter and gravity in the absence of an underlying background geometry. Loop quantum gravity (LQG) is a promising proposal for addressing this difficult task. Despite the steady progress of the field, dynamics remains to a large extend an open issue in LQG. Here we present the main ideas behind a series of proposals for addressing the issue of dynamics. We refer to these constructions as the spin foam representation of LQG. This set of ideas can be viewed as a systematic attempt at the construction of the path integral representation of LQG.
The spin foam representation is mathematically precise in 2+1 dimensions, so we will start this chapter by showing how it arises in the canonical quantization of this simple theory. This toy model will be used to precisely describe the true geometric meaning of the histories that are summed over in the path integral of generally covariant theories.
In four dimensions similar structures appear. We call these constructions spin foam models as their definition is incomplete in the sense that at least one of the following issues remains unclear: 1) the connection to a canonical formulation, and 2) regularization independence (renormalizability). In the second part of this chapter we will describe the definition of these models emphasizing the importance of these open issues. We also discuss the non standard picture of quantum spacetime that follows from background independence."
Last edited:
Gold Member
Dearly Missed
several MOND articles recently; in post #437 we saw this
http://arxiv.org/abs/astro-ph/0601478
Modified Newtonian Dynamics, an Introductory Review
Riccardo Scarpa
"...This systematic, more than anything else, tells us we might be facing a failure of the law of gravity in the weak field limit rather then the effects of dark matter... In this paper, I will review the basics of MOND and its ability to explain observations without the need of dark matter."
now this has appeared
http://arxiv.org/abs/hep-th/0601213
Introduction to Modified Gravity and Gravitational Alternative for Dark Energy
S. Nojiri, S.D. Odintsov
21 pages, lectures for 42nd Karpacz Winter School on Theoretical Physics
"We review various modified gravities considered as gravitational alternative for dark energy. Specifically, we consider the versions of f(R), f(G) or f(R,G) gravity, model with non-linear gravitational coupling or string-inspired model with Gauss-Bonnet-dilaton coupling in the late universe where they lead to cosmic speed-up. It is shown that some of such theories may pass the Solar System tests. On the same time, it is demonstrated that they have quite rich cosmological structure: they may naturally describe the effective (cosmological constant, quintessence or phantom) late-time era with a possible transition from decceleration to acceleration thanks to gravitational terms which increase with scalar curvature decrease. The possibility to explain the coincidence problem as the manifestation of the universe expansion in such models is mentioned. The late (phantom or quintessence) universe filled with dark fluid with inhomogeneous equation of state (where inhomogeneous terms are originated from the modified gravity) is also described."
this paper was prepared for this year's Polish Winterschool. It happens every year in February. Two years ago we got a bunch of interesting papers from the 2004 Winterschool---it was about QG phenomenology, DSR, possible observable effects. Carlo Rovelli was one of the organizers and Lee Smolin and Jerzy K-G gave weeklong lecture courses. Smolin and others discussed MOND.
Now it seems that the 2006 Winterschool is again touching on some of the same topics! The school is held at a ski resort on the Poland-Czech border, in south Poland. It has been about all kinds of theoretical physics, not just QG or DSR or MOND, but now for two out of the past three years it will be about these things. We should watch for more papers coming out on arxiv from this year's school.
http://www.ift.uni.wroc.pl/karp42/#Prog [Broken]
Bojowald is lecturing about Loop Cosmology at the Winterschool this year
mentioned briefly:
http://arxiv.org/abs/physics/0601218
A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle
Mario Rabinowitz
"Easy to follow original proof of the incompatibility of General Relativity and Quantum Mechanics"
Also a recent mathematics PhD thesis at Göttingen:
http://arxiv.org/abs/math.MG/0601744
Coarse geometry and asymptotic dimension
Bernd Grave
Dissertation
Subj-class: Metric Geometry
"We consider asymptotic dimension of coarse spaces. We analyse coarse structures induced by metrisable compactifications. We calculate asymptotic dimension of coarse cell complexes. We calculate the asymptotic dimension of certain negatively curved spaces, e.g. for complete, simply connected manifolds with bounded, strictly negative sectional curvature."
Bernd's thesis advisor was Tom Schick. This is highly abstract math, with no obvious connection to QG or other physics. My personal opinion is that it might be interesting to develop a connection. Renate Loll and Hanno Sahlmann gave seminar talks at Göttingen around November-December last year. The physics department there seems to have an interest in QG.
http://arxiv.org/abs/gr-qc/0601121
The causal set approach to quantum gravity
Joe Henson
22 pages, 4 figures.
"Extended version of a review to be published in "Approaches to Quantum Gravity - Towards a new understanding of space and time" (ed. D. Oriti), Cambridge University Press, 2006... Dedicated to Rafael Sorkin on the occasion of his 60th birthday"
Renate Loll impresses me as a team player, acting for the good of the QG field as a whole. She has taken Joe Henson on as a postdoc but he seems not to be doing Loll-type CDT research. He seems to be going great guns on Causal Sets---with several collaborations with Fay Dowker in the works and at least one with Rafael Sorkin.
I suppose this is what Lee Smolin was asking for (independence for worthy postdocs, don't tie support to one particular research program) and it seems an idealistic attempt to treat QG as a single field where the principal investigators share the job of hosting the postdocs, instead of dividing up into separate competing factions---jealously guarded bailiwicks of funding.
Well, I don't know how it will work in practice. I am a little disappointed; I thought Joe Henson going to Utrecht as a Loll postdoc would mean he crosses over into CDT research. Is there a common ground?
Christine flagged the Joe Henson paper on her blog yesterday and got some discussion:
http://christinedantas.blogspot.com/2006/01/causal-set-approach-to-quantum-gravity.html [Broken]
Last edited by a moderator:
Staff Emeritus
Gold Member
Dearly Missed
Just a note on Thiemann. I'm reading his latest master constraint paper, gr-qc/0510011, where he brings it all home. Unlike the previous papers in the series, this one is based on triangulations, not networks. Influence of CDT?
Later Edit. Reading farther I find he does make some use of spin networks. See my post # 442. However his Master Constraint is defined, as advertised, in terms of triangulations.
Last edited:
Gold Member
Dearly Missed
Just a note on Thiemann. I'm reading his latest master constraint paper, gr-qc/0510011, where he brings it all home. ...
I am imagining the TOC of Oriti's book ("towards a new understanding of space and time") with all these guys lined up to evaluate and compare.
Perez---Spinfoams
Henson---Causal Sets
Thiemann---Masterconstraint
Gambini---"Gambinistics"
Loll---Dynamic Triangulation
Bojowald---Loop Cosmology
Freidel---Finessing matter feynman diagrams from foam spacetime
...
...
so far we only actually know that Perez and Henson have contributed chapters; the others are guesses with varying degrees of seriousness.
But Thiemann certainly should be there!
EDIT: based on next post by selfAdjoint I deleted a non-essential mention of triangulations
Last edited:
Staff Emeritus
Gold Member
Dearly Missed
Marcus I have to stand corrected. He does use spin networks, or rather diffeomorphic equivalence classes of them, in defining his new inner product. When I finally get my head around it, I'll start a thread describing it; it has some very important consequences, and as you know, was cited along with CDT at the summer meetings as an important step forward in quantum gravity.
Gold Member
Dearly Missed
Oriti doing spacetime and matter in 3D (similar to Freidel)
http://arxiv.org/abs/gr-qc/0602010
Group field theory formulation of 3d quantum gravity coupled to matter fields
Daniele Oriti, James Ryan
28 pages, 21 figures
"We present a new group field theory describing 3d Riemannian quantum gravity coupled to matter fields for any choice of spin and mass. The perturbative expansion of the partition function produces fat graphs colored with SU(2) algebraic data, from which one can reconstruct at once a 3-dimensional simplicial complex representing spacetime and its geometry, like in the Ponzano-Regge formulation of pure 3d quantum gravity, and the Feynman graphs for the matter fields. The model then assigns quantum amplitudes to these fat graphs given by spin foam models for gravity coupled to interacting massive spinning point particles, whose properties we discuss."
Gold Member
Dearly Missed
Christine Dantas's blog is turning out to be a really valuable resource.
http://christinedantas.blogspot.com/
Her sidebar has some good references. Not just the Smolin Lectures on Intro to LQG, but also links to a READING LIST to go along with the Smolin Lectures.
For instance Smolin often recommends Dirac's thin book "Lectures on Quantum Mechanics" but that requires a trip to the library or bookstore. So Christine gives an online substitute:
http://www.tech.port.ac.uk/staffweb/seahras/documents/reviews/quantization.pdf [Broken]
This is by Sanjeev Seahra
"The Classical and Quantum Mechanics of Systems with Constraints"
Christine writes onboard satellite computer code for the Brazilian government. She is running what is, it seems, the world's only QG blog. She also has substantial other demands on her time. ...
[EDIT: correction, selfAdjoint points out another QG blog I didnt know about]
Last edited by a moderator:
Staff Emeritus
Gold Member
Dearly Missed
Marcus said:
...what is, it seems, the world's only QG blog.
Not quite, there is also http://lqg.blogspot.com/, but I think Dantas is better.
Gold Member
Dearly Missed
James Hartle, Lev Okun
Recent postings by James Hartle and by Lev Okun---both papers are somewhat on the philosophical side, and have a bit of historical perspective. Both Hartle and Okun should perhaps be revered as elder statesmen. Hartle was born in 1939 and Okun in 1929.
http://arxiv.org/abs/gr-qc/0602013
Generalizing Quantum Mechanics for Quantum Spacetime
James B. Hartle (University of California, Santa Barbara)
31 pages, 4 figures, latex, contribution to the 23rd Solvay Conference, The Quantum Structure of Space and Time
"Familiar textbook quantum mechanics assumes a fixed background spacetime to define states on spacelike surfaces and their unitary evolution between them. Quantum theory has changed as our conceptions of space and time have evolved. But quantum mechanics needs to be generalized further for quantum gravity where spacetime geometry is fluctuating and without definite value. This paper reviews a fully four-dimensional, sum-over-histories, generalized quantum mechanics of cosmological spacetime geometry. This generalization is constructed within the framework of generalized quantum theory. This is a minimal set of principles for quantum theory abstracted from the modern quantum mechanics of closed systems, most generally the universe. In this generalization, states of fields on spacelike surfaces and their unitary evolution are emergent properties appropriate when spacetime geometry behaves approximately classically. The principles of generalized quantum theory allow for the further generalization that would be necessary were spacetime not fundamental. Emergent spacetime phenomena are discussed in general and illustrated with the example of the classical spacetime geometries with large spacelike surfaces that emerge from the 'no-boundary' wave function of the universe. These must be Lorentzian with one, and only one, time direction. The essay concludes by raising the question of whether quantum mechanics itself is emergent."
====================
a key reference, in the Hartle paper, is
http://arxiv.org/abs/hep-th/0512200
Observables in effective gravity
Steven B. Giddings, Donald Marolf, James B. Hartle
43 pages
"We address the construction and interpretation of diffeomorphism-invariant observables in a low-energy effective theory of quantum gravity. The observables we consider are constructed as integrals over the space of coordinates, in analogy to the construction of gauge-invariant observables in Yang-Mills theory via traces. As such, they are explicitly non-local. Nevertheless we describe how, in suitable quantum states and in a suitable limit, the familiar physics of local quantum field theory can be recovered from appropriate such observables, which we term 'pseudo-local.' We consider measurement of pseudo-local observables, and describe how such measurements are limited by both quantum effects and gravitational interactions. These limitations support suggestions that theories of quantum gravity associated with finite regions of spacetime contain far fewer degrees of freedom than do local field theories."
this paper has half a dozen citations to work by Carlo Rovelli
13, 14, 15, 20, 42, 46
also about the same number of citations to papers by Abhay Ashtekar
I would say that a central theme of these two Hartle papers is BACKGROUND INDEPENDENCE: the need for quantum observables to be defined in a diffeomorphism-invariant way.
Hartle presents this in a PALATABLE way. To me he comes across as a reformer but with a tactful restrained manner. He is actually saying stuff not very different from Lee Smolin in The Case for Background Independence but he says it in a soothing way that does not step on anyone's toes.
All through section 7 Hartle is talking in generalities about something where Renate Loll has tried specifics---but instead of CITING the Loll and Ambjorn work, he puts in a footnote where he says "Regge" and cites a paper of Ruth Williams (previous-generation triangulation gravity).
I guess to say "Loll" at the 23rd Solvay conference (select old boys chosen by David Gross) would have sounded a jarring note.
Here is Hartle's page 14 footnote with the Ruth Williams citation:
"9 Perhaps, most naturally by discrete approximations to geometry such as the Regge calculus (see, e.g. [43, 44]) "
What he is essentially describing there, in section 7, is an approach that Loll has worked out. But he seems unaware of this.
================
I think the Hartle paper could be an important contribution for DIPLOMATIC reasons.
It articulates a reform position but nicely, avoiding backlash. It is admirably intelligent and well-reasoned. Its faults (not explicitly pointing out developments in the non-string QG community) can be considered to be its merits.
=======================
http://arxiv.org/abs/hep-ph/0602037
The Concept of Mass in the Einstein Year
L.B. Okun
19 pages, Presented at the 12th Lomonosov conference on Elementary Particle Physics, Moscow State University, August 25-31
"Various facets of the concept of mass are discussed. The masses of elementary particles and the search for higgs. The masses of hadrons. The pedagogical virus of relativistic mass."
(another by Lev Okun is http://arxiv.org/abs/hep-ph/0602036)
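For anyone who has not met Okun's point before: the "pedagogical virus" is the habit of calling gamma m a "relativistic mass". In the usage he advocates (standard textbook material, my own one-line summary rather than a quote from the paper), mass means the single invariant m appearing in

E^2 = (m c^2)^2 + (p c)^2, \qquad E = \gamma m c^2, \qquad \vec p = \gamma m \vec v ,

so that m is the same in every frame, and the quantity gamma m is nothing but E/c^2 under another name.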
===============
while I can still edit, I will tack on notice of a new paper by Thanu Padmanabhan
http://arxiv.org/abs/astro-ph/0602117
this is just a pedagogical cosmology paper, but he has written interesting articles on QG, so I am inclined to flag it.
Last edited:
Sauron
http://arxiv.org/abs/physics/0601218
A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle
Mario Rabinowitz
"Easy to follow original proof of the incompatibility of General Relativity and Quantum Mechanics"
I have just read it and it is very basic and very wrong. It is obvious that the ordinary Schrödinger equation in a Newtonian potential can't implement the weak equivalence principle. Nobody claims that.
The question which arises from this paper is: how difficult is it to publish on arXiv? Isn't there any peer review?
Staff Emeritus
Sauron said:
I have just read it and it is very basic and very wrong. It is obvious that the ordinary Schrödinger equation in a Newtonian potential can't implement the weak equivalence principle. Nobody claims that.
The question which arises from this paper is: how difficult is it to publish on arXiv? Isn't there any peer review?
This is why I always caution people who cite papers of arXiv. There is no peer review. There is only a very rudimentary review of submitted papers to make sure it is not pure quackery. But other than that, papers like this can get through especially if a person has posted a paper before the current endorsement system (i.e. you get grandfathered into the new system).
If you check for this author - Mario Rabinowitz - you'll see that he has had a series of equally dubious papers submitted. The alarm bells should ring when you realize that these papers don't appear anywhere else, and especially in peer-reviewed journals.
Always wait (unless it is a proceeding paper or a text of a speech) for an arXiv submission to appear in a peer-reviewed journal. That is your best bet. Unfortunately, arXiv has become a major "citation source" in String theory. I don't know if that's good, or a poor reflection on the field of study itself.
Zz.
Gold Member
Dearly Missed
Sauron said:
I have just read it and it is very basic and very wrong...
I thought it was preposterous.
The title had amusement value for me since my area of interest is QG and he was saying that QG is a priori impossible(!)
Next time I put in a joke citation, if there is a next time, I will attach a SMILEY to make clear that the citation is not to be taken seriously.
Last edited:
Gold Member
Dearly Missed
Risto Raitio of Espoo, Finland
Finns have great names for people and places sometimes, and this denizen of Espoo has a blog called "Small Window"
http://fysix.blogspot.com/
and Risto has reported about a Zhao/Famaey MOND paper
http://fysix.blogspot.com/2006/02/refining-mond-this-is-not-Newtons.html
Here is Z/F paper,
http://arxiv.org/abs/astro-ph/0512425
a Chinese Scotch collaboration (sound potent?)
When you survey Quantum Gravity I think you have to keep MOND in your peripheral vision, because if it turns out right it will be a test of QG. QG will have to EXPLAIN why this particular modification of Newton's law happens. Finally a QG will have to predict MOND-y effects---or dark-mattery effects---and then astronomers will measure to see if the QG got it right out to as many decimal places as they can.
I flagged some MOND papers recently in posts #432 and 435----noticed but did not flag the Zhao/Famaey paper. And Risto supplies something more: a "SOFT" discussion, as Risto calls it:
http://www.interactions.org/cms/?pid=1023887
It is a journalistic introduction to the paper. I like their attitude here: they aren't partisans, they just want the best possible MOND so that it can be tested and pitted against Dark Matter models. Let observation decide.
The paper was published this month in Astrophysical Journal Letters. It seems like these days more and more MOND research is passing review and getting published.
Last edited:
Gold Member
Dearly Missed
't Hooft, PCW Davies
these are noted because PCW Davies is a major league cosmologist and because anything by Gerard 't Hooft is likely to be of interest to someone---he has ideas---it is good to keep track of what he is thinking about these days, even if it is not a big breakthrough
http://arxiv.org/abs/gr-qc/0602076
Invariance under complex transformations, and its relevance to the cosmological constant problem
Gerard 't Hooft, Stefan Nobbenhuis
ITP-UU-06/06, SPIN-06/04
"In this paper we study a new symmetry argument that results in a vacuum state with strictly vanishing vacuum energy. This argument exploits the well-known feature that de Sitter and Anti- de Sitter space are related by analytic continuation. When we drop boundary and hermiticity conditions on quantum fields, we get as many negative as positive energy states, which are related by transformations to complex space. The paper does not directly solve the cosmological constant problem, but explores a new direction that appears worthwhile."
this paper makes several references to earlier work by 't Hooft's co-author Nobbenhuis,
http://arxiv.org/gr-qc/0411093 [Broken]
Categorizing Different Approaches to the Cosmological Constant Problem
Stefan Nobbenhuis
Accepted for publication
ITP-UU-04/40, SPIN-04/23
"We have found that proposals addressing the old cosmological constant problem come in various categories. The aim of this paper is to identify as many different, credible mechanisms as possible and to provide them with a code for future reference. We find that they all can be classified into five different schemes of which we indicate the advantages and drawbacks.
Besides, we add a new approach based on a symmetry principle mapping real to imaginary spacetime."
==========
the next is a talk that Davies gave at a Stanford conference to string theorists. Davies has co-authored with Lineweaver. He is a major cosmologist. I don't necessarily recommend the paper but I want to be able to keep tabs on Davies views of current issues like multiverse/anthropics.
http://arxiv.org/abs/astro-ph/0602420
The problem of what exists
P.C.W. Davies
18 pages, one figure
"Popular multiverse models such as the one based on the string theory landscape require an underlying set of unexplained laws containing many specific features and highly restrictive prerequisites. I explore the consequences of relaxing some of these prerequisites with a view to discovering whether any of them might be justified anthropically. Examples considered include integer space dimensionality, the immutable, Platonic nature of the laws of physics and the no-go theorem for strong emergence. The problem of why some physical laws exist, but others which are seemingly possible do not, takes on a new complexion following this analysis, although it remains an unsolved problem in the absence of an additional criterion."
Magueijo video on MOND versus Dark Matter
This is a pretty good talk on MOND.
Joao Magueijo gave it today at Perimeter and it is already
available as a stream.
http://streamer.perimeterinstitute.ca:81/mediasite/viewer/
where you click on "seminar series" in the sidebar menu on the left
The talk is based on a recent paper Magueijo did with Bekenstein
http://arxiv.org/abs/astro-ph/0602266
MOND habitats within the solar system
Jacob Bekenstein, Joao Magueijo
"MOdified Newtonian Dynamics (MOND) is an interesting alternative to dark matter in extragalactic systems. We here examine the possibility that mild or even strong MOND behavior may become evident well inside the solar system, in particular near saddle points of the total gravitational potential. Whereas in Newtonian theory tidal stresses are finite at saddle points, they are expected to diverge in MOND, and to remain distinctly large inside a sizeable oblate ellipsoid around the saddle point. We work out the MOND effects using the nonrelativistic limit of the TeVeS theory, both in the perturbative nearly Newtonian regime and in the deep MOND regime. While strong MOND behavior would be a spectacular 'backyard'' vindication of the theory, pinpointing the MOND-bubbles in the setting of the realistic solar system may be difficult. Space missions, such as the LISA Pathfinder, equipped with sensitive accelerometers, may be able to explore the larger perturbative region."
In the talk, one or more members of the audience seemed eager to interrupt with comments and questions; there seemed to be a fair amount of restrained excitement at times.
One important thing involves the space probe LISA Pathfinder which, if I understand correctly, will explore the Earth-Sun Lagrange L1 point, and the gravitational field between the Earth and L1.
Magueijo explained how LISA Pathfinder can discount radiation pressure----it has balls floating inside the spacecraft----the spacecraft shields the balls from radiation pressure.
Magueijo explained the strategy of going to SADDLE POINTS where the acceleration due to gravity is small, and how (according to him) one could test MOND within the confines of the solar system.
He seemed to have a balanced view---conventional dark matter has strong points---MOND has strong points---one should try to test the theories, may the best survive; maybe MOND will be disproved by these tests (as with LISA Pathfinder) that he described. He did not seem to have his ego tied up in either competing theory, DM or MOND.
Sauron
I am not sure if it has been posted here or not.
Anyway, since it is a "must be linked" website, I post the URL:
http://relativity.livingreviews.org/Articles/ [Broken]
In particular the articles by Ashtekar on isolated horizons (very readable, although it treats many aspects and sometimes doesn't go as deep as I would like), and the one by Bojowald on loop quantum cosmology, which I have just discovered and can't say much more about.
Sauron said:
I am not sure if it has been posted here or not.
Anyway, since it is a "must be linked" website, I post the URL:
http://relativity.livingreviews.org/Articles/ [Broken]
...
Excellent choice Sauron! In the past we have linked a few selected Living Reviews articles---including the recent one by Bojowald on LQC. But we have never posted a link to the table of contents of the entire collection. It is good to have. Thanks.
new Bojowald----black holes this time
http://arxiv.org/abs/gr-qc/0602100
Quantum Riemannian Geometry and Black Holes
Martin Bojowald
45 pages, 4 figures, chapter of "Trends in Quantum Gravity Research" (Nova Science)
IGPG-06/2-2, AEI-2006-009
"Black Holes have always played a central role in investigations of quantum gravity. This includes both conceptual issues such as the role of classical singularities and information loss, and technical ones to probe the consistency of candidate theories. Lacking a full theory of quantum gravity, such studies had long been restricted to black hole models which include some aspects of quantization. However, it is then not always clear whether the results are consequences of quantum gravity per se or of the particular steps one had undertaken to bring the system into a treatable form. Over a little more than the last decade loop quantum gravity has emerged as a widely studied candidate for quantum gravity, where it is now possible to introduce black hole models within a quantum theory of gravity. This makes it possible to use only quantum effects which are known to arise also in the full theory, but still work in a rather simple and physically interesting context of black holes. Recent developments have now led to the first physical results about non-rotating quantum black holes obtained in this way. Restricting to the interior inside the Schwarzschild horizon, the resulting quantum model is free of the classical singularity, which is a consequence of discrete quantum geometry taking over for the continuous classical space-time picture. This fact results in a change of paradigm concerning the information loss problem. The horizon itself can also be studied in the quantum theory by imposing horizon conditions at the level of states. Thereby one can illustrate the nature of horizon degrees of freedom and horizon fluctuations. All these developments allow us to study the quantum dynamics explicitly and in detail which provides a rich ground to test the consistency of the full theory."
|
2023-03-22 16:41:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6009739637374878, "perplexity": 1152.822627693274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00089.warc.gz"}
|
https://www.esaral.com/q/a-a-conductor-a-with-a-cavity-as-shown-in-fig-1-36-a-is-given-a-charge-q-12145
|
# (a) A conductor A with a cavity as shown in Fig. 1.36(a) is given a charge Q.
Question:
(a) A conductor A with a cavity as shown in Fig. 1.36(a) is given a charge Q. Show that the entire charge must appear on the outer surface of the conductor.
(b) Another conductor B with charge q is inserted into the cavity keeping B insulated from A. Show that the total charge on the outside surface of A is Q + q [Fig. 1.36(b)].
(c) A sensitive instrument is to be shielded from the strong electrostatic fields in its environment. Suggest a possible way.
Solution:
(a) Let us consider a Gaussian surface lying wholly within the conductor and enclosing the cavity. The electric field intensity E inside the charged conductor is zero.
Let q be the charge enclosed by this Gaussian surface and $\epsilon_{0}$ the permittivity of free space.
According to Gauss’s law,
Flux, $\phi=\oint \vec{E} \cdot \overrightarrow{d s}=\frac{q}{\epsilon_{0}}$
Here, $E=0$, so
$\frac{q}{\epsilon_{0}}=0$
$\because \epsilon_{0} \neq 0$
$\therefore q=0$
Therefore, the charge enclosed inside the conductor is zero.
The entire charge Q appears on the outer surface of the conductor.
(b) The outer surface of conductor A has a charge of amount Q. Another conductor B having charge +q is kept inside conductor A and it is insulated from A. Hence, a charge of amount −q will be induced on the inner surface of conductor A and +q is induced on the outer surface of conductor A. Therefore, the total charge on the outer surface of conductor A is Q + q.
(c) A sensitive instrument can be shielded from the strong electrostatic field in its environment by enclosing it fully inside a metallic surface. A closed metallic body acts as an electrostatic shield.
|
2023-03-25 01:01:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7335496544837952, "perplexity": 529.7893013286985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00434.warc.gz"}
|
https://flyingcoloursmaths.co.uk/student-asks/
|
Why do you multiply by 1.07 if you’re adding 7%? I thought 7% was 0.07.
You’re quite right - 0.07 is exactly the same thing as 7% (and, if you like, $\frac{7}{100}$). However, if you’re adding on 7%, you need to multiply by 1.07, and here’s why.
Let’s say you’re adding 7% to 1000: you’d be working out $1000 + 1000 \times 0.07$ - or, better still, $1000\left( 1 + 0.07 \right)$. That thing in the bracket is 1.07.
The short reason: because you’re adding 7% on to what’s already there, you need to multiply it by 1 + the percentage as a decimal.
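A minimal Python check of the same arithmetic (illustrative only; 1000 and 7% are just the numbers used above):

```python
principal = 1000
added = principal + principal * 0.07     # add 7% of 1000 explicitly
scaled = principal * (1 + 0.07)          # multiply by 1.07 instead
print(round(added, 10), round(scaled, 10))   # both print 1070.0
```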
|
2021-10-22 00:25:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8783688545227051, "perplexity": 574.4971048473255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00577.warc.gz"}
|
http://hackage.haskell.org/package/sessiontypes-0.1.2/docs/Control-SessionTypes-STTerm.html
|
sessiontypes-0.1.2: Session types library
Control.SessionTypes.STTerm
Description
This module defines a GADT STTerm that is the very core of this library
Session typed programs are constructed by composing the constructors of STTerm.
Each constructor is annotated with a specific session type (except for Ret and Lift).
By passing a constructor to another constructor as an argument their session types are joined to form a larger session type.
We do not recommend explicitly composing the STTerm constructors. Instead make use of the functions defined in the Control.SessionTypes.MonadSession module.
Of course a STTerm program in itself is not very useful as it is devoid of any semantics. However, an interpreter function can give meaning to a STTerm program.
We define a couple in this library: Control.SessionTypes.Debug, Control.SessionTypes.Interactive, Control.SessionTypes.Normalize and Control.SessionTypes.Visualize.
Synopsis
# Documentation
data STTerm :: (Type -> Type) -> Cap a -> Cap a -> Type -> Type where
Although we say that a STTerm is annotated with a session type, it is actually annotated with a capability (Cap).
The capability contains a context that is necessary for recursion and the session type.
The constructors can be split in four different categories:
• Communication: Send and Recv for basic communication
• Branching: Sel1, Sel2, OffZ and OffS
• Recursion: Rec, Weaken and Var
• Unsession typed: Ret and Lift
Constructors
Send :: a -> STTerm m (Cap ctx r) r' b -> STTerm m (Cap ctx (a :!> r)) r' b
The constructor for sending messages. It is annotated with the send session type (:!>). It takes as arguments the message to send, of type equal to the first argument of :!>, and the continuing STTerm that is session typed with the second argument of :!>.

Recv :: (a -> STTerm m (Cap ctx r) r' b) -> STTerm m (Cap ctx (a :?> r)) r' b
The constructor for receiving messages. It is annotated with the receive session type (:?>). It takes a continuation that promises to deliver a value that may be used in the rest of the program.

Sel1 :: STTerm m (Cap ctx s) r a -> STTerm m (Cap ctx (Sel (s ': xs))) r a
Selects the first branch in a selection session type. By selecting a branch, that selected session type must then be implemented.

Sel2 :: STTerm m (Cap ctx (Sel (t ': xs))) r a -> STTerm m (Cap ctx (Sel (s ': (t ': xs)))) r a
Skips a branch in a selection session type. If the first branch in the selection session type is not the one we want to implement, we may use Sel2 to skip it.

OffZ :: STTerm m (Cap ctx s) r a -> STTerm m (Cap ctx (Off '[s])) r a
Dually to selection there is also offering branches. Unlike selection, where we may only implement one branch, an offering asks you to implement all branches. Which is chosen depends on how an interpreter synchronizes selection with offering. This constructor denotes the very last branch that may be offered.

OffS :: STTerm m (Cap ctx s) r a -> STTerm m (Cap ctx (Off (t ': xs))) r a -> STTerm m (Cap ctx (Off (s ': (t ': xs)))) r a
Offers a branch and promises at least one more branch to be offered.

Rec :: STTerm m (Cap (s ': ctx) s) r a -> STTerm m (Cap ctx (R s)) r a
Constructor for delimiting the scope of recursion. The recursion constructors also modify, or at least make use of, the context in the capability. The Rec constructor inserts the session type argument to R into the context of the capability of its STTerm argument. This is necessary so that we remember the session type of the body of code that we may want to recurse over, thus avoiding infinite type occurrence errors.

Weaken :: STTerm m (Cap ctx t) r a -> STTerm m (Cap (s ': ctx) (Wk t)) r a
Constructor for weakening (expanding) the scope of recursion. This constructor does the opposite of R by popping a session type from the context. Use this constructor to essentially exit a recursion.

Var :: STTerm m (Cap (s ': ctx) s) t a -> STTerm m (Cap (s ': ctx) V) t a
Constructor that denotes the recursion variable. It assumes the context to be non-empty and uses the session type at the top of the context to determine what should be implemented after Var.

Ret :: (a :: Type) -> STTerm m s s a
Constructor that makes STTerm an (indexed) monad.

Lift :: m (STTerm m s r a) -> STTerm m s r a
Constructor that makes STTerm an (indexed) monad transformer.
Instances
This function can be used if we do not use lift in a program but we must still disambiguate m.
|
2021-12-06 12:51:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27322107553482056, "perplexity": 8963.458999394214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363292.82/warc/CC-MAIN-20211206103243-20211206133243-00595.warc.gz"}
|
https://www.physicsforums.com/threads/special-relativity.588699/
|
# Special relativity
1. Mar 20, 2012
### Lizwi
What is E = $\frac{m_{0}c^{2}}{1-v^{2}/c^{2}}$
2. Mar 20, 2012
### Nabeshin
Hi Lizwi,
I think you are missing a square root in the denominator? The expression:
$$E= \gamma m c^2 = \frac{mc^2}{\sqrt{1-v^2/c^2}}$$
Is the relativistic expression for the total energy of a moving body.
3. Mar 20, 2012
### genericusrnme
What Nabeshin said
You can also drop the 0 from the $m_0$; I've not seen the term 'rest mass' used since high school. Once you're out of high school it simply becomes mass as far as I know :p
4. Mar 21, 2012
### granpa
It's a term that pops up in the four-momentum if you use the convention that x0 = ct rather than x0 = t.
If you use x0 = t then you just get p0 = gamma*m.
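As a quick numerical illustration of the formula (a sketch with arbitrary sample values, not from the thread): for a body moving at 0.6c the Lorentz factor is 1.25, so the total energy is 1.25 times the rest energy.

```python
import math

c = 299_792_458.0                      # speed of light in m/s

def total_energy(m, v):
    """Relativistic total energy E = gamma * m * c**2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c ** 2

m, v = 1.0, 0.6 * c                    # a 1 kg body moving at 0.6c
print(total_energy(m, v) / (m * c ** 2))   # ~1.25, i.e. E = 1.25 m c^2
```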
|
2017-08-23 16:13:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49572595953941345, "perplexity": 2063.139669770886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.75/warc/CC-MAIN-20170823152006-20170823172006-00489.warc.gz"}
|
http://mathoverflow.net/questions/94294/do-equivariant-crepant-resolutions-always-exist
|
# Do equivariant crepant resolutions always exist?
Let $X_t$ be a family of algebraic varieties (my interest is Calabi-Yau varieties, but I don't think that's important) over $\mathbb{C}$, smooth for $t \neq 0$, on which a group $G$ acts fibre-wise. Suppose further that $X_0$ admits at least one crepant resolution. Does there always exist an equivariant crepant resolution? If not, are there conditions under which such exists?
Consider $\mathbb Z/2$ acting on $\{xy-zw=t\}$ by $x\leftrightarrow y$. This swaps the two small resolutions of the central fibre (the 3-fold ordinary double point $xy=zw$ in $\mathbb C^4$). So there can't be an equivariant small resolution.
(A formal proof might go along these lines: $\mathbb Z/2$ does act on the blow up of the ODP, swapping the two rulings of the $\mathbb P^1\times\mathbb P^1$ exceptional divisor. The small resolutions are contractions of this blow up. If $\mathbb Z/2$ acted on one of them, it would act on its $H^2$. Pulling back, its action on $H^2(\mathbb P^1\times\mathbb P^1)\cong\mathbb Z\oplus\mathbb Z$ would be the identity on the contracted $\mathbb Z$ summand, contradicting the fact that it swaps the summands.)
|
2014-12-23 01:43:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709879517555237, "perplexity": 251.65916678730008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777454.142/warc/CC-MAIN-20141217075257-00165-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/546789/fourier-transform-of-a-superlattice-hamiltonian
|
# Fourier Transform of a Superlattice Hamiltonian
In a paper by Gábor B. Halász and Leon Balents they derive the energy band structure for a Hamiltonian that models a time reversal invariant realization of the Weyl semimetal phase. The model is a superlattice of a topological insulator and normal insulator spacer layer. If we denote $$\boldsymbol{k} = (k_x,k_y)$$ and $$\tau_{\pm} = \tau_x \pm i\tau_y$$ the Hamiltonian is given by:
$$H = \sum_{\boldsymbol{k}}\sum_{i,j} \Big[v_f \tau_z(k_y \sigma_x - k_x \sigma_y)\delta_{i,j} + V\tau_z\delta_{i,j} + \Delta_T\tau_x\delta_{ij} + \Delta_N\sum_{\pm}\tau_{\pm}\delta_{i,j\pm 1} \Big]c_{\boldsymbol{k},i}^{\dagger} c_{\boldsymbol{k},j}$$
Here the Pauli matrices $$\boldsymbol{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$$ act on the real spin degree of freedom and the Pauli matrices $$\boldsymbol{\tau}=(\tau_x,\tau_y,\tau_z)$$ act on the top/bottom surface pseudospin degree of freedom. The authors claim that to solve this Hamiltonian and obtain the dispersion relation by exploiting the translational symmetry in the z-direction, and introduce the corresponding 3D momentum $$\vec{k}=(k_x,k_y,k_z)$$. The dispersion relation is said to be
$$E_{\pm}^2(\vec{k}) = \Delta^2(k_z) + [V \pm v_f |{\boldsymbol{k}}|]^2$$ where $$\Delta(k_z)=\sqrt{\Delta_T^2 + \Delta_N^2 + 2\Delta_T\Delta_N\cos{(k_z d)}}$$ and $$d$$ is the periodicity of the superlattice.
My question is how did they obtain the dispersion above? I'm having some difficulty reproducing this result. More specifically/embarrassingly, I don't know where to start. I have a hunch that when they say "exploit the translational symmetry in the z-direction," they are using a Fourier transform in the z-direction only to obtain the $$\cos{(k_z d)}$$ term. Still, I'm not quite sure of how to even perform a Fourier transform on this Hamiltonian to obtain the result above.
I don't know much about superlattices, or really anything related to topological physics. I do know, however, how to mindlessly apply a Fourier transform to this kind of tight-binding Hamiltonian, and I see why you might be confused by all these different operators.
For now, just forget about the "2D" $$\mathbf{k}$$ sum at the beginning. We are only performing a Fourier transform along $$z$$. Also, you can basically forget about the Pauli matrices being operators and treat them as numbers during the Fourier transform step. Let me rewrite the Hamiltonian in a more convenient way:
$$H = \sum_{\boldsymbol{k}}\sum_{n} \Big[ v_f \tau_z(k_y \sigma_x - k_x \sigma_y) + V\tau_z + \Delta_T\tau_x \Big] c_{\boldsymbol{k},n}^{\dagger} c_{\boldsymbol{k},n}^{\phantom{.}} + \Delta_N \tau_{+} c_{\boldsymbol{k},n}^{\dagger} c_{\boldsymbol{k},n+1}^{\phantom{.}} + \Delta_N \tau_{-} c_{\boldsymbol{k},n}^{\dagger} c_{\boldsymbol{k},n-1}^{\phantom{.}}$$
(I have grouped the terms together depending on the relative value of $$i$$ and $$j$$, and replaced $$i$$ by $$n$$ so that we don't confuse it with the complex number $$i$$ later).
So we have three different terms: terms like $$c_n^{\dagger} c_n^{\phantom{.}}$$, terms like $$c_n^{\dagger} c_{n+1}^{\phantom{.}}$$ and terms like $$c_n^{\dagger} c_{n-1}^{\phantom{.}}$$, with Pauli matrices and 2D $$k$$ - dependent factors in front, which we will treat as numbers for now. The Fourier transform technique consists in performing the following transformations:
\begin{align} c_{k_z}^{\dagger} &= \frac{1}{\sqrt{L}} \sum_{n} e^{+ik_znd} c_n^{\dagger}\\ c_{k_z}^{\phantom{.}} &= \frac{1}{\sqrt{L}} \sum_{n} e^{-ik_znd} c_n^{\phantom{.}}, \end{align}
with the corresponding inverse transformations:
\begin{align} c_n^{\dagger} &= \frac{1}{\sqrt{L}} \sum_{k_z} e^{-ik_znd} c_{k_z}^{\dagger} \\ c_n^{\phantom{.}} &= \frac{1}{\sqrt{L}} \sum_{k_z} e^{+ik_znd} c_{k_z}^{\phantom{.}}, \end{align}
where $$L$$ is the total size of your system in the $$z$$ direction, and $$d$$ is the period of the superlattice. You can check that this choice of normalization makes the $$c_{k_z}^{\dagger}$$'s and $$c_{k_z}^{\phantom{.}}$$'s "true" fermionic operators as they verify the anticommutation relation:
$$\left\{ c_{k_z}^{\phantom{.}}, c_{k'_z}^{\dagger} \right\} = \delta_{k_z, k'_z}$$
Here because the system is finite of size $$L$$, the $$k_z$$'s can only take values which are multiples of $$\frac{2 \pi}{L}$$, but the same would hold for an infinite sized system ($$L \to \infty$$), you would just need to be more careful about the normalization.
The next step is to actually perform the substitution. Because the factors in front of each family of terms do not depend on $$n$$, we can simply look at the following sums and multiply them by whatever is in front of each term in the Hamiltonian:
$$$$\begin{split} \sum_{n} c_n^{\dagger} c_n^{\phantom{.}} &= \frac{1}{L} \sum_{k_z, k'_z} \sum_{n} e^{-i(k_z-k'_z)nd} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z, k'_z} \delta_{k_z, k'_z} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z} c_{k_z}^{\dagger} c_{k_z}^{\phantom{.}} \\ \sum_{n} c_n^{\dagger} c_{n+1}^{\phantom{.}} &= \frac{1}{L} \sum_{k_z, k'_z} \sum_{n} e^{-i(k_z-k'_z)nd} e^{ik'_zd} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z, k'_z} \delta_{k_z, k'_z} e^{ik'_zd} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z} e^{ik_zd} c_{k_z}^{\dagger} c_{k_z}^{\phantom{.}} \\ \sum_{n} c_n^{\dagger} c_{n-1}^{\phantom{.}} &= \frac{1}{L} \sum_{k_z, k'_z} \sum_{n} e^{-i(k_z-k'_z)nd} e^{-ik'_zd} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z, k'_z} \delta_{k_z, k'_z} e^{-ik'_zd} c_{k_z}^{\dagger} c_{k'_z}^{\phantom{.}} &= \sum_{k_z} e^{-ik_zd} c_{k_z}^{\dagger} c_{k_z}^{\phantom{.}} \\ \end{split}$$$$
All things, considered, this yields the following Hamiltonian:
$$H = \sum_{\overrightarrow{k}} \Big[ v_f \tau_z(k_y \sigma_x - k_x \sigma_y) + V\tau_z + \Delta_T\tau_x + \Delta_N e^{-ik_zd} \tau_{+} + \Delta_N e^{+ik_zd} \tau_{-} \Big] c_{\overrightarrow{k}}^{\dagger} c_{\overrightarrow{k}}^{\phantom{k}}$$
which is of the form:
$$H = \sum_{\overrightarrow{k}} H'\left(\overrightarrow{k}\right) c_{\overrightarrow{k}}^{\dagger} c_{\overrightarrow{k}}^{\phantom{k}},$$
with $$\overrightarrow{k} = (\mathbf{k}, k_z) = (k_x, k_y, k_z)$$ a 3D wavevector. The last step which you can try to do by yourself because it is not related to Fourier transform is to remember that $$H'\left(\overrightarrow{k}\right)$$ is actually an operator, which needs to be diagonalized. It can be seen as a $$4 \times 4$$ matrix acting on the product of the two spin spaces associated respectively with $$\sigma$$ and $$\tau$$.
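To make the last diagonalization step concrete, here is a small numerical sketch (my own, not taken from the paper or the answer above) that builds the $$4 \times 4$$ matrix $$H'(\overrightarrow{k})$$ and checks its eigenvalues against the quoted dispersion. The parameter values are arbitrary, and I use the normalization $$\tau_{\pm} = (\tau_x \pm i \tau_y)/2$$, which is what reproduces the stated $$\Delta(k_z)$$; with $$\tau_{\pm} = \tau_x \pm i \tau_y$$ the $$\Delta_N$$ terms would simply enter the dispersion with an extra factor of 2.

```python
import numpy as np

# Pauli matrices: sigma acts on spin, tau on the top/bottom surface pseudospin
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)
tau_p = (sx + 1j * sy) / 2      # tau_+  (1/2 normalization, see note above)
tau_m = (sx - 1j * sy) / 2      # tau_-

vf, V, DT, DN, d = 1.0, 0.3, 0.5, 0.4, 1.0   # arbitrary illustrative parameters

def bloch_hamiltonian(kx, ky, kz):
    """4x4 matrix H'(k) in the (pseudospin tau) x (spin sigma) basis."""
    h  = vf * np.kron(sz, ky * sx - kx * sy)              # Dirac surface term
    h += V  * np.kron(sz, s0)                             # potential term
    h += DT * np.kron(sx, s0)                             # tunnelling within one unit cell
    h += DN * np.exp(-1j * kz * d) * np.kron(tau_p, s0)   # hopping to the neighbouring cell
    h += DN * np.exp(+1j * kz * d) * np.kron(tau_m, s0)
    return h

def analytic_bands(kx, ky, kz):
    """The quoted E_pm(k), returned as the four band energies in ascending order."""
    Delta = np.sqrt(DT**2 + DN**2 + 2 * DT * DN * np.cos(kz * d))
    k = np.hypot(kx, ky)
    return np.sort([s * np.sqrt(Delta**2 + (V + p * vf * k)**2)
                    for s in (-1, +1) for p in (-1, +1)])

kx, ky, kz = 0.2, -0.1, 0.7
print(np.allclose(np.linalg.eigvalsh(bloch_hamiltonian(kx, ky, kz)),
                  analytic_bands(kx, ky, kz)))            # True
```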
|
2021-06-16 08:21:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 43, "wp-katex-eq": 0, "align": 2, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.999703049659729, "perplexity": 712.8266057722964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622234.42/warc/CC-MAIN-20210616063154-20210616093154-00232.warc.gz"}
|
https://testbook.com/blog/mensuration-quiz-1-for-banking-insurance-exams/
|
# Useful Tips for Mensuration with Quiz 1 for IBPS Clerk 2018
If you are preparing for Banking, Insurance and other Competitive Recruitment or Entrance exams, you will likely need to solve a section on Quant. Mensuration Quiz 1 for Banking & Insurance Exams will help you learn concepts on an important topic in Quant – Mensuration. This Mensuration Quiz 1 is important for exams such as IBPS PO, IBPS Clerk, IBPS RRB Officer, IBPS RRB Office Assistant, IBPS SO, SBI PO, SBI Clerk, SBI SO, Indian Post Payment Bank (IPPB) Scale I Officer, LIC AAO, GIC AO, UIIC AO, NIACL AO, NICL AO.
## Read Below Tips for Mensuration –
Mensuration is the technique of measuring. It deals with quantities such as the length of lines, the area of surfaces and the volume of solids. It is used in questions involving geometrical figures, where physical quantities like length, area and volume are asked for.
• Mensuration is a formula-based topic. Therefore, memorize all the formulas thoroughly.
• Learn basic tricks or methods to easily solve mensuration based questions.
• Practice memory based calculations and multiplication.
## Mensuration Quiz 1 for Banking & Insurance Exams –
Que. 1
If the wheel of a bicycle makes 560 revolutions in travelling 1.1 km, what is its radius?
1. 31.25 cm
2. 37.75 cm
3. 35.15 cm
4. 11.25 cm
5. None of these
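As an illustration of the formula-plus-quick-calculation approach, here is a short Python check of Que. 1 (my own sketch, not part of the original quiz), using distance = revolutions × 2πr:

```python
import math

distance_cm = 1.1 * 100_000            # 1.1 km expressed in cm
revolutions = 560
circumference = distance_cm / revolutions
radius = circumference / (2 * math.pi)
print(round(radius, 2))                # ~31.26 cm; with pi = 22/7 this is exactly 31.25 cm (option 1)
```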
Que. 2
Capacity of a cylindrical vessel is 25,872 cm3. If the height of the cylinder is 200% more than the radius of its base, what is the area of the base in square cm?
1. 336 cm2
2. 1232 cm2
3. 616 cm2
4. 308 cm2
5. Cannot be determined
Que. 3
The perimeter of a square and a rectangle is the same. If the rectangle is 12 cm by 10 cm, then by what percentage is the area of the square more than that of the rectangle?
1. 1%
2. 3%
3. $$\frac{5}{6}{\rm{\% }}$$
4. $$\frac{1}{2}{\rm{\% }}$$
5. $$\frac{3}{4}{\rm{\% }}$$
Que. 4
A cylinder of diameter 14 cm and height 7 cm is converted into a cone of radius 6 cm. Now, what could be the height of the new shape?
1. 28.58 cm
2. 26.58 cm
3. 27.48 cm
4. 27.74 cm
5. None of these
Que. 5
Water flows into a tank of 200 m × 150 m through a rectangular pipe of 1.5 m × 1.25 m at 20 kmph. In what time (in minutes) will the water rise by 2 metres?
1. 48 min
2. 96 min
3. 108 min
4. 36 min
5. None of these
Que. 6
The length of a rectangle is twice its breadth. If its length is decreased by 5 cm and breadth is increased by 5 cm, the area of the rectangle is increased by 75 sq. cm. Find the length of the rectangle.
1. 40
2. 30
3. 25
4. 15
5. None of these
Que. 7
A man walked diagonally across a square lot. Approximately what was the percent saved by not walking along the edges? (Rounded to nearest integer)
1. 20
2. 24
3. 38
4. 33
5. 29
Que. 8
A cylinder is 6 cm in diameter and 6 cm in height. If 12 spheres of the same size are made from the material obtained, what is the diameter of each sphere?
1. 5 cm
2. 2 cm
3. 3 cm
4. 4 cm
5. None of these
Que. 9
If a wire is bent into the shape of a square, then the area of the square is 272.25 cm2. When the wire is bent into a circular shape, then the radius of the circle will be
1. 7.25 cm
2. 15.5 cm
3. 5.25 cm
4. 10.5 cm
5. None of these
Que. 10
A cylindrical container whose diameter is 18 cm and height is 6 cm, is filled with ice cream. The whole ice-cream is distributed to 9 children in equal cones having hemispherical tops. If the height of the conical portion is twice the diameter of its base, find the diameter of the ice-cream cone.
1. 6 cm
2. 10 cm
3. 3 cm
4. 9 cm
5. 2 cm
As we all know, practice is the key to success. Therefore, boost your preparation by starting your practice now.
|
2020-07-03 23:38:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3897949159145355, "perplexity": 4525.793073207108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00279.warc.gz"}
|
https://itensor.org/support/3397/reading-mps-from-mps-h5-that-contains-complex-numbers
|
# Reading MPS from mps.h5 that contains complex numbers
+1 vote
Hi Miles,
I used
f = h5open("mps.h5","r")
to read the ground state MPS from the file mps.h5. However, I got the error:
"ERROR: LoadError: HDF5 group or file does not contain BlockSparse{Complex{Float64}} data"
Is this because the file contains complex numbers? I tried with real MPS and it worked out fine. If this is the case, then is there a way around this?
Thanks a lot for your time,
-Mason
commented by (70.1k points)
Hi Mason, what version of ITensor are you using? I ask because I only finished implementing this feature very recently, though there may be some cases still missing.
Also, how did you create the file MPs.h5? Did the tensors used to create it contain complex numbers?
Thank you, Miles
commented by (700 points)
The version I'm using is v0.1.41. To create the mps.h5 file, I used
f = h5open("mps.h5","w")
write(f,"mps",psi)
close(f)
Thank you
commented by (14.1k points)
Hi Mason,
You should upgrade to the latest version of ITensors.jl (v0.2.6), since new HDF5 reading and writing features were added in recent releases. You can see a succinct list of recent changes in the NEWS.md file here: https://github.com/ITensor/ITensors.jl/blob/main/NEWS.md#itensor-v026-release-notes which is helpful for seeing if there were any changes to features you are interested in.
|
2022-11-30 00:21:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2547658383846283, "perplexity": 2980.0686037107416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00605.warc.gz"}
|
https://www.gamefront.com/games/battlefield-1942/file/the-forest-of-death
|
# The Forest of Death
File Description
This is a map where the Japanese army has landed on the western part of Russia. The Russians have sent a force to deal with the Japanese. If the Russians are lucky, the Japanese will be pushed back into the sea. If the Japanese are lucky, the Russians will all be killed.
To install this map, just unzip it and put it in your bf1942 levels folder.
C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Archives\bf1942\levels
This is freeware. You may modify this map to fit your liking, but only this map.
My contact info is: Tippmann2543 AT aol.com
|
2018-11-14 00:11:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5440994501113892, "perplexity": 3530.0060030747327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00067.warc.gz"}
|
http://physics.stackexchange.com/questions/48588/who-are-we-and-what-counts-as-a-question-in-consistent-histories
|
# Who are “we”, and what counts as a “question” in consistent histories?
If the preferred basis in quantum mechanics and/or choice of consistent histories in consistent histories is arbitrary, and can only be determined by the "questions we ask", just who exactly is this nebulous "we", and what counts or doesn't count as a "question"?
This is a dead serious question.
PS: Suppose we are closed (i.e. no external interactions) quantum simulations in a mega quantum computer. We are simulated without being measured or observed or decohered in any way at the "substrate level" of our programmer/sysadmin gods. Then, these gods decide to unsimulate and uncompute the entire simulation. Suppose we asked some questions before being "uncomputed". Are these questions wiped out after uncomputation? These "gods" don't know what we asked or observed precisely because they avoid observing/measuring us in any way.
You might be dead serious, but none of these sentences are understandable to me. – hwlau Jan 8 at 10:44
@hwlau the first sentence contains a good serious question which has a well defined answer from considering quantum mechanics, the second paragraph starting with PS: confuses me a bit too ... – Dilaton Jan 8 at 11:33
## 1 Answer
The consistent histories approach to quantum mechanics doesn't require the word "we" or any similar philosophical word to be defined. Instead, it defines what a history is (a sequence of projection operators at different times) and what it means for pairs of histories to be consistent (an orthogonality condition of a sort). That's enough. The dynamical maths of quantum mechanics can then be used to calculate the probabilities of different histories.
People's formulations only used the word "we" associated with a set of consistent histories for a particular set of consistent histories that (incidentally) "we" or one of us could find relevant or helpful to do planning of anything etc. However, the rules of the consistent histories approach do not require any consistent histories to be "relevant" or "important" (for anyone or anything, whether it's "we" or "them" or anything else), so there's no need to define "us".
The consistent histories approach, as well as any meaningful Copenhagen-like interpretation of quantum mechanics, rules out the possibility that we're a "classical simulation" of the quantum system (and I guess you meant a "classical" simulation). Our world is genuinely and fundamentally quantum and this fact is indeed very important for the consistent histories formalism to make sense. A classical simulation always envisions just one set of observables, one sets of questions that fundamentally make sense. But it's a genuine and true feature of quantum mechanics that is made plain obvious in the consistent histories approach that there can be many ways to choose the set of consistent histories and none of them is "objectively better" than others.
The closest question to yours that could be a genuine concern is how the consistent histories approach guarantees the consistency between the conclusions of different observers – different sets of consistent histories. But it does guarantee that. As long as two sets of consistent histories contain some questions that may be answered in both sets, the mathematical formalism guarantees that the predicted probabilities will match independently of which set of consistent histories we choose. That's ultimately guaranteed by the "consistency condition", after all.
I have moved this discussion over to this chat room, please continue there :) – Manishearth♦ Jan 8 at 14:12
Apologies, Manishearth, I won't because my continued answers here weren't really addressed to Kumar himself or herself. They were addressed to those whom I believe would learn something out of the comments, i.e. other visitors of this page. I think it's a waste of time to continue in infinite discussions with one person who isn't willing to understand. – Luboš Motl Jan 8 at 14:32
@LuboMotl: No need to apologize -- in this case you couldn't have done much (the user was a one-rep user, so he can't access chat unless a moderator lets him; so moving the discussion to chat yourself was out of the question) except flag the post asking for one of us to help (though the post is autoflagged at 20-ish comments). You don't have to continue the infinite discussions, stepping away is a perfectly valid response. If the user keeps troubling you after you have stepped away, flag asking for us to deal with it :) – Manishearth♦ Jan 8 at 14:42
Regarding "They were addressed to those whom I believe would learn something out of the comments, i.e. other visitors of this page." -- when you want to address page visitors, edit the stuff into your answer somehow. Comments are (a) mostly hidden, and (b) liable to deletion at any time -- so it's better to incorporate your comments into the post itself. After all, that is the primary use of comments -- to get a post improved. – Manishearth♦ Jan 8 at 14:44
Hi @LubošMotl Lumo, maybe it would help if you could add a link to your nice slightly technical TRF article about consistent histories into this answer ... ;-) – Dilaton Jan 8 at 21:02
|
2013-05-23 14:13:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.547249972820282, "perplexity": 640.6241546711276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703326861/warc/CC-MAIN-20130516112206-00091-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://plainmath.net/51183/given-frac-log-equal-frac-log-equal-frac-log-show-that-plus-cdot-plus-cdot
|
# Given \frac{\log x}{b-c}=\frac{\log y}{c-a}=\frac{\log z}{a-b}, show that x^{b+c-a}\cdot y^{c+a-b}\cdot z^{a+b-c}=1
Given $$\displaystyle{\frac{{{\log{{x}}}}}{{{b}-{c}}}}={\frac{{{\log{{y}}}}}{{{c}-{a}}}}={\frac{{{\log{{z}}}}}{{{a}-{b}}}}$$ show that $$\displaystyle{x}^{{{b}+{c}-{a}}}\cdot{y}^{{{c}+{a}-{b}}}\cdot{z}^{{{a}+{b}-{c}}}={1}$$
maul124uk
We have
$$\displaystyle{\frac{{{\log{{x}}}}}{{{b}-{c}}}}={\frac{{{\log{{y}}}}}{{{c}-{a}}}}={\frac{{{\log{{z}}}}}{{{a}-{b}}}}={t}$$
This gives us
$$\displaystyle{x}={e}^{{{t}{\left({b}-{c}\right)}}},{y}={e}^{{{t}{\left({c}-{a}\right)}}}\ \text{ and }\ {z}={e}^{{{t}{\left({a}-{b}\right)}}}$$
Hence,
$$\displaystyle{x}^{{{b}+{c}-{a}}}\cdot{y}^{{{c}+{a}-{b}}}\cdot{z}^{{{a}+{b}-{c}}}={e}^{{{t}{\left({\left({b}-{c}\right)}{\left({b}+{c}-{a}\right)}+{\left({c}-{a}\right)}{\left({c}+{a}-{b}\right)}+{\left({a}-{b}\right)}{\left({a}+{b}-{c}\right)}\right)}}}$$
$$\displaystyle={e}^{{{t}{\left({b}^{{2}}-{c}^{{2}}-{a}{b}+{a}{c}+{c}^{{2}}-{a}^{{2}}-{b}{c}+{b}{a}+{a}^{{2}}-{b}^{{2}}-{a}{c}+{b}{c}\right)}}}={e}^{{0}}={1}$$
Karen Robbins
If you want to use your equations, here is a method.
Multiplying the equations together, we obtain:
$$\displaystyle{x}^{{{c}-{a}}}{y}^{{{a}-{b}}}{z}^{{{b}-{c}}}={y}^{{{b}-{c}}}{z}^{{{c}-{a}}}{x}^{{{a}-{b}}}$$
which gives after reordering:
$$\displaystyle{x}^{{{b}+{c}-{2}{a}}}\cdot{y}^{{{c}+{a}-{2}{b}}}\cdot{z}^{{{a}+{b}-{2}{c}}}={1}$$
Multiplying this by $$\displaystyle{x}^{{a}}{y}^{{b}}{z}^{{c}}$$ gives the required identity, so it suffices to show that $$\displaystyle{x}^{{a}}{y}^{{b}}{z}^{{c}}={1}$$
Your first and third equations give $$\displaystyle{y}={x}^{{{\frac{{{c}-{a}}}{{{b}-{c}}}}}},{z}={x}^{{{\frac{{{a}-{b}}}{{{b}-{c}}}}}}$$ This gives us:
$$\displaystyle{x}^{{a}}{y}^{{b}}{z}^{{c}}={x}^{{a}}{x}^{{{\frac{{{c}-{a}}}{{{b}-{c}}}}\times{b}}}{x}^{{{\frac{{{a}-{b}}}{{{b}-{c}}}}\times{c}}}={x}^{{{a}+{\frac{{{b}{c}-{b}{a}+{c}{a}-{b}{c}}}{{{b}-{c}}}}}}={x}^{{{a}-{a}}}={x}^{{0}}={1}$$
QED
Vasquez
Given:
$$\frac{\log x}{b-c}=\frac{\log y}{c-a}=\frac{\log z}{a-b}=\lambda$$
we have:
$$x=e^{\lambda(b-c)}, y=e^{\lambda(c-a)}, z=e^{\lambda(a-b)}$$
hence:
$$x^{b+c-a}\cdot y^{c+a-b}\cdot z^{a+b-c}=\exp(\lambda\cdot\sum_{cyc}(b^2-c^2-a(b-c)))$$
$$=\exp(0)=1$$
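A quick numerical sanity check of the identity (a sketch of mine, not part of the original solutions): pick arbitrary values of a, b, c and the common ratio t, build x, y, z as above, and confirm the product is 1 up to rounding.

```python
import math

a, b, c, t = 1.3, 0.7, -2.1, 0.9       # arbitrary test values
x = math.exp(t * (b - c))
y = math.exp(t * (c - a))
z = math.exp(t * (a - b))
product = x ** (b + c - a) * y ** (c + a - b) * z ** (a + b - c)
print(product)                          # ~1.0, up to floating-point rounding
```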
|
2022-01-17 07:35:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6956549882888794, "perplexity": 2955.995787261407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300343.4/warc/CC-MAIN-20220117061125-20220117091125-00148.warc.gz"}
|
https://chem.libretexts.org/Courses/BethuneCookman_University/B-CU%3A_CH-345_Quantitative_Analysis/Book%3A_Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.07%3A_Liquid-Liquid_Extractions
|
# 7.7: Liquid-Liquid Extractions
A liquid–liquid extraction is an important separation technique for environmental, clinical, and industrial laboratories. A standard environmental analytical method illustrates the importance of liquid–liquid extractions. Municipal water departments routinely monitor public water supplies for trihalomethanes (CHCl3, CHBrCl2, CHBr2Cl, and CHBr3) because they are known or suspected carcinogens. Before their analysis by gas chromatography, trihalomethanes are separated from their aqueous matrix using a liquid–liquid extraction with pentane [“The Analysis of Trihalomethanes in Drinking Water by Liquid Extraction,” EPA Method 501.2 (EPA 500-Series, November 1979)].
The Environmental Protection Agency (EPA) also publishes two additional methods for trihalomethanes. Method 501.1 and Method 501.3 use a purge-and-trap to collect the trihalomethanes prior to a gas chromatographic analysis with a halide-specific detector (Method 501.1) or a mass spectrometer as the detector (Method 501.3). You will find more details about gas chromatography, including detectors, in Chapter 12.
In a simple liquid–liquid extraction the solute partitions itself between two immiscible phases. One phase usually is an aqueous solvent and the other phase is an organic solvent, such as the pentane used to extract trihalomethanes from water. Because the phases are immiscible they form two layers, with the denser phase on the bottom. The solute initially is present in one of the two phases; after the extraction it is present in both phases. Extraction efficiency—that is, the percentage of solute that moves from one phase to the other—is determined by the equilibrium constant for the solute’s partitioning between the phases and any other side reactions that involve the solute. Examples of other reactions that affect extraction efficiency include acid–base reactions and complexation reactions.
## Partition Coefficients and Distribution Ratios
As we learned earlier in this chapter, a solute’s partitioning between two phases is described by a partition coefficient, KD. If we extract a solute from an aqueous phase into an organic phase
$S_{a q} \rightleftharpoons S_{o r g} \nonumber$
then the partition coefficient is
$K_{\mathrm{D}}=\frac{\left[S_{org}\right]}{\left[S_{a q}\right]} \nonumber$
A large value for KD indicates that extraction of solute into the organic phase is favorable.
To evaluate an extraction’s efficiency we must consider the solute’s total concentration in each phase, which we define as a distribution ratio, D.
$D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{a q}\right]_{\text { total }}} \nonumber$
The partition coefficient and the distribution ratio are identical if the solute has only one chemical form in each phase; however, if the solute exists in more than one chemical form in either phase, then KD and D usually have different values. For example, if the solute exists in two forms in the aqueous phase, A and B, only one of which, A, partitions between the two phases, then
$D=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}+\left[S_{a q}\right]_{B}} \leq K_{\mathrm{D}}=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}} \nonumber$
This distinction between KD and D is important. The partition coefficient is a thermodynamic equilibrium constant and has a fixed value for the solute’s partitioning between the two phases. The distribution ratio’s value, however, changes with solution conditions if the relative amounts of A and B change. If we know the solute’s equilibrium reactions within each phase and between the two phases, we can derive an algebraic relationship between KD and D.
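A small numerical illustration of the last equation (a sketch with an arbitrary KD, not from the original text): when only form A partitions, dividing through by the total aqueous concentration gives D = KD × (fraction of the aqueous solute present as A), so D equals KD only when all of the aqueous solute is in form A.

```python
KD = 5.0                                # arbitrary partition coefficient
for frac_A in (1.0, 0.5, 0.1):          # fraction of the aqueous solute present as form A
    D = KD * frac_A                     # D = KD*[S_aq]_A / ([S_aq]_A + [S_aq]_B)
    print(frac_A, D)                    # D falls below KD as more solute is tied up as form B
```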
## Liquid-Liquid Extraction With No Secondary Reactions
In a simple liquid–liquid extraction, the only reaction that affects the extraction efficiency is the solute’s partitioning between the two phases (Figure $$\PageIndex{1}$$).
In this case the distribution ratio and the partition coefficient are equal.
$D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{aq}\right]_{\text { total }}} = K_\text{D} = \frac {[S_{org}]} {[S_{aq}]} \label{7.1}$
Let’s assume the solute initially is present in the aqueous phase and that we wish to extract it into the organic phase. A conservation of mass requires that the moles of solute initially present in the aqueous phase equal the combined moles of solute in the aqueous phase and the organic phase after the extraction.
$\left(\operatorname{mol} \ S_{a q}\right)_{0}=\left(\operatorname{mol} \ S_{a q}\right)_{1}+\left(\operatorname{mol} \ S_{org}\right)_{1} \label{7.2}$
where the subscripts indicate the extraction number with 0 representing the system before the extraction and 1 the system following the first extraction. After the extraction, the solute’s concentration in the aqueous phase is
$\left[S_{a q}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{a q}} \label{7.3}$
and its concentration in the organic phase is
$\left[S_{o r g}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{o r g}\right)_{1}}{V_{o r g}} \label{7.4}$
where Vaq and Vorg are the volumes of the aqueous phase and the organic phase. Solving equation \ref{7.2} for (mol Sorg)1 and substituting into equation \ref{7.4} leave us with
$\left[S_{o r g}\right]_{1} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0}-\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{o r g}} \label{7.5}$
Substituting equation \ref{7.3} and equation \ref{7.5} into equation \ref{7.1} gives
$D = \frac {\frac {(\text{mol }S_{aq})_0-(\text{mol }S_{aq})_1} {V_{org}}} {\frac {(\text{mol }S_{aq})_1} {V_{aq}}} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0} \times V_{a q}-\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{a q}}{\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{o r g}} \nonumber$
Rearranging and solving for the fraction of solute that remains in the aqueous phase after one extraction, (qaq)1, gives
$\left(q_{aq}\right)_{1} = \frac{\left(\operatorname{mol} \ S_{aq}\right)_{1}}{\left(\operatorname{mol} \ S_{a q}\right)_{0}} = \frac{V_{aq}}{D V_{o r g}+V_{a q}} \label{7.6}$
The fraction present in the organic phase after one extraction, (qorg)1, is
$\left(q_{o r g}\right)_{1}=\frac{\left(\operatorname{mol} S_{o r g}\right)_{1}}{\left(\operatorname{mol} S_{a q}\right)_{0}}=1-\left(q_{a q}\right)_{1}=\frac{D V_{o r g}}{D V_{o r g}+V_{a q}} \nonumber$
Example $$\PageIndex{1}$$ shows how we can use equation \ref{7.6} to calculate the efficiency of a simple liquid-liquid extraction.
Example $$\PageIndex{1}$$
A solute has a KD between water and chloroform of 5.00. Suppose we extract a 50.00-mL sample of a 0.050 M aqueous solution of the solute using 15.00 mL of chloroform. (a) What is the separation’s extraction efficiency? (b) What volume of chloroform do we need if we wish to extract 99.9% of the solute?
Solution
For a simple liquid–liquid extraction the distribution ratio, D, and the partition coefficient, KD, are identical.
(a) The fraction of solute that remains in the aqueous phase after the extraction is given by equation \ref{7.6}.
$\left(q_{aq}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.400 \nonumber$
The fraction of solute in the organic phase is 1–0.400, or 0.600. Extraction efficiency is the percentage of solute that moves into the extracting phase; thus, the extraction efficiency is 60.0%.
(b) To extract 99.9% of the solute (qaq)1 must be 0.001. Solving equation \ref{7.6} for Vorg, and making appropriate substitutions for (qaq)1 and Vaq gives
$V_{o r g}=\frac{V_{a q}-\left(q_{a q}\right)_{1} V_{a q}}{\left(q_{a q}\right)_{1} D}=\frac{50.00 \ \mathrm{mL}-(0.001)(50.00 \ \mathrm{mL})}{(0.001)(5.00)}=9990 \ \mathrm{mL} \nonumber$
This is a large volume of chloroform. Clearly, a single extraction is not reasonable under these conditions.
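The arithmetic in Example $$\PageIndex{1}$$ is easy to script. The following short Python sketch (added here for illustration; it is not part of the original text) evaluates equation \ref{7.6} for the numbers used above; the function and variable names are ours.

```python
# Minimal sketch: single liquid-liquid extraction with D = KD, as in Example 1.
def fraction_remaining(D, V_aq, V_org):
    """Fraction of solute left in the aqueous phase after one extraction (equation 7.6)."""
    return V_aq / (D * V_org + V_aq)

D, V_aq, V_org = 5.00, 50.00, 15.00                          # volumes in mL
q_aq = fraction_remaining(D, V_aq, V_org)                    # 0.400
print(f"extraction efficiency = {100 * (1 - q_aq):.1f}%")    # 60.0%

# chloroform volume needed to reach 99.9% efficiency in a single extraction
q_target = 0.001
V_org_needed = (V_aq - q_target * V_aq) / (q_target * D)     # 9990 mL
print(f"V_org for 99.9% in one step = {V_org_needed:.0f} mL")
```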
In Example $$\PageIndex{1}$$, a single extraction provides an extraction efficiency of only 60%. If we carry out a second extraction, the fraction of solute remaining in the aqueous phase, (qaq)2, is
$\left(q_{a q}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{2}}{\left(\operatorname{mol} \ S_{a q}\right)_{1}}=\frac{V_{a q}}{D V_{org}+V_{a q}} \nonumber$
If Vaq and Vorg are the same for both extractions, then the cumulative fraction of solute that remains in the aqueous layer after two extractions, (Qaq)2, is the product of (qaq)1 and (qaq)2, or
$\left(Q_{aq}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{aq}\right)_{2}}{\left(\operatorname{mol} \ S_{aq}\right)_{0}}=\left(q_{a q}\right)_{1} \times\left(q_{a q}\right)_{2}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{2} \nonumber$
In general, for a series of n identical extractions, the fraction of analyte that remains in the aqueous phase after the last extraction is
$\left(Q_{a q}\right)_{n}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{n} \label{7.7}$
Example $$\PageIndex{2}$$
For the extraction described in Example $$\PageIndex{1}$$, determine (a) the extraction efficiency for two identical extractions and for three identical extractions; and (b) the number of extractions required to ensure that we extract 99.9% of the solute.
Solution
(a) The fraction of solute remaining in the aqueous phase after two extractions and three extractions is
$\left(Q_{aq}\right)_{2}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{2}=0.160 \nonumber$
$\left(Q_{a q}\right)_{3}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{3}=0.0640 \nonumber$
The extraction efficiencies are 84.0% for two extractions and 93.6% for three extractions.
(b) To determine the minimum number of extractions for an efficiency of 99.9%, we set (Qaq)n to 0.001 and solve for n using equation \ref{7.7}.
$0.001=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{n}=(0.400)^{n} \nonumber$
Taking the log of both sides and solving for n
\begin{aligned} \log (0.001) &=n \log (0.400) \\ n &=7.54 \end{aligned} \nonumber
we find that a minimum of eight extractions is necessary.
The last two examples provide us with an important observation: for any extraction efficiency, we need less solvent if we complete several extractions using smaller portions of solvent instead of one extraction using a larger volume of solvent. For the conditions in Example $$\PageIndex{1}$$ and Example $$\PageIndex{2}$$, an extraction efficiency of 99.9% requires one extraction with 9990 mL of chloroform, or 120 mL when using eight 15-mL portions of chloroform. Although extraction efficiency increases dramatically with the first few extractions, the effect diminishes quickly as we increase the number of extractions (Figure $$\PageIndex{2}$$). In most cases there is little improvement in extraction efficiency after five or six extractions. For the conditions in Example $$\PageIndex{2}$$, we reach an extraction efficiency of 99% after five extractions and need three additional extractions to obtain the extra 0.9% increase in extraction efficiency.
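To see the diminishing returns numerically, here is a short Python sketch (not part of the original text) that applies equation \ref{7.7} to the conditions of Example $$\PageIndex{2}$$ and tracks the total solvent used.

```python
# Minimal sketch: repeated extractions with identical 15.00-mL portions of chloroform.
import math

D, V_aq, V_org = 5.00, 50.00, 15.00
q = V_aq / (D * V_org + V_aq)            # fraction remaining after each extraction (0.400)

for n in range(1, 9):
    Q = q ** n                           # cumulative fraction remaining (equation 7.7)
    print(f"n = {n}: efficiency = {100 * (1 - Q):5.1f}%, total solvent = {n * V_org:.0f} mL")

n_min = math.ceil(math.log(0.001) / math.log(q))   # minimum n for 99.9% efficiency
print("minimum number of extractions:", n_min)     # 8
```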
Exercise $$\PageIndex{1}$$
To plan a liquid–liquid extraction we need to know the solute’s distribution ratio between the two phases. One approach is to carry out the extraction on a solution that contains a known amount of solute. After the extraction, we isolate the organic phase and allow it to evaporate, leaving behind the solute. In one such experiment, 1.235 g of a solute with a molar mass of 117.3 g/mol is dissolved in 10.00 mL of water. After extracting with 5.00 mL of toluene, 0.889 g of the solute is recovered in the organic phase. (a) What is the solute’s distribution ratio between water and toluene? (b) If we extract 20.00 mL of an aqueous solution that contains the solute using 10.00 mL of toluene, what is the extraction efficiency? (c) How many extractions will we need to recover 99.9% of the solute?
(a) The solute’s distribution ratio between water and toluene is
$D=\frac{\left[S_{o r g}\right]}{\left[S_{a q}\right]}=\frac{0.889 \ \mathrm{g} \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.00500 \ \mathrm{L}}}{(1.235 \ \mathrm{g}-0.889 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.01000 \ \mathrm{L}}}=5.14 \nonumber$
(b) The fraction of solute remaining in the aqueous phase after one extraction is
$\left(q_{a q}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}=0.280 \nonumber$
The extraction efficiency, therefore, is 72.0%.
(c) To extract 99.9% of the solute requires
$\left(Q_{aq}\right)_{n}=0.001=\left(\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}\right)^{n}=(0.280)^{n} \nonumber$
\begin{aligned} \log (0.001) &=n \log (0.280) \\ n &=5.4 \end{aligned} \nonumber
a minimum of six extractions.
## Liquid-Liquid Extractions Involving Acid-Base Equilibria
As we see in equation \ref{7.1}, in a simple liquid–liquid extraction the distribution ratio and the partition coefficient are identical. As a result, the distribution ratio does not depend on the composition of the aqueous phase or the organic phase. A change in the pH of the aqueous phase, for example, will not affect the solute’s extraction efficiency when KD and D have the same value.
If the solute participates in one or more additional equilibrium reactions within a phase, then the distribution ratio and the partition coefficient may not be the same. For example, Figure $$\PageIndex{3}$$ shows the equilibrium reactions that affect the extraction of the weak acid, HA, by an organic phase in which ionic species are not soluble.
In this case the partition coefficient and the distribution ratio are
$K_{\mathrm{D}}=\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]} \label{7.8}$
$D=\frac{\left[\mathrm{HA}_{org}\right]_{\text { total }}}{\left[\mathrm{HA}_{a q}\right]_{\text { total }}} =\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]+\left[\mathrm{A}_{a q}^{-}\right]} \label{7.9}$
Because the position of an acid–base equilibrium depends on pH, the distribution ratio, D, is pH-dependent. To derive an equation for D that shows this dependence, we begin with the acid dissociation constant for HA.
$K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}_{\mathrm{aq}}^{+}\right]\left[\mathrm{A}_{\mathrm{aq}}^{-}\right]}{\left[\mathrm{HA}_{\mathrm{aq}}\right]} \label{7.10}$
Solving equation \ref{7.10} for the concentration of A in the aqueous phase
$\left[\mathrm{A}_{a q}^{-}\right]=\frac{K_{\mathrm{a}} \times\left[\mathrm{HA}_{a q}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{a q}^{+}\right]} \nonumber$
and substituting into equation \ref{7.9} gives
$D = \frac {[\text{HA}_{org}]} {[\text{HA}_{aq}] + \frac {K_a \times [\text{HA}_{aq}]}{[\text{H}_3\text{O}_{aq}^+]}} \nonumber$
Factoring [HAaq] from the denominator, replacing [HAorg]/[HAaq] with KD (equation \ref{7.8}), and simplifying leaves us with the following relationship between the distribution ratio, D, and the pH of the aqueous solution.
$D=\frac{K_{\mathrm{D}}\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]+K_{a}} \label{7.11}$
Example $$\PageIndex{3}$$
An acidic solute, HA, has a Ka of $$1.00 \times 10^{-5}$$ and a KD between water and hexane of 3.00. Calculate the extraction efficiency if we extract a 50.00 mL sample of a 0.025 M aqueous solution of HA, buffered to a pH of 3.00, with 50.00 mL of hexane. Repeat for pH levels of 5.00 and 7.00.
Solution
When the pH is 3.00, [$$\text{H}_3\text{O}_{aq}^+$$] is $$1.0 \times 10^{-3}$$ and the distribution ratio is
$D=\frac{(3.00)\left(1.0 \times 10^{-3}\right)}{1.0 \times 10^{-3}+1.00 \times 10^{-5}}=2.97 \nonumber$
The fraction of solute that remains in the aqueous phase is
$\left(Q_{aq}\right)_{1}=\frac{50.00 \ \mathrm{mL}}{(2.97)(50.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.252 \nonumber$
The extraction efficiency, therefore, is almost 75%. The same calculation at a pH of 5.00 gives the extraction efficiency as 60%. At a pH of 7.00 the extraction efficiency is just 3%.
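A quick numerical check of these three cases, written as a Python sketch (not part of the original text) around equation \ref{7.11}:

```python
# Minimal sketch: extraction efficiency of the weak acid HA as a function of pH.
K_D, K_a = 3.00, 1.00e-5
V_aq, V_org = 50.00, 50.00               # mL

for pH in (3.00, 5.00, 7.00):
    h = 10.0 ** (-pH)                    # [H3O+]
    D = K_D * h / (h + K_a)              # distribution ratio (equation 7.11)
    q = V_aq / (D * V_org + V_aq)        # fraction left in the aqueous phase
    print(f"pH {pH:.2f}: D = {D:.3f}, efficiency = {100 * (1 - q):.1f}%")
# prints roughly 74.8%, 60.0%, and 2.9%, consistent with the values quoted above
```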
The extraction efficiency in Example $$\PageIndex{3}$$ is greater at more acidic pH levels because HA is the solute's predominant form in the aqueous phase. At a more basic pH, where A⁻ is the solute's predominant form, the extraction efficiency is smaller. A graph of extraction efficiency versus pH is shown in Figure $$\PageIndex{4}$$. Note that extraction efficiency essentially is independent of pH for pH levels more acidic than HA's pKa, and that it is essentially zero for pH levels more basic than HA's pKa. The greatest change in extraction efficiency occurs at pH levels where both HA and A⁻ are present in significant amounts. The ladder diagram for HA along the graph's x-axis helps illustrate this effect.
Exercise $$\PageIndex{2}$$
The liquid–liquid extraction of the weak base B is governed by the following equilibrium reactions:
$\begin{array}{c}{\mathrm{B}(a q) \rightleftharpoons \mathrm{B}(org) \quad K_{D}=5.00} \\ {\mathrm{B}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{HB}^{+}(a q) \quad K_{b}=1.0 \times 10^{-4}}\end{array} \nonumber$
Derive an equation for the distribution ratio, D, and calculate the extraction efficiency if 25.0 mL of a 0.025 M solution of B, buffered to a pH of 9.00, is extracted with 50.0 mL of the organic solvent.
Because the weak base exists in two forms, only one of which extracts into the organic phase, the partition coefficient, KD, and the distribution ratio, D, are not identical.
$K_{\mathrm{D}}=\frac{\left[\mathrm{B}_{org}\right]}{\left[\mathrm{B}_{aq}\right]} \nonumber$
$D = \frac {[\text{B}_{org}]_\text{total}} {[\text{B}_{aq}]_\text{total}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + [\text{HB}_{aq}^+]} \nonumber$
Using the Kb expression for the weak base
$K_{\mathrm{b}}=\frac{\left[\mathrm{OH}_{a q}^{-}\right]\left[\mathrm{HB}_{a q}^{+}\right]}{\left[\mathrm{B}_{a q}\right]} \nonumber$
we solve for the concentration of HB+ and substitute back into the equation for D, obtaining
$D = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + \frac {K_b \times [\text{B}_{aq}]} {[\text{OH}_{aq}^-]}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}]\left(1+\frac {K_b} {[\text{OH}_{aq}^-]} \right)} =\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{a q}^{-}\right]+K_{\mathrm{b}}} \nonumber$
At a pH of 9.0, the [OH⁻] is $$1.0 \times 10^{-5}$$ M and the distribution ratio has a value of
$D=\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{aq}^{-}\right]+K_{\mathrm{b}}}=\frac{(5.00)\left(1.0 \times 10^{-5}\right)}{1.0 \times 10^{-5}+1.0 \times 10^{-4}}=0.455 \nonumber$
After one extraction, the fraction of B remaining in the aqueous phase is
$\left(q_{aq}\right)_{1}=\frac{25.00 \ \mathrm{mL}}{(0.455)(50.00 \ \mathrm{mL})+25.00 \ \mathrm{mL}}=0.524 \nonumber$
The extraction efficiency, therefore, is 47.6%. At a pH of 9, most of the weak base is present as HB+, which explains why the overall extraction efficiency is so poor.
## Liquid-Liquid Extraction of a Metal-Ligand Complex
One important application of a liquid–liquid extraction is the selective extraction of metal ions using an organic ligand. Unfortunately, many organic ligands are not very soluble in water or undergo hydrolysis or oxidation reactions in aqueous solutions. For these reasons the ligand is added to the organic solvent instead of the aqueous phase. Figure $$\PageIndex{5}$$ shows the relevant equilibrium reactions (and equilibrium constants) for the extraction of Mn+ by the ligand HL, including the ligand’s extraction into the aqueous phase (KD,HL), the ligand’s acid dissociation reaction (Ka), the formation of the metal–ligand complex ($$\beta_n$$), and the complex’s extraction into the organic phase (KD,c).
If the ligand’s concentration is much greater than the metal ion’s concentration, then the distribution ratio is
$D=\frac{\beta_{n} K_{\mathrm{D}, c}\left(K_{a}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}}{\left(K_{\mathrm{D}, \mathrm{HL}}\right)^{n}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{n}+\beta_{n}\left(K_{\mathrm{a}}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}} \label{7.12}$
where CHL is the ligand’s initial concentration in the organic phase. As shown in Example $$\PageIndex{4}$$, the extraction efficiency for metal ions shows a marked pH dependency.
Example $$\PageIndex{4}$$
A liquid–liquid extraction of the divalent metal ion, M2+, uses the scheme outlined in Figure $$\PageIndex{5}$$. The partition coefficients for the ligand, KD,HL, and for the metal–ligand complex, KD,c, are $$1.0 \times 10^4$$ and $$7.0 \times 10^4$$, respectively. The ligand’s acid dissociation constant, Ka, is $$5.0 \times 10^{-5}$$, and the formation constant for the metal–ligand complex, $$\beta_2$$, is $$2.5 \times 10^{16}$$. What is the extraction efficiency if we extract 100.0 mL of a $$1.0 \times 10^{-6}$$ M aqueous solution of M2+, buffered to a pH of 1.00, with 10.00 mL of an organic solvent that is 0.1 mM in the chelating agent? Repeat the calculation at a pH of 3.00.
Solution
When the pH is 1.00 the distribution ratio is
$D=\frac{\left(2.5 \times 10^{16}\right)\left(7.0 \times 10^{4}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}}{\left(1.0 \times 10^{4}\right)^{2}(0.10)^{2}+\left(2.5 \times 10^{16}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}} \nonumber$
or a D of 0.0438. The fraction of metal ion that remains in the aqueous phase is
$\left(Q_{aq}\right)_{1}=\frac{100.0 \ \mathrm{mL}}{(0.0438)(10.00 \ \mathrm{mL})+100.0 \ \mathrm{mL}}=0.996 \nonumber$
At a pH of 1.00, we extract only 0.40% of the metal into the organic phase. Changing the pH to 3.00, however, increases the extraction efficiency to 97.8%. Figure $$\PageIndex{6}$$ shows how the pH of the aqueous phase affects the extraction efficiency for M2+.
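The pH dependence in this example is easy to reproduce with a short Python sketch (not part of the original text) built around equation \ref{7.12}:

```python
# Minimal sketch: distribution ratio and efficiency for the M2+/HL system of Example 4.
n = 2                                                   # divalent metal, ML2 complex
beta_n, KD_c, KD_HL = 2.5e16, 7.0e4, 1.0e4
Ka, C_HL = 5.0e-5, 1.0e-4                               # 0.1 mM ligand in the organic phase
V_aq, V_org = 100.0, 10.00                              # mL

def distribution_ratio(pH):
    h = 10.0 ** (-pH)
    num = beta_n * KD_c * Ka**n * C_HL**n
    den = KD_HL**n * h**n + beta_n * Ka**n * C_HL**n
    return num / den                                    # equation 7.12

for pH in (1.00, 3.00):
    D = distribution_ratio(pH)
    q = V_aq / (D * V_org + V_aq)
    print(f"pH {pH:.2f}: D = {D:.3g}, efficiency = {100 * (1 - q):.1f}%")
# roughly 0.4% at pH 1.00 and 97.8% at pH 3.00
```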
One advantage of using a ligand to extract a metal ion is the high degree of selectivity that it brings to a liquid–liquid extraction. As seen in Figure $$\PageIndex{6}$$, a divalent metal ion’s extraction efficiency increases from approximately 0% to 100% over a range of 2 pH units. Because a ligand’s ability to form a metal–ligand complex varies substantially from metal ion to metal ion, significant selectivity is possible if we carefully control the pH. Table $$\PageIndex{1}$$ shows the minimum pH for extracting 99% of a metal ion from an aqueous solution using an equal volume of 4 mM dithizone in CCl4.
Table $$\PageIndex{1}$$. Minimum pH for Extracting 99% of an Aqueous Metal Ion Using 4.0 mM Dithizone in $$\text{CCl}_4$$ ($$V_{aq} = V_{org}$$)

| metal ion | minimum pH |
| --- | --- |
| Hg2+ | –8.7 |
| Ag+ | –1.7 |
| Cu2+ | –0.8 |
| Bi3+ | 0.9 |
| Zn2+ | 2.3 |
| Cd2+ | 3.6 |
| Co2+ | 3.6 |
| Pb2+ | 4.1 |
| Ni2+ | 6.0 |
| Tl+ | 8.7 |
Example $$\PageIndex{5}$$
Using Table $$\PageIndex{1}$$, explain how we can separate the metal ions in an aqueous mixture of Cu2+, Cd2+, and Ni2+ by extracting with an equal volume of dithizone in CCl4.
Solution
From Table $$\PageIndex{1}$$, a quantitative separation of Cu2+ from Cd2+ and from Ni2+ is possible if we acidify the aqueous phase to a pH of less than 1. This pH is greater than the minimum pH for extracting Cu2+ and significantly less than the minimum pH for extracting either Cd2+ or Ni2+. After the extraction of Cu2+ is complete, we shift the pH of the aqueous phase to 4.0, which allows us to extract Cd2+ while leaving Ni2+ in the aqueous phase.
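The reasoning in Example $$\PageIndex{5}$$ can be automated with a small lookup based on Table $$\PageIndex{1}$$; the following Python sketch (not part of the original text) simply compares the working pH against the tabulated minimum pH values.

```python
# Minimal sketch: which ions are quantitatively (99%) extracted by dithizone in CCl4
# at a given aqueous pH, using the minimum-pH values from Table 1.
MIN_PH = {"Hg2+": -8.7, "Ag+": -1.7, "Cu2+": -0.8, "Bi3+": 0.9, "Zn2+": 2.3,
          "Cd2+": 3.6, "Co2+": 3.6, "Pb2+": 4.1, "Ni2+": 6.0, "Tl+": 8.7}

def extracted_at(pH):
    """Ions whose minimum extraction pH is at or below the working pH."""
    return sorted(ion for ion, ph_min in MIN_PH.items() if pH >= ph_min)

print(extracted_at(1.0))   # Ag+, Bi3+, Cu2+, Hg2+ extract; Cd2+ and Ni2+ stay in the water
print(extracted_at(4.0))   # Cd2+, Co2+ and Zn2+ now extract as well; Ni2+ still does not
```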
https://math.stackexchange.com/questions/981069/how-do-we-know-that-the-first-few-digits-of-an-approximation-for-pi-are-corre
# How do we know that the first few digits of an approximation for $\pi$ are correct?
For the Gregory–Leibniz series, Wikipedia says that "after 500,000 terms, it produces only five correct decimal digits of π." But how do you know that those five decimal digits are correct when you reach 500,000 terms?
What if, during some unrelated calculation (not involving π), the running value is 2.82999 and we then add 0.00001 to it? The result is 2.83000, which changes the second, third, fourth and fifth digits after the decimal point. How do you know how many digits will no longer change?
• its called a limit bro.. – Matthew Levy Oct 19 '14 at 16:11
• It's an alternating series, whose terms are decreasing. Those have the property that the sum is always in between any two consecutive partial sums. As in, $1$ is too big, $1-\frac13$ is too small, $1-\frac13+\frac15$ is too big, etc., so $\pi$ has to be in between any two consecutive partial sums. If two partial sums agree on the first five digits, then, those digits have to be correct. – Akiva Weinberger Oct 19 '14 at 16:12
• @columbus8myhw: I think that comment should be an answer – Ben Millwood Oct 19 '14 at 16:16
• @PIMan: I gave the question a title I think is more informative. Let me know if you disagree. – Ben Millwood Oct 19 '14 at 16:19
The Leibniz series is an alternating series whose terms decrease in absolute value toward zero, so the alternating series test applies: the limit always lies between any two consecutive partial sums. The goal is $\frac{\pi}{4} \approx 0.785398\ldots$, so $1$ is too big, $1-\frac{1}{3}$ ($0.\overline{6}$) is too small, $1-\frac{1}{3}+\frac{1}{5}$ ($0.8\overline{6}$) is too big, and so on; $\frac{\pi}{4}$ has to be in between any two consecutive partial sums. If two consecutive partial sums agree on the first five digits, then those digits have to be correct, and multiplying the bracketing partial sums by $4$ gives the same kind of guarantee for $\pi$ itself.
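A quick numerical illustration of this bracketing, added as a sketch (it is not part of the original answer):

```python
# Minimal sketch: consecutive partial sums of the Leibniz series bracket pi/4,
# so agreement between them certifies the leading digits.
from math import pi

def leibniz_partial_sum(n_terms):
    """Sum of the first n_terms terms of 1 - 1/3 + 1/5 - 1/7 + ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

n = 500_000
s_n, s_n1 = leibniz_partial_sum(n), leibniz_partial_sum(n + 1)
lo, hi = sorted((4 * s_n, 4 * s_n1))
print(lo, "< pi <", hi)   # the true value is pinned between the two bounds
print(hi - lo)            # interval width is 4/(2n+1), about 4e-6 here
print(pi)                 # for comparison
```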
Also, another thing you might be interested in: $13$ trillion digits of $\pi$ (no joke! click this link)
https://brilliant.org/problems/unit-power/
# Unit power
What is the units digit of the product of the following 3 numbers: $3^{1001}$, $7^{1002}$, $13^{1003}$?
*Feel free to add good solutions!
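One quick way to verify an answer (a sketch added here; it is not part of the original problem page) is to reduce everything modulo 10 with Python's built-in three-argument pow:

```python
# Minimal sketch: the units digit of a product is the product of the units digits, mod 10.
units = (pow(3, 1001, 10) * pow(7, 1002, 10) * pow(13, 1003, 10)) % 10
print(units)   # 9: 3^1001 ends in 3, 7^1002 ends in 9, 13^1003 ends in 7
```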
http://mathhelpforum.com/advanced-statistics/162687-proof-axioms.html
# Math Help - proof of axioms
1. ## proof of axioms
given that set B is contained in a set A, show that the probability of A is greater than or equal to that of B
2. Short proof: All the elements in B are also in A.
Long proof:
$\displaystyle B \subseteq A$
$\displaystyle n(B) \leq n(A)$
$\displaystyle \frac{n(B)}{n(\varepsilon)} \leq \frac{n(A)}{n(\varepsilon)}$
$\displaystyle Pr(B) \leq Pr(A)$.
3. Originally Posted by Prove It
Short proof: All the elements in B are also in A.
Long proof:
$\displaystyle B \subseteq A$
$\displaystyle n(B) \leq n(A)$
$\displaystyle \frac{n(B)}{n(\varepsilon)} \leq \frac{n(A)}{n(\varepsilon)}$
$\displaystyle Pr(B) \leq Pr(A)$.
What is n(.) and what rules of proof are in force here?
Also re thread title: You can't prove axioms (unless there is redundancy in the set of axioms, but we do try to avoid that).
Come to think of it do you not have to assume that both probabilities exist?
CB
4. Originally Posted by CaptainBlack
What is n(.) and what rules of proof are in force here?
Also re thread title: You can't prove axioms (unless there is redundancy in the set of axioms, but we do try to avoid that).
Come to think of it do you not have to assume that both probabilities exist?
CB
The n(A) notation stands for number of elements in A.
5. Originally Posted by Prove It
The n(A) notation stands for number of elements in A.
Are you then assuming finite sets and equally likely cases, I'm not sure your argument works even with countably infinite sets.
You also leave too much notation undefined.
CB
6. Originally Posted by CaptainBlack
Are you then assuming finite sets and equally likely cases, I'm not sure your argument works even with countably infinite sets.
You also leave too much notation undefined.
CB
I disagree, this is standard notation - even year 8 students should be aware of the notation of $\displaystyle \varepsilon$ to represent the universal set, $\displaystyle n(A)$ to represent the number of elements in set A, and that $\displaystyle Pr(A) = \frac{n(A)}{n(\varepsilon)}$. Even if they are infinite, the same logic holds if you were to draw a Venn Diagram of the situation.
7. Originally Posted by Prove It
I disagree, this is standard notation - even year 8 students should be aware of the notation of $\displaystyle \varepsilon$ to represent the universal set,
Ha ha ha ... you jest
$\displaystyle n(A)$ to represent the number of elements in set A,
Cardinality? That is not the notation I know.
and that $\displaystyle Pr(A) = \frac{n(A)}{n(\varepsilon)}$. Even if they are infinite, the same logic holds if you were to draw a Venn Diagram of the situation.
Let your $\varepsilon$ be the unit disk, let A be some disk contained in $\varepsilon$, now what does your notation mean?
CB
8. Originally Posted by omoboye
given that set B is contained in a set A, show that the probability of A is greater than or equal to that of B
The title did say “from the axioms”
Recall that $\left( {\forall C} \right)\left[ {P(C) \geqslant 0} \right]$ and $C \cap D = \emptyset \; \Rightarrow \;P(C \cup D) = P(C) + P(D)$.
Use those. $A = B \cup \left( {A\backslash B} \right)$, therefore $P(A) = P(B) + P\left( {A\backslash B} \right) \geqslant P(B)$.
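A small concrete check of this argument (added here; not part of the original thread): roll a fair die and let $B = \{6\}$ and $A = \{2,4,6\}$, so $B \subseteq A$. Writing $A = B \cup \{2,4\}$ with $B \cap \{2,4\} = \emptyset$ gives $P(A) = P(B) + P(\{2,4\}) = \tfrac{1}{6} + \tfrac{2}{6} = \tfrac{1}{2} \geqslant \tfrac{1}{6} = P(B)$.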
http://mathhelpforum.com/pre-calculus/41216-solving-trigonometric-equations-exactly.html
# Thread: Solving Trigonometric Equations Exactly
1. ## Solving Trigonometric Equations Exactly
Hi, I'm new here. I'm a homeschooled senior in my last leg of pre-graduation math stress. Stupidly, I decided to take a pre-calculus class, and found out exactly how far it is beyond my scope of learning capability the hard way.
Anyway, I need help!
Solve exactly. 4sin^2x - 3 = 0, x greater than or equal to 0, x less than 2pi.
Sorry for not using the proper signs, I'm not sure where to find them on the keyboard. By "pi", I mean Pi. Not just two random variables strung together.
And please, don't think I haven't done any work on this problem--I've spent the last two days trying to find a page in my textbook that will shed light upon the matter, to no avail. None of the examples quite match the construction of this problem, and after flying through a myriad different starting points, all of which became tangled, I'm still just as lost as if I'd never started. It's the only reason I'm not showing any work. :P
2. Originally Posted by Mantissa
Hi, I'm new here. I'm a homeschooled senior in my last leg of pre-graduation math stress. Stupidly, I decided to take a pre-calculus class, and found out exactly how far it is beyond my scope of learning capability the hard way.
Anyway, I need help!
Solve exactly. 4sin^2x - 3 = 0, x greater than or equal to 0, x less than 2pi.
Sorry for not using the proper signs, I'm not sure where to find them on the keyboard. By "pi", I mean Pi. Not just two random variables strung together.
And please, don't think I haven't done any work on this problem--I've spent the last two days trying to find a page in my textbook that will shed light upon the matter, to no avail. None of the examples quite match the construction of this problem, and after flying through a myriad different starting points, all of which became tangled, I'm still just as lost as if I'd never started. It's the only reason I'm not showing any work. :P
$4sin^2(x) - 3 = 0$
For convenience and clarity let $y = sin(x)$. Then the equation becomes
$4y^2 - 3 = 0$
$y = \pm \frac{\sqrt{3}}{2}$
$sin(x) = \pm \frac{\sqrt{3}}{2}$
etc.
-Dan
3. Hello, Mantissa!
Solve exactly: . $4\sin^2\!x -3\:=\:0,\quad 0 \leq x < 2\pi.$
First, solve for $x$
We have: . $4\sin^2\!x -3 \:=\:0 \quad\Rightarrow\quad 4\sin^2\!x \:=\:3 \quad\Rightarrow\quad \sin^2\!x \:=\:\frac{3}{4}$
Take square roots: . $\sin x \:=\:\pm\sqrt{\frac{3}{4}} \quad\Rightarrow\quad \sin x \:=\:\pm\frac{\sqrt{3}}{2}$
At this point, we are expected to be familiar with some special angles.
. . We should know the trig values for 30°, 60°, and 45°.
And we recognize that: . $\sin60^o \:=\:\frac{\sqrt{3}}{2}$
From there, we know that: . $\sin120^o \:=\:\frac{\sqrt{3}}{2},\;\;\sin240^o\:=\:-\frac{\sqrt{3}}{2},\;\;\sin300^o\:=\:-\frac{\sqrt{3}}{2}$
Therefore: . $x \;\;=\;\;60^o,\:120^o,\:240^o,\:300^o \;\;=\;\;\boxed{\frac{\pi}{3},\:\frac{2\pi}{3},\:\frac{4\pi}{3},\:\frac{5\pi}{3}\text{ radians}}$
4. Wow. I just had that "OOOOOH" feeling you get when something suddenly clicks.
Thank you both tremendously! Soroban, I totally forgot about radians (which explains why the answer in the book was so weird!) and Dan, that's an excellent way to think of it. So much simpler.
THANK YOU.
5. All right... I need some help again, regarding the same subject.
The problem looks like this:
sin[sin^-1(3/5)+cos^-1(4/5)]
It says to "Solve exactly without the use of a calculator".
What I don't understand, or have forgotten through lack of use, is how you solve a sin or cosine without the use of a calculator? Is this another problem concerning radian measure?
Would it be possible for someone to solve a similar problem to this one, so I can see from the example how it's to be done?
Thank you.
6. Originally Posted by Mantissa
All right... I need some help again, regarding the same subject.
The problem looks like this:
sin[sin^-1(3/5)+cos^-1(4/5)]
It says to "Solve exactly without the use of a calculator".
What I don't understand, or have forgotten through lack of use, is how you solve a sin or cosine without the use of a calculator? Is this another problem concerning radian measure?
Would it be possible for someone to solve a similar problem to this one, so I can see from the example how it's to be done?
Thank you.
You should not need a calculator for this one. You need these:
$\sin(a+b)\;=\;\sin(a)\cos(b)\;+\;\cos(a)\sin(b)$
$\sin^{-1}(\sin(a))\;=\;\sin(\sin^{-1}(a))\;=\;a$
$a^{2} + b^{2} = c^{2}$
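A worked example of the same type, added here as an illustration rather than as part of the original thread: evaluate $\sin\left[\sin^{-1}\left(\tfrac{1}{2}\right)+\cos^{-1}\left(\tfrac{1}{2}\right)\right]$. Let $a = \sin^{-1}\left(\tfrac{1}{2}\right)$ and $b = \cos^{-1}\left(\tfrac{1}{2}\right)$; both angles lie in the first quadrant, so $\sin(a) = \tfrac{1}{2}$, $\cos(a) = \tfrac{\sqrt{3}}{2}$, $\cos(b) = \tfrac{1}{2}$ and $\sin(b) = \tfrac{\sqrt{3}}{2}$. Then $\sin(a+b) = \sin(a)\cos(b) + \cos(a)\sin(b) = \tfrac{1}{4} + \tfrac{3}{4} = 1$. The original problem works the same way, with $a^{2}+b^{2}=c^{2}$ supplying the missing cosine and sine for the $\tfrac{3}{5}$ and $\tfrac{4}{5}$ values.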
7. Originally Posted by Mantissa
All right... I need some help again, regarding the same subject.
The problem looks like this:
sin[sin^-1(3/5)+cos^-1(4/5)]
It says to "Solve exactly without the use of a calculator".
What I don't understand, or have forgotten through lack of use, is how you solve a sin or cosine without the use of a calculator? Is this another problem concerning radian measure?
Would it be possible for someone to solve a similar problem to this one, so I can see from the example how it's to be done?
Thank you.
New questions should go in new threads.
-Dan
https://webot.org/info/en/?search=Actuarial_reserves
# Actuarial reserves Information
https://en.wikipedia.org/wiki/Actuarial_reserves
In insurance, an actuarial reserve is a reserve set aside for future insurance liabilities. It is generally equal to the actuarial present value of the future cash flows of a contingent event. In the insurance context an actuarial reserve is the present value of the future cash flows of an insurance policy and the total liability of the insurer is the sum of the actuarial reserves for every individual policy. Regulated insurers are required to keep offsetting assets to pay off this future liability.
## The loss random variable
The loss random variable is the starting point in the determination of any type of actuarial reserve calculation. Define ${\displaystyle K(x)}$ to be the curtate future lifetime random variable of a person aged x. Then, for a death benefit of one dollar and premium ${\displaystyle P}$, the loss random variable, ${\displaystyle L}$, can be written in actuarial notation as a function of ${\displaystyle K(x)}$
${\displaystyle L=v^{K(x)+1}-P{\ddot {a}}_{{\overline {K(x)+1}}|}}$
From this we can see that the present value of the loss to the insurance company now if the person dies in t years, is equal to the present value of the death benefit minus the present value of the premiums.
The loss random variable described above only defines the loss at issue. For K(x) > t, the loss random variable at time t can be defined as:
${\displaystyle {}_{t}L=v^{K(x)+1-t}-P{\ddot {a}}_{\overline {K(x)+1-t|}}}$
## Net level premium reserves

Net level premium reserves, also called benefit reserves, only involve two cash flows and are used for some US GAAP reporting purposes. The valuation premium in an NLP reserve is a premium such that the value of the reserve at time zero is equal to zero. The net level premium reserve is found by taking the expected value of the loss random variable defined above. They can be formulated prospectively or retrospectively. The amount of prospective reserves at a point in time is derived by subtracting the actuarial present value of future valuation premiums from the actuarial present value of the future insurance benefits. Retrospective reserving subtracts accumulated value of benefits from accumulated value of valuation premiums as of a point in time. The two methods yield identical results (assuming bases are the same for both prospective and retrospective calculations).
As an example, consider a whole life insurance policy of one dollar issued on (x) with yearly premiums paid at the start of the year and death benefit paid at the end of the year. In actuarial notation, a benefit reserve is denoted as V. Our objective is to find the value of the net level premium reserve at time t. First we define the loss random variable at time zero for this policy. Hence
${\displaystyle L=v^{K(x)+1}-P{\ddot {a}}_{\overline {K(x)+1|}}}$
Then, taking expected values we have:
${\displaystyle \operatorname {E} [L]=\operatorname {E} [v^{K(x)+1}-P{\ddot {a}}_{\overline {K(x)+1|}}]}$
${\displaystyle \operatorname {E} [L]=\operatorname {E} [v^{K(x)+1}]-P\operatorname {E} [{\ddot {a}}_{\overline {K(x)+1|}}]}$
${\displaystyle {}_{0}\!V_{x}=A_{x}-P\cdot {\ddot {a}}_{x}}$
Setting the reserve equal to zero and solving for P yields:
${\displaystyle P={\frac {A_{x}}{{\ddot {a}}_{x}}}}$
For a whole life policy as defined above the premium is denoted as ${\displaystyle P_{x}}$ in actuarial notation. The NLP reserve at time t is the expected value of the loss random variable at time t given K(x) > t
${\displaystyle {}_{t}L=v^{K(x)+1-t}-P_{x}{\ddot {a}}_{\overline {K(x)+1-t|}}}$
${\displaystyle \operatorname {E} [{}_{t}L\mid K(x)>t]=\operatorname {E} [v^{K(x)+1-t}\mid K(x)>t]-P_{x}\operatorname {E} [{\ddot {a}}_{\overline {K(x)+1-t|}}\mid K(x)>t]}$
${\displaystyle {}_{t}\!V_{x}=A_{x+t}-P_{x}\cdot {\ddot {a}}_{x+t}}$
where ${\displaystyle {}P_{x}={\frac {A_{x}}{{\ddot {a}}_{x}}}}$
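As a rough numerical illustration of the prospective formula above, here is a Python sketch with made-up mortality and interest assumptions (none of the numbers come from the article); it uses the standard backward recursions for the insurance value A and the annuity-due value ä.

```python
# Minimal sketch (illustrative assumptions only): net level premium and benefit
# reserves for a unit whole life insurance, computed with the backward recursions
#   A_t = v * (q_t + p_t * A_{t+1})   and   a_t = 1 + v * p_t * a_{t+1}.
v = 1 / 1.05                                        # discount factor at 5% interest
q = [0.01 * 1.1**k for k in range(20)] + [1.0]      # toy mortality rates; death certain at the end

n = len(q)
A = [0.0] * n                                       # APV of a unit death benefit at each duration
a = [0.0] * n                                       # APV of a unit premium annuity-due
A[-1], a[-1] = v, 1.0                               # in the final year death is certain
for t in range(n - 2, -1, -1):
    p = 1.0 - q[t]
    A[t] = v * (q[t] + p * A[t + 1])
    a[t] = 1.0 + v * p * a[t + 1]

P = A[0] / a[0]                                     # net level premium: reserve at issue is zero
reserves = [A[t] - P * a[t] for t in range(n)]      # prospective reserves tV
print(f"net level premium P = {P:.5f}")
print("reserves at t = 0, 5, 10:", [round(reserves[t], 5) for t in (0, 5, 10)])
```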
## Modified reserves
Modified reserves are based on premiums which are not level by duration. Almost all modified reserves are intended to accumulate lower reserves in early policy years than they would under the net level premium method. This is to allow the issuer greater margins to pay for expenses which are usually very high in these years. To do this, modified reserves assume a lower premium in the first year or two than the net level premium, and later premiums are higher. The Commissioner's Reserve Valuation Method, used for statutory reserves in the United States, allows for use of modified reserves. [1]
### Full preliminary term method
A full preliminary term reserve is calculated by treating the first year of insurance as a one-year term insurance. Reserves for the remainder of the insurance are calculated as if they are for the same insurance minus the first year. This method usually decreases reserves in the first year sufficiently to allow payment of first year expenses for low-premium plans, but not high-premium plans such as limited-pay whole life. [2]
## Computation of actuarial reserves
The calculation process often involves a number of assumptions, particularly in relation to future claims experience, and investment earnings potential. Generally, the computation involves calculating the expected claims for each future time period. These expected future cash outflows are then discounted to reflect interest to the date of the expected cash flow.
For example, if we expect to pay $300,000 in Year 1, $200,000 in Year 2 and $150,000 in Year 3, and we are able to invest reserves to earn 8% p.a., the respective contributions to Actuarial Reserves are:

• Year 1: $300,000 × (1.08)^−1 = $277,777.78
• Year 2: $200,000 × (1.08)^−2 = $171,467.76
• Year 3: $150,000 × (1.08)^−3 = $119,074.84

If we sum the discounted expected claims over all years in which a claim could be experienced, we have completed the computation of Actuarial Reserves. In the above example, if there were no expected future claims after year 3, our computation would give Actuarial Reserves of $568,320.38.
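The same discounting can be written as a few lines of Python (a sketch added here, not part of the article):

```python
# Minimal sketch reproducing the discounting example above.
expected_claims = {1: 300_000, 2: 200_000, 3: 150_000}    # year -> expected payment
i = 0.08                                                  # annual earned interest rate

reserve = sum(cash / (1 + i) ** year for year, cash in expected_claims.items())
print(f"actuarial reserve = ${reserve:,.2f}")             # $568,320.38
```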
https://socratic.org/questions/how-do-you-factor-x-3-3x-2-6x-8
How do you factor x^3+3x^2-6x-8?
Oct 17, 2016
$\left(x + 1\right) \left(x - 2\right) \left(x + 4\right)$
Explanation:
By trial and error
let $f \left(x\right) = {x}^{3} + 3 {x}^{2} - 6 x - 8$
let $x = - 1$
so $f \left(- 1\right) = - 1 + 3 + 6 - 8 = 0$
so $\left(x + 1\right)$ is a factor
Then you have to make a long division
$\frac{{x}^{3} + 3 {x}^{2} - 6 x - 8}{x + 1} = {x}^{2} + 2 x - 8$
Then factorise ${x}^{2} + 2 x - 8$
${x}^{2} + 2 x - 8 = \left(x - 2\right) \left(x + 4\right)$
and finally
${x}^{3} + 3 {x}^{2} - 6 x - 8 = \left(x + 1\right) \left(x - 2\right) \left(x + 4\right)$
graph{x^3+3x^2-6x-8 [-10, 10, -5, 5]}
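A quick way to confirm the factorisation, added here as a sketch (not part of either original answer), is to expand it back out with SymPy:

```python
# Minimal sketch: verify (x + 1)(x - 2)(x + 4) = x^3 + 3x^2 - 6x - 8.
from sympy import symbols, expand, factor

x = symbols("x")
poly = x**3 + 3*x**2 - 6*x - 8
print(expand((x + 1) * (x - 2) * (x + 4)) == poly)   # True
print(factor(poly))                                  # (x - 2)*(x + 1)*(x + 4)
```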
Oct 17, 2016
$\left(x - 2\right) \left(x + 1\right) \left(x + 4\right)$
Explanation:
Group the terms in 'pairs' as follows.
$\left[{x}^{3} - 8\right] + \left[3 {x}^{2} - 6 x\right]$
Now, the first group is a $\textcolor{blue}{\text{difference of cubes}}$ and factorises, in general, as:
${a}^{3} - {b}^{3} = \left(a - b\right) \left({a}^{2} + a b + {b}^{2}\right)$
Now ${\left(x\right)}^{3} = {x}^{3} \text{ and } {\left(2\right)}^{3} = 8$
$\Rightarrow a = x \text{ and } b = 2$
${x}^{3} - 8 = \left(x - 2\right) \left({x}^{2} + 2 x + {2}^{2}\right) = \left(x - 2\right) \left({x}^{2} + 2 x + 4\right)$
The second group has a $\textcolor{blue}{\text{common factor}}$ of 3x.
$\Rightarrow 3 {x}^{2} - 6 x = 3 x \left(x - 2\right) , \text{hence}$
${x}^{3} - 8 + 3 {x}^{2} - 6 x = \left(x - 2\right) \left({x}^{2} + 2 x + 4\right) + 3 x \left(x - 2\right)$
There is now a $\textcolor{blue}{\text{common factor }} \left(x - 2\right)$
$\textcolor{red}{\left(x - 2\right)} \left(\textcolor{magenta}{{x}^{2} + 2 x + 4 + 3 x}\right) = \left(x - 2\right) \left({x}^{2} + 5 x + 4\right)$
$= \left(x - 2\right) \left(x + 1\right) \left(x + 4\right)$
https://orchidas.lsce.ipsl.fr/dev/albedo/
Albedo Computation
Fractioning for bare soil, vegetation and snow
The land surface in ORCHIDEE is represented by 13 plant functional types (PFTs), including bare soil (PFT1), that can co-exist in any grid cell. The PFT distribution is described by yearly PFT maps, defining the maximum possible fraction of each vegetation type at each pixel ($$frac_\mathsf{max,pft}$$), so that

$frac_\mathsf{veg} = \sum_\mathsf{pft=1}^{13} frac_\mathsf{max,pft} \leq 1$

The remaining fraction (if present) is treated as the non-biological fraction (e.g. ice, permanent snow, etc.):

$frac_\mathsf{nobio} = 1 - \sum_\mathsf{pft=1}^{13} frac_\mathsf{max,pft}$

The actual vegetation fraction for each vegetated PFT (PFT2-PFT13) is calculated at each time step as an exponential function of the simulated leaf area index (LAI):

$frac_\mathsf{pft} = (1-\exp(-\mathsf{LAI}_\mathsf{pft})) \cdot frac_\mathsf{max,pft}$

The bare soil fraction (PFT1) then can be found as:

$frac_\mathsf{bs} = \sum_\mathsf{pft=1}^{13} frac_\mathsf{max,pft} - \sum_\mathsf{pft=2}^{13} frac_\mathsf{pft}$

The fraction of snow on vegetated surfaces is calculated in the explicit snow scheme from the simulated snow depth ($$d_\mathsf{snow}$$) and snow density ($$\rho_\mathsf{snow}$$) as:

$frac_\mathsf{snow,veg} = \tanh{\frac{50 \cdot d_\mathsf{snow}}{0.025 \cdot \rho_\mathsf{snow}}}$

The fraction of snow on non-biological surfaces, in contrast, is calculated from the simulated snow mass ($$m_\mathsf{snow}$$ in kg/m2) and two fixed parameters, the critical snow depth ($$d_\mathsf{snow,cri}$$) and the critical snow density ($$\rho_\mathsf{snow,cri}$$):

$frac_\mathsf{snow,nobio} = \min\left(1,\frac{\max(0,m_\mathsf{snow})}{\max(0,m_\mathsf{snow}) + d_\mathsf{snow,cri} \cdot \rho_\mathsf{snow,cri}}\right)$
Albedo parametrization
The overall albedo for the land surface is calculated by combining the albedo coefficients of each land surface compartment (bare soil, vegetation types, non-biological surfaces, plus the snow cover on any of these surface types):

$albedo = frac_\mathsf{veg} \left[ (1-frac_\mathsf{snow,veg}) \cdot alb_\mathsf{veg} + frac_\mathsf{snow,veg} \cdot alb_\mathsf{snow,veg} \right] +$
$+ frac_\mathsf{nobio} \left[ (1-frac_\mathsf{snow,nobio}) \cdot alb_\mathsf{nobio} + frac_\mathsf{snow,nobio} \cdot alb_\mathsf{snow,nobio} \right]$

where the vegetation albedo ($$alb_\mathsf{veg}$$) is defined as the superposition of the preset albedo coefficient for bare soil ($$alb_\mathsf{bs}$$) and the leaf albedo of each vegetation type ($$alb_\mathsf{leaf}$$), weighted by their fractions:

$alb_\mathsf{veg} = frac_\mathsf{bs} \cdot alb_\mathsf{bs} + \sum_{\mathsf{pft}=2}^{13} frac_\mathsf{pft} \cdot alb_\mathsf{leaf,pft}$

The snow albedo is parameterized for each vegetation type with two coefficients: the aged snow albedo ($$alb_\mathsf{snow,aged}$$), describing the minimum albedo of dirty old snow, and the snow albedo decay rate ($$alb_\mathsf{snow,dec}$$), used to calculate the snow albedo as a function of the simulated snow age ($$age_\mathsf{snow}$$):

$alb_\mathsf{snow,veg} = \frac{\sum\limits_{\mathsf{pft}=1}^{13} frac_\mathsf{max,pft} \cdot \left[ alb_\mathsf{snow,aged,pft} + alb_\mathsf{snow,dec,pft} \cdot \exp(-age_\mathsf{snow} / tcst_\mathsf{snow}) \right] }{\sum\limits_{\mathsf{pft}=1}^{13} frac_\mathsf{max,pft}}$

where $$tcst_\mathsf{snow}$$ is the time constant of the snow albedo decay. The snow albedo for non-biological surfaces is calculated using the same principle with the coefficients for bare soil (PFT1):

$alb_\mathsf{snow,nobio} = alb_\mathsf{snow,aged,1} + alb_\mathsf{snow,dec,1} \cdot \exp(-age_\mathsf{snow} / tcst_\mathsf{snow})$

Finally, two additional parameters control the snow age evolution, the maximum period of snow aging ($$age_\mathsf{snow,max}$$) and the snow transformation time constant ($$trans_\mathsf{snow}$$), both used in calculating the snow age at each time step of the model simulation:

$age_\mathsf{snow,i+1} = (age_\mathsf{snow,i} + (1-age_\mathsf{snow,i} / age_\mathsf{snow,max}) \cdot dt ) \cdot \exp (-precip_\mathsf{snow} / trans_\mathsf{snow} )$

where $$precip_\mathsf{snow}$$ is the amount of snow precipitation that fell during the time interval $$dt$$.
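As an illustration of how these pieces combine, here is a small Python sketch; it is not ORCHIDEE code, the coefficient values are placeholders, and the PFT-weighted snow albedo is collapsed to a single pair of coefficients.

```python
# Minimal sketch (illustrative coefficients only): snow fraction on vegetation and
# the weighted combination of snow, vegetation and non-biological albedos.
import math

def snow_fraction_veg(d_snow, rho_snow):
    """Snow cover fraction on vegetated surfaces from snow depth (m) and density (kg/m3)."""
    return math.tanh(50.0 * d_snow / (0.025 * rho_snow))

def snow_albedo(age_snow, alb_aged=0.50, alb_dec=0.34, tcst_snow=5.0):
    """Snow albedo decaying with snow age (days); parameter values are placeholders."""
    return alb_aged + alb_dec * math.exp(-age_snow / tcst_snow)

def surface_albedo(frac_veg, alb_veg, frac_snow_veg, age_snow,
                   frac_nobio=0.0, alb_nobio=0.3, frac_snow_nobio=0.0):
    alb_snow = snow_albedo(age_snow)
    return (frac_veg * ((1 - frac_snow_veg) * alb_veg + frac_snow_veg * alb_snow)
            + frac_nobio * ((1 - frac_snow_nobio) * alb_nobio + frac_snow_nobio * alb_snow))

f_snow = snow_fraction_veg(d_snow=0.15, rho_snow=250.0)      # about 0.83 for this toy case
print(surface_albedo(frac_veg=1.0, alb_veg=0.15, frac_snow_veg=f_snow, age_snow=2.0))
```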
http://www.mathematicalfoodforthought.com/2006/06/better-count-them-right-topic_5.html
## Monday, June 5, 2006
### Better Count Them Right. Topic: Probability & Combinatorics/Sets. Level: AIME.
Problem: (2006 ARML Tiebreaker - #1) In how many ways can you choose three distinct numbers from the set $\{1, 2, \ldots, 34\}$ such that the sum is divisible by three? [Reworded]
Solution: Well let's break it down into cases. Even better, let's just look at the elements $\pmod{3}$. There are $11$ $0$'s, $12$ $1$'s, and $11$ $2$'s.
CASE 1: $0, 0, 0$.
Well we can just take $11 \cdot 10 \cdot 9$ but since we want combinations of these, it is $\frac{11 \cdot 10 \cdot 9}{3 \cdot 2 \cdot 1} = 165$.
CASE 2: $1, 1, 1$.
Same idea, only we have $12$ to choose from. $\frac{12 \cdot 11 \cdot 10}{3 \cdot 2 \cdot 1} = 220$.
CASE 3: $2, 2, 2$.
Same as CASE 1. $165$.
CASE 4: $0, 1, 2$.
We have $11 \cdot 12 \cdot 11$ total sets of three. We see that since the elements are distinct modulo three none of them can be permutations of each other. So we have $11 \cdot 12 \cdot 11 = 1452$.
It's not hard to tell that these are all the cases (well, a little harder under time pressure) so we can just sum them up to get $165+220+165+1452 = 2002$. QED.
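A brute-force check of the count, added here as a sketch (it is not part of the original post):

```python
# Minimal sketch: count 3-element subsets of {1, ..., 34} whose sum is divisible by 3.
from itertools import combinations

count = sum(1 for triple in combinations(range(1, 35), 3) if sum(triple) % 3 == 0)
print(count)   # 2002
```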
--------------------
Comment: The fastest correct answer to this problem at the Las Vegas ARML Site was 2 minutes and 2 seconds. Across the country, it was somewhere around 1 minute. Pretty quick.
--------------------
Practice Problem: (2006 ARML Tiebreaker - #2) Let $f(x) = mx+b$ where $m$ and $b$ are integers with $m > 0$. If $2^{f(x)} = 5$ when $x = \log_8{10}$, find the ordered pair $(m, b)$. [Reworded]
1. Brian got it in 1:10 I think.
2. i got log 2=(3-m)/(3b+3)...
3. side comment:
the first question is almost identical to one that i wrote this year in E=mC.
except i had asked for the set of {1,2,...,10}. everything was identically the same :)
4. 4d331: Umm, not sure how you got that. The question is correct.
5. Oh. Generating functions is bad for the first problem after all. =/
f(log_8 10) = log_2 5 = 3(log_8 10) - 1
m = 3, b = -1
6. (m,b)=(3,-1)
http://www.christopherpoole.net/using-pyrax-with-standard-openstack-clouds.html
### Using pyrax with standard OpenStack clouds
Posted on 01, Dec 2015
Before we start, have a look at the pyrax documentation and the installation instructions. Note that at the time of writing, pyrax is "being deprecated in favor of the OpenStack SDK". Although it is worth noting if you have been using pyrax in any comprehensive way, falling back to the OpenStack SDK is going to involve a lot of work in my opinion.
RackSpace uses its own authentication endpoint, whereas a standard OpenStack installation will utilise username/password, token, or certificate (tokenless) authentication via the Keystone OpenStack Identity Service. The getting started guide provides information on the various ways authentication information can be injected into pyrax; however, we will use a configuration file that specifies multiple environments, together with separate credentials files. Critically, pyrax expects this configuration file to reside at ~/.pyrax.cfg, although you may want to specify your own file location. Your configuration file should look something like this:
[testing]
identity_type = keystone
auth_endpoint = http://<ip>:5000/v2.0/
[deployment]
identity_type = rackspace
region = SYD
And your credentials files (which you will be storing separately) will look something like this:
testing.creds
[keystone]
tenant_id = <tenant id>
deployment.creds
[rackspace_cloud]
api_key = <api key>
From Python, we can connect to either endpoint by specifying the environment and credentials file to use:
import pyrax
pyrax.cloudservers.servers.create(...)
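The snippet above stops short of the calls that actually select the environment and credentials. A fuller sketch might look like the following; it relies on pyrax's documented set_environment and set_credential_file helpers, and the server-creation line is only a commented placeholder rather than the post's original code.

```python
# Sketch only: select the environment defined in ~/.pyrax.cfg, then authenticate
# with the matching credentials file before using any of the service clients.
import pyrax

pyrax.set_environment("testing")               # the [testing] block from the config file
pyrax.set_credential_file("testing.creds")     # authenticates against the Keystone endpoint

# once authenticated, the usual clients are available, e.g. Nova via pyrax.cloudservers
# pyrax.cloudservers.servers.create("my-server", image_id, flavor_id)
```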
https://intelligencemission.com/free-energy-generator-in-speaker-magnet-free-energy-density.html
The solution to infinite energy is explained in the bible. But i will not reveal it since it could change our civilization forever. Transportation and space travel all together. My company will reveal it to thw public when its ready. My only hint to you is the basic element that was missing. Its what we experience in Free Power everyday matter. The “F” in the formula is FORCE so here is Free Power kick in the pants for you. “The force that Free Power magnet exerts on certain materials, including other magnets, is called magnetic force. The force is exerted over Free Power distance and includes forces of attraction and repulsion. Free Energy and south poles of two magnets attract each other, while two north poles or two south poles repel each other. ” What say to that? No, you don’t get more out of it than you put in. You are forgetting that all you are doing is harvesting energy from somewhere else: the Free Energy. You cannot create energy. Impossible. All you can do is convert energy. Solar panels convert energy from the Free Energy into electricity. Every second of every day, the Free Energy slowly is running out of fuel.
I have the blueprints. I just need an engineer with experience and some tools, and I’ll buy the supplies. [email protected] i honestly do believe that magnetic motor generator do exist, phyics may explain many things but there are somethings thar defly those laws, and we do not understand it either, Free energy was Free Power genius and inspired, he did not get the credit he deserved, many of his inventions are at work today, induction coils, ac, and edison was Free Power idiot for not working with him, all he did was invent Free Power light bulb. there are many things out there that we have not discovered yet nor understand yet It is possible to conduct the impossible by way of using Free Power two Free Energy rotating in different directions with aid of spring rocker arm inter locking gear to matching rocker push and pull force against the wheels with the rocker arms set @ the Free Electricity, Free Electricity, Free energy , and Free Power o’clock positions for same timing. No further information allowed that this point. It will cause Free Power hell lot of more loss jobs if its brought out. So its best leaving it shelved until the right time. when two discs are facing each other (both on the same shaft) One stationery & the other able to rotate, both embedded with permanent magnets and the rotational disc starts to rotate as the Free Electricity discs are moved closer together (and Free Power magnetic field is present), will Free Power almost perpetual rotation be created or (Free Power) will the magnets loose their magnetism over time (Free Electricity) get in Free Power position where they lock or (Free Electricity) to much heat generated between the Free Electricity discs or (Free Power) the friction cause loss of rotation or (Free Power) keep on accelerating and rip apart. We can have powerful magnets producing energy easily.
Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out.
The inventor of the Perendev magnetic motor (Free Electricity Free Electricity) is now in jail for defrauding investors out of more than Free Power million dollars because he never delivered on his promised motors. Of course he will come up with some excuse, or his supporters will that they could have delivered if they hade more time – or the old classsic – the plans were lost in Free Power Free Electricity or stolen. The sooner we jail all free energy motor con artists the better for all, they are Free Power distraction and they prey on the ignorant. To create Free Power water molecule X energy was released. Thermodynamic laws tell us that X+Y will be required to separate the molecule. Thus, it would take more energy to separate the water molecule (in whatever form) then the reaction would produce. The reverse however (separating the bond using Free Power then recombining for use) would be Free Power great implementation. But that is the bases on the hydrogen fuel cell. Someone already has that one. Instead of killing our selves with the magnetic “theory”…has anyone though about water-fueled engines?.. much more simple and doable …an internal combustion engine fueled with water.. well, not precisely water in liquid state…hydrogen and oxygen mixed…in liquid water those elements are chained with energy …energy that we didn’t spend any effort to “create”.. (nature did the job for us).. and its contained in the molecular union.. so the prob is to decompose the liquid water into those elements using small amounts of energy (i think radio waves could do the job), and burn those elements in Free Power effective engine…can this be done or what?…any guru can help?… Magnets are not the source of the energy.
I'm not very good at building things, but I think I will give it shot. The other group seems to be the extremely obsessed who put together web pages and draw on everything from every where. Common names and amazing “theories” keep popping up. I have found most of the stuff lacks any credibility especially when they talk of government cover ups and “big oil”. They throw around Free Energy stuff with every breath. They quote every new age terms in with established science and produce Free Power mix that defies description. The next group take it one step further. They are in it for the money and use Free Power lot of the sources of information used by the second group. Their goal is to get people to Free Power over investment money with the promise of Free Power “free energy ” future. All these groups tend to dismiss “mainstream science” as they see the various laws of physics as man-made rules. They often state the ancients broke all the laws and we are yet to discover how they did it. The test I apply to all the Free Energy made by these people and groups is very simple. Where is the independent evidence? I have seen Free Power lot of them quote input and output figures and numerous test results. Some even get supposedly independent testing done. To date I have not seen any device produce over-unity that has been properly tested. All the Bedini et al devices are often performance measured and peak wave figures quoted as averages and thus outputs are inflated by factors of Free Electricity to Free Power from what I recall. “Phase conjugation” – ah I love these terms. Why not quote it as “Free Electricity Ratio Phase Conjugation” as Free Energy does? The golden ratio (phi) is that new age number that people choose to find and quote for all sorts of (made up) reasons. Or how about: “Free Energy presents cutting-edge discoveries including the “eye of god” which amounts to Free Power boundary condition threshold related to plank length and time where plasma compression is entirely translated in vorticity to plasma acceleration, specifically by golden ratio heterodyning. ” From all the millions of believers, the thousands of websites and the hundreds of quoted names and the unlimited combinations of impressive sounding words have we gotten one single device that I can place on my desk that spins around and never stops and uses no energy ? Surely an enterprising Chinese company would see it as Free Power money spinner (oh no I forgot about the evil Government and big oil!) and produce Free Power cheap desk top model. Yeah, i decided to go big to get as much torque as possible. Also, i do agree with u that Free Power large (and expensive, chuckle) electric motor is all i am going to finish up with. However, its the net power margins that im most interested in. Thats y i thought that if i use powerful rare earth magnets on outside spiral and rotor, Free Power powerful electro magnet, and an efficient generator (like the wind genny i will be using) the margin of power used to run it (or not used to run it) even though proportionally the same percentage as Free Power smaller one will be Free Power larger margin in total (ie total wattage). Therefore more easy to measure if the margin is extremely smalll. Also, easier to overcome the fixed factors like air and bearing friction. Free Electricity had Free Power look at it. A lot bigger than I thought it would be for Free Power test model. Looks nicely engineered. I see there is Free Power comment there already. I agree with the comment. 
I’m suprised you can’t find some wrought (soft) iron. Free Power you realise if you have an externally powered electro-magnet you are merely building an electric motor? There won’t be enough power produced by Free Power generator driven by this device to power itself. I wish I had your patience. Enjoy the project. The Perendev motor has shielding and doesn’t work. Shielding as Free Power means to getting powered rotation is Free Power myth. Shielding redirects the magnetic flux lines but does not make the magnetic field only work in one direction to allow rotation. If you believe otherwise this is easily testable. Get any magnetic motor and using Free Power calibrated spring balance measure via Free Power torque arm determine the maximum load as you move the arm up to the point of maximum force. Free Power it in Free Power clockwise and counter clockwise direction.
Figure Free Electricity. Free Electricity shows some types of organic compounds that may be anaerobically degraded. Clearly, aerobic oxidation and methanogenesis are the energetically most favourable and least favourable processes, respectively. Quantitatively, however, the above picture is only approximate, because, for example, the actual ATP yield of nitrate respiration is only about Free Electricity of that of O2 respiration instead of>Free energy as implied by free energy yields. This is because the mechanism by which hydrogen oxidation is coupled to nitrate reduction is energetically less efficient than for oxygen respiration. In general, the efficiency of energy conservation is not high. For the aerobic degradation of glucose (C6H12O6+6O2 → 6CO2+6H2O); ΔGo’=−2877 kJ mol−Free Power. The process is known to yield Free Electricity mol of ATP. The hydrolysis of ATP has Free Power free energy change of about−Free energy kJ mol−Free Power, so the efficiency of energy conservation is only Free energy ×Free Electricity/2877 or about Free Electricity. The remaining Free Electricity is lost as metabolic heat. Another problem is that the calculation of standard free energy changes assumes molar or standard concentrations for the reactants. As an example we can consider the process of fermenting organic substrates completely to acetate and H2. As discussed in Chapter Free Power. Free Electricity, this requires the reoxidation of NADH (produced during glycolysis) by H2 production. From Table A. Free Electricity we have Eo’=−0. Free Electricity Free Power for NAD/NADH and Eo’=−0. Free Power Free Power for H2O/H2. Assuming pH2=Free Power atm, we have from Equations A. Free Power and A. Free energy that ΔGo’=+Free Power. Free Power kJ, which shows that the reaction is impossible. However, if we assume instead that pH2 is Free energy −Free Power atm (Q=Free energy −Free Power) we find that ΔGo’=~−Free Power. Thus at an ambient pH2 0), on the other Free Power, require an input of energy and are called endergonic reactions. In this case, the products, or final state, have more free energy than the reactants, or initial state. Endergonic reactions are non-spontaneous, meaning that energy must be added before they can proceed. You can think of endergonic reactions as storing some of the added energy in the higher-energy products they form^Free Power. It’s important to realize that the word spontaneous has Free Power very specific meaning here: it means Free Power reaction will take place without added energy , but it doesn’t say anything about how quickly the reaction will happen^Free energy. A spontaneous reaction could take seconds to happen, but it could also take days, years, or even longer. The rate of Free Power reaction depends on the path it takes between starting and final states (the purple lines on the diagrams below), while spontaneity is only dependent on the starting and final states themselves. We’ll explore reaction rates further when we look at activation energy. This is an endergonic reaction, with ∆G = +Free Electricity. Free Electricity+Free Electricity. Free Electricity \text{kcal/mol}kcal/mol under standard conditions (meaning Free Power \text MM concentrations of all reactants and products, Free Power \text{atm}atm pressure, 2525 degrees \text CC, and \text{pH}pH of Free Electricity. 07. 0). 
In the cells of your body, the energy needed to make \text {ATP}ATP is provided by the breakdown of fuel molecules, such as glucose, or by other reactions that are energy -releasing (exergonic). You may have noticed that in the above section, I was careful to mention that the ∆G values were calculated for Free Power particular set of conditions known as standard conditions. The standard free energy change (∆Gº’) of Free Power chemical reaction is the amount of energy released in the conversion of reactants to products under standard conditions. For biochemical reactions, standard conditions are generally defined as 2525 (298298 \text KK), Free Power \text MM concentrations of all reactants and products, Free Power \text {atm}atm pressure, and \text{pH}pH of Free Electricity. 07. 0 (the prime mark in ∆Gº’ indicates that \text{pH}pH is included in the definition). The conditions inside Free Power cell or organism can be very different from these standard conditions, so ∆G values for biological reactions in vivo may Free Power widely from their standard free energy change (∆Gº’) values. In fact, manipulating conditions (particularly concentrations of reactants and products) is an important way that the cell can ensure that reactions take place spontaneously in the forward direction.
Not Free Power lot to be gained there. I made it clear at the end of it that most people (especially the poorly informed ones – the ones who believe in free energy devices) should discard their preconceived ideas and get out into the real world via the educational route. “It blows my mind to read how so-called educated Free Electricity that Free Power magnet generator/motor/free energy device or conditions are not possible as they would violate the so-called Free Power of thermodynamics or the conservation of energy or another model of Free Power formed law of mans perception what Free Power misinformed statement to make the magnet is full of energy all matter is like atoms!!”
Maybe our numerical system is wrong or maybe we just don’t know enough about what we are attempting to calculate. Everything man has set out to accomplish, there have been those who said it couldn’t be done and gave many reasons based upon facts and formulas why it wasn’t possible. Needless to say, none of the ‘nay sayers’ accomplished any of them. If Free Power machine can produce more energy than it takes to operate it, then the theory will work. With magnets there is Free Power point where Free Energy and South meet and that requires force to get by. Some sort of mechanical force is needed to push/pull the magnet through the turbulence created by the magic point. Inertia would seem to be the best force to use but building the inertia becomes problematic unless you can store Free Power little bit of energy in Free Power capacitor and release it at exactly the correct time as the magic point crosses over with an electromagnet. What if we take the idea that the magnetic motor is not Free Power perpetual motion machine, but is an energy storage device. Let us speculate that we can build Free Power unit that is Free energy efficient. Now let us say I want to power my house for ten years that takes Free Electricity Kwhrs at 0. Free Energy /Kwhr. So it takes Free energy Kwhrs to make this machine. If we do this in Free Power place that produces electricity at 0. 03 per Kwhr, we save money.
I e-mailed WindBlue twice for info on the 540 and they never e-mailed me back, so i just thought, FINE! To heck with ya. Ill build my own. Free Power you know if more than one pma can be put on the same bank of batteries? Or will the rectifiers pick up on the power from each pma and not charge right? I know that is the way it is with car alt’s. If Free Power car is running and you hook Free Power batery charger up to it the alt thinks the battery is charged and stops charging, or if you put jumper cables from another car on and both of them are running then the two keep switching back and forth because they read the power from each other. I either need Free Power real good homemade pma or Free Power way to hook two or three WindBlues together to keep my bank of batteries charged. Free Electricity, i have never heard the term Spat The Dummy before, i am guessing that means i called you Free Power dummy but i never dFree Energy I just came back at you for being called Free Power lier. I do remember apologizing to you for being nasty about it but i guess i have’nt been forgiven, thats fine. I was told by Free Power battery company here to not build Free Power Free Electricity or 24v system because they heat up to much and there is alot of power loss. He told me to only build Free Power 48v system but after thinking about it i do not think i need to build the 48v pma but just charge with 12v and have my batteries wired for 48v and have Free Power 48v inverter but then on the other Free Power the 48v pma would probably charge better.
My Free Energy are based on the backing of the entire scientific community. These inventors such as Yildez are very skilled at presenting their devices for Free Power few minutes and then talking them up as if they will run forever. Where oh where is one of these devices running on display for an extended period? I’ll bet here and now that Yildez will be exposed, or will fail to deliver, just like all the rest. A video is never proof of anything. Trouble is the depth of knowledge (with regards energy matters) of folks these days is so shallow they will believe anything. There was Free Power video on YT that showed Free Power disc spinning due to Free Power magnet held close to it. After several months of folks like myself debating that it was Free Power fraud the secret of the hidden battery and motor was revealed – strangely none of the pro free energy folks responded with apologies.
What may finally soothe the anger of Free Power D. Free Energy and other whistleblowers is that their time seems to have finally come to be heard, and perhaps even have their findings acted upon, as today’s hearing seems to be striking Free Power different tone to the ears of those who have in-depth knowledge of the crimes that have been alleged. This is certainly how rep. Free Power Free Electricity, Free Power member of the Free Energy Oversight and Government Reform Committee, sees it:
The “energy ” quoted in magnetization is the joules of energy required in terms of volts and amps to drive the magnetizing coil. The critical factors being the amps and number of turns of wire in the coil. The energy pushed into Free Power magnet is not stored for usable work but forces the magnetic domains to align. If you do Free Power calculation on the theoretical energy release from magnets according to those on free energy websites there is enough pent up energy for Free Power magnet to explode with the force of Free Power bomb. And that is never going to happen. The most infamous of magnetic motors “Perendev”by Free Electricity Free Electricity has angled magnets in both the rotor and stator. It doesn’t work. Angling the magnets does not reduce the opposing force as Free Power magnet in Free Power rotor moves up to pass Free Power stator magnet. As I have suggested measure the torque and you’ll see this angling of magnets only reduces the forces but does not make them lessen prior to the magnets “passing” each other where they are less than the force after passing. Free Energy’t take my word for it, measure it. Another test – drive the rotor with Free Power small motor up to speed then time how long it slows down. Then do the same test in reverse. It will take the same time to slow down. Any differences will be due to experimental error. Free Electricity, i forgot about the mags loseing their power.
A former whistleblower, who has spoken with agents from the Free Power Free Electricity FBI field office last year and worked for years as an undercover informant collecting information on Russia’s nuclear energy industry for the bureau, noted his enormous frustration with the DOJ and FBI. He describes as Free Power two-tiered justice system that failed to actively investigate the information he provided years ago on the Free Electricity Foundation and Russia’s dangerous meddling with the U. S. nuclear industry and energy industry during the Obama administration.
Considering that I had used spare parts, except for the plywood which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn’t hook up the system to Free Power generator head I’m not sure how much it would take to have enough torque for that to work. However I did measure the RPMs at top speed to be Free Power, Free Electricity and the estimated torque was Free Electricity ftlbs. The generators I work with at my job require Free Power peak torque of Free Electricity ftlbs, and those are simple household generators for when the power goes out. They’re not powerful enough to provide for every electrical item in the house to run, but it is enough for the heating system and Free Power few lights to work. Personally I wouldn’t recommend that drastic of Free Power change for Free Power long time, the people of the world just aren’t ready for it. However I strongly believe that Free Power simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such Free Power unit, that’s the nature of mankind’s greed. To Nittolo and Free Electricity ; You guys are absolutely hilarious. I have never laughed so hard reading Free Power serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be Free Power winner!
|
2019-03-19 20:13:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4979456961154938, "perplexity": 1714.3666880835383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202125.41/warc/CC-MAIN-20190319183735-20190319205735-00015.warc.gz"}
|
https://tex.stackexchange.com/questions/565113/uneven-vertical-spacing-between-paragraphs
|
# Uneven vertical spacing between paragraphs
Here's a MWE:
\documentclass{book}
\usepackage{geometry}
\geometry{paperwidth=127mm,paperheight=203mm,totalwidth=92mm,totalheight=165mm}
\usepackage{lipsum}
\begin{document}
\vspace*{5cm}
\lipsum[1][1-8]
\lipsum[1][1-8]
\lipsum[1][1-12]
\lipsum[1][1-9]
\lipsum[1][1-10]
\lipsum[1][1-10]
\lipsum[1][1-12]
\lipsum[1][1-12]
\lipsum[1][1-4]
\end{document}
My point is that there's a lot more vertical space between the paragraphs on page 2 than on page 1 or 3. If I look at pages 2 and 3 side by side, I think that the difference is annoying.
However, it seems pretty obvious that this could be fixed by moving the first line of page 3 to the end of page 2 without creating orphans or widows and without exceeding the available height for page 2. Why doesn't TeX do that?
• Typically \parskip uses (expandable) glue. Use \the\parskip for details. Try using \raggedbottom to make vertical expansion unnecessary. Oct 2 '20 at 16:13
• This is for a book. I can't use \raggedbottom. Oct 2 '20 at 16:42
• Is tex.stackexchange.com/questions/401778/… sort of where you are headed? Note: a better solution might be possible using \pagetotal and \pagegoal. Oct 2 '20 at 17:10
• Not really. I think the solution I describe below is fine with me, though. Thanks. Oct 2 '20 at 17:32
I found something in the Mittelbach/Goossens book (which I admit is on my bookshelf but rarely looked at). I'm not sure if this is the "best" way to do it but it seems to fix my problem. Use this at the beginning of the document:
\newcounter{tempc} \newcounter{tempcc}
\setlength\textheight{165mm-\topskip}            % target height minus \topskip (calc syntax)
\setcounter{tempc}{\textheight}                  % \textheight, coerced to scaled points
\setcounter{tempcc}{\baselineskip}               % \baselineskip, coerced to scaled points
\setcounter{tempc}{\value{tempc}/\value{tempcc}} % whole number of text lines that fit
\setlength\textheight{\baselineskip*\value{tempc}+\topskip}
You also need to use the calc package for this to work.
EDIT: As David Carlisle suggested, the heightrounded option of the geometry package provides a similar solution.
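For reference, a minimal sketch of the heightrounded route (option name as documented by geometry; it adjusts \textheight so that \textheight minus \topskip is an integer multiple of \baselineskip):
\usepackage{geometry}
\geometry{paperwidth=127mm,paperheight=203mm,totalwidth=92mm,totalheight=165mm,heightrounded}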
• the geometry package will do that for you Oct 2 '20 at 16:58
• @DavidCarlisle Are you referring to the heightrounded option (that I just found in the documentation)? It seems the Mittelbach/Goossens solution produces a more pleasing result. It is also my understanding that heightrounded might enlarge textheight which is not what I want. Oct 2 '20 at 17:17
• Probably. I couldn't remember the name was going to check:-) either way the intention is to ensure that textheight-topskip is a multiple of \baselineskip Oct 2 '20 at 17:23
|
2021-09-19 15:22:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7643439769744873, "perplexity": 842.1675001944269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00647.warc.gz"}
|
https://taoofmac.com/space/blog/2006/10/18
|
### The Tao of Mac
Have I ever mentioned how much I loathe spammers?
Well, I have even more reason to hate them. Tonight, after arriving home, I opened my mailbox to find over 200 non-delivery reports in my taoofmac.com e-mail account (which I recently moved to Google), and which are the result of someone faking From: addresses @taoofmac.com.
Why did I get these? Well, because I went to the Google Apps For Your Domain preferences and set my account as a "catch-all" address for taoofmac.com - and, as a result, any bounced e-mail ends up in my inbox.
At this point, I have established that besides these 200-odd, another 444 were faked as originating from my domain and recognized by Google as Spam. I was worried for a while, though, since trying to log in to my mail account on Google via Safari yielded -
Server Error
We're sorry, but Gmail is temporarily unavailable. We're currently working to fix the problem -- please try logging in to your account in a few minutes.
...which did not bode well. I eventually managed to log in, but only to find I cannot remove the catch-all "nickname"!
I can add and remove other nicknames from my account, but not the catch-all (which, despite being a dumb idea, was actually suggested during domain setup - I just decided to go along with it temporarily). Clearly, not being able to remove this particular nickname is a bug. In my particular case, a pretty annoying one.
I tried with both Camino and Safari, but it seems to make no difference: *@taoofmac.com is still there, and I have reported this to Google via the support form and replied to the boilerplate e-mail.
In case anyone at Google is reading this, it's issue #78992522 Cannot remove catch-all (*) "nickname".
Update: Thanks to a reader with the right connections, I was made aware of a workaround, which is to disable catch-all address in 'Domain settings' -> 'Advanced settings'. This makes sense, but a link to that instead of the "Remove" option might be a good way to save time.
### Hunting Rats
Obviously, the maggots that are faking e-mail from my domain have noticed a brand new (i.e., virgin) MX record pop up and started using it as a likely way to bypass dumber Spam filters. Since it is impossible to stop people from faking From: addresses, all I can do at this point is track down the assholes that did it this time.
Looking at one of the e-mails I got, that's easily done:
X-Originating-IP: [67.187.135.122]
Return-Path: <[email protected]>
Authentication-Results: mta149.mail.re2.yahoo.com from=taoofmac.com; domainkeys=neutral (no sig)
Received: from 67.187.135.122 (EHLO c-67-187-135-122.hsd1.ca.comcast.net) (67.187.135.122)
by mta149.mail.re2.yahoo.com with SMTP; Wed, 18 Oct 2006 03:05:15 -0700
Message-ID: <[email protected]>
From: "Marina Brunson" <[email protected]>
Obviously, Marina does not exist. But garyscomputer has an IP address, and (guess what) it comes from one of the cesspits of spamming - Comcast Cable, aka "bot central":
$ whois 67.187.135.122
Comcast Cable Communications, Inc. ATT-COMCAST (NET-67-160-0-0-1)
  67.160.0.0 - 67.191.255.255
Comcast Cable Communications, Inc. STOKTON-3 (NET-67-187-128-0-1)
  67.187.128.0 - 67.187.159.255
# ARIN WHOIS database, last updated 2006-10-17 19:10
# Enter ? for additional hints on searching ARIN's WHOIS database.
I got 50 NDRs originating from this pest alone, but there were plenty more. Here are the other members of the "Top 5" nuisances I could track down:
$ whois 68.88.166.243
SBC Internet Services - Southwest SBCIS-SBIS-6BLK (NET-68-88-0-0-1)
68.88.0.0 - 68.95.255.255
Maize USD SBC068088166000030708 (NET-68-88-166-0-1)
68.88.166.0 - 68.88.167.255
...
$ whois 207.3.149.143
Savvis SAVVIS (NET-207-2-128-0-1)
  207.2.128.0 - 207.3.255.255
WorldPath Internet Services CW-207-3-144-A (NET-207-3-144-0-1)
  207.3.144.0 - 207.3.151.255
WPIS TRADEPORT DSL WPIS-207-3-149-128-25 (NET-207-3-149-128-1)
  207.3.149.128 - 207.3.149.255
...
$ whois 24.24.57.45
OrgID: RRMA
City: Herndon
StateProv: VA
PostalCode: 20171
Country: US
ReferralServer: rwhois://ipmt.rr.com:4321
NetRange: 24.24.0.0 - 24.29.255.255
CIDR: 24.24.0.0/14, 24.28.0.0/15
...
\$ whois 71.65.207.158
CIDR: 71.64.0.0/12
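For what it's worth, batching the lookup is straightforward if the NDRs are saved as individual files (the ~/ndr path below is hypothetical, just for illustration):
$ grep -h 'X-Originating-IP' ~/ndr/*.eml | tr -d '[]' | awk '{print $2}' | sort | uniq -c | sort -rn | head -5
$ whois 67.187.135.122 | egrep -i 'OrgName|NetRange|CIDR'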
|
2018-11-16 01:30:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2535077929496765, "perplexity": 5496.511718319972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742968.18/warc/CC-MAIN-20181116004432-20181116030432-00426.warc.gz"}
|
https://indico.fnal.gov/event/44870/contributions/198648/
|
We continue to review all events currently planned for the next sixty days and organizers will be notified if their event must be canceled, postponed, or held remotely. Please, check back on Indico during this time for updates regarding your meeting specifics.
As DOE O 142.3A, Unclassified Foreign Visits and Assignments Program (FVA) applies not only to physical access to DOE sites, technologies, and equipment, but also information, all remote events hosted by Fermilab must comply with FVA requirements. This includes participant registration and agenda review. Please contact Melissa Ormond, FVA Manager, with any questions.
----
ZOOM meetings Lab policy: You absolutely must not post Zoom meeting IDs on any public website unless you set a password to protect the meeting/event. Of course, do not post the password on any public website, either.
For details please refer to the news article https://news.fnal.gov/2020/05/security-guidelines-for-zoom-meetings-2/. Zoom information should be either given on email request or stored on a SharePoint page behind SSO.
Do NOT post the zoom information in the field 'Venue/Location' since it will show in the weekly Calendar even if the event is protected!
----
Indico search will be reestablished in the next version upgrade of the software: https://getindico.io/roadmap/
# Snowmass Community Planning Meeting - Virtual
5-8 October 2020
Virtual
US/Central timezone
Captions are available at https://us.ai-live.com/CaptionViewer/Join/thirdparty?sessionId=USFERM0810B . Default meeting times are in US Central Daylight Time (UTC-5). Zoom connection information has been sent to registered participants.
## Gas TPCs with directional sensitivity to dark matter, neutrinos, and BSM physics
Not scheduled
3m
Virtual
### Speaker
Sven Vahsen (University of Hawaii)
### Description
There is an opportunity to develop a long-term, diverse, and cost-effective US experimental program based on directional detection of nuclear recoils in gas TPCs.
Smaller, 1 m$^3$ scale detectors could detect and demonstrate directional sensitivity to Coherent Elastic Neutrino-Nucleus Scattering (CEνNS) at either NuMI or DUNE. This technology is also sensitive to beyond the Standard Model (BSM) physics in the form of low-mass dark matter, heavy sterile neutrinos, and axion-like particles. For every factor ten increase in exposure, new measurements are possible. A 10 m$^3$ detector could produce the strongest SD WIMP-proton cross section limits of any experiment across all WIMP masses. A 1000 m$^3$ detector would detect between 13 and 37 solar CEνNS events over six years. Larger volumes would bring sensitivity to neutrinos from an even wider range of sources, including galactic supernovae, nuclear reactors, and geological processes. An ambitious DUNE-scale detector, but operating at room temperature and atmospheric pressure, would have non-directional WIMP sensitivity comparable to any proposed experiment, and would, in addition, allow us to utilize directionality to penetrate deep into the neutrino floor.
If a dark matter signal is observed, this would mark the beginning of a new era in physics. A large directional detector would then hold the key to first establishing the galactic origin of the signal, and to subsequently map the local WIMP velocity distribution and explore the particle phenomenology of dark matter.
To understand and fully maximize the physics reach of gas TPCs as envisioned here, further phenomenological work on dark matter and neutrinos, improved micro-pattern gaseous detectors (MPGDs), customized front end electronics and novel region-of-interest triggers are needed. We encourage the wider dark matter, neutrino, and instrumentation communities participating in Snowmass to come together and help evaluate and improve this proposal.
Primary frontier topic Cosmic Frontier
### Primary authors
Diego Aristizabal Sierra (Universidad Tecnica Federico Santa María) Connor Awe (Duke University) Elisabetta Baracchini (INFN, GSSI) Phillip Barbeau (Duke University) Bhaskar Dutta (Texas A&M University) Warren Lynch (University of Sheffield) Neil Spooner (University of Sheffield) James Battat (Wellesley College) Cosmin Deaconu (UChicago / KICP) Callum Eldridge (University of Sheffield) Majd Ghrear (University of Hawaii) Peter Lewis (University of Bonn) Dinesh Loomba (University of New Mexico) Katie J. Mack (North Carolina State University) Diane Markoff Markoff (North Carolina Central University) Hans Muller (University of Bonn) Kentaro Miuchi (Kobe University) Ciaran O'Hare (University of Sydney) Nguyen Phan (Los Alamos National Laboratory) Kate Scholberg (Duke University) Daniel Snowden-Ifft (Occidental College) Louis Strigari (Texas A&M University) Thomas Thorpe (GSSI) Sven Vahsen (University of Hawaii)
### Presentation Materials
There are no materials yet.
|
2021-01-27 01:28:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20623436570167542, "perplexity": 14428.22575619582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00594.warc.gz"}
|
https://codereview.stackexchange.com/questions/15382/is-there-simplier-than-this-one-in-php
|
# Is there simplier than this one in PHP?
I want to refactor my code but I can't find and much simplier one. Can you please suggest on how to refactor the code below without using a loop? maybe using only array functions? TIA
<?php
$week_no = array(2, 3, 4);
$days = array('TU', 'WE', 'TH');
foreach ($week_no as $n) {
    foreach ($days as $d) {
        $out[] = $n . $d;
    }
}
var_dump($out); // array('2TU', '2WE', '2TH', '3TU', '3WE', '3TH', '4TU', '4WE', '4TH')
?>
• that's the simple and the fastest way to do it !! – mgraph Sep 6 '12 at 10:30
• yes. but is there anymore more elegant than this? :) – Peter Sep 6 '12 at 10:31
• maybe you can change the structure of youre array depends on how you want to use it. – Jurgo Sep 6 '12 at 10:32
• I was working out a version using array_reduce just for shits and giggles, but my hands got tired halfway through... Seriously, this is the simplest way to do it. – deceze Sep 6 '12 at 10:36
• $week_no = array(2,3,4); $days = array('TU', 'WE', 'TH'); function concatenateArrayValues(&$value, $key, $data) { $value = $data[1][floor($key/count($data[1]))] . $data[0][$key % count($data[0])]; } $compositeArray = array_fill(0, count($week_no)*count($days), NULL); array_walk($compositeArray, 'concatenateArrayValues', array($days, $week_no)); var_dump($compositeArray); – Mark Baker Sep 6 '12 at 12:45
## 6 Answers
What's wrong with a simple:
$out = array('2TU', '2WE', '2TH', '3TU', '3WE', '3TH', '4TU', '4WE', '4TH');
• It's repetitive and doesn't allow for extension. Now if you were to add Monday or Friday to that list, you'd have to manually do so for each week. – mseancole Sep 6 '12 at 17:03
• +1 from me, this is the simplest. If there aren't multiple places (at least 3 or more) where these week/days are used then this beats adding other complexity with a function etc. – Paul Sep 7 '12 at 3:54
• There's no mention in the question of extensions. I'm a pragmatic programmer. The time to add complexity is when the problem becomes more complex, not before ;-) – RichardAtHome Sep 7 '12 at 8:23
function weekday($a, $b) { global $days; return "$a $b" . join("$b", $days); };
$out = explode(' ', trim(array_reduce($week_no, 'weekday')));
(Urm, yes well maybe not...)
Sometimes to attain elegance you have to change the way your code is working, i.e. what are your reasons behind generating such an odd array? From my experience something like this would be more useful:
$out = array_fill(reset($week_no), count($week_no), $days);
Which generates the following:
Array
(
    [2] => Array
        (
            [0] => TU
            [1] => WE
            [2] => TH
        )
    [3] => Array
        (
            [0] => TU
            [1] => WE
            [2] => TH
        )
    [4] => Array
        (
            [0] => TU
            [1] => WE
            [2] => TH
        )
)
The above would be much easier to traverse and would be more extendable. In sticking with your question however the foreach method is by far the best as stated in the comments... but it was fun trying odd work arounds ;) Am surprised that there is no php function to directly prepend or append array items with another set of array items... probably because it's quite easy and fast to do so with a few foreachs.
• I would say using globals is not really an improvement. – Ikke Sep 6 '12 at 14:24
• Yep, totally... that's was the reason for the (erm, yes well maybe not part) ;) Was just an example of the lengths you would have to go to in order to achieve the same... and the reason for me stating a possible implementation change. – Pebbl Sep 6 '12 at 15:41
• +1 For the restructure, not for the globals shudder I was originally debating array_combine(), but this comes out nicer. – mseancole Sep 6 '12 at 16:13
• Thanks, heh, yep Globals are rather evil - It's a shame array_reduce doesn't accept a userdata param like array_walk. However, on the plus side it's the first time I've actually found a use for array_reduce... albeit a rather non-use. The worse part imo though is the conversion to string and then back to array again... (which I was expecting ppl to complain about more) – Pebbl Sep 6 '12 at 21:22
I found another solution with array_merge and array_map. However it's bigger and probably slower than your foreach-solution.
$out = call_user_func_array('array_merge',
    array_map(function($a) use ($days) {
        return array_map(function($b) use ($a) {
            return $a . $b;
        }, $days);
    }, $week_no, $days)
);
• Indentation works just fine if you use the provided formatting functions which are explained in the (not overlookable!) help just above the input field, highlighted by an ugly mustard yellow. – Konrad Rudolph Sep 7 '12 at 8:12
• You're right. I was hung up on backticks as they are meant to highlight code. It's probably just there that indentation doesn't work as expected. Thanks anyway! – Louis Huppenbauer Sep 7 '12 at 8:15
Not necessarily shorter, but you could create an object that takes the two arrays and does the merge for you, possibly using something like array_walk or array_map.
Stop. This is as simple as you're going to get it. Any further refactoring will be making your code more complex rather than simpler. Look at the other code suggestions... are they making it simpler or more complex? You may want to wrap this inside a function depending on how your application uses it, but without seeing more code it's impossible to judge. There are a lot of cases where there is a nice array function to call, but this is not one of those cases. Instead of worrying about how many characters or lines it takes to perform a task, worry about which implementation makes your intent the most clear. That's what matters.
Don't be sad... what you have is perfect. The only thing I would do is wrap everything inside a function.
function getWeekArray($week_no, $days) {
    $result = array();
    foreach ($week_no as $n) {
        foreach ($days as $d) {
            $result[] = $n . $d;
        }
    }
    return $result;
}
$a = array(2,3,4);
$b = array('TU', 'WE', 'TH');
var_dump(getWeekArray($a, $b)); // array('2TU', '2WE', '2TH', '3TU', '3WE', '3TH', '4TU', '4WE', '4TH')
|
2021-04-16 09:24:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3707917034626007, "perplexity": 9024.18781571352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00586.warc.gz"}
|
https://answers.ros.org/users/1317/young-lee/?sort=recent
|
2012-06-20 11:56:12 -0600 answered a question navfn and carrot planner?
I've never seen any launch file explicitly executing those packages. They are typically used as a plugin for the move_base package. Of course, you can create your own move_base that uses one of those planners. The code snippets from the wiki pages of those and the source code of move_base are helpful to understand how to use those planners.
2012-06-18 04:59:53 -0600 answered a question hector slam problem with imu
The velocities in the odometry messages from hector_mapping are always zero. You can try out the angular velocity from the IMU. For the linear velocity you can differentiate the position of the odometry from hector_mapping, integrate the linear acceleration from the IMU, and combine them through averaging or filtering. My robot doesn't have an IMU, and I get the "SearchDIR angle change too large" message only when my robot has gone crazy so that the map is messed up. If you get the message often or from the beginning, I don't think augmenting your robot with an IMU will fix the issue.
2011-11-04 06:29:47 -0600 asked a question gazebo_ros_force can't find link
I'm trying to model a hovercraft by applying force to a rigid body using the gazebo_ros_force plugin. I wrote the urdf file below, which has two boxes, one on top of the other. It works fine in Gazebo as long as I apply some force to the root link, base_link in this urdf file. When I attempt to apply force to the child link (test_link), I get an error message: "gazebo_ros_force plugin error: bodyName: test_link does not exist." If anyone can point out what I'm doing wrong, I would appreciate it.
(The URDF/plugin snippet did not survive extraction; only stray attribute values remain: Gazebo/Black, 0.00001, 0.00001, 1000000, 1.0, false, 0.01, true, 15.0, box_force, test_link.)
|
2021-11-29 19:18:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3254295587539673, "perplexity": 1396.5159387851315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00423.warc.gz"}
|
http://www.sellingwaves.com/archives/2004/01/06/vicos_theories_freed_from_joyce/
|
## January 06, 2004
### Vico's theories freed from Joyce?
Posted by Curt at 03:47 PM in What the Fuck? | TrackBack
So now Howard Dean thinks he’s Job. But Job had everything he could hope for, and was almost arbitrarily deprived of the fruit of his ambitions. That’s is more or less the opposite of Dean’s trajectory. I also don’t think that he is Jeremiah, although that might be the most obvious Biblical figure to compare him to. Jeremiah lived in a time of slavery and poverty for his country, and railed against the lack of pride and militancy of his people. Again, that is pretty much the opposite of both the situation and the rhetoric of Howard Dean. I think that he is probably one of the minor, non-electable prophets somewhere between Hosea and Habakkuk, but I’m not sure which one. Any suggestions?
Curt, how about a false one?
Posted by: John Venlet at January 6, 2004 07:52 PM
Ah, indeed, I agree completely. But then again, if we are using that criterion, I would have to include most of the prophets in the Bible in the same category.
Posted by: Curt at January 6, 2004 11:15 PM
Would Kucinich, then, be like the demon-possessed man in Mark 5.5 who "Night and day among the tombs and in the hills he would cry out and cut himself with stones"?
Posted by: Curt at January 6, 2004 11:19 PM
Well, many people say history repeats itself.
Posted by: John Venlet at January 7, 2004 08:39 AM
'Ah, indeed, I agree completely. But then again, if we are using that criterion, I would have to include most of the prophets in the Bible in the same category.'
Gonna have to disagree with you there, Curt. Assuming I understand you correct, if most OT prophets are false, then it is implied that some are not. (That is to say that I take most as meaning not simple some, but not all.) So, if I'm correct so far, then you would affirm that not all OT prophets are false, that is that some are true. So a true prophet, at least in the sense that we mean, would include a miraculous element. And a miracle would, obviously, imply the existence of some form of god. Further, since this god, of whatever type, worked a miracle through this prophet, it would be an implicit confirmation or recognition of the prophets message, including the type of god, let's say, advocated by the prophet as well as the truth of the prophet's religious system, in this case Judaism. From there it's just a few more steps to showing that those prophets labelled as such are not based on the other assumptions. So there are clearly some problems with such a position. A far more reasonable position is that the OT prophets were not false prophets.
As to the question of Dean. I would say certain aspects of Jonah and Hosea might apply. If we are not limited to OT prophets, I'm sure I could come up with some better ones.
Posted by: Aaron at January 10, 2004 03:55 AM
Just because you're right about something doesn't mean you're not a false prophet.
Posted by: shonk at January 11, 2004 04:38 AM
'Just because you’re right about something doesn’t mean you’re not a false prophet.'
But that ignores the jewish concept of prophecy. In Judaism a prophet was not simply a fortune teller. Rather a prophet was a special messenger from God, and a prophecy was a message from God. Sometimes, often perhaps, a prophecy included future events, often times warnings, but not always, but that was not essential. It should be mentioned that Jewish law had strict rules for these future prophesies. But the point is, and I think the jewish use of the term was implied, that if we are to recognize some prophets are true then we are recognizing their message as such. Beyond all this, though, I think a case could be made that the OT prophets were not false prophets.
Posted by: Aaron at January 11, 2004 05:51 AM
The point is, stipulating a Judaic God of some sort, that just because someone purports to be a messenger of God and is generally in-line with the Judaic tradition does not mean that the person is actually a messenger from God. An allusion might be drawn with a calculus exam: a student could correctly state that $\int_0^1 xdx = 1/2$ (the definite integral from 0 to 1 of the function x is equal to 1/2) but for entirely the wrong reason (for example by stating the antiderivative of x as 1/2 instead of (x^2)/2). In such a case we would say that the student's reasoning was basically wrong and that he didn't understand integration. In the same way, a so-called prophet whose message seems more-or-less legitimate is not necessarily an actual messenger from God.
Posted by: shonk at January 11, 2004 12:21 PM
I think this is all much ado about a false distinction. I was certainly devoting more thought to making fun of Howard Dean than to making dogmatic distinctions between the relative truth or falseness of prophets in the Tanach. I don't think it is particularly enlightening to quarrel over which of those prophets are "true," although I do appreciate the point that prophets in the Bible are not simply seers and that "true" and "false" in such a case refers more to the truth of the doctrine they are propounding than to the literal truth of their predictions for the future. It might have been a pretty useless and arbitrary thing to say, but as far as my purposes in making my comment I might just as well have said that all the Biblical prophets are false, particularly as I do not even accept the basic assumption in all of their doctrines, i.e. a theological meaning to the universe. Rather, in such a case, the best response is a humble admission of personal ignorance as to the real theological truth of the universe, but to the extent that I had any agenda in my playful denigration of Biblical prophets I was only motivated by the conviction that, as Tolstoy said, "the whole truth can never be immoral," which I find to be a sufficient epitaph for the warmongers of ancient Israel such as Jeremiah.
Posted by: Curt at January 14, 2004 04:15 AM
|
2018-12-14 16:30:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6599122881889343, "perplexity": 1481.1881389304413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826145.69/warc/CC-MAIN-20181214162826-20181214184826-00266.warc.gz"}
|
https://bookdown.dongzhuoer.com/hadley/ggplot2-book/functional-programming.html
|
## 18.5 Functional programming
Since ggplot2 objects are just regular R objects, you can put them in a list. This means you can apply all of R’s great functional programming tools. For example, if you wanted to add different geoms to the same base plot, you could put them in a list and use lapply().
geoms <- list(
  geom_point(),
  geom_boxplot(aes(group = cut_width(displ, 1))),
  list(geom_point(), geom_smooth())
)
p <- ggplot(mpg, aes(displ, hwy))
lapply(geoms, function(g) p + g)
#> [[1]]
#>
#> [[2]]
#>
#> [[3]]
#> geom_smooth() using method = 'loess' and formula 'y ~ x'
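The same idea extends to Map() when you want to vary more than one thing at a time. A small sketch using the p and geoms objects defined above (the titles are made up purely for illustration):
titles <- c("points", "boxplots", "points + smooth")
labelled <- Map(function(g, title) p + g + ggtitle(title), geoms, titles)
labelled[[1]]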
### 18.5.1 Exercises
1. How could you add a geom_point() layer to each element of the following list?
plots <- list(
  ggplot(mpg, aes(displ, hwy)),
  ggplot(diamonds, aes(carat, price)),
  ggplot(faithfuld, aes(waiting, eruptions, size = density))
)
2. What does the following function do? What’s a better name for it?
mystery <- function(...) {
  Reduce(`+`, list(...), accumulate = TRUE)
}
mystery(
  ggplot(mpg, aes(displ, hwy)) + geom_point(),
  geom_smooth(),
  xlab(NULL),
  ylab(NULL)
)
|
2022-05-23 07:41:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24998998641967773, "perplexity": 12457.67757015116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662556725.76/warc/CC-MAIN-20220523071517-20220523101517-00426.warc.gz"}
|
https://www.futurelearn.com/courses/thermodynamics/0/steps/25315
|
2.7
# Example 2: Equation of state for ideal gas
### Example : In a gas-spring system
An ideal gas is contained in a cylinder. Initially the pressure, volume and temperature are P1, V1 and T1. A spring with spring constant k is attached to the piston as shown in the figure, and the area of the piston is A. The gas expands against the spring as the temperature is raised to T2 at constant pressure. What is the displacement? Let the final displacement be Δx.
#### We can address this problem by two approaches; the force balances or the energy balances.
① Force balance at the piston
The force exerted by the spring due to the displacement is balanced with the gas pressure
$F = k\,\Delta x = P_{\text{gas}}A = P_1 A$
The ideal gas law holds in every state of the ideal gas.
② Energy balance
Work done by the gas = Energy stored in spring
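As a rough numerical illustration of the force-balance route, here is a small Python sketch; the values of P1, A and k below are made up for the example and are not part of the course step.

```python
# Toy numbers (assumptions, not from the course step)
P1 = 2.0e5   # initial gas pressure, Pa
A = 0.01     # piston area, m^2
k = 5.0e4    # spring constant, N/m

dx = P1 * A / k              # force balance as stated above: k * dx = P_gas * A = P1 * A
E_spring = 0.5 * k * dx**2   # energy stored in the compressed spring
print(f"dx = {dx:.3f} m, spring energy = {E_spring:.1f} J")
# dx = 0.040 m, spring energy = 40.0 J
```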
|
2019-03-22 21:18:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.626119077205658, "perplexity": 1016.0115726929336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202689.76/warc/CC-MAIN-20190322200215-20190322222215-00270.warc.gz"}
|
https://www.semanticscholar.org/paper/A-dichotomy-for-groupoid-%24%5Ctext%7BC%7D%5E%7B%5Cast-%7D%24-Rainone-Sims/533c077d635fb9d900e473cf4d85e242de5bc76c
|
# A dichotomy for groupoid $\text{C}^{\ast }$ -algebras
@article{Rainone2017ADF,
  title={A dichotomy for groupoid $\text{C}^{\ast}$-algebras},
  author={Timothy Rainone and Aidan Sims},
  journal={Ergodic Theory and Dynamical Systems},
  year={2017},
  volume={40},
  pages={521--563}
}
• Published 14 July 2017
• Mathematics
• Ergodic Theory and Dynamical Systems
We study the finite versus infinite nature of C $^{\ast }$ -algebras arising from étale groupoids. For an ample groupoid $G$ , we relate infiniteness of the reduced C $^{\ast }$ -algebra $\text{C}_{r}^{\ast }(G)$ to notions of paradoxicality of a K-theoretic flavor. We construct a pre-ordered abelian monoid $S(G)$ which generalizes the type semigroup introduced by Rørdam and Sierakowski for totally disconnected discrete transformation groups. This monoid characterizes the finite/infinite nature…
|
2023-02-09 12:14:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848247766494751, "perplexity": 1621.7691490025843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00284.warc.gz"}
|
https://zbmath.org/?q=an:0969.60078
|
## The limits of stochastic integrals of differential forms.(English)Zbl 0969.60078
This long article is concerned with developing an appropriate stochastic calculus for a diffusion process $$X$$ on a manifold $$E$$. It is assumed that $$X$$ is associated with a second-order elliptic differential operator in divergence form with measurable coefficients. The first main result is an extension of the Lyons-Zheng decomposition to the case of fixed starting point $$o\in E$$; if $$f$$ is a bounded function of finite energy, the process $$f(X_t)$$ can be uniquely decomposed under $$P^o$$ into the sum of an additive functional martingale $$(M_t^s,\;t\geq s)$$, a backward martingale $$(\overline{M}_s^t,\;0<s\leq t)$$, and two additive functionals $$(\alpha_t^s,\beta_t^s,\;t\geq s)$$ of finite variation: $f(X_t)=f(X_s)+\frac 12M^s_t-\frac 12\overline{M}^t_s- \alpha_t^s+\beta_t^s\qquad 0<s\leq t .$ This extends results of T. J. Lyons and W. Zheng [in: Les processus stochastiques. Astérisque 157-158, 249-271 (1988; Zbl 0654.60059)] and T. J. Lyons and T. S. Zhang [Ann. Probab. 22, No. 1, 494-524 (1994; Zbl 0804.60044)]. This decomposition is then used to define a Stratonovich type stochastic integral $$\int_s^t\omega \circ dX$$ of a differential 1-form $$\omega$$ along the path $$X$$. This integral is a priori only defined for $$s>0$$, but it is shown that it converges under certain conditions $$P^o$$-a.s. to a finite limit as $$s\downarrow 0$$. Similarly, the limiting behavior for $$t\uparrow\zeta$$ (the lifetime of $$X$$) is studied. Particular results are obtained in the case where $$X$$ is a Brownian motion on a negatively curved complete Riemannian manifold. Section 8 contains estimates on the heat kernel and its gradient.
Reviewer: A.Schied (Berlin)
### MSC:
60J60 Diffusion processes
60H05 Stochastic integrals
31C25 Dirichlet forms
### Citations:
Zbl 0654.60059; Zbl 0804.60044
### References:
[1] Bliedtner, J. and Hansen, W. (1986). Potential Theory. Springer, New York. · Zbl 0706.31001
[2] Blumenthal, R. M. and Getoor, R. K. (1986). Markov Processes and Potential Theory. Academic Press, New York. · Zbl 0169.49204
[3] Constantinescu, C. and Cornea, A. (1972). Potential Theory on Harmonic Spaces. Springer, New York. · Zbl 0248.31011
[4] Courrège, Ph. and Priouret, P. (1965). Recollements de processus de Markov. Publ. Inst. Statist. Univ. Paris 14 275-377. · Zbl 0275.30026
[5] Fabes, E. and Stroock, D. (1986). A new proof of Moser's parabolic Harnack inequality using the old ideas of Nash. Arch. Rational Mech. Anal. 96 327-338. · Zbl 0652.35052
[6] Fukushima, M., Oshima, Y. and Takeda, M. (1994). Dirichlet Forms and Symmetric Markov Processes. de Gruyter, Berlin. · Zbl 0838.31001
[7] Gilbarg, D. and Trudinger, N. S. (1983). Elliptic Partial Differential Equations of Second Order. Springer, Berlin. · Zbl 0562.35001
[8] Ladyzenskaya, O. A., Uraltseva, N. N. and Solonikov, V. A. (1967). Linear and Quasilinear Equations of Parabolic Type. Nauka, Moscow. (In Russian.)
[9] Li, P. and Karp, L. (1998). The heat equation on complete Riemannian manifolds. Unpublished manuscript.
[10] Li, P. and Yau, S. T. (1986). On the upper estimate of the heat kernel of a complete Riemannian manifold. Acta Math. 156 153-201. · Zbl 0611.58045
[11] Lyons, T. (1998). Random thoughts on reversible potential theory. Unpublished manuscript. · Zbl 0757.31007
[12] Lyons, T. and Stoica, L. (1996). On the limit of stochastic integrals of differential forms. Stochastics Monogr. 10. Gordon and Breach, Yverdon. · Zbl 0899.60046
[13] Lyons, T. J. and Zhang, T. S. (1994). Decomposition of Dirichlet processes and its application. Ann. Probab. 22 1-26. · Zbl 0804.60044
[14] Lyons, T. J. and Zheng, W. (1988). A crossing estimate for the canonical process on a Dirichlet space and a tightness result. Astérisque 157-158 249-271. · Zbl 0654.60059
[15] Lyons, T. J. and Zheng, W. A. (1990). On conditional diffusion processes. Proc. Roy. Soc. Edinburgh Sect. A 115 243-255. · Zbl 0715.60097
[16] Ma, M. and Röckner, M. (1992). Introduction to the Theory of Dirichlet Forms. Springer, New York. · Zbl 0826.31001
[17] Prat, J.-J. (1971). Étude asymptotique du mouvement brownien sur une variété riemannienne à courbure négative. C.R. Acad. Sci. Paris Sér. A 272 1586-1589. · Zbl 0296.60053
[18] Saloff-Coste, L. (1992). Uniformly elliptic operators on Riemannian manifolds. J. Diff. Geometry 36 417-450. · Zbl 0735.58032
[19] Stoica, L. (1980). Local Operators and Markov Processes. Lecture Notes in Math. 816. Springer, Berlin. · Zbl 0446.60067
[20] Stroock, D. W. (1988). Diffusion semigroups corresponding to uniformly elliptic divergence form operators. Séminaire de Probabilités XXII. Lecture Notes in Math. 1321 316-347. Springer, Berlin. · Zbl 0651.47031
[21] Sullivan, D. (1983). The Dirichlet problem at infinity for a negatively curved manifold. J. Diff. Geometry 18 723-732. · Zbl 0541.53037
[22] Takeda, M. (1991). On the conservativeness of the Brownian motion on a Riemannian manifold. Bull. London Math. Soc. 23 86-88. · Zbl 0748.60070
|
2022-06-30 13:52:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7225728631019592, "perplexity": 1432.4746120419095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103821173.44/warc/CC-MAIN-20220630122857-20220630152857-00496.warc.gz"}
|
https://projecteuclid.org/euclid.aaa/1412605765
|
## Abstract and Applied Analysis
### Finite-Time ${H}_{\infty }$ Control for a Class of Discrete-Time Markov Jump Systems with Actuator Saturation via Dynamic Antiwindup Design
#### Abstract
We deal with the finite-time control problem for discrete-time Markov jump systems subject to saturating actuators. A finite-state Markovian process is given to govern the transition of the jumping parameters. A controller designed for unconstrained systems, combined with a dynamic antiwindup compensator, is given to guarantee that the resulting system is mean-square locally asymptotically finite-time stabilizable. The proposed conditions allow us to find a dynamic anti-windup compensator which stabilizes the closed-loop system in the finite-time sense. All these conditions can be expressed in the form of linear matrix inequalities and are therefore numerically tractable, as shown in the example included in the paper.
#### Article information
Source
Abstr. Appl. Anal., Volume 2014, Special Issue (2013), Article ID 906902, 9 pages.
Dates
First available in Project Euclid: 6 October 2014
https://projecteuclid.org/euclid.aaa/1412605765
Digital Object Identifier
doi:10.1155/2014/906902
Mathematical Reviews number (MathSciNet)
MR3186986
Zentralblatt MATH identifier
07023285
#### Citation
Zhao, Junjie; Wang, Jing; Li, Bo. Finite-Time ${H}_{\infty }$ Control for a Class of Discrete-Time Markov Jump Systems with Actuator Saturation via Dynamic Antiwindup Design. Abstr. Appl. Anal. 2014, Special Issue (2013), Article ID 906902, 9 pages. doi:10.1155/2014/906902. https://projecteuclid.org/euclid.aaa/1412605765
|
2019-12-16 02:12:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 1, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41904380917549133, "perplexity": 3839.332054439475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541315293.87/warc/CC-MAIN-20191216013805-20191216041805-00347.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapters-1-5-cumulative-review-exercises-page-377/17
|
## Basic College Mathematics (10th Edition)
$\frac{1}{5}$
200 people do not drink coffee and 1000 total were in the survey. Therefore, $\frac{200}{1000}$ do not drink coffee. $\frac{200}{1000}=\frac{1}{5}$
|
2021-02-26 10:19:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2729666829109192, "perplexity": 2474.420532493747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356456.56/warc/CC-MAIN-20210226085543-20210226115543-00244.warc.gz"}
|
http://math.stackexchange.com/questions/45020/visualization-of-complex-roots-for-quadratics?answertab=votes
|
# Visualization of complex roots for quadratics
I read that if a parabola has no real roots, then its complex roots can be visualized by graphing the same parabola ($ax^2 + bx + c$) with $-a$ and then finding the roots of that, then using those roots as the diameter of a circle. Then, the "top" and "bottom" of the circle will be the complex roots of the parabola (in the complex plane).
I don't know if I explained that in [enough] detail, but I'm trying to prove this. I've been trying to generalize an equation for all the parabolas with no real roots, and I seem to be getting somewhere, but it seems like I'm over-complicating this.
How would you approach this? Any suggestions would be appreciated.
A proof of this construction would also be an acceptable answer. Thank you.
-
There is a statement along these lines, but it doesn't work to simply negate $a$ when the quadratic is in standard form. You have to start by completing the square: $$ax^2 + bx + c \;=\; a(x-h)^2 + k.$$ Then the roots of the quadratic $$-a(x-h)^2 + k$$ will have the properties you describe. In particular, if we write the roots of this quadratic as $h + r$ and $h - r$, then the roots of the original quadratic are $h + ri$ and $h - ri$.
By the way, observe that the auxiliary quadratic described above has the same vertex as the original quadratic, but its graph has been reflected across the horizontal line that goes through this point. This gives a nice geometric way of "picturing" the complex roots of a quadratic by looking at the graph.
-
Either I'm misunderstanding what you wrote, I did a mistake below, or the construction does not work.
Assume that $a\gt 0$ (we may as well, since the roots of $ax^2+bx+c$ are the same as the roots of $-ax^2-bx-c$).
The roots of $-ax^2+bx+c$ are the same as the roots of $ax^2-bx-c$, and are given by $$\frac{b+\sqrt{b^2+4ac}}{2a}\quad\text{and}\quad\frac{b-\sqrt{b^2+4ac}}{2a}.$$ If you are assuming that $ax^2+bx+c$ has no real roots (so that $b^2-4ac\lt 0$), then this has real roots, since $b^2-4ac\lt 0$ implies that $ac\gt 0$, so $b^2+4ac\gt 0$.
The circle whose diameter goes from the point $\left(\frac{b-\sqrt{b^2+4ac}}{2a},0\right)$ to $\left(\frac{b+\sqrt{b^2+4ac}}{2a},0\right)$ has center at $\left(\frac{b}{2a},0\right)$, and radius $\frac{\sqrt{b^2+4ac}}{2a}$. So the coordinates of the "top" and "bottom" of that circle are $$\left(\frac{b}{2a},\frac{\sqrt{b^2+4ac}}{2a}\right)\quad\text{and}\quad\left(\frac{b}{2a},-\frac{\sqrt{b^2+4ac}}{2a}\right).$$ That is, they correspond to the complex numbers $$\frac{b}{2a} + \frac{\sqrt{b^2+4ac}}{2a}i\qquad\text{and}\qquad \frac{b}{2a}-\frac{\sqrt{b^2+4ac}}{2a}i.$$ These are the roots of $ax^2+bx+c=0$ if and only if they are the roots of $x^2+\frac{b}{a}x + \frac{c}{a}=0$, if and only if they add up to $-\frac{b}{a}$ and multiply to $\frac{c}{a}$. But they add up to $\frac{b}{a}$, not $-\frac{b}{a}$; and their product is $$\left(\frac{b}{2a} + \frac{\sqrt{b^2+4ac}}{2a}i\right)\left(\frac{b}{2a}-\frac{\sqrt{b^2+4ac}}{2a}i\right) = \frac{b^2}{4a^2} + \frac{b^2+4ac}{4a^2} = \frac{b^2+2ac}{2a^2}\neq \frac{c}{a}.$$
For an explicit example, take $x^2+x+1$, which has no real roots; the complex roots are $-\frac{1}{2}+\frac{\sqrt{3}}{2}i$ and $-\frac{1}{2}-\frac{\sqrt{3}}{2}i$. The construction you describe begins by considering instead $-x^2+x+1$, whose roots are $\frac{1}{2}+\frac{\sqrt{5}}{2}$ and $\frac{1}{2}-\frac{\sqrt{5}}{2}$. The circle whose diameter goes from $(\frac{1}{2}-\frac{\sqrt{5}}{2},0)$ to $(\frac{1}{2}+\frac{\sqrt{5}}{2},0)$ has center at $(\frac{1}{2},0)$ and radius $\frac{\sqrt{5}}{2}$, so the "top" and "bottom" of the circle will be $(\frac{1}{2},\frac{\sqrt{5}}{2})$ and $(\frac{1}{2},-\frac{\sqrt{5}}{2})$, which are not the complex roots of $x^2+x+1$.
The construction works if $b=0$: in that case, instead of graphing $ax^2+c$, with $\frac{c}{a}\gt 0$, we graph $-ax^2+c$; the zeros are $\pm\sqrt{\frac{c}{a}}$, so the circle in question will be centered at the origin and have radius $\sqrt{\frac{c}{a}}$, so the complex numbers corresponding to the "top" and "bottom" are exactly $i\sqrt{\frac{c}{a}}$ and $-i\sqrt{\frac{c}{a}}$, which are the roots of $ax^2+c$ when $\frac{c}{a}\gt 0$.
So this suggests that for the general case you should first complete the square and then change the sign of the entire squared factor. For my example, beginning with $x^2+x+1$, first complete the square: $$x^2 + x + 1 = \left(x^2 + x + \frac{1}{4}\right) + \frac{3}{4} = \left(x + \frac{1}{2}\right)^2 + \frac{3}{4}.$$ Then graph $-(x+\frac{1}{2})^2 + \frac{3}{4} = -x^2 -x + \frac{1}{2}$ and proceed as you describe. This will work: it amounts to first doing a horizontal shift so that we are dealing with a quadratic of the form $Az^2 + C$ (where $z=x+k$ for some $k$), and then proceeding as above. But you'll note that the resulting equation is not obtained simply by changing the sign of $a$: the values of $b$ and $c$ are also changed.
You can check that this modified idea works by redoing the computation above for the shifted quadratic; this time the sum and the product of the constructed numbers do match the original coefficients.
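Here is a quick numerical sanity check of the complete-the-square-then-reflect recipe (my own addition, not part of the original answers), using numpy:

```python
import numpy as np

def constructed_roots(a, b, c):
    # Write a*x^2 + b*x + c = a*(x - h)^2 + k; assumes no real roots, i.e. k/a > 0.
    h = -b / (2 * a)
    k = c - b**2 / (4 * a)
    r = np.sqrt(k / a)  # half the distance between the roots of the reflected
                        # parabola -a*(x - h)^2 + k, i.e. the circle's radius
    return np.sort_complex([complex(h, -r), complex(h, r)])

print(constructed_roots(1, 1, 1))            # [-0.5-0.866...j -0.5+0.866...j]
print(np.sort_complex(np.roots([1, 1, 1])))  # the true roots: same values
```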
-
We give a construction of the complex roots, focusing on the geometry.
The general quadratic equation has the shape $$ax^2+bx+c=0,$$ where $a \ne 0$. But if we are only interested in the geometry of the roots, it does no harm to divide every coefficient by $a$, obtaining the simplified form $$x^2+px+q=0.$$
Geometrically, this corresponds to a scaling in the $y$-direction.
Consider the parabola with equation $y=x^2+px+q$. Move the parabola sideways so that the vertex of the parabola lies on the $y$-axis. Then the equation of the shifted parabola assumes the simple shape $$y=x^2+d^2,$$ where $d>0$. We have written the constant term of the shifted parabola as $d^2$, since because the parabola does not cross the $x$-axis, the constant term must be positive.
The vertex of the parabola is at $(0,d^2)$. Draw a horizontal line at distance $d^2$ above the vertex. This line has equation $y=2d^2$, so it meets the parabola at the points with $x$-coordinates $x=\pm d$.
Draw a circle with center halfway between these two points, and passing through them. So the circle has radius $d$. Lower this circle to make a new circle $C$ with center the origin.
The "top" and "bottom" of this new circle $C$ are at $(0,d)$ and $(0,-d)$. If we use complex numbers to represent them, they are $0+di$ and $0-di$, the complex roots of $x^2+d^2=0$.
Now shift the parabola back to its original position, dragging everything we have constructed along. Then the roots of $x^2+d^2=0$ are dragged along to become the roots of the original equation $x^2+px+q=0$.
The shifting was done to make the algebra simple. But now we can give a geometric recipe for constructing the complex roots of a quadratic equation that has no real roots, say with positive coefficient for $x^2$.
(i) Draw the appropriate parabola.
(ii) Draw a line which is just as far above the vertex of the parabola as that vertex is above the $x$-axis. Suppose that this line meets the parabola at points $P$ and $Q$.
(iii) Draw the circle with center midway between $P$ and $Q$ and passing through $P$ and $Q$.
(iv) Lower this circle so that it becomes a circle $C$ with center on the $x$-axis.
(v) The complex roots of our equation are at the "top" and "bottom" of circle $C$.
-
|
2015-07-30 18:33:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8967500925064087, "perplexity": 75.35034895421624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987552.57/warc/CC-MAIN-20150728002307-00068-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://www.homeworklib.com/questions/1603154/a-b-c-and-d-please
|
# A, B, C and D please
Assume that unemployment, u, is related to inflation, π, according to the following Phillips curve: u = ū − φ(π − π^e), where ū is the natural rate of unemployment and π^e is the expected rate of inflation. Assume rational expectations and that the central bank's preferences are given by the loss function L(u, π) = λu + π², where λ denotes the weight that the central bank assigns to unemployment.
a. Suppose that φ = 1. Show what rate of inflation a central bank with λ = .04 will choose under discretion. What will the unemployment rate be?
b. Assume that a less conservative executive board of the central bank is appointed with the weight λ = .08 on unemployment. What is the new inflation rate? How is unemployment affected?
c. What will inflation and unemployment be if the executive board of the central bank does not care at all about unemployment, i.e. if λ = 0?
d. What can we learn from the calculations above?
Solution:
Notation: ū = natural rate of unemployment, π^e = expected inflation, u = actual unemployment rate, π = actual inflation rate. The Phillips curve is u = ū − φ(π − π^e) and the central bank minimizes L(u, π) = λu + π².
Under discretion the central bank takes π^e as given and chooses π. Substituting the Phillips curve into the loss function gives
L(π) = λ[ū − φ(π − π^e)] + π²,
and the first-order condition dL/dπ = −λφ + 2π = 0 yields the discretionary inflation rate
π* = λφ/2.
Under rational expectations the private sector anticipates this choice, so π^e = π*, and the Phillips curve then gives u = ū: unemployment ends up at its natural rate.
a) With φ = 1 and λ = 0.04: π* = 0.04/2 = 0.02, i.e. 2% inflation, and u = ū.
b) With the less conservative board, λ = 0.08: π* = 0.04, i.e. 4% inflation. Unemployment is unaffected and stays at u = ū.
c) With λ = 0 the central bank cares only about inflation, so it sets π* = 0 and again u = ū.
d) Discretionary policy produces an inflation bias that is increasing in the weight λ the central bank places on unemployment, while unemployment stays at the natural rate in every case. Appointing a more conservative (lower-λ) central banker therefore lowers inflation at no cost in terms of unemployment.
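As a quick numerical cross-check of the discretionary optimum above (my own addition, not part of the original answer), the loss can be minimized on a grid; the values of u_bar and pi_e below are arbitrary placeholders, since π* does not depend on them:

```python
import numpy as np

def best_inflation(lam, phi=1.0, u_bar=0.05, pi_e=0.0):
    # Minimize L = lam*(u_bar - phi*(pi - pi_e)) + pi^2 over pi on a fine grid.
    grid = np.linspace(-1.0, 1.0, 200001)
    loss = lam * (u_bar - phi * (grid - pi_e)) + grid**2
    return grid[np.argmin(loss)]

for lam in (0.04, 0.08, 0.0):
    print(lam, round(best_inflation(lam), 4))
# 0.04 0.02
# 0.08 0.04
# 0.0 0.0
```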
Thanks for supporting***
|
2020-10-25 04:33:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8623939156532288, "perplexity": 2440.833089039045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107887810.47/warc/CC-MAIN-20201025041701-20201025071701-00209.warc.gz"}
|
https://cryptohack.gitbook.io/cryptobook/abstract-algebra/groups
|
# Introduction
Modern cryptography is based on the assumption that some problems are hard (infeasible to solve). Since we do not have infinite computational power and storage, we usually work with finite messages, keys and ciphertexts, and we say they lie in some finite sets $\mathcal{M}, \mathcal{K}$ and $\mathcal{C}$.
Furthermore, to get a ciphertext we usually perform some operations with the message and the key.
For example in AES128 $\mathcal{K} = \mathcal{M} = \mathcal{C} = \{0, 1\}^{128}$since the input, output and key spaces are 128 bits. We also have the encryption and decryption operations: $Enc: \mathcal{K} \times \mathcal{M} \to \mathcal{C} \\ Dec: \mathcal{K} \times \mathcal{C} \to \mathcal{M}$
The study of sets and of different types of operations on them is the subject of abstract algebra. In this chapter we will learn the underlying building blocks of cryptosystems and some of the hard problems that these cryptosystems are based on.
# Definition
A set $G$ paired with a binary operation $\cdot : G \times G \to G$ is a group if the following requirements hold:
• Closure: For all $a, b \in G$: $a \cdot b \in G$ - applying the operation keeps the element in the set
• Associativity: For all $a, b, c \in G$: $(a \cdot b) \cdot c = a \cdot (b \cdot c)$
• Identity: There exists an element $e \in G$ such that $a \cdot e = e \cdot a = a$ for all $a \in G$
• Inverse: For all elements $a \in G$, there exists some $b \in G$ such that $b \cdot a = a \cdot b = e$. We usually denote $b$ as $a^{-1}$
For $n \in \mathbb{Z}$, $a^n$ means $\underbrace{a \cdot a \cdots a}_{n\text{ times}}$ when $n > 0$ and $\left(a^{-n}\right)^{-1}$ when $n < 0$. For $n = 0$, $a^n = e$.
If $ab = ba$, then $\cdot$ is commutative and the group is called abelian. We often denote the group operation by $+$ instead of $\cdot$, and we typically use $na$ instead of $a^n$.
Remark
• The identity element of a group $G$ is also denoted $1_G$, or just $1$ if only one group is present
Examples of groups
Integers modulo $n$ (remainders) under modular addition: $(\mathbb{Z}/n\mathbb{Z}, +)$, where $\mathbb{Z}/n\mathbb{Z} = \{0, 1, ..., n-1\}$. Let's check that the group axioms are satisfied:
1. $\checkmark$ For all $a, b \in \mathbb{Z}/n\mathbb{Z}$ let $c \equiv a + b \bmod n$. Because of the modulo reduction $c < n \Rightarrow c \in \mathbb{Z}/n\mathbb{Z}$
2. $\checkmark$ Modular addition is associative
3. $\checkmark$ $0 + a \equiv a + 0 \equiv a \bmod n \Rightarrow 0$ is the identity element
4. $\checkmark$ For all $a \in \mathbb{Z}/n\mathbb{Z}$ we take $n - a \bmod n$ to be the inverse of $a$. We check that
$a + n - a \equiv n \equiv 0 \bmod n$
$n - a + a \equiv n \equiv 0 \bmod n$
Therefore we can conclude that the integers mod $n$ with the modular addition form a group.
Z5 = Zmod(5)  # Technically it's a ring but we'll use the addition here
print(Z5.list())
# [0, 1, 2, 3, 4]
print(Z5.addition_table(names = 'elements'))
# +  0 1 2 3 4
#  +----------
# 0| 0 1 2 3 4
# 1| 1 2 3 4 0
# 2| 2 3 4 0 1
# 3| 3 4 0 1 2
# 4| 4 0 1 2 3
a, b = Z5(14), Z5(3)
print(a, b)
# 4 3
print(a + b)
# 2
print(a + 0)
# 4
print(a + (5 - a))
# 0
Example of non-groups
$(\mathbb{Q}, \cdot)$ is not a group, because the element $0$ has no inverse with respect to the identity $1$. $(\mathbb{Z}, \cdot)$ is not a group, because elements other than $\pm 1$ have no inverse with respect to the identity $1$.
Exercise
Is $(\mathbb{Z} / n \mathbb{Z} \smallsetminus \{0\}, \cdot)$ a group? If yes, why? If not, are there values for $n$ that make it a group?
sɹosᴉʌᴉp uoɯɯoɔ puɐ sǝɯᴉɹd ʇnoqɐ ʞuᴉɥ┴ :ʇuᴉH
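If you want to check your answer empirically, here is a small plain-Python sketch (my own addition, not part of the original page): the nonzero residues form a group under multiplication exactly when every one of them has a multiplicative inverse, i.e. is coprime to $n$.

```python
from math import gcd

def is_multiplicative_group(n):
    # (Z/nZ \ {0}, *) is a group iff every nonzero residue is invertible,
    # i.e. gcd(a, n) == 1 for all 0 < a < n.
    return all(gcd(a, n) == 1 for a in range(1, n))

print([n for n in range(2, 20) if is_multiplicative_group(n)])
# [2, 3, 5, 7, 11, 13, 17, 19]
```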
## Properties
1. The identity of a group is unique
2. The inverse of every element is unique
3. $\forall a \in G: \left(a^{-1}\right)^{-1} = a$. The inverse of the inverse of the element is the element itself
4. $\forall a, b \in G:$ $(ab)^{-1} = b^{-1}a^{-1}$
Proof: $(ab)(b^{−1}a^{−1}) =a(bb^{−1})a^{−1}=aa^{−1}= e.$
n = 11
Zn = Zmod(n)
a, b = Zn(5), Zn(7)
print(n - (a + b))
# 10
print((n - a) + (n - b))
# 10
# Orders
In abstract algebra we have two notions of order: Group order and element order
Group order
The order of a group $G$ is the number of elements in that group. Notation: $|G|$
Element order
The order of an element $a \in G$ is the smallest integer $n$ such that $a^n = 1_G$. If such a number $n$ doesn't exist we say the element has order $\infty$. Notation: $|a|$
Z12 = Zmod(12)  # Residues modulo 12
print(Z12.order())  # The additive order
# 12
a, b = Z12(6), Z12(3)
print(a.order(), b.order())
# 2 4
print(a.order() * a)
# 0
print(ZZ.order())  # The integers under addition is a group of infinite order
# +Infinity
We said our messages lie in some group $\mathcal{M}$. The order of this group, $|\mathcal{M}|$, is the number of possible messages that we can have. For $\mathcal{M} = \{0,1\}^{128}$ we have $|\mathcal{M}| = 2^{128}$ possible messages.
Let $m \in \mathcal{M}$ be some message. The order of $m$ tells us how many different messages we can generate by repeatedly applying the group operation to $m$.
# Subgroups
Definition
Let $(G, \cdot)$ be a group. We say $H$ is a subgroup of $G$ if $H$ is a subset of $G$ and $(H, \cdot)$ forms a group. Notation: $H \leq G$
Properties
1. The identity of $G$ is also in $H$: $1_H = 1_G$
2. The inverses of the elements in $H$ are found in $H$
How to check $H \leq G$? Let's look at a two-step test:
1. Closed under operation: $\forall a, b \in H \to ab \in H$
2. Closed under inverses: $\forall a \in H \to a^{-1} \in H$
## Generators
Let $G$ be a group, $g \in G$ an element and $|g| = n$. Consider the following set:
$\{1, g, g^2, ..., g^{n-1}\} \overset{\text{denoted}}{=} \langle g\rangle.$
This set, paired with the group operation, forms a subgroup of $G$ generated by the element $g$.
Why do we care about subgroups? We praise the fact that some problems are hard because the numbers we use are huge and exhaustive space searches are too hard in practice.
Suppose we have a big space of secret values $G$ and we use an element $g$ to generate them.
If an element $g \in G$ with a small order $n$ is used, then it can generate only $n$ possible values, and if $n$ is small enough an attacker can do a brute-force attack.
Example
For now, trust us that, given a prime $p$, a value $g \in \mathbb{Z} / p \mathbb{Z}$ and $y = g^x \bmod p$ for a secret $x$, finding $x$ is a hard problem. We will tell you why a bit later.
import random

p = 101  # prime
Zp = Zmod(p)
H_list = Zp.multiplicative_subgroups()  # Sage can get the subgroup generators for us
print(H_list)
# ((2,), (4,), (16,), (32,), (14,), (95,), (10,), (100,), ())
g1 = H_list[3][0]  # Weak generator
print(g1, g1.multiplicative_order())
# 32 20
g2 = Zp(3)  # Strong generator
print(g2, g2.multiplicative_order())
# 3 100

## Consider the following functions
def brute_force(g, p, secret_value):
    """Brute forces a secret value, returns the number of attempts."""
    for i in range(p - 1):
        t = pow(g, i, p)
        if t == secret_value:
            break
    return i

def mean_attempts(g, p, num_keys):
    """Tries num_keys random keys and returns the mean number of brute-force attempts."""
    total_attempts = 0
    for _ in range(num_keys):
        k = random.randint(1, p - 1)
        sv = pow(g, k, p)  # sv = secret value
        total_attempts += brute_force(g, p, sv)
    return 1. * total_attempts / num_keys

## Let's try with our generators
print(mean_attempts(g1, p, 100))  # Weak generator
# 9.850
print(mean_attempts(g2, p, 100))  # Strong generator
# 49.200
# Examples
// subgroups, quotient groups
// cyclic groups
|
2021-07-27 01:25:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 110, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678724408149719, "perplexity": 1754.5862048564672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152168.38/warc/CC-MAIN-20210727010203-20210727040203-00603.warc.gz"}
|
http://codeforces.com/blog/A.K.Goharshady
|
By A.K.Goharshady, 5 years ago
Hi, Here's the editorial.
Please note that not all the codes presented below belong to me. (It's a combination of codes from our problemsetters and testers) -- And I borrowed AKGMA's account since I wasn't able to link to my own submissions somehow!
Note: It seems that the Codeforces mark-up is not functioning. To see a submission go to: http://www.codeforces.com/contest/282/submission/submission-number
#### A: Bit++
Just use a simple loop. (Take a look at the Python code)
GNU C++: 3314442, 3314464
GNU C: 3314471
Python: 3314475
#### B: Painting Eggs
This one can be solved by a greedy algorithm. Start from the 1st egg and each time give the egg to A if and only if giving it to A doesn't make the difference > 500, otherwise give it to G.
To prove the correctness, one can use induction. The base case is trivial. Suppose that we've assigned the first n - 1 eggs such that the total money given to A is Sa and total money given to G is Sg. We can assume Sa ≥ Sg. Now we must either add gn to Sg or add an to Sa. If we can't add gn to Sg, then Sg + gn > Sa + 500, so - 500 > Sa - Sg - gn, adding 1000 to both sides gives us the inequality 500 > Sa + (1000 - gn) - Sg which is exactly what we need to make sure that we can add an = 1000 - gn to Sa.
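As a quick illustration, here is my own sketch of that greedy (not the linked submissions); it assumes the input is given as (a_i, g_i) pairs with a_i + g_i = 1000:

```python
def assign_eggs(costs):
    """costs: list of (a_i, g_i) pairs with a_i + g_i == 1000."""
    sa = sg = 0
    plan = []
    for a, g in costs:
        if abs((sa + a) - sg) <= 500:   # giving this egg to A keeps |Sa - Sg| <= 500
            sa += a
            plan.append("A")
        else:                           # otherwise giving it to G is guaranteed to work
            sg += g
            plan.append("G")
    return "".join(plan)

print(assign_eggs([(1, 999), (999, 1), (1000, 0)]))  # AGG
```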
GNU C++: 3314480, 3314484
GNU C: 3314488
Python: 3314492
#### C: XOR and OR
First of all, check the length of the two strings to be equal. Then with a little try and guess, you can find out that the zero string (00...0) can't be converted to anything else and nothing else can be converted to zero. All other conversions are possible.
GNU C++: 3314503, 3314504, 3314509, 3314512, 3314514
#### D: Yet another Number Game
For n=1, everything is clear: if a_1 = 0 then BitAryo wins, otherwise BitLGM is the winner.
For n=2: define win[i][j] = (whether (i, j) is a winning position). It's easy to calculate win[i][j] for all i and j using a loop (checking all possible moves). This leads us to an O(n^3) solution.
For n=3: everything is similar to NIM. With the same proof as for NIM, (i, j, k) is a winning position if and only if (i xor j xor k) ≠ 0. [Don't forget the parentheses in code :) ] Complexity: O(1)
One can also solve this case using DP. We define lose[i][j] = (least k such that (i, j, k) is a losing position), lose2[i][j] = (least k such that (k, k+i, k+i+j) is a losing position) and win[i][j][k] just as in the case n=2. As in the codes below, one can calculate all these values in O(n^3).
Using the same DP strategy for n=2 and the O(1) algorithm for n=3 and n=1 leads to a total complexity of O(n^2), which was not necessary in this contest.
GNU C++: 3314578, 3314580, 3314585, 3314588
#### E: Sausage Maximization
Can be solved using a trie in O(n log(max{a_i})).
Start with a prefix of size n, and decrease the size of prefix in each step. For each new prefix calculate the XOR of elements in that prefix and add the XOR of the newly available suffix (which does not coincide with the new prefix) to the trie, then query the trie for the best possible match for the XOR of the new prefix. (Try to get 1 as the first digit if possible, otherwise put 0, then do the same thing for the second digit and so on). Get a maximum over all answers you've found, and it's all done. [By digit, I mean binary digit]
GNU C++: 3314616, 3314619
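For reference, here is my own sketch of the binary-trie "best XOR match" primitive described above (not the linked submissions); BITS is an assumed bound on the bit-length of the prefix XOR values:

```python
BITS = 40  # assumed bit-length bound for the stored values

class TrieNode:
    __slots__ = ("child",)
    def __init__(self):
        self.child = [None, None]

def insert(root, x):
    node = root
    for b in range(BITS - 1, -1, -1):
        bit = (x >> b) & 1
        if node.child[bit] is None:
            node.child[bit] = TrieNode()
        node = node.child[bit]

def best_xor(root, x):
    # Greedily prefer the opposite bit at each level to maximize x ^ (stored value).
    node, res = root, 0
    for b in range(BITS - 1, -1, -1):
        bit = (x >> b) & 1
        want = 1 - bit
        if node.child[want] is not None:
            res |= (1 << b)
            node = node.child[want]
        else:
            node = node.child[bit]
    return res

root = TrieNode()
insert(root, 0)   # e.g. the XOR of an empty suffix
insert(root, 5)
print(best_xor(root, 6))  # 6, since 6 ^ 0 = 6 beats 6 ^ 5 = 3
```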
We hope you enjoyed the tasks.
By A.K.Goharshady, 6 years ago
I think a virtual contest should end as soon as all problems are solved.
It's quite pointless for it to continue while the user can do nothing more.
By A.K.Goharshady, 7 years ago
I wrote this post to help my friend, Iman Movahhedi, complete this one.
Round #40 was my first contest here in Codeforces and I feel I fell in love with CF just after that.
A-Translation: (C# code)
Many languages have a built-in reverse() function for strings. We can reverse one of the strings and check if it's equal to the other one, or we can check it manually. I prefer the second.
By A.K.Goharshady, 7 years ago
Hear the song Here
(In Persian: Listen to this song here.)
By A.K.Goharshady, 7 years ago
Hi all!
Unknown Language Round #1 took place on the 21st of February, and now we're going to hold yet another Unknown Language Round.
It will be the usual unrated ACM-ICPC-style contest, so there is no hacking! The only special feature: you will be able to submit problems using only one, not very popular, language. What? It's a secret! And I expect you'll have to learn the language during the contest, since the language will be kept secret until about a minute before the start.
Problem setters of this round are Alireza Farhadi, Saeed Ilchi, Sajjad Ghahramanpour, Zahra Rohanifar and me. We are extremely grateful to Mike Mirzayanov and Artem Rakhov.
The number of problems will be higher than usual, and the problems concentrate on coding ability rather than algorithmic insight and problem-solving techniques.
UPD:The contest is over
Congratulations to the top 3 winners who solved all problems:
Wrong
tomek
watashi
Announcement of Unknown Language Round #2
By A.K.Goharshady, 7 years ago
This post is written to help my friend, Iman, complete this one.
Problem A: Triangle (code)
For each of the possible combinations of three sticks , we can make a triangle if sum of the lengths of the smaller two is greater than the length of the third and we can make a segment in case of equality.
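Here is my own sketch of that check in Python (not the linked code); it assumes the output categories are TRIANGLE, SEGMENT and IMPOSSIBLE:

```python
from itertools import combinations

def classify(sticks):
    best = "IMPOSSIBLE"
    for trio in combinations(sticks, 3):
        a, b, c = sorted(trio)          # a <= b <= c
        if a + b > c:
            return "TRIANGLE"           # a proper triangle exists
        if a + b == c:
            best = "SEGMENT"            # degenerate triangle (a segment)
    return best

print(classify([4, 2, 1, 3]))  # TRIANGLE (using sticks 2, 3, 4)
```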
By A.K.Goharshady, 7 years ago
This one can be solved in O(n log n) using a segment tree.
First we convert all powers to numbers in the range 0..n-1 to avoid working with segments as large as 10^9 in our segment tree. Then for each of the men we should find the number of men who are placed before him and have more power; let's call this gr[j]. Whenever we reach a man with power x we add the segment [0, x-1] to our segment tree, so gr[j] can be found by querying the power of j in the segment tree once it has been updated by all j-1 preceding men.
Now let's call the number of men who are standing after j but are weaker than j le[j]. These values can be found using the same method with a segment tree, or in O(n) time using direct arithmetic:
le[j] = (power of j - 1) - (i - 1 - gr[j])
Note that powers are in the range 0..n-1 now.
Now we can count all triplets which have j as their second index: there are le[j]*gr[j] of them, so the answer is
$\sum_{j=0}^{n-1} le[j]\times gr[j]$
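Here is my own sketch of the counting idea (not the contest code), using a Fenwick tree in place of the segment tree; it assumes the powers have already been compressed to a permutation of 1..n and counts triples i < j < k with strictly decreasing power:

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, v=1):          # point update at 1-based index i
        while i <= self.n:
            self.t[i] += v
            i += i & -i

    def query(self, i):             # prefix sum over [1, i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def count_triplets(powers):
    n = len(powers)
    bit = Fenwick(n)
    gr = [0] * n
    for j, x in enumerate(powers):
        gr[j] = j - bit.query(x)        # men before j with power > x
        bit.add(x)
    answer = 0
    for j, x in enumerate(powers):
        weaker_before = j - gr[j]
        le = (x - 1) - weaker_before    # men after j with power < x
        answer += gr[j] * le
    return answer

print(count_triplets([3, 2, 1]))     # 1
print(count_triplets([4, 3, 2, 1]))  # 4
```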
By A.K.Goharshady, 7 years ago
This one has two different linear-time solutions. Greedy and dynamic programming.
Greedy solution:
You should stop at the city with maximum distance from the root (city number 1). So all roads are traversed twice except for the roads between the root and this city.
Dynamic Programming:
For each city i we define patrol[i] as the traversal needed to see i and all of its children without having to come back to i (children of a city are those adjacent cities which are farther from the root), and revpatrol[i] as the traversal needed to see all children of i and come back to it. We can see that revpatrol[i] is the sum of the revpatrols of its children plus the sum of the lengths of the roads going from i to its children. patrol[i] can be found by replacing exactly one child's revpatrol with its patrol and choosing the replacement that gives the minimum.
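A sketch of the greedy solution in Python (my own illustration with an assumed adjacency-list input, not the original submission):

```python
from collections import defaultdict

def patrol_cost(n, roads):
    """roads: list of (u, v, w) tree edges, cities numbered 1..n, root is city 1.
    Every road is traversed twice except those on the way to the farthest city."""
    g = defaultdict(list)
    total = 0
    for u, v, w in roads:
        g[u].append((v, w))
        g[v].append((u, w))
        total += w
    # iterative DFS from the root to find the maximum distance
    best, stack, seen = 0, [(1, 0)], {1}
    while stack:
        u, d = stack.pop()
        best = max(best, d)
        for v, w in g[u]:
            if v not in seen:
                seen.add(v)
                stack.append((v, d + w))
    return 2 * total - best

print(patrol_cost(3, [(1, 2, 3), (2, 3, 4)]))  # 2*7 - 7 = 7
```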
By A.K.Goharshady, 7 years ago
The code for converting decimal numbers to Roman was on Wikipedia , so I'm not going to explain it.
Since we have the upper limit 10^15 for all of our numbers, we can first convert them to decimal (and store the answer in a 64-bit integer) and then to the target base.
For converting a number to decimal we first set the decimal variable to 0, then at each step we multiply it by the base and add the left-most digit's equivalent in base 10.
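A sketch of the two-step conversion (hypothetical helper functions of mine, not the contest code), for digits 0-9 and A-Z and bases up to 36:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_decimal(s, base):
    value = 0
    for ch in s:                      # left-most digit first
        value = value * base + DIGITS.index(ch)
    return value

def from_decimal(value, base):
    if value == 0:
        return "0"                    # the "all zeros" edge case
    out = []
    while value > 0:
        out.append(DIGITS[value % base])
        value //= base
    return "".join(reversed(out))     # digits come out least significant first

print(from_decimal(to_decimal("000A", 12), 2))  # "1010"; leading zeros are handled
```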
We had some tricky test cases for this one which got many people :
test #51:
2 10
0
many codes printed nothing for this one, this was also the most used hack for this problem
test #54:
12 2
000..00A
A sample with leading zeros in the input.
test #55:
17 17
0000000...000
Many people simply printed the input whenever the two bases were equal.
There were two nice extremal hacks:
2 R
101110111000
and
10 2
1000000000000000
By A.K.Goharshady, 7 years ago
In this problem the sign characters can be ignored in both the initial and the answer strings, so first we remove the signs from the initial strings. Then we build the list of the six possible concatenations of the three initial strings and convert all of them to lowercase.
To check an answer string, we remove the signs, convert it to lowercase and test whether it equals one of those six concatenations.
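A small Python sketch of this check (my own code; the exact set of "sign" characters is an assumption based on the hack shown below, which uses '-', '_' and ';'):

```python
from itertools import permutations

SIGNS = set("-_;")

def normalize(s):
    return "".join(ch for ch in s if ch not in SIGNS).lower()

def acceptable(initials, answer):
    targets = {"".join(p) for p in permutations(map(normalize, initials))}
    return normalize(answer) in targets

print(acceptable(["ab", "C_d", ";e"], "ABCDE"))   # True
```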
There were two really nice hack protocols; the first one is:
-------__________;
_____;
;;;;---------_
2
ab____
_______;;;
Here all concatenations become empty.
The second one was putting 0 as the number of students :D
By A.K.Goharshady, 7 years ago
Problem A:
This was indeed the easiest problem. You just needed to XOR the given sequences.
The only common mistake was removing leading zeros, which led to "Wrong Answer".
A common mistake in hacks, which led to "Invalid Input", was forgetting that every line of the input ends with an end-of-line character.
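A minimal Python sketch of the intended solution (my own code): XOR the two equal-length binary strings character by character, keeping any leading zeros.

```python
def xor_strings(a, b):
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

print(xor_strings("01100", "00111"))   # 01011 -- the leading zero must be kept
```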
By A.K.Goharshady, 7 years ago
Hello,
I am honored to invite you to take part in the 57th contest.
The problem setters of this contest are Alireza Farhadi and me.
We are also extremely grateful to Mike Mirzayanov, Artem Rakhov, Saeed Ilchi, MohammadJavad Naderi, Gerald Agapov and Maria Belova for their efforts.
On the occasion of the national Engineer's Day, we dedicate this contest to Khajeh Nasir al-Din Tusi.
In this contest, for the first time, the problem statements will also be published in Persian.
You can see the problem statements here after the contest starts.
The contest is over.
Winner:
By A.K.Goharshady, 7 years ago
This is a semi-tutorial for Codeforces #42 (Div. 2); I'm not going to explain everything, just the ideas.
The problems were extremely nice.
A) This is pretty obvious: store the two strings and how many times each of them occurred.
B) For each upper- or lower-case letter, count how many times it appears in each of the strings. If for some character x the number of occurrences of x in the second string is greater than in the first string, we can't make it; otherwise the answer is YES.
C) We all know that the remainder of a number modulo 3 equals the remainder of the sum of its digits modulo 3. So we can put all the input numbers into three sets based on their remainder modulo 3. Numbers with remainder 1 can be matched with numbers with remainder 2, and numbers with remainder 0 can be matched with each other, so the answer is:
(count of numbers divisible by three) / 2 + min(count of numbers with remainder 1, count of numbers with remainder 2)
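A quick Python check of this counting argument (my own sketch; the function name and sample input are made up):

```python
def count_pairs(nums):
    c = [0, 0, 0]
    for x in nums:
        c[x % 3] += 1                      # bucket by remainder modulo 3
    return c[0] // 2 + min(c[1], c[2])

print(count_pairs([3, 6, 1, 2, 4]))        # one 0-0 pair plus one 1-2 pair = 2
```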
D) Actually we're looking for an Eulerian tour. I found it like this: if at least one of m and n is even, do it as in the first figure; otherwise do it as in the second figure and add a teleport from the last square back to the first.
There were some really nice hacks, as I saw when studying them, like these two:
1 10
and
1 2
E) Consider just two cars and count how many times they swap positions; this is easy. Then do the same for every pair of cars :D
https://en.wiktionary.org/wiki/p-adic_ordinal
# p-adic ordinal
## English
### Noun
p-adic ordinal (plural p-adic ordinals)
1. (number theory) A function of rational numbers, with prime number p as parameter, which is defined for some non-zero integer x as the largest integer r such that $p^r$ divides x; is defined for some non-zero rational number a/b as the p-adic ordinal of a minus the p-adic ordinal of b; and is defined for 0 as infinity. [1]
Notice the resemblance between the p-adic ordinal and the base-p logarithm.
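A small Python sketch of this definition (my own illustration; the function name is made up), for a rational given as a pair of integers a/b:

```python
from math import inf

def ord_p(p, a, b=1):
    """p-adic ordinal of the rational a/b (b != 0); returns inf for 0."""
    if a == 0:
        return inf

    def ord_int(n):                 # largest r such that p**r divides n
        r = 0
        while n % p == 0:
            n //= p
            r += 1
        return r

    return ord_int(abs(a)) - ord_int(abs(b))

print(ord_p(3, 45))     # 45 = 3^2 * 5  ->  2
print(ord_p(2, 3, 8))   # ord_2(3/8)    -> -3
```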
#### Usage notes
• The p-adic ordinal of a rational number x can be denoted $\mathrm{ord}_p\,x$.
### References
1. Andrew Baker, An Introduction to p-adic Numbers and p-adic Analysis (2011), Definition 2.3.
http://superuser.com/questions/490708/using-the-ifwinactive-keyword-in-autohotkey/490714
# Using the IfWinActive keyword in AutoHotKey
This question about the use of AutoHotKey is specific to a Windows LaTeX editor called TeXnicCenter.
So, I was trying to write my first AutoHotKey (AHK) script, and wanted the shortcuts to be available only when the TeXnicCenter window was active. AHK provides the IfWinActive keyword to deal with these scenarios; however, I ran into some difficulties in using this keyword.
Here is a draft file I wrote:
SetTitleMatchMode, 2
SetTitleMatchMode, Slow
#IfWinActive, .* TeXnicCenter *.
!t::
Send \texttt{{}{}}
return
The #IfWinActive, .* TeXnicCenter *. line, which is meant to match the pattern "TeXnicCenter" anywhere in the window name, does not work.
There were some other options that I discarded:
• I use TeXnicCenter mainly with projects, so that the window name shows up as "projectname - TeXnicCenter", so it is not feasible to use this as the argument to IfWinActive. Note that the window name is TeXnicCenter if working on standalone documents.
• Another option provided by AHK is that you use something called the ahk_class of the process, which is typically intuitive (and can be had from the handy bundled AHK tool, called Window Spy) -- for example, in the case of Chrome, it is Chrome_WidgetWin_1.
However, for TeXnicCenter, it shows the bizarre signature -- for example, for one of my projects, it is Afx:000000013F370000:8:0000000000010005:0000000000000000:0000000012B80087, and not only that, it is not constant across TeXnicCenter windows, as it usually is for other processes.
I am at a loss -- does anyone have experience setting up AHK with TeXnicCenter, and using the IfWinActive keyword? I have a feeling that this might be better directed at the developers of TeXnicCenter, but here's hoping.
You used SetTitleMatchMode to set the title-matching mode to 2 which means A window's title can contain WinTitle anywhere inside it to be a match. So, it is trying to find .* TeXnicCenter *. in the title-bar. You should remove the .* and *. (unless the title bar actually contains those—which as far as I know, it does not). You can set the title-matching mode to RegEx if you would rather use the regex syntax, (and even then, the *. is incorrect, it should be .*).
As for the class, I had the same issue with GraphEdit which for the main window has a window class like Afx:1000000:b:10011:6:1070780 with the same pattern, but different numbers for each instance. I solved it by using regex mode (SetTitleMatchMode, RegEx) and a pattern like ^Afx:.+:.:.+:.:.+$ (you can specify the exact number of digits between the colons, but it's unlikely you'll need to). I eventually ended up simplifying the whole process by using groups. So, in your case, you would use one of the following:
SetTitleMatchMode, 2
SetTitleMatchMode, Slow
#IfWinActive, TeXnicCenter
!t::
Send \texttt{{}{}}
return

SetTitleMatchMode, regex
SetTitleMatchMode, Slow
#IfWinActive, .* TeXnicCenter *.
!t::
Send \texttt{{}{}}
return

Here is my recommendation:
SetTitleMatchMode, regex
SetTitleMatchMode, Slow
GroupAdd, TXC, ^.*TeXnicCenter.*$ ahk_class ^Afx:.+:.:.+:.+:.*$
#IfWinExist, ahk_group TXC
!t::
Send \texttt{{}{}}
return
#IfWinExist
Thanks. I think that your suggestions will work, and I will try them out and get back. But I just wanted to point out that I am using TeXnicCenter 2 (alpha), so in my case the window title does contain "TeXnicCenter". I can't see the image you have linked to, it shows up as a 503 error. – fg nu Oct 21 '12 at 18:16
Yep, works like a charm. Thanks. I did change the IfWinExist to IfWinActive since that is what I needed. – fg nu Oct 21 '12 at 18:21
> in my case the window title does contain "TeXnicCenter" Yes, it will contain TeXnicCenter in the title-bar, but not .* or .*. > I can't see the image you have linked to, it shows up as a 503 error. I didn’t link to an image, I linked to Google Images. I guess GI must be blocked by your network. > I did change the IfWinExist to IfWinActive since that is what I needed. Sure; but they are a little different, so as long you know the difference, you should be fine. ☺ – Synetech Oct 21 '12 at 18:38
https://ec.gateoverflow.in/2272/gate-ece-1994-question-4-2
The response of an $\text{LCR}$ circuit to a step input is
(A) over damped
(B) critically damped
(C) oscillatory
if the transfer function has
(1) poles on the negative real axis
(2) poles on the imaginary axis
(3) multiple poles on the positive real axis
(4) poles on the positive real axis
(5) multiple poles on the negative real axis
https://www.opuscula.agh.edu.pl/om-vol35iss6art1
Opuscula Math. 35, no. 6 (2015), 853-866
http://dx.doi.org/10.7494/OpMath.2015.35.6.853
Opuscula Mathematica
# Continuous spectrum of Steklov nonhomogeneous elliptic problem
Mostafa Allaoui
Abstract. By applying two versions of the mountain pass theorem and Ekeland's variational principle, we prove three different situations of the existence of solutions for the following Steklov problem: $$\begin{aligned}\Delta_{p(x)} u&=|u|^{p(x)-2}u \quad\text{in}\;\Omega, \\ |\nabla u|^{p(x)-2}\frac{\partial u}{\partial \nu}&= \lambda|u|^{q(x)-2}u \quad\text{on}\;\partial\Omega,\end{aligned}$$ where $$\Omega \subset \mathbb{R}^N$$ $$(N\geq 2)$$ is a bounded smooth domain and $$p,q: \overline{\Omega}\rightarrow(1,+\infty)$$ are continuous functions.
Keywords: $$p(x)$$-Laplacian, Steklov problem, critical point theorem.
Mathematics Subject Classification: 35J48, 35J66.
• Mostafa Allaoui
• University Mohamed I, Faculty of Sciences, Department of Mathematics, Oujda, Morocco
• Communicated by Vicentiu D. Radulescu.
• Revised: 2014-11-10.
• Accepted: 2014-11-13.
• Published online: 2015-06-06.
https://www.physicsforums.com/threads/limit-of-two-variable-function.94667/
# Limit of two-variable function
twoflower
Hi all,
suppose I want to get this:
$$\lim_{[x,y] \rightarrow [0,0]} (x^2+y^2)^{xy}$$
Here's how I approached:
$$\lim_{[x,y] \rightarrow [0,0]} (x^2+y^2)^{xy} = \lim_{[x,y] \rightarrow [0,0]} e^{xy \log (x^2+y^2)},$$ so it suffices to show that $$\lim_{[x,y] \rightarrow [0,0]} xy \log (x^2 + y^2) = 0,$$ and indeed $$xy \log (x^2 + y^2) = (x^2 + y^2) \log (x^2 + y^2) \cdot \frac{xy}{x^2 + y^2} \rightarrow 0$$
Because the last fraction is bounded and the part before it goes to 0 (I hope).
But that's the problem, I don't know how to prove
$$\lim_{t \rightarrow 0+} t\ \log t = 0$$
Thank you for help.
$$\lim_{t \rightarrow 0 ^ +} t \log{t} = \lim_{t \rightarrow 0 ^ +} \frac{\log{t}}{\frac{1}{t}}$$. Now it's in form $$\frac{\infty}{\infty}$$. Can you go from here?
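For completeness (this step is left as an exercise in the thread), applying L'Hôpital's rule to that quotient gives
$$\lim_{t \rightarrow 0^+} \frac{\log t}{1/t} = \lim_{t \rightarrow 0^+} \frac{1/t}{-1/t^2} = \lim_{t \rightarrow 0^+} (-t) = 0.$$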
https://worldbuilding.stackexchange.com/questions/26918/humanely-reducing-the-human-population/27024
# Humanely reducing the human population?
Imagine that, quite a way into the future, the Earth becomes overrun with humans. Birthrates are similar to where they are now, and the advancement of medical science means that people live a lot longer (say an average life expectancy of 150 for the developed world). We are, at this point, in frequent contact with species from other worlds, who (with us as a member) have formed an intergalactic federation. The Federation has decided that the human population of Earth is too great, and has given Earth an ultimatum: we are to reduce our population from 120 billion to 8 billion (if this is too large, the Federation may be open to negotiation) in the next 20 years. The means we use to do this are up to us.
Presumptions:
• The technology level of humans is assumed to have moved at a realistic rate
• All major political powers are open to working together towards this requirement
• Most governments still care about ethics, and aren't up for just randomly choosing people and shooting them.
• The Federation will take action if we do not attempt to carry out their ultimatum.
So, what would be the most humane way to select people to die for such a good cause, and how would it be done?
• The idea of my children dying for this cause makes my blood boil - I'm sorry, but this question simply assumes too much about human nature. This would put the entire planet into outright rebellion. Take my baby and rest assured that what happens to me no longer matters - I'm coming for you. – Sean Boddy Oct 4 '15 at 15:25
• Has exporting people been ruled out? If there's a federation, are there other planets they can live on? Or are you saying that there are too many humans period, whether they live on Earth or elsewhere? – Monica Cellio Oct 4 '15 at 20:09
• "The Federation Council has determined the population of Earth is too large. It's crowded and dangerous. For the safety of your population... kill 94% of the population." Federation morality is weird. Why is this Federation concerned about how many people are on Earth again? If we've reached 120 billion without total collapse, we've probably overcome the ecological and social problems of overpopulation. – Schwern Oct 5 '15 at 8:48
• What's the Federation's leverage here? They're literally saying "suffer 94% casualties, or...". There is no rational reason to comply with this request; complying is already essentially a worst-case scenario. – Leushenko Oct 5 '15 at 18:54
• What rationale does the Federation have for adding the time requirement? What is 20 years vs 200 years on a planet you don't control? Given 200 years, it could be done reasonably ethically - birth control and deaths from natural causes would have time to work their magic. If, given the choice between patience and genocide, the Federation chooses genocide then I'd say the Federation is EEEEEEEEEVIL and as a matter of principle humanity must resist. – chucksmash Oct 5 '15 at 20:49
## War
Clearly the most humane way to choose the people is to get volunteers. It's likely to be hard to get volunteers for the problem as stated. So instead ask for volunteers to attack those members of the Federation who voted for this proposal.
This could be combined with a draft. People who do not wish to be subject to the draft can accept a one-in-seven chance of living and mandatory sterilization. Those who fail the one-in-seven chance will be humanely terminated.
If we lose the war, our population should be drastically reduced. Combined with birth control at or below replacement level, this should solve the problem.
And of course if we win the war, we've gotten rid of the Federation and are no longer subject to population controls.
This is the only humane way to proceed. Anything else is simply capitulating to tyranny. This also has the side benefit of improving leverage in negotiations for a longer term of population reduction and/or an increase in the allowed population.
It also gets around a big problem. Assuming current fertility rates continue, how would a country like Japan react? They currently have a shrinking population. How would they feel about having to give up around 90% of their population to cover for population gained in other countries, like India?
## Unrealistic
This whole thing seems unrealistic though. First world countries like Japan or those in Western Europe already have declining native populations. If the entire world was rich enough to have a life expectancy of 150 years, it seems likely that this would be even more of a problem. The world population may continue to grow for a while, but in a hundred years, I'd expect the concern to be switching to the possibility that we're going to die out due to lack of children.
This would be more realistic if set soon enough in the future that our population is still growing, say 2050. Obviously our population won't be 120 billion then. Perhaps twelve billion with a reduction to one billion. And of course our life expectancy won't be 150 years at that point without some major changes.
• War as only humane way ... baffled – Hagen von Eitzen Oct 4 '15 at 14:12
• @HagenvonEitzen Given the unreasonable requirements put forward in the question, it may well be the most reasonable answer. – kasperd Oct 4 '15 at 15:22
• +1 Having a committee to choose who to kill or abandon isn't humane at all. They only make it looks humane. And anything else other than a war is not likely able to reduce the population that much. Note that it isn't the war being inhumane in this case, but the need of reducing population. But the cost of a war might be huge if happened in undesirable ways. If it is in a story, maybe they can be lucky and invent another dangerous job in the right time, but it is still likely going wrong for economical reasons. – user23013 Oct 4 '15 at 18:10
• @user23013, there is no concept of good and evil where an act of genocide falls on the side of good. Being given the order to kill ourselves would be met with an ultimatum telling the Federation to either take their best shot or shut up. – Sean Boddy Oct 5 '15 at 4:10
• I think this is what would happen no matter what you choose (unless you're very, very sneaky). Any government that agreed to a foreign demand to kill 93% of its own citizens just wouldn't last long. – Deolater Oct 5 '15 at 16:16
I only see one way to do it humanely: Negotiate with the Federation to get the technology to quickly build lots of huge space habitats/habitats on other planets and transport the humans there.
Now if you manage to negotiate a longer time frame, comparable to the life time of a human, then you have one more option at your disposal: Implement very strict birth control.
• I agree, this is the only humane option: all the others involve killing people or rendering them infertile. – Max Williams Oct 6 '15 at 10:53
• Dyson ring, anyone? – Jon Story Oct 6 '15 at 13:40
• A Dyson ring would be a most excellent power source, perhaps, but less good as a generation ship. – Isaac Woods Oct 8 '15 at 16:48
• The problem is that voluntary birth control is a kind of selective breeding - we are engaged in a massive genetics experiment where people who want (and are able to get) children gradually replace those who don't. Birth control is going to be harder and harder to enforce as time goes on, as the people who can be prevented from having large numbers of children get weeded out. – Ebonair Dec 20 '17 at 19:00
The 120 billion humans fight for their survival against the Federation; if we win, we colonise whatever planets or spaceships they live on. Earth (considering vastly superior future technology) can easily sustain 50 billion people; the rest are sent out as colonists.
• I vote this up because this is what we tell the people. If we win the war it was true, and if we lose the population is reduced sufficiently to meet demand. – Joshua Oct 5 '15 at 0:55
• Earth cannot support 50 bil people even with advanced tech, unless gmo food in future grows like 4x quicker – P.Lord Jul 15 '19 at 18:54
I believe the most "humane" reaction would be to flip this Federation off.
120 billion people would most likely not appreciate being bullied like this by some abstract "Federation", and would most likely throw the politicians who are okay with this out of the nearest window.
It would take probably less than a day for any politician which is okay with this to not be okay with this anymore.
Everything goes on the table then: 20 years? Make that 2000, and we can talk business. Or: 120 down to 8 billion? Make that 120 to 80, and make it a 1 billion decrease every 100 years.
The point is that as far as people would most likely be concerned, cutting even 1 person by decree will be unacceptable (the "by decree" part is the point). A planet-wide armed riot / civil war is very possible.
How can politicians prevent the potential self-annihilation of the civilization on the planet ? They can either:
• Enforce compliance, starting to weed out people and turning the planet into an armed dictatorship (people will resist and will fight).
• Ignore the directive, accepting the repercussions. We're back to the population feeling bullied.
• Flip the federation off, dropping out from it. People would be happy-ish (no more "cut back on the population" nonsense) until the inevitable consequences (I foresee import/export difficulties and frantic attempts to keep friends with neighbour civilizations and/or the federation itself)
My point being: politicians might be okay with it, but the other 120 billion people won't be. This kind of demand can't possibly be open to negotiation enough to become likeable. Either party has to fold to prevent open conflict.
• Hi Alex, welcome to Worldbuilding SE. If your answer is correct, here on Worldbuilding like slightly more expanded answers... – clem steredenn Oct 5 '15 at 11:25
• I actually intended to post a comment, something went wrong along the way... Well, I'll take the chance to write up some more – Alex Oct 5 '15 at 11:56
• Very good answer, i agree with it – veryRandomMe Oct 7 '15 at 15:45
Assuming negotiation is not an option:
1. Strict birth control. You do not have to kill what is not (yet) born. It will have little effect in scale but let's be honest every single life counts.
2. Ask for volunteers. Will make some little difference I suppose.
3. Promote extreme sports. Same as above.
4. Stop treating the terminally ill. Harsh but it will save some lives.
5. Set up a Survival Determination Project. Should be the most popular URL instantly.
The Survival Determination Project
What is needed is a fool proof mechanism to select survivors from among the total population. To ensure popular support and compliance, and to eliminate political wrangling this needs to be thought out and created publicly, with open and probably vigorous discussion about the why’s, how’s, and when’s. Initially I used RFC (Request for comments) for this concept but it is indeed more along the line of an open source project, as @celtschk rightly pointed out.
The project should provide for:
1. Some kind of raffle. In the end you need some random way to spread survival chances among the to-be-reduced living population. Better be absolute foolproof.
2. Some kind of life gift method. Can only be given, not asked to avoid mass coercion. So must stay secret until day zero for the recipient. This way parents are able to give up their meagre percentage to their children.
3. Possibly gladiator games can be introduced for those who want to fight and/or believe in survival of the fittest. Gives the rest something to watch while it all plays out.
Interesting times indeed.
• Birth control isn't too effective in a 20 year span, especially with a life expectancy of 150 years. What do you mean with "RFC"? The only meaning I know for that abbreviation is "request for comments", but I'm pretty sure that's not what you mean. – celtschk Oct 4 '15 at 12:17
• Not within 20 years as asked for, but birth control can be very effective as you can see in China. Their birthrate is droping for decades and by 2035-2040 more people will get to old to work than young people get old enough to work. (At least they said that in the french/german polit show >Mit Offenen Karten<) – jawo Oct 5 '15 at 11:56
• @celtschk birth control does not have great scale but it cannot be skipped as every life counts; currently some fast growing states have half their population under 20. I've also edited the text around RFC as clearly the intent does not come across. My bad. Hopefully now the explanation has become more useful. – Bookeater Oct 6 '15 at 14:30
• An exam would be good too, in order to ensure the remaining people are the best and brightest, not just the strongest and most merciless. – stephen Oct 6 '15 at 16:44
• Birth control is so obvious that it seems that the 20 year ultimatum is cruel--they are just toying with us. If they gave us 150 years we could set the population to exactly any number (relatively) painlessly through birth control, therefore I would reach the conclusion that there are other motives for the ultimatum--possibly to see how brutal we are or if (how?) we choose to make the decision to kill others to save ourselves. – Bill K Oct 6 '15 at 17:03
Mass hibernation. People are frozen and stored in underground facilities. Then they can be awakened on some schedule, then hibernated again. The technology exists today but is currently used only on recently dead people, in the hope that the medicine of the future could cure their diseases. It has also been tested on animals.
• The technology does not provenly exist today. – gerrit Oct 4 '15 at 19:06
• @gerrit It does provenly exist. We just don't have a way of bringing the frozen people back, yet. Not that much of a problem if the bigger problem is overpopulation, if you ask me. – timuzhti Oct 5 '15 at 2:22
• @Alpha3031 Without proving that they could be brought back, this technique becomes tantamount to mass murder by hypothermia. Nonetheless, +1, because this could definitely be possible wiith near-future tech. – ApproachingDarknessFish Oct 5 '15 at 7:27
• @Alpha3031 That's like saying that you can fly down a cliff or high building, and that it's just the landing you need to figure out. – gerrit Oct 5 '15 at 11:28
• I think this answer works considering this is set far enough in the future really. – Tim B Oct 5 '15 at 12:03
In Dan Brown's Inferno a mad scientist releases a rapidly spreading virus that causes infertility in about 50% of all people.
It cuts the population by a drastic amount without actually causing anyone harm, though it is delayed by about 1-2 generations.
• This is definitely the most humane and fair way to do things. – Varrick Oct 5 '15 at 11:02
• This may reduce the population in the long run, but even if you halted all births immediately, 20 years is too short for this to make any significant difference. – pluckedkiwi Oct 5 '15 at 18:21
• without actually causing anyone harm With respect, I think you vastly underestimate the social and emotional harm that people suffer even now from infertility. It might be possible to argue that this is the least inhumane way to do things, but forced sterility is anything but humane. – GrandOpener Oct 7 '15 at 19:54
• Infertility is a lot more fun than starvation. – Peter Wone Oct 9 '15 at 6:09
• Sounds a bit like the Sterile Insect Technique and related inherited sterility in insects – Kelly Thomas Oct 9 '15 at 11:33
Set Baby Rights
Allocate every woman an allowance of 1/2 of a baby. A couple or a single woman can sell that allowance, or buy the rights to a full baby and give birth to one.
Allow the Free Market to Take Over
A woman (single or as part of a couple) that has the means to buy the allocation of others, can have a full baby or more, if they have the cash. This way, a woman who, for example, may be impoverished, can sell her allocation (probably for a lot of money). This evens the playing field a bit.
In a lesbian couple, one of them can have a baby (two half-allocations) or they can 'buy' more baby. Women who are unable to conceive can sell their allocation and can pay a couple to adopt, but that's a different story.
Two men who want to adopt will just pay whatever it costs to adopt someone who has had a baby; this will be more expensive, because unlike a heterosexual couple they do not start with 1/2 allocation.
• How do you actually plan to police this? Without dystopian levels of surveillance, policing this is near impossible. – March Ho Oct 5 '15 at 9:59
• And what do you do if someone conceives? Or if a couple is sold a fake birth right? – Davidmh Oct 5 '15 at 12:52
• This will still not have significant effects on total population in 20 years. – Paŭlo Ebermann Oct 5 '15 at 13:23
• @Mikey the problem is that even if no new kids are born at all, with a life expectance of 150 years, only after about that time most of the currently living people are dead. – Paŭlo Ebermann Oct 5 '15 at 18:45
• @DJMethaneMan The solution is that everyone gets 1/4th of a baby regardless of gender. Yay, equality! – DoubleDouble Oct 6 '15 at 22:01
Computer Avatars
2045 Initiative comes to mind. This organization says that by 2045 we will have analyzed the brain's complexity and will be able to upload our selves into a virtual-reality world. Instead of a body we would have an avatar (and, in the future, a real-life "robot" avatar).
If this technology exists, it might not be hard to convince the old and the young to live in this simulator.
• The problem is, that's a simulator. You still die if you are killed. You have no real connection to the simulation when dead. – veryRandomMe Oct 7 '15 at 15:47
• @NoviceInDisguise I don't understand. The body will be dead (which is what they want) but the conscious will be in the simulator. You can see family/friends that are in the simulation. People from the real world could go "play" and visit like in modern MMORPG. If you define someone from the conscious then nobody actually dies. – the_lotus Oct 7 '15 at 18:31
• That is all virtual. You yourself are not actually within the simulation. When you die, the simulation continues, yes. However that simulation is not you. You will not be able to connect with it. It will appear the same to people still alive, the simulation of the other person will be realistic, however that person will indeed be dead and unable to think, act, or perceive. – veryRandomMe Oct 7 '15 at 18:43
• That's an interesting philosophical question of what constitutes the "self." If I were able to "upload" my consciousness into a simulation with memories and emotions intact, while my meat-body died, I would consider that simulation to be myself. – GrandOpener Oct 7 '15 at 19:54
• @GrandOpener Searle has an interesting analogy in his Chinese Room argument. If you can simulate a human brain, you can surely simulate a thunderstorm at the same level of complexity. Would you be worried about water damage when you run the thunderstorm simulation? If, for a thunderstorm, a simulation is obviously not the real thing, how come, for a human brain, a simulation is the exact same thing with the exact same causal powers such as creating phenomenal conscioussness? – Solanacea Apr 14 '17 at 19:53
The federation sounds pretty unintelligent to think that you can remove 85% of a population in 20 years without any adverse side-effects and I'd wager we could outwit them but let's assume they're just mean.
Assuming there's a roughly equal amount of every age, you can prevent 7.5% of the population via birth control, and 7.5% would die in those 20 years so we can safely ignore any new births and expect 7.5% to die right from the top.
The moon is roughly a quarter the size of earth, and assuming 120b people are living on earth, 30b of those people could live on the moon since it's very likely the technology to do so is entirely there.
• With this knowledge we can strategically place 25% of the population (under 130 years of age, so as to not have any die) on the moon. (30b safely away)
• 7.5% of the total population will still die, but as they're all on earth and this will make sure the moon stays at full capacity upon inspection day. (9b from natural death)
At this point we have 73b people left to work with to meet the demands fully via humane means. It's not looking too good.
Hopefully they can be made to reason with us as we've almost halved our population in a matter of 20 earth years without a single shot fired. All good things take time and hopefully they may see the value in allowing time to take its course and we can be left to continue our means of population control by sending the old to earth and keeping a persistent 30b people on the moon until 8b remain on earth.
or
The Federation eliminates all humans on earth and we persist via the strategically placed youngest humans on the moon. (Sorry gramps)
• If we can move 30 billion to the Moon, maybe we can move 73 billion to Mars? The problem with either though, is that the aliens would just tell us we're restricted to something like 2 billion on the Moon and 6 billion on Mars. It's doubtful we'd be able to hide those massive populations somewhere else in the solar system. – MichaelS Oct 6 '15 at 2:36
• 1 quarter the size does not mean 1 quarter surface ;) the living space on the moon is not 1/4 of earth. Also earth is covered by oceans. – CoffeDeveloper Oct 6 '15 at 15:32
• If we assume the same population density as Tokyo, we can easily fit all 120 billion people on the Moon :) There's also Mars, with 4x the surface area. We just need enough space farms for food. – timuzhti Oct 12 '15 at 3:46
# Colonize another planet, or live in space
You mentioned that technology will be growing at a good rate. We already have probes on multiple planets, and the commercial space industry is exploding, so it's reasonable to assume that by the time your scenario comes around we'll have much better interplanetary transportation abilities.
The bigger question is, why does this federation of aliens care how many people are on Earth so much that they are threatening us?
• Welcome to Worldbuilding. While your answer is not bad as such, it is generally encouraged to elaborate a bit. – Burki Oct 6 '15 at 7:03
• Best solution of all the answers. Why do they care? Maybe they just wish to see how we will solve it, or they find Earth's ecosystem a more valuable resource than the entire human population - because it takes 100000 to calculate all that live for their supercomp - so it is a way to save resources for other calculations. So moving to space habitats is a good solution, if they didn't mean the population of the whole solar system. If they did, move to space anyway and prepare for war. – MolbOrg Jun 30 '16 at 2:08
To reduce the number of humans on earth we will use our advanced biological science to mutate 112 billion humans on the planet to become lizard men.
Thus by definition we have reduced the human population. If the federation is not amused by our trick then we will send the lizard men after them (who of course are naturally well suited for war).
A most (hu)man(e?)ly method would be to start a war with The Federation.
(Given their demands it should be easy to get global support for it)
War will always cause casualties, and as such will reduce the population at a fast rate.
Whether Earth wins or loses, the population objective will be met in the end.
• Sidenote: Don't attempt this when The Federation has spaceships that can destroy planets and is willing to use them. – LukStorms Oct 6 '15 at 12:39
• If I am one of the 112 BILLION people, I could hardly care less to be honest. – veryRandomMe Oct 7 '15 at 15:48
• If the Federation demands that 93% of the human race be killed, it doesn't take much chance of success to increase the expected number of human survivors. There's also the possibility of sending combination armies and colonies to hit other Federation worlds. Or, alternatively, inform the Federation of how many worlds humanity will attempt to destroy. – David Thornley Sep 20 '18 at 19:49
Negotiate over the timeline; we will need about a human lifespan (according to the comments, possibly a lot longer). Then make everyone rich.
If we have achieved space travel, it seems probable that a post-scarcity society would be technologically possible. The only reason it hasn't happened already is that we still have a class system imposed by our free market. Free markets tend to favour people who start with wealth to invest, which enables them to generate more wealth with greater ease than people who cannot make an investment.
However with this ultimatum from the Federation we suddenly have a very strong incentive to stop this nonsense. The governments will redistribute wealth right across the globe. When families are wealthy they tend to have fewer kids or none at all. Japan and Germany both experience population decline for this reason.
The truth is that we could do most of this today. If we redistributed wealth to places that currently have very high birth rates, their child mortality would fall and their economic prospects would rise. It wouldn't take much to greatly increase the quality of life of many of us. Let's not wait for an alien ultimatum; let's do this today.
• Are you sure you're not confusing cause and effect? What proof do you have to back the claim that "when families are wealthy they tend to have fewer kids"? Is it possible that families who have fewer kids just tend to have more money available? In that case, distributing wealth would do nothing to change the growth of population in cultures where having many children is common. – Patrick Roberts Oct 5 '15 at 0:52
• @PatrickRoberts If fewer children were the cause, then we would expect to see the effect vary from family to family. It should be possible for a family in a lower economic stratum to have no kids and climb. Instead we see low social mobility, tinyurl.com/o6zjyzj. Yet when an entire country becomes more economically successful, its birth rate sees a sharp decline, tinyurl.com/nvvuy5g. – Jekowl Oct 5 '15 at 6:34
• Since the article points to (albeit uncommon) successes, this does suggest the effect varies, and is not conclusive proof on the differences between successful and unsuccessful families. Secondly, the graph seems convenient in that it fails to label the 3 countries that don't follow the trend, nor does it imply causality of the trend. There is also no indication of whether the birth rate and GDP are country-wide or per family, so please provide context for the graph. – Patrick Roberts Oct 5 '15 at 10:14
• This is the most humane solution, I think - although the time frame would probably have to be quite big. If each woman has 1.5 children on average, you'd need about 23 generations - that's maybe 500years (Assuming the population is split about 50% female). – Jost Oct 7 '15 at 6:03
• Reason for this being the most humane thing: a) You solve two problems in one go (poverty and the federation threat), b) There is no force involved - no one-child laws, no economical reasons to not get a child.Some reasons why it will work: It just removes a bunch of economical reasons for getting children: They are no longer needed as workforce to feed the family, and there will be fewer "accidential" pregnancies, because people have the means for birth control. – Jost Oct 7 '15 at 6:11
"Birthrates are similar to where they are now"
In lots of developed countries the birthrate is lower than 2! This means that without immigration from poorer countries the population would decline (in fact, Japan could even lose its entire population within a few hundred years at its current birthrate).
So the answer is simple: make underdeveloped countries developed so that they undergo a demographic shift.
• Stabilizing the population growth to levels slightly below replacement will slightly reduce the size of each subsequent generation, but does not even begin to address the need to kill off nearly everyone within a mere 20 years. Even preventing all births will not significantly reduce the population within that timescale. – pluckedkiwi Oct 5 '15 at 18:25
• Indeed, but it would be stupid not to use a peaceful solution for decades and then suddenly decide that 120 billion is too much and that we should decrease to a few billion. But I agree that given the setup only murder/displacement could work – agemO Oct 7 '15 at 6:58
There is no way to 'humanely' reduce the population by that much in twenty years. You can see this if you picture yourself at somewhere between 20-30 years old in the here and now, and then imagine that only 6% of the people you know at that point in your life are alive 20 years later. Even taking someone who is 60, you're still describing mass slaughter.
This is assuming that you aren't using the meat industry's definition of "humane" I suppose. But even then, the idea of "humane death" is one that decreases suffering as much as possible. If you're going to announce that 94% of humanity needs to die in the next 20 years, you're pretty much announcing an inhumane outcome.
So yeah - war with Federation it is, like other people said. There's no sense in trying to reason with a bunch of genocidal maniacs.
Well if you are George Lucas you have some really cool options.
You file a Form 382-G with the intergalactic courts to get them to stop the process. (Get them tied up with that for a while.)
You're going to also probably want to file for a HIQ9 restraining order at the same time. But that's going to cost because you're going to have to hire process servers to serve papers to each federation world leader, AND you're going to have to take out a full page legal notice ad in the intergalactic news beacon next month.
Then you file a Z-91 notice of foreclosure (with a NV-56.121 notice to vacate attached) against each member of the federation home worlds. You don't really have a case but they're still going to have to prove it. Now that's going to buy you a good 60 years or so.
Of course you mustn't forget to file a bunch of motions with the senate subcommittee on population control. It would be ideal if these were as confusing as possible as you want to tie up the process as long as possible.
Then you haul over to the interstellar transportation board and file to have a toll zone put in between Mars and the Oort cloud. You'll probably have to pay a few bribes for that one. So you give them Pluto (joke's on them, it's not even a planet anymore).
Of course, the toll zone isn't going to keep them out forever, but everyone knows they aren't going to be paying that toll just to come check on you.
So that bought you some time. Now what?....
Now you just wait about 20 years, green screen the whole planet and digitally alter it to make it appear that people that were there aren't there anymore.
Just to round it out you fill in every empty space with random creatures for no good reason.
Then make a nice film about how barren your planet is and send it back to the federation senate, where they watch all of about 20 minutes of it and silently all vow to pretend like the entire thing never happened.
• Actually I had a real legit point with this... There obviously is some assumption of some sort of "rule of law" in this federation and as members the earth would be entitled to some legal recourses inside the system. Armed conflict is not automatically the only way to fight back. – Justin Ohms Oct 6 '15 at 21:21
• Very interesting! The Federation prides itself on its legal process, and I wondered if anyone would pick this line of thought! – Isaac Woods Oct 10 '15 at 8:50
The way I see it, nobody would take the risk of accepting this ultimatum in front of their population. Since a democratic choice would lead to chaos all over the world, I think the only realistic way would be to do it without the consent of the people.
Some deadly viruses/diseases could be successively unleashed, like the 1918 flu or Ebola, and that could decrease the population a bit. Keeping only a few percent of a human population alive seems a really difficult task.
A nuclear war might be efficient enough to kill a huge part of the population, but who would pull the trigger?
Freeze them
Freeze the excessive population (choose who to freeze by lottery, age, geographical area ...) and implement strict birth control to prevent new people being created. When somebody dies, unfreeze someone to replace them.
• What does this add that wasn't present in previous answers? – zeta Oct 5 '15 at 20:58
According to the WHO, 56 million people die each year. With the population at 120 billion, even though people live longer and medicine has presumably improved, I don't think the per-capita mortality rate would be proportionally lower, as being a lot older would largely cancel out the improved medical treatment. So I'm going to assume a mortality rate in proportion to today's, that is (56 million x 17) about 952 million people a year.
Assuming reproductive rights were strictly suspended by the powers that be, and this injunction was adhered to by the citizens, or successfully enforced by the government(s), to drop down to a population of 8 billion from 120 billion would take 118 years. That is IF no extra person were born in that time.
After this period of 118 years, those who were babies at the start of the reproductive ban would be 118 years old, and eight billion humans would be between the ages of 118 and 150. The 118-year-olds would be the youngest generation.
If you assume the mortality rate per capita to be LOWER because of improved medicine, then this 118 year span would become longer. Even if you quadrupled the mortality rate, so that it were just under 4 billion people dying per year, it would still take, all other factors held constant, about 28 years to accomplish an 8 billion population goal.
This is assuming NO new births, and no voluntary or involuntary euthanasia.
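A quick back-of-the-envelope check of these figures (my own Python sketch, using the answer's assumption of a constant absolute number of deaths per year):

```python
def years_to_target(start=120e9, target=8e9, deaths_per_year=952e6):
    return (start - target) / deaths_per_year

print(round(years_to_target()))                            # ~118 years
print(round(years_to_target(deaths_per_year=4 * 952e6)))   # ~29 years at 4x mortality
```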
The most destructive war in terms of lives was WW2, which claimed (higher estimates) around 80 million people. And in my opinion, a great portion of these were non-combatants dying from famine, disease and genocides.
So that's 80 million over a 6-year period, i.e. about 13 million per year. Well, it's a start. The deadliest earthquake on record killed about 800,000 Chinese. The Black Death of medieval Europe wiped out between a quarter and a half of its population (50 to 200 million). This is a measly 10 to 40 million people a year. The Spanish flu of 1918 killed between 25 and 50 million people. So with extra mortalities thrown in we'd be getting there.
But don't forget this is with a complete ban on reproduction. Assuming a complete baby ban, and a mortality rate per capita four times what it is today, we'd be sitting around 28 years to achieve the goal. To reduce that time to 16 years about 900 million people extra would need to die per year. Likewise, you could assume a mortality rate of 8 times what it is today, per capita, and the natural number of attrition would drop the population down to 8 billion after 16 years, that's about 8 billion people dying per year.
Did I mention with a complete baby ban.
I think we can hardly better Jonathan Swift's A Modest Proposal of 1729. "I have been assured by a very knowing American of my acquaintance in London, that a young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricasie, or a ragoust. I do therefore humbly offer it to publick consideration, that of the hundred and twenty thousand children, already computed, twenty thousand may be reserved for breed, whereof only one fourth part to be males; which is more than we allow to sheep, black cattle, or swine, and my reason is, that these children are seldom the fruits of marriage, a circumstance not much regarded by our savages, therefore, one male will be sufficient to serve four females. That the remaining hundred thousand may, at a year old, be offered in sale to the persons of quality and fortune, through the kingdom, always advising the mother to let them suck plentifully in the last month, so as to render them plump, and fat for a good table. A child will make two dishes at an entertainment for friends, and when the family dines alone, the fore or hind quarter will make a reasonable dish, and seasoned with a little pepper or salt, will be very good boiled on the fourth day, especially in winter." Just scale up the numbers and sell to the Federation.
To reduce population you need to focus on two things, decrease birth rate, and increase death rate.
The fastest way to increase the death rate would be to start a conflict, and it would be very easy to engineer. If the Federation makes that kind of demand, people will split into two groups: one that agrees to the terms and another that opposes them. A few "terrorist" attacks and a large anti-terrorist operation should then successfully reduce the population to the required limit.
The other solution is much simpler: just increase the cost of health care and the cost of living, and promote an unhealthy lifestyle.
To decrease the birth rate, I see two solutions. One was described in Dan Brown's book Inferno: a virus that made a third of humanity infertile. The other would be to introduce laws that allow only some people to breed.
An "intergalactic federation" spans between galaxies. Why are we worried about population on one planet? The implication of FTL travel and significantly vast energy scales means people should colonize new worlds and artificial habitats. With the technology of hundreds of billions of worlds, and substantially higher technology implied by FTL etc and because some worlds will be much older, the carrying capacity of one planet will be significantly higher, too, not limited by the energy of the sun.
So your premise needs to be at least quantified better, or certainly explained and justified better.
First someone should ask in which frame of reference are those 20 years defined.
If the federation people are constrained by the speed of light and moving at relativistic speeds, then it is quite possible that those 20 years are actually a significantly longer time. Even if time dilation is not a factor, the distances involved are.
If they do have faster-than-light travel, then Earth could kindly ask to borrow one of the time machines they certainly have and a) destroy or undermine the Federation in their past or b) teach our Bronze age ancestors abstinence. I'll leave the various paradoxes and ways to avoid overshooting the target as an exercise...
• +1! This is an almost perfect answer, but there is a paradox free use for the time machines which I will outline in my answer below. Great answer! – Henry Taylor Jun 23 '17 at 4:27
Personally I agree with the folks suggesting war. But if, for the sake of argument, that isn't an option, there is perhaps an actual humane way, provided the humans can negotiate two things - a longer time period and for the 'federation' to cough up some money.
The solution would be to pay people not to have children.
If the federation is powerful enough that it feels it can issue a mandate to humanity, then it clearly has incredible resources. If they are willing to part with some, they can offer humans large payments in exchange for accepting permanent birth control. It is a win-win transaction for everyone involved, with minimal coercion. If payments are large enough, then enough people will accept them. Over the course of an average human lifetime, you'd see a significant drop in the population. If it's not dropping fast enough, the federation ups the payment. If it's dropping too fast, they lower the payment.
There could of course be cheating (freezing eggs, using surrogates), but DNA tests at birth should stamp out the majority of it.
• Very interesting! Thanks for adding this answer. This also opens a potential for negotiation with the Federation to extend the timespan to a human life's worth as well! – Isaac Woods Oct 10 '15 at 8:47
@thkala's answer was almost perfect, but missed one implication of the Federation having FTL capabilities...
FORWARD TIME TRAVEL
1. First of all, sterilize all but 8 billion people and hold those still fertile humans in reserve for the last step of the process.
2. Acquire several of the Federation's Time Machines.
3. Divide the remaining 112 billion people into groups of 8 billion people each; carefully balancing skills and capabilities so that every group has all the knowledge and experience needed to thrive on their own.
4. Send each group of 8 billion people into the future, advancing each group forward 150 years farther than the group before. Each of those groups can then live out their lives in total before the next group arrives to take their place.
5. After all of the other groups have gone forward, send the 8 billion fertile people to the empty world which follows the death of the furthest sent group. It is their job to carefully propagate the human race into the future.
Once that fertile group has left, the Federation can come pick up their time machines so that nobody is tempted to misuse them.
Note: This could also be done without Time Machines if the Federation could supply enough star ships to contain the 112 billion (otherwise time-travelling) people. Those ships would travel out on vast circular routes at heavy time-dilating speeds in such a way that enough ships to offload 8 billion humans would return to Earth every 150 years.
Note: It could also be done with cryogenics, but that might not satisfy the Federation because the 112 billion frozen humans might still be considered alive, making the entire effort moot.
Late to the party, but I had a shower thought (literally)
Choosing people at random is the only way that will be viewed as fair and acceptable. But of course those chosen to die will go to war against both the Galactic Federation and the human leaders that serve it.
Instead, the rulers of the Earth will need to reframe the situation: an evil empire has infected the Earth with a virus, but the Galactic Federation has generously provided billions of doses of vaccine. So we randomly choose those who will receive it.
Of course, the virus would need to be made and distributed in secret by the Earth government or the Federation. The Federation's own weapon is probably a virus anyway: any other weapon of mass destruction would mess up the planet, and if they did not mind messing up the planet, they would just blow it up with the exhaust from their intergalactic drive.
In Mass Effect the solution was spreading a disease that makes most victims sterile. But again, it needs about three quarters of a lifespan to bring results. The really humane way is to send everybody to another planet, or maybe cheat the aliens with the classic "genie wishes" trick.
First I want to say that it's impossible to have 120 billion humans on the Earth, but well, it doesn't matter right now; sooner or later all of them will be dead one way or another.
# Birth Control Rate
It's a slower method, but it's quite ethical.
This way it's impossible to kill 120 billion humans, but you can prohibit them from having any children (even if they want children you can sterilise them; I don't know how practical that is in humans, but mosquitoes are routinely sterilised with gamma rays, and it is fast and cheap). You could also make a deal to increase human survival or to extend the time limit you have to do it.
Personally I think it's impossible, but if humanity shows that it is trying to complete the deal (even if it runs out of time) the Federation could give them a second chance.
Obviously you could choose not to sterilise useful and smart people; that is your choice.
Also, you could run a random (or selective) process where the people who lose (14 out of 15 people :)) have 2 options:
• Be executed.
• Be sterilised.
# Controlled Plague
A faster way could be to use a method similar to the one used in Utopia.
• Basically they make an Alpha protein which is given to humans secretly in cereals (and other types of food, I think).
• Then 7 secret agents fly 7 planes (the kind used to spray field crops), but instead of spraying water or insecticide they spray the cities (yes, the 7 most important capitals, not farms) with a lethal sickness (the Black Death).
• Finally, the US government gives free vaccines to all the American citizens and also gives vaccines to other countries. These vaccines contain the Beta protein which causes a chemical reaction with Alpha proteins:
• This sterilises a random 95% of the human population. It "attacks" human bodies only if they have a certain Z gene (I would say X, but that might cause confusion); this gene is distributed almost randomly in the human population.
• Also (it wasn't totally planned in the series), this Alpha-Beta combination can destroy human immunity to the black plague, making it more lethal.
• They said that in 100 years the human population would stabilise at 500 million humans. In your case, it would be 6 to 7.5 billion humans.
Important people could have the real vaccine (without the Alpha or Beta protein).
Basically you can poison (or lace with a sterilising drug) the water, the food, or even a person (like a plague carrier?).
# Reduce Medical Care
• Terminal and non-terminal but lethal diseases are not allowed to be healed by medics.
• Suicides and bloody accidents aren't treated by medics.
• Vaccines aren't given to humans.
• People in comas, vegetative states, intensive care, etc. are disconnected.
• Medical drugs replaced by placebos.
If you don't want to provoke a revolution you can instead make medical care very expensive. Rich and smart people (who are often rich, or at least have rich friends) won't die.
# War
War is very useful: it's waged mostly by volunteers and there are a lot of deaths.
• Countries could wage wars to kill people; maybe the country that loses would have to "sacrifice" more people for the Federation.
• The world's population could make a war against the Federation.
• If they lose, human overpopulation would be almost resolved.
• If they win, there is no more human cap.
# Other methods, quite... dangerous.
• Poison Water: you can poison the drinking water (or at least make it sterilise humans); when humans reach 8 billion you can make a vaccine and stop poisoning the water. (You can distribute fresh water or vaccines to VIPs.)
• Controlled Droughts: governments could secretly destroy crop fields (drought by withholding water, controlled insect plagues, fire) to make citizens die of starvation. VIPs are generally rich or smart (with rich or other VIP friends), so they won't have any problem buying food.
• Melt the Ice Caps: yes, it's a crazy idea, but if you melt the poles a lot of people would die from drowning; then if you stop heating them they would slowly freeze again (right???).
• Nuclear War: obviously.
• Biological War: like the controlled plague but on a bigger scale (this would be in your enemies countries not on your own).
|
2020-10-26 01:57:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29104435443878174, "perplexity": 1776.7537565126484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890108.60/warc/CC-MAIN-20201026002022-20201026032022-00295.warc.gz"}
|
https://codeforces.com/topic/22406/?mobile=true&locale=ru
|
Counting Divisors of a Number in [tutorial]
Правка en5, от himanshujaju, 2015-12-26 17:09:26
Topic : Counting Divisors of a Number
Pre Requisites : Basic Maths , Factorisation , Primality testing
Motivation Problem :
There are T test cases. Each test case contains a number N. For each test case , output the number of factors of N.
1 <= T <= 10
1 <= N <= 10^18
Note : 1 and N are also treated as factors of the number N.
Solution :
Naive Solution: $O(\sqrt{N})$ Factorisation
The most naive solution here is to do the factorisation by trial division up to $\sqrt{N}$. But as we can see, it will surely receive the Time Limit Exceeded verdict. You may try to precompute the prime numbers up to the required range and loop over them, but that too exceeds the usual limit of $10^8$ operations. Some optimisations and heuristics may allow you to squeeze your solution through, but usually at the cost of a few TLEs.
Supposing you fit the above description and cannot think of anything better than the naive solution, I would like to present a very simple algorithm which helps you count the number of divisors in $O(N^{1/3})$, which is useful for this kind of question. This algorithm only gives us the count of factors, not the factors themselves.
Firstly, let's do a quick recap of how we obtain the algorithm for any number N :
We write N as a product of two numbers P and Q, with P <= Q, so that P <= $\sqrt{N}$.
Looping over all possible values of P (every integer up to $\sqrt{N}$) gives us the count of factors of N in $O(\sqrt{N})$.
Before you move forward to the next section, it would be useful if you try to come up with an algorithm that finds the count of factors in $O(N^{1/3})$. As a hint, the way of thinking is the same as that used to reach the $O(\sqrt{N})$ algorithm.
## Counting factors in $O(N^{1/3})$
Let's start off with some maths to reduce our factorisation work to $O(N^{1/3})$ for counting factors:
We write N as a product of three numbers P, Q and R, with P <= Q <= R, so that P <= $N^{1/3}$.
We can loop over all prime numbers in the range $[2, N^{1/3}]$ and try to reduce N to its prime factorisation, which would help us count the number of factors of N.
We will split our number N into two numbers X and Y such that X * Y = N. Further, X contains only prime factors at most $N^{1/3}$ and Y deals with the higher prime factors (greater than $N^{1/3}$). Thus, gcd(X, Y) = 1. Let the count of divisors of a number N be denoted by the function F(N). It is easy to prove that this function is multiplicative in nature, i.e., F(m * n) = F(m) * F(n) if gcd(m, n) = 1. So, if we can find F(X) and F(Y), we can also find F(X * Y) = F(N), which is the required quantity.
For finding F(X), we use the naive trial division to prime factorise X and calculate the number of factors. Once this is done, we have Y = N / X remaining to be factorised. This may look tough, but we can see that there are only three cases which will cover all possibilities of Y :
1. Y is a prime number: F(Y) = 2.
2. Y is the square of a prime number: F(Y) = 3.
3. Y is the product of two distinct prime numbers: F(Y) = 4.
We have only these three cases since Y can have at most two prime factors. If it had more than two prime factors, one of them would surely have been at most $N^{1/3}$, and hence it would be included in X and not in Y.
So once we are done with finding F(X) and F(Y), we are also done with finding F(X * Y) or F(N).
Pseudo Code :
N = input()
primes = array containing primes till 10^6
ans = 1
for all p in primes:
    if p*p*p > N:
        break
    count = 1
    while N divisible by p:
        N = N/p
        count = count + 1
    ans = ans * count
if N is prime:
    ans = ans * 2
else if N is square of a prime:
    ans = ans * 3
else if N != 1:
    ans = ans * 4
Checking for primality can be done quickly using Miller-Rabin. Thus, the time complexity is about $O(N^{1/3})$ per test case, and hence we can solve our problem efficiently.
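For concreteness, here is a runnable Python sketch of the method described above. It is not part of the original tutorial: the sieve limit of 10^6 assumes N <= 10^18, and the fixed Miller-Rabin bases make the primality test deterministic for numbers of that size.

```python
import math

def is_prime(n):
    """Deterministic Miller-Rabin for n < 3.3 * 10^24 (covers 64-bit inputs)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def sieve(limit):
    """Primes up to `limit` via the sieve of Eratosthenes."""
    mark = bytearray([1]) * (limit + 1)
    mark[0] = mark[1] = 0
    for i in range(2, math.isqrt(limit) + 1):
        if mark[i]:
            mark[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i, m in enumerate(mark) if m]

PRIMES = sieve(10 ** 6)  # cube root of 10^18

def count_divisors(n):
    """Number of divisors of n, using trial division only up to n^(1/3)."""
    ans = 1
    for p in PRIMES:
        if p * p * p > n:
            break
        count = 1
        while n % p == 0:
            n //= p
            count += 1
        ans *= count
    # What remains is 1, a prime, a prime squared, or a product of two distinct primes.
    if is_prime(n):
        ans *= 2
    elif math.isqrt(n) ** 2 == n and is_prime(math.isqrt(n)):
        ans *= 3
    elif n != 1:
        ans *= 4
    return ans

print(count_divisors(735134400))  # 1344, a highly composite number
```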
At this point, you may think that in a similar way we can reduce this to $O(N^{1/4})$ by handling some cases. I have not thought much about it, but the number of cases to be handled is high, since after trial division N could be factorised into one, two or three primes. This is easy enough to code in a contest environment, which is our prime objective.
This trick is not very commonly known, and people tend to introduce bugs when handling the three cases. A problem from the regionals which uses this trick directly:
Problem F | Codeforces Gym
You can also try this technique on problems requiring factorisation, for practice purposes.
Hope you found this useful! Please suggest more problems to be added as well as any edits, if required.
Happy Coding!
#### История
Правки
Rev. Язык Кто Когда Δ Комментарий
en5 himanshujaju 2015-12-26 17:09:26 20
en4 himanshujaju 2015-12-26 17:01:34 168
en3 himanshujaju 2015-12-26 15:12:53 25 title
en2 himanshujaju 2015-12-26 15:08:47 60 first iteration (published)
en1 himanshujaju 2015-12-26 15:05:51 5527 Initial revision (saved to drafts)
|
2021-10-23 02:47:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6635176539421082, "perplexity": 619.0129523263635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00198.warc.gz"}
|
http://www.apmonitor.com/wiki/index.php/Main/SteadyState?action=diff&source=n&minor=n
|
Main
June 09, 2017, at 12:57 AM by 10.5.113.159 -
Changed lines 5-6 from:
nlc.imode = 1
to:
apm.imode = 1
Changed lines 8-9 from:
apm_option(server,app,'nlc.imode',1);
to:
apm_option(server,app,'apm.imode',1);
Changed line 11 from:
apm_option(server,app,'nlc.imode',1)
to:
apm_option(server,app,'apm.imode',1)
June 16, 2015, at 06:55 PM by 45.56.3.184 -
Changed line 10 from:
% Python example
to:
# Python example
June 16, 2015, at 06:55 PM by 45.56.3.184 -
Changed lines 5-11 from:
NLC.imode = 1
to:
nlc.imode = 1
% MATLAB example
apm_option(server,app,'nlc.imode',1);
% Python example
apm_option(server,app,'nlc.imode',1)
October 02, 2008, at 09:03 PM by 158.35.225.228 -
Deleted lines 2-3:
The first step in model creation and testing is a steady state simulation. Steady state model simulations are employed to converge new model units, verify relationships between key process variables, and allign process values to typical operating regions. The SS (#1) and RTO (#3) modes are used to obtain an initial condition solution for all other modes of operation. The initial condition files rto.t0 and ss.t0 override default values given in the apm file.
Changed lines 5-7 from:
NLC.imode = 1
to:
NLC.imode = 1
The first step in model creation and testing is a steady state simulation. Steady state model simulations are employed to converge new model units, verify relationships between key process variables, and allign process values to typical operating regions. The SS (#1) and RTO (#3) modes are used to obtain an initial condition solution for all other modes of operation. The initial condition files rto.t0 and ss.t0 override default values given in the apm file.
September 29, 2008, at 06:45 PM by 158.35.225.229 -
|
2017-11-18 08:15:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5751652121543884, "perplexity": 7269.60331726368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804680.40/warc/CC-MAIN-20171118075712-20171118095712-00547.warc.gz"}
|
https://colleenyoung.org/category/algebra/
|
# Mathematical Miscellany #75
A collection of impressive resources…
Jake Gordon has been rather busy! Have a look at his “monster of a PowerPoint”. These detailed worked examples are based on the book Teaching math with examples by Michael Pershan.
Also inspired by Michael Pershan’s book and her research on self-explanation, have a look at Karen Hancock‘s journey into worked examples.
From Nathan Day – a brilliant collection of resources on Distributivity: Partitioning, Grid Method, and Expanding Brackets The 15 tasks increase in demand. Note the complete thread.
Also from Nathan Day – a wonderful collection of Starters – again, note the complete thread.
I have added this collection to my Starters page, a collection that includes Advanced Level Starters.
Andy Lutwyche’s collection of Erica’s Errors where students must identify errors in solutions can be an ideal starter for either retrieval practice for an earlier topic or to consolidate learning for a current topic.
Added to my Primary page – curriculum mappings from NCETM
Also added to the Primary page:
On DrFrostMaths there is a growing collection of Key Skills for Primary students.
I have mentioned DrFrostMaths more than once recently; a new instruction manual for teachers is now available.
From the GCSE/IGCSE/L2 Further Maths page, the pages for individual topic areas all include Sites with clear resources by topic, see the Algebra page for example. These resources by topic include links to the relevant DrFrostMaths key skills.
# Algebraic Notation
From the KS3 National Curriculum we see the above on algebraic notation; see also pages 56-66 of the Teaching mathematics at key stage 3 guidance. The guidance covers the entire KS3 curriculum and includes common difficulties and misconceptions, examples for use in lessons, and suggested questioning and other strategies for teachers to use.
The following slideshow includes several resources you can use with students for practice in writing algebraic notation.
Included you can see Jonathan Hall’s Worded Expressions, as always with MathsBot resources we have lots of choices – for example, hide either the sentences or expressions. With the ability to generate new expressions we have an endless supply; also from Jonathan Hall, see his Forming Expressions, these resources are ideal for self-study as well as for use in class.
From Don Steward, we have translating English to algebra, expressions, see also translating English to algebra, relationships. Also included here is an activity, A1 from the Standards Unit on Interpreting algebraic expressions. This includes 4 card sets to match, ideal for looking at multiple representations, students match algebraic expressions, explanations in words, tables of numbers and areas of shapes. One of the goals of the activity is to help learners to translate between words, symbols, tables, and area representations of algebraic shapes. The Standards Unit resources can all be accessed without a login from the very clear to navigate University of Nottingham site linked to in the Standards Unit post.
One of Chris McGrane’s Starting Points MathsCurriculum Booklets – Algebra 1 from Phase 3 features some great activities for writing algebraic statements, featured on the slides you can see a Smile activity, and Jo Morgan’s lovely Introduction to Writing Algebraically – this is such a good idea, as Jo says in the resource description if they know how to do it with numbers, then they just do the same thing with the algebra.
Further excellent resources on this skill are available on Maths4Everyone.
On Transum, Writing Expressions is an exercise with a difference: listen to the audio, then type in the expression.
From Corbett Maths
Algebra: expressions – forming Video 16 Practice Questions Textbook Exercise
16. Algebra: expressions – forming Practice Questions answers Textbook answers
From Andy Lutwyche – Algebraic Expressions Spiders
Here’s an interesting query type on WolframAlpha – simple word problems. See more examples of Word Problems (and All Examples by Topic).
From my post on Bar Modelling see The Mathenæum from Ken Wessen which includes Modelling Word Problems.
# Mathematical Miscellany #55
Featuring:
In Mathematical Miscellany #54 I featured two excellent resources from Curriculum for Wales; a third is now available, “The Foundations of Algebra” is suitable for progression step 3 of the new #CurriculumForWales (age 11). The workbook contains chapters on patterns, commutativity, distributivity & associativity.
I do like the above exercise which as the Teacher’s Guide acknowledges is based on Don Steward’s work, directed number arithmetic speed up and Chris McGrane’s Alternative representation of integers. A further useful resource for such an exercise is Jonathan Hall’s Directed Number MCQ Generator on MathsBot with which you can generate all the addition and subtraction multiple-choice questions you want; choose between Counters on or off.
As with the other two resources, a very comprehensive teacher’s guide is also available. You can see the contents here, this resource with its carefully chosen and varied activities and exercises will help students with the foundations of Algebra.
On the subject of negative numbers, from PhET simulations we have another excellent resource in their latest addition to the Mathematics collection. To use the number line as a model for ordering real numbers and also to illustrate operations with negative numbers we can use the excellent, Number Line: Distance. Also available are Number Line: Integers, and Number Line: Operations. All are excellent for students to explore.
This resource has been added to my post on Negative Numbers which looks at some resources to develop understanding of operations with positive and negative integers and exercises for practice.
A popular post on this blog is on Venn Diagrams, first written in 2016 this has recently been checked and updated with some new resources including always excellent resources from Amanda Austin on Dr Austin Maths. Included in her Probability resources you will find an excellent section on Set Notation and Venns.
GeoGebra retweeted this from Javier Cayetano Rod
…the translation:
"Adding some leaves to the stem of a flower can be the perfect excuse to talk about translations and turns in space @debora_pereiro. Added to the flower generator in @geogebra: https://geogebra.org/m/duqthjva"
Which in turn led me to this amazing GeoGebra collection, Flowers 3D, Author: Deborah Pereiro Carbajo.
Have a look at Flowers from Curves, simply brilliant!
Explore the collection – you could be a while!
Complete Maths has made Robert Smith’s session “Web Autograph, a First Look” freely available
I wrote earlier this year on the excellent Math Whiteboard. This is completely free to use; If you create a whiteboard you can then get a link for that whiteboard which you can share. When I have created a whiteboard I then save a second copy so I can always return to the original.
With an individual subscription ( currently \$15 for a year) it is possible to access all the features of Fluid Math including as an authoring tool for creating Math Whiteboard activities. It is also possible to save your activities.
A new feature is available – the ability to create answer boxes to check for correct answers. You can see examples using this feature here.
By Colleen Young
# Coordinate Geometry – Underground Maths
Exploring Algebra Review Questions from Underground Mathematics, I came across some Coordinate Geometry questions I really like, and yesterday, spending a day with the very talented writing team and my fellow Underground Mathematics Champions, we explored Straight Line Pairs, a question with much scope for exploration and possible methods of solution.
The image above has been created from the Printable/supporting materials.
My Year 11s will be looking at Coordinate Geometry this week and I have some other questions I would like them to try. It is possible to create pdf files for a collection of questions; see Saving Favourite Resources, one of Underground Mathematics' How To Videos. (See the tutorials page I have in the Underground Maths series of pages – a work in progress.)
You will find a whole collection of such questions if you look at Geometry of Equations. This includes many resources, including Review questions. Note the Building Blocks resources. I think I'll be using Underground Mathematics resources with ever younger students – Year 9 can try Lots of Lines! You will see from the supporting materials that this has come from the brilliant Standards Unit (A10) collection. Students must sort the lines into six pairs, each pair matching one of the given descriptions.
Staying with the Building Blocks I do like Straight Lines where students must decide which of 17 equations are equations of a straight line.
Look at the list – a wonderful lesson in not jumping to conclusions here! Both my Year 9 and my Year 11 are going to be trying these this week!
Straight Lines reminded me of Line Pairs, I feel an extension for Year 11 coming on!
By Colleen Young
# Underground Mathematics Algebra Review Questions
Underground Mathematics provides such an outstanding collection of resources that I have begun to create a series of pages on the site. The resources are not only good for Advanced Level but for GCSE students too, particularly for students aiming at the very highest grades. This series of pages is very much a work in progress which I will be updating regularly.
I have used many of the Review questions for my able GCSE students. As you can see from the descriptions of resource types, the review questions are ideal for the new GCSE specifications as they have been selected to test students' understanding of one or more topics and to exercise their problem-solving skills. The questions which have been chosen require non-routine thinking. You can browse all the Review questions or narrow your search by question type; note the O/AO-level questions which are questions from old papers. One can also search by line (Number, Geometry, Algebra, Functions or Calculus) and by Station.
If you create an account you can easily save and organise your favourite resources. This list of favourites can be easily downloaded as a csv file. To further organise your favourites you can create subcollections.
This too is a work in progress, I will create a collection of resources I believe are particularly useful for GCSE. I have several Algebra favourites so far. This Excel file has hyperlinks to all the resources shown here. algebra-gcse-9-1. Alternatively this pdf file also has the relevant hyperlinks. algebra-gcse-9-1
|
2022-11-29 00:47:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24838729202747345, "perplexity": 2282.6423348779017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00821.warc.gz"}
|
http://alejandroerickson.com/j/2012/07/31/006-connecting-tatami-tilings-with-other-areas-of-math.html
|
I recently asked a question on Math Overflow, to see if other mathematicians and students could think of possible connections to tatami tilings. I have reposted the text of it here, but if you like the question, please go and see it here.
A monomer-dimer tiling of a rectangular grid with $r$ rows and $c$ columns satisfies the tatami condition if no four tiles meet at any point (or you can think of it as the removal of a matching from a grid graph that breaks all $4$-cycles).
This simple restriction, brought to my attention by Don Knuth in Vol. 4 of TAOCP, became my PhD thesis topic when my research group and I discovered that it imposes an aesthetically pleasing structure with a nice description, and opens up lots of fun questions.
Here is my favourite example, which has all of the possible "features". First it is shown uncoloured
And here it is coloured
The magenta tiles show the types of features it can have, and here they are on their own:
I'll introduce my question here, and then summarize some more results later. I want to find more and better ties between tatami tilings and other less esoteric math problems. If you think of a paper or subject I might want to look into, don't hesitate to answer.
In our first paper, we proved the above structure and showed that:
1. A tiling is described by the tiles on its boundary, and hence has a description that is linear in the dimensions of the grid.
2. The maximum number of monomers is at most $\max(r+1,c+1)$, and this is achievable.
3. We found an algorithm for finding the rational generating polynomial of the numbers of tilings of height r (which I think can also be calculated with the transfer matrix method).
We posed a couple of complexity questions (which I am working on), for example, is it NP-hard to reconstruct a tatami tiling given its row and column projections?
Or tile a given orthogonal region with no monomers?
Next we focused on enumerating tilings of the $n\times n$ grid, and found a partition of the $n\times n$ tilings with the maximum number of monomers into $n$ parts of size $2^{n-1}$. We also counted the number of tilings with $k$ monomers, and found this curious consequence:
The number of $n \times n$ tatami tilings is equal to the sum of the squares of all parts in all compositions of $n$. That is, $2^{n-1}(3n-4)+2$.
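As a quick sanity check of that identity (my own sketch, not from the paper), one can enumerate the compositions directly:

```python
from itertools import product

def compositions(n):
    """All compositions (ordered sums) of n, encoded by which of the n-1 gaps are cut."""
    for cuts in product((0, 1), repeat=n - 1):
        parts, size = [], 1
        for c in cuts:
            if c:
                parts.append(size)
                size = 1
            else:
                size += 1
        parts.append(size)
        yield parts

for n in range(1, 13):
    total = sum(p * p for comp in compositions(n) for p in comp)
    assert total == 2 ** (n - 1) * (3 * n - 4) + 2
print("sum of squared parts over all compositions matches 2^(n-1)(3n-4)+2 for n = 1..12")
```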
We also found an algorithm to generate the ones with $n$ monomers, and a Gray code of sorts.
Nice numbers, and a cute problem, but another paper that is self contained with elementary (albeit, somewhat complicated) reasoning.
Our third paper in this story (in preparation) looks at the generating polynomial for $n\times n$ tilings with $n$ monomers, whose coefficients are the numbers of tilings with exactly $v$ vertical dimers (or $h$ horizontal dimers). It turns out this generating polynomial is a product of cyclotomic polynomials and a somewhat mysterious, seemingly irreducible polynomial, whose complex roots look like this:
We've found a bunch of neat stuff about it, for example the evaluation of this polynomial at $-1$ is $\binom{2n}{n}$, for $2(n+1)\times 2(n+1)$ tilings, and we found our generating polynomial gives an algorithm to generate the tilings in constant amortized time. Here is some output of the implementation:
That's the most of the published (and almost published) story. There is a loose connection with other monomer-dimer problems, and things I can look into, like Aztec tatami tilings, but I'm looking for direct applications of other results to these, or vice versa, especially with this last paper in preparation. I'm not asking you to do research for me, but just your thoughts as they are now, so I can go learn new stuff.
Feel free to comment about what you think is interesting, or not, about tatami tilings too!
, , ,
|
2019-02-23 11:48:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6940450668334961, "perplexity": 433.63612553207054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249500704.80/warc/CC-MAIN-20190223102155-20190223124155-00239.warc.gz"}
|
https://jeremykun.com/2013/02/22/methods-of-proof-contrapositive/?like_comment=3003&_wpnonce=7cc6a741c5
|
# Methods of Proof — Contrapositive
In this post we’ll cover the second of the “basic four” methods of proof: the contrapositive implication. We will build off our material from last time and start by defining functions on sets.
## Functions as Sets
So far we have become comfortable with the definition of a set, but the most common way to use sets is to construct functions between them. As programmers we readily understand the nature of a function, but how can we define one mathematically? It turns out we can do it in terms of sets, but let us recall the desired properties of a function:
• Every input must have an output.
• Every input can only correspond to one output (the functions must be deterministic).
One might try at first to define a function in terms of subsets of size two. That is, if $A, B$ are sets then a function $f: A \to B$ would be completely specified by
$\displaystyle \left \{ \left \{ x, y \right \} : x \in A, y \in B \right \}$
where to enforce those two bullets, we must impose the condition that every $x \in A$ occurs in one and only one of those subsets. Notationally, we would say that $y = f(x)$ means $\left \{ x, y \right \}$ is a member of the function. Unfortunately, this definition fails miserably when $A = B$, because we have no way to distinguish the input from the output.
To compensate for this, we introduce a new type of object called a tuple. A tuple is just an ordered list of elements, which we write using round brackets, e.g. $(a,b,c,d,e)$.
As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided:
$\displaystyle (a,b) = \left \{ a, \left \{ a, b \right \} \right \}$
And so a function $f: A \to B$ is defined to be a list of ordered pairs where the first thing in the pair is an input and the second is an output:
$\displaystyle f = \left \{ (x, y) : x \in A, y \in B \right \}$
Subject to the same conditions, that each $x$ value from $A$ must occur in one and only one pair. And again by way of notation we say $y = f(x)$ if the pair $(x,y)$ is a member of $f$ as a set. Note that the concept of a function having “input and output” is just an interpretation. A function can be viewed independent of any computational ideas as just a set of pairs. Often enough we might not even know how to compute a function (or it might be provably uncomputable!), but we can still work with it abstractly.
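As a programmer-friendly aside (my own sketch, not from the post), both bullet-point conditions are easy to check directly when the sets are finite:

```python
def is_function(pairs, A, B):
    """pairs is a set of (input, output) tuples.  It defines a function A -> B
    exactly when every element of A appears as an input exactly once and every
    output lies in B."""
    inputs = [x for x, _ in pairs]
    return sorted(inputs) == sorted(A) and all(y in B for _, y in pairs)

A, B = [1, 2, 3], ["a", "b"]
print(is_function({(1, "a"), (2, "a"), (3, "b")}, A, B))  # True
print(is_function({(1, "a"), (1, "b"), (3, "b")}, A, B))  # False: 1 has two outputs, 2 has none
```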
It is also common to call functions “maps,” and to define “map” to mean a special kind of function (that is, with extra conditions) depending on the mathematical field one is working in. Even in other places on this blog, “map” might stand for a continuous function, or a homomorphism. Don’t worry if you don’t know these terms off hand; they are just special cases of functions as we’ve defined them here. For the purposes of this series on methods of proof, “function” and “map” and “mapping” mean the same thing: regular old functions on sets.
## Injections
One of the most important and natural properties of a function is that of injectivity.
Definition: A function $f: A \to B$ is an injection if whenever $a \neq a'$ are distinct members of $A$, then $f(a) \neq f(a')$. The adjectival version of the word injection is injective.
As a quick side note, it is often the convention for mathematicians to use a capital letter to denote a set, and a lower-case letter to denote a generic element of that set. Moreover, the apostrophe on the $a'$ is called a prime (so $a'$ is spoken, “a prime”), and it’s meant to denote a variation on the non-prime’d variable $a$ in some way. In this case, the variation is that $a' \neq a$.
So even if we had not explicitly mentioned where the $a, a'$ objects came from, the knowledgeable mathematician (which the reader is obviously becoming) would be reasonably certain that they come from $A$. Similarly, if I were to lackadaisically present $b$ out of nowhere, the reader would infer it must come from $B$.
One simple and commonly used example of an injection is the so-called inclusion function. If $A \subset B$ are sets, then there is a canonical function representing this subset relationship, namely the function $i: A \to B$ defined by $i(a) = a$. It should be clear that non-equal things get mapped to non-equal things, because the function doesn’t actually do anything except change perspective on where the elements are sitting: two nonequal things sitting in $A$ are still nonequal in $B$.
Another example is that of multiplication by two as a map on natural numbers. More rigorously, define $f: \mathbb{N} \to \mathbb{N}$ by $f(x) = 2x$. It is clear that whenever $x \neq y$ as natural numbers then $2x \neq 2y$. For one, $x, y$ must have differing prime factorizations, and so must $2x, 2y$ because we added the same prime factor of 2 to both numbers. Did you catch the quick proof by direct implication there? It was sneaky, but present.
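On a finite domain the definition can be checked mechanically. Here is a small sketch (mine, not the post's) confirming that doubling is injective, while squaring is not once the domain contains a number and its negative:

```python
def is_injective(f, domain):
    """Distinct inputs must land on distinct outputs."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

print(is_injective(lambda x: 2 * x, range(100)))    # True
print(is_injective(lambda x: x * x, range(-5, 6)))  # False, e.g. (-2)**2 == 2**2
```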
Now the property of being an injection can be summed up by a very nice picture:
A picture example of an injective function.
The arrows above represent the pairs $(x,f(x))$, and the fact that no two arrows end in the same place makes this function an injection. Indeed, drawing pictures like this can give us clues about the true nature of a proposed fact. If the fact is false, it’s usually easy to draw a picture like this showing so. If it’s true, then the pictures will support it and hopefully make the proof obvious. We will see this in action in a bit (and perhaps we should expand upon it later with a post titled, “Methods of Proof — Proof by Picture”).
There is another, more subtle concept associated with injectivity, and this is where its name comes from. The word “inject” gives one the mental picture that we’re literally placing one set $A$ inside another set $B$ without changing the nature of $A$. We are simply realizing it as being inside of $B$, perhaps with different names for its elements. This interpretation becomes much clearer when one investigates sets with additional structure, such as groups, rings, or topological spaces. Here the word “injective mapping” much more literally means placing one thing inside another without changing the former’s structure in any way except for relabeling.
In any case, mathematicians have the bad (but time-saving) habit of implicitly identifying a set with its image under an injective mapping. That is, if $f :A \to B$ is an injective function, then one can view $A$ as the same thing as $f(A) \subset B$. That is, they have the same elements except that $f$ renames the elements of $A$ as elements of $B$. The abuse comes in when they start saying $A \subset B$ even when this is not strictly the case.
Here is an example of this abuse that many programmers commit without perhaps noticing it. Suppose $X$ is the set of all colors that can be displayed on a computer (as an abstract set; the elements are “this particular green,” “that particular pinkish mauve”). Now let $Y$ be the set of all finite hexadecimal numbers. Then there is an obvious injective map from $X \to Y$ sending each color to its 6-digit hex representation. The lazy mathematician would say “Well, then, we might as well say $X \subset Y$, for this is the obvious way to view $X$ as a set of hexadecimal numbers.” Of course there are other ways (try to think of one, and then try to find an infinite family of them!), but the point is that this is the only way that anyone really uses, and that the other ways are all just “natural relabelings” of this way.
The precise way to formulate this claim is as follows, and it holds for arbitrary sets and arbitrary injective functions. If $g, g': X \to Y$ are two such ways to inject $X$ inside of $Y$, then there is a function $h: Y \to Y$ such that the composition $hg$ is precisely the map $g'$. If this is mysterious, we have some methods the reader can use to understand it more fully: give examples for simplified versions (what if there were only three colors?), draw pictures of “generic looking” set maps, and attempt a proof by direct implication.
## Proof by Contrapositive
Often times in mathematics we will come across a statement we want to prove that looks like this:
If X does not have property A, then Y does not have property B.
Indeed, we already have: to prove a function $f: X \to Y$ is injective we must prove:
If x is not equal to y, then f(x) is not equal to f(y).
A proof by direct implication can be quite difficult because the statement gives us very little to work with. If we assume that $X$ does not have property $A$, then we have nothing to grasp and jump-start our proof. The main (and in this author’s opinion, the only) benefit of a proof by contrapositive is that one can turn such a statement into a constructive one. That is, we can write “p implies q” as “not q implies not p” to get the equivalent claim:
If Y has property B then X has property A.
This rewriting is called the “contrapositive form” of the original statement. It’s not only easier to parse, but also probably easier to prove because we have something to grasp at from the beginning.
To the beginning mathematician, it may not be obvious that “if p then q” is equivalent to “if not q then not p” as logical statements. To show that they are requires a small detour into the idea of a “truth table.”
In particular, we have to specify what it means for “if p then q” to be true or false as a whole. There are four possibilities: p can be true or false, and q can be true or false. We can write all of these possibilities in a table.
| p | q |
|---|---|
| T | T |
| T | F |
| F | T |
| F | F |
If we were to complete this table for “if p then q,” we’d have to specify exactly which of the four cases correspond to the statement being true. Of course, if the p part is true and the q part is true, then “p implies q” should also be true. We have seen this already in proof by direct implication. Next, if p is true and q is false, then it certainly cannot be the case that truth of p implies the truth of q. So this would be a false statement. Our truth table so far looks like
| p | q | p->q |
|---|---|------|
| T | T | T |
| T | F | F |
| F | T | ? |
| F | F | ? |
The next question is what to do if the premise p of "if p then q" is false. Should the statement as a whole be true or false? Rather than enter a belated philosophical discussion, we will zealously define an implication to be true if its hypothesis is false. This is a well-accepted idea in mathematics called vacuous truth. And although it seems to make awkward statements true (like "if 2 is odd then 1 = 0"), it is rarely a confounding issue (and more often forms the punchline of a few good math jokes). So we can complete our truth table as follows
| p | q | p->q |
|---|---|------|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |
Now here’s where contraposition comes into play. If we’re interested in determining when “not q implies not p” is true, we can add these to the truth table as extra columns:
| p | q | p->q | not q | not p | not q -> not p |
|---|---|------|-------|-------|----------------|
| T | T | T | F | F | T |
| T | F | F | T | F | F |
| F | T | T | F | T | T |
| F | F | T | T | T | T |
As we can see, the two columns corresponding to “p implies q” and “not q implies not p” assume precisely the same truth values in all possible scenarios. In other words, the two statements are logically equivalent.
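The table can also be checked mechanically; here is a short sketch (not from the post) using Python booleans:

```python
from itertools import product

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

# Check every row of the truth table.
for p, q in product((True, False), repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("p -> q and (not q) -> (not p) agree on all four rows")
```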
And so our proof technique for contrapositive becomes: rewrite the statement in its contrapositive form, and proceed to prove it by direct implication.
## Examples and Exercises
Our first example will be completely straightforward and require nothing but algebra. Let's show that the function $f(x) = 7x - 4$ is injective. Contrapositively, we want to prove that if $f(x) = f(x')$ then $x = x'$. Assuming the hypothesis, we start by supposing $7x - 4 = 7x' - 4$. Applying algebra, we get $7x = 7x'$, and dividing by 7 shows that $x = x'$ as desired. So $f$ is injective.
This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with $\neq$ the same way it does with equality. In fact, many of the things we take for granted about equality fail with inequality (for instance, if $a \neq b$ and $b \neq c$ it need not be the case that $a \neq c$). The contrapositive method allows us to use our algebraic skills in a straightforward way.
Next let’s prove that the composition of two injective functions is injective. That is, if $f: X \to Y$ and $g: Y \to Z$ are injective functions, then the composition $gf : X \to Z$ defined by $gf(x) = g(f(x))$ is injective.
In particular, we want to prove that if $x \neq x'$ then $g(f(x)) \neq g(f(x'))$. Contrapositively, this is the same as proving that if $g(f(x)) = g(f(x'))$ then $x=x'$. Well by the fact that $g$ is injective, we know that (again contrapositively) whenever $g(y) = g(y')$ then $y = y'$, so it must be that $f(x) = f(x')$. But by the same reasoning $f$ is injective and hence $x = x'$. This proves the statement.
This was a nice symbolic proof, but we can see the same fact in a picturesque form as well:
A composition of two injections is an injection.
If we maintain that any two arrows in the diagram can’t have the same head, then following two paths starting at different points in $X$ will never land us at the same place in $Z$. Since $f$ is injective we have to travel to different places in $Y$, and since $g$ is injective we have to travel to different places in $Z$. Unfortunately, this proof cannot replace the formal one above, but it can help us understand it from a different perspective (which can often make or break a mathematical idea).
Expanding upon this idea we give the reader a challenge: Let $A, B, C$ be finite sets of the same size. Prove or disprove that if $f: A \to B$ and $g: B \to C$ are (arbitrary) functions, and if the composition $gf$ is injective, then both of $f, g$ must be injective.
Another exercise which has a nice contrapositive proof: prove that if $A,B$ are finite sets and $f:A \to B$ is an injection, then $A$ has at most as many elements as $B$. This one is particularly susceptible to a "picture proof" like the one above. Although the formal name for the fact one uses to prove this is the pigeonhole principle, it's really just a simple observation.
Aside from inventing similar exercises with numbers (e.g., if $ab$ is odd then $a$ is odd or $b$ is odd), this is all there is to the contrapositive method. It’s just a direct proof disguised behind a fact about truth tables. Of course, as is usual in more advanced mathematical literature, authors will seldom announce the use of contraposition. The reader just has to be watchful enough to notice it.
Though we haven’t talked about either the real numbers $\mathbb{R}$ nor proofs of existence or impossibility, we can still pose this interesting question: is there an injective function from $\mathbb{R} \to \mathbb{N}$? In truth there is not, but as of yet we don’t have the proof technique required to show it. This will be our next topic in the series: the proof by contradiction.
Until then!
## 10 thoughts on “Methods of Proof — Contrapositive”
1. Alok
Thanks for writing this. I learned several new things.
>> Let’s show that the function f(x) = 7x – 4 is injective. Contrapositively, … This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with \neq the same way it does with equality.
Can’t we just say: Let x != x’, Implies 7x != 7x’. Implies 7x-4 != 7x’-4. Implies f(x) != f(x’). Direct implication. In any case, I see your point about contrapositive being helpful sometimes.
>> Expanding upon this idea we give the reader a challenge: Let A, B, C be finite sets of the same size. Prove or disprove that if f: A \to B and g: B \to C are injective functions, and if the composition gf is injective, then both of f, g must be injective.
There seems to be some typo here. If A \to B and g: B \to C are already injective, there is nothing left to prove. I guess you mean that if f and g are functions and their composition is injective, then they individually must also be injective if the three sets are of the same finite size.
>> prove that if A,B are finite sets and f:A \to B is an injection, then A has fewer elements than B
Why cannot A have the same number of elements as B?
>> As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided: \displaystyle (a,b) = \left \{ a, \left \{ b \right \} \right \}
Can you expand upon this please? 🙂 I had always wondered about formal definitions of ordered sets.
Like
• >> Can’t we just say: Let x != x’, Implies 7x != 7x’. Implies 7x-4 != 7x’-4. Implies f(x) != f(x’).
Yes you can, but in one view this relies on the implicit fact that multiplication by 7 and subtraction by 4 are injective functions. Thanks for catching those typos.
In regards to ordered pairs, the definition is nice because it allows for repetition and maintains order. For repetition, {a, {a}} != {a, a} = {a}. And for order if you want to check whether a or b comes first in (a,b), check if a is an element of (a,b) = {a, {b}}. If it is, then a comes first, and if it’s instead {{a}, b} = (b,a), then {a} must in it and we know that b comes first. To extend this, we can define (a,b,c,…,y,z) = (a, (b, (c, … (y,z)))) just as a programmer might define linked lists.
Like
2. And although it seems to make awkward statements true (like “if 2 is prime then 1 = 0″)
Am I being an idiot here? 2 is prime, so the antecedent is true and the consequent is false, which makes the implication statement false.
Like
3. If 2 is prime, 1=0 is false because 2 IS prime but 1 != 0. You might want to change that example.
Like
• Oh man, I do stuff like this all the time. It cracks me up 🙂
Like
4. glasser
I’m not sure that your ordered pair definition works. Is {{1},{2}} the same as (1, {2}) or (2, {1})?
Like
• I have checked the wikipedia link you provided below and seen the definition proposed by Kuratowski, but it made me more confused because to proof its correctness Kuratowski says that: let Y belong to p, i understand p represents the tuple but what is Y in his proof, is it a set that belongs to the tuple !?
Like
• Yes, Y is a set in his proof. The reason is that there is no such thing as a “tuple” yet (a tuple is defined as a set of sets).
Like
5. Al
Thanks!
Like
|
2020-07-08 01:19:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 109, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936117887496948, "perplexity": 290.8475308147392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00419.warc.gz"}
|
https://math3402samwatson.wordpress.com/2013/12/12/potential-flow/
|
This week we looked more into potential flow as well as doing some examples.
We looked into a Doublet, which was 2-dimensional and steady. The streamline that was given is:
To find the isopots of this, we have to find:
and:
With some manipulation we can get this into the form of a circle, x^2 + y^2 = r^2.
As we can see from this, each curve is a circle whose centre and radius are determined by the constant chosen.
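For reference, with a standard textbook convention for a doublet of strength k aligned with the x-axis (the constant and signs in the lecture notes may differ), the stream function is psi = -k y/(x^2 + y^2) and the potential is phi = k x/(x^2 + y^2). Setting psi = c and completing the square gives x^2 + (y + k/(2c))^2 = (k/(2c))^2, a circle of radius k/(2|c|) centred on the y-axis, while the equipotentials phi = const give the analogous circles centred on the x-axis, so the two families intersect at right angles.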
If we plot the streamlines and isopots on the same graph, we get:
We can see that these cross at right angles; this seems to be the case for every stream function and its corresponding potential.
Now we looked at Bernoulli's equation in potential flow and how to derive it.
This is here:
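For unsteady potential flow with velocity \mathbf{u} = \nabla\phi, the standard result (up to the notation used in the lectures) is

$$\frac{\partial \phi}{\partial t} + \frac{1}{2}\lvert\nabla\phi\rvert^2 + \frac{p}{\rho} + gz = F(t),$$

which reduces to \frac{1}{2}\lvert\mathbf{u}\rvert^2 + p/\rho + gz = \text{constant} for steady flow.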
We also looked at another example with water waves.
I enjoyed this week more since we were deriving new equations. I think I will need to go over this week's material thoroughly so that I fully understand the derivations.
|
2019-12-13 08:47:48
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8056238293647766, "perplexity": 546.3738220859902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00488.warc.gz"}
|
https://blog.quant-quest.com/community/time-series/common-distributions-and-random-variables/
|
# Common Distributions and Random Variables
(@david)
09/01/2020 1:19 am
A random variable, X, is a variable quantity (i.e., not necessarily fixed) whose possible values depend on a set of random events. Like a traditional mathematical variable, its value is unknown a priori (before the outcome of the events is known). A random variable's possible values might represent the possible outcomes of a yet-to-occur event. This event can take on a range of values, each with an associated probability, giving the random variable a probability distribution.
For example, the value of a roll of a die is a random variable. This variable, X, can take values 1 - 6, each with a probability of ⅙, but its exact value is unknown until the die roll is actually performed.
A probability distribution is a mathematical function that assigns a probability to every possible value of a random variable. For example, the random variable X that represents the value of a die roll and can take values 1 to 6, each with a probability of ⅙, has the distribution P(X = i) = 1/6, where i = 1, 2, 3, 4, 5, 6.
Random variables can be separated into two different classes:
• Discrete random variables
• Continuous random variables
## Discrete Random Variables
Discrete random variables have finitely countable outcomes. For example, the value of a coin toss can only be H or T, each with a probability of 1/2. Similarly, the value of a die roll can only be between 1 and 6.
For discrete random variables where X can take a finite set of values, the probability distribution function gives the probability p(x) that X is exactly equal to some value. p(x)=P(X=x), where x belongs to the finite set of values that are possible
## Uniform Distribution
Let's look at the distribution of a die roll below.
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy
from auquanToolbox import dataloader as dl
class DiscreteRandomVariable:
    def __init__(self, a=0, b=1):
        self.variableType = ""
        self.low = a
        self.high = b
        return
    def draw(self, numberOfSamples):
        # np.random.randint excludes the upper bound, so add 1 to make `high` inclusive
        # (a die defined with low=1, high=6 should be able to produce a 6)
        samples = np.random.randint(self.low, self.high + 1, numberOfSamples)
        return samples
A die roll can have 6 values, each value can occur with a probability of 1/6. Each time we roll the die, we have an equal chance of getting each face. This is an example of uniform distribution. The chart below shows the distribution for 10 die rolls.
DieRolls = DiscreteRandomVariable(1, 6)
plt.hist(DieRolls.draw(10), bins = [1,2,3,4,5,6,7], align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Die Rolls'])
plt.show()
In the short run, this looks uneven, but if we take a large number of samples it is apparent that each face is occurring the same percentage of times. The chart below shows the distribution for 10,000 die rolls
plt.hist(DieRolls.draw(10000), bins = [1,2,3,4,5,6,7], align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Die Rolls']);
plt.show()
A collection of random variables is independent and identically distributed (i.i.d.) if each random variable has the same probability distribution as the others and all are mutually independent, i.e., the outcome of one doesn't affect the others. For example, random variables representing die rolls are i.i.d.: the value of one die roll does not affect the value of the next die roll.
## Binomial Distribution
A binomial distribution is used to describe successes and failures in a binary experiment. This can be very useful in an investment context as many of our choices tend to be binary like this. A single experiment which can result in success with probability p and failure with probability (1-p) is called a Bernoulli trial.
p(1) = P(X = 1) = p
p(0) = P(X = 0) = 1 − p
A binomial distribution is a set of n Bernoulli trials. There can be between 0 and n successes in n trials, with each trial having the same probability of success, p, and all of the trials being independent of each other. A binomial random variable is denoted as X ~ B(n,p).
The probability function of a binomial random variable p(x) is the probability that there are exactly x successes in n trials. This is defined by choosing which x of the n trials result in success and multiplying by the probability that these x trials result in success and the remaining n−x trials result in failure.
If X is a binomial random variable distributed as B(n,p), the resulting probability function, together with the mean and variance of X, is:
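$$p(x) = P(X = x) = \binom{n}{x}\,p^{x}(1-p)^{n-x}, \qquad E[X] = np, \qquad \mathrm{Var}(X) = np(1-p),$$
where \binom{n}{x} counts the number of ways to choose which x of the n trials are successes.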
class BinomialRandomVariable(DiscreteRandomVariable):
    def __init__(self, numberOfTrials = 10, probabilityOfSuccess = 0.5):
        self.variableType = "Binomial"
        self.numberOfTrials = numberOfTrials
        self.probabilityOfSuccess = probabilityOfSuccess
        return
    def draw(self, numberOfSamples):
        samples = np.random.binomial(self.numberOfTrials, self.probabilityOfSuccess, numberOfSamples)
        return samples
Let's draw the distribution of 10,000 samples of a binomial random variable B(5, 0.5), i.e., 5 trials with a 50% probability of success.
StockProbabilities = BinomialRandomVariable(5, 0.5)
plt.hist(StockProbabilities.draw(10000), bins = [0, 1, 2, 3, 4, 5, 6], align = 'left')
plt.xlabel('Value')
plt.ylabel('Occurences');
plt.show()
We see that the distribution is symmetric, since probability of success = probability of failure. If we skew the probabilities such that the probability of success is 0.25, we get an asymmetric distribution.
StockProbabilities = BinomialRandomVariable(5, 0.25)
plt.hist(StockProbabilities.draw(10000), bins = [0, 1, 2, 3, 4, 5, 6], align = 'left')
plt.xlabel('Value')
plt.ylabel('Occurences');
plt.show()
We can extend this idea of an experiment following a binomial random variable into a framework that we call the Binomial Model of Stock Price Movement. This is used as one of the foundations for option pricing. In the Binomial Model, it is assumed that for any given time period a stock price can move up or down by a value determined by the up or down probabilities. This turns the stock price into the function of a binomial random variable, the magnitude of upward or downward movement, and the initial stock price. We can vary these parameters in order to approximate different stock price distributions.
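As a rough illustration of this idea (the move size, up-probability, and number of periods below are arbitrary assumed parameters, not anything from the original post), one can simulate a single binomial price path with numpy:

```python
import numpy as np

def binomial_price_path(initial_price=100.0, move_size=1.0, p_up=0.5, periods=250):
    # each period the price moves up by move_size with probability p_up,
    # otherwise it moves down by the same amount
    ups = np.random.binomial(1, p_up, periods)
    moves = np.where(ups == 1, move_size, -move_size)
    return initial_price + np.cumsum(moves)

path = binomial_price_path()
print(path[:10])
```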
## Continuous Random Variables
For continuous random variables (where X can take an infinite number of values over a continuous range), the probability that X is exactly equal to any single value is zero. In this case, the probability distribution function gives the probability over intervals, which can include infinitely many outcomes. Here we define a probability density function (PDF), f(x), such that we can say:
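$$P(a \le X \le b) = \int_{a}^{b} f(x)\,dx, \qquad F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt,$$
where F is the cumulative distribution function described in the next paragraph.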
For example, if you buy a piece of rope and the scale reads 1 meter, this value is possible, but the probability that the length is exactly 1 meter is zero; you can keep increasing the accuracy of your instrument so that the probability of measuring exactly 1 m tends to zero. However, we might be able to say that there is a 99% probability that the length is between 99 cm and 1.01 m. Just like a probability distribution function f(x) gives the probability that a random variable lies in a range, a cumulative distribution function F(x) describes the probability that a random variable is less than or equal to a given value.
class ContinuousRandomVariable:
    def __init__(self, a = 0, b = 1):
        self.variableType = ""
        self.low = a
        self.high = b
        return
    def draw(self, numberOfSamples):
        samples = np.random.uniform(self.low, self.high, numberOfSamples)
        return samples
The most widely used distribution with widespread applications in finance is the normal distribution.
## Normal Distribution
Many important tests and methods in statistics, and by extension, finance, are based on the assumption of normality. A large part of this is due to the results of the Central Limit Theorem (CLT) which states that the sum of many independent random variables tends toward a normal distribution, even if the original variables themselves are not normally distributed. The convenience of the normal distribution finds its way into certain algorithmic trading strategies as well.
class NormalRandomVariable(ContinuousRandomVariable):
    def __init__(self, mean = 0, variance = 1):
        ContinuousRandomVariable.__init__(self)
        self.variableType = "Normal"
        self.mean = mean
        self.standardDeviation = np.sqrt(variance)
        return
    def draw(self, numberOfSamples):
        samples = np.random.normal(self.mean, self.standardDeviation, numberOfSamples)
        return samples
Normal distributions are described by their mean (μ) and variance (σ², where σ is the standard deviation). The probability density of the normal distribution is:
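$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$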
It is defined for −∞ < x < ∞. When we have μ = 0 and σ = 1, we call this the standard normal distribution.
By changing the mean and standard deviation of the normal distribution, we can change the depth and width of the bell curve. With a larger standard deviation, the values of the distribution are less concentrated around the mean.
mu_1 = 0
mu_2 = 0
sigma_1 = 1
sigma_2 = 2
x = np.linspace(-8, 8, 200)
y = (1/(sigma_1 * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu_1)*(x - mu_1) / (2 * sigma_1 * sigma_1))
z = (1/(sigma_2 * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu_2)*(x - mu_2) / (2 * sigma_2 * sigma_2))
plt.plot(x, y, x, z)
plt.xlabel('Value')
plt.ylabel('Probability');
plt.show()
In modern portfolio theory, stock returns are generally assumed to follow a normal distribution. We use the distribution to model returns instead of stock prices because prices cannot go below 0 while the normal distribution can take on all values on the real line, making it better suited to returns.
One major characteristic of a normal random variable is that a linear combination of two or more normal random variables is another normal random variable. This is useful for considering mean returns and variance of a portfolio of multiple stocks.
## 68-95-99.7 rule or 3 sigma rule
This rule of thumb states that given the mean and variance of a normal distribution, we can make the following statements:
• Around 68% of all observations fall within one standard deviation of the mean (μ ± σ)
• Around 95% of all observations fall within two standard deviations of the mean (μ ± 2σ)
• Around 99.7% of all observations fall within three standard deviations of the mean (μ ± 3σ)
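These figures can be checked quickly with the standard normal CDF; scipy.stats.norm is assumed to be available here (it is not used elsewhere in this post):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    prob = norm.cdf(k) - norm.cdf(-k)   # probability of falling within k standard deviations
    print(k, round(prob, 4))            # 0.6827, 0.9545, 0.9973
```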
## Standardising Random Variables to Normal Distribution
The power of the normal distribution lies in the fact that, by the central limit theorem, we can standardize many different random variables so that they are approximately normally distributed.
We standardize a random variable X by subtracting its mean and dividing by its standard deviation, resulting in the standard normal random variable Z.
Let's look at the case where X ~ B(n,p) is a binomial random variable. In the case of a binomial random variable, the mean is μ = np and the variance is σ² = np(1−p).
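So the standardized variable used in the code below is

$$Z = \frac{X - np}{\sqrt{np(1-p)}},$$

which is approximately standard normal when n is large, by the central limit theorem.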
n = 50
p = 0.25
X = BinomialRandomVariable(n, p)
X_samples = X.draw(10000)
Z_samples = (X_samples - n * p) / np.sqrt(n * p * (1 - p))
plt.hist(X_samples, bins = range(0, n + 2), align = 'left')
plt.xlabel('Value')
plt.ylabel('Probability');
plt.show()
plt.hist(Z_samples, bins=20)
plt.xlabel('Value')
plt.ylabel('Probability');
plt.show()
The idea that we can standardize random variables is very important. By changing a random variable to a distribution that we are more familiar with, the standard normal distribution, we can easily answer any probability questions that we have about the original variable. This is dependent, however, on having a large enough sample size.
## Stock Returns as Normal Distribution
Let's assume that stock returns are normally distributed. Say that Y is the price of a stock. We will simulate its returns and plot it.
Y_initial = 100
X = NormalRandomVariable(0, 1)
Y_returns = X.draw(1000) # generate 1000 daily returns
Y = pd.Series(np.cumsum(Y_returns), name = 'Y') + Y_initial
Y.plot()
plt.xlabel('Time')
plt.ylabel('Value')
plt.show()
Say that we have some other stock, Z, and that we have a portfolio of Y and Z, called W.
Z_initial = 50
Z_returns = X.draw(1000)
Z = pd.Series(np.cumsum(Z_returns), name = 'Z') + Z_initial
Z.plot()
plt.xlabel('Time')
plt.ylabel('Value');
plt.show()
Y_quantity = 20
Z_quantity = 50
Y_weight = Y_quantity/(Y_quantity + Z_quantity)
Z_weight = 1 - Y_weight
W_initial = Y_weight * Y_initial + Z_weight * Z_initial
W_returns = Y_weight * Y_returns + Z_weight * Z_returns
W = pd.Series(np.cumsum(W_returns), name = 'Portfolio') + W_initial
W.plot()
plt.xlabel('Time')
plt.ylabel('Value');
plt.show()
We construct W by taking a weighted average of Y and Z based on their quantity.
pd.concat([Y, Z, W], axis = 1).plot()
plt.xlabel('Time')
plt.ylabel('Value');
plt.show()
Note how the returns of our portfolio, W, are also normally distributed:
plt.hist(W_returns);
plt.xlabel('Return')
plt.ylabel('Occurrences');
plt.show()
## Fitting a Distribution
Let's attempt to fit a probability distribution to the returns of a stock. We will take the returns of AAPL and try to fit a normal distribution to them. The first thing to check is whether the returns actually exhibit properties of a normal distribution. For this purpose, we will use the Jarque-Bera test, which indicates non-normality if the p-value is below a cutoff.
start = '2014-01-01'
end = '2016-12-31'
data = dl.load_data_nologs('nasdaq', ['AAPL'], start, end)
Reading AAPL
# Take the daily returns
returns = prices/prices.shift(-1) -1
#Set a cutoff
cutoff = 0.01
# Get the p-value of the normality test
k2, p_value = scipy.stats.mstats.normaltest(returns[:-1].values)
print("The JB test p-value is: ", p_value)
print("We reject the hypothesis that the data are normally distributed ", p_value < cutoff)
print("The skewness of the returns is: ", scipy.stats.skew(returns[:-1].values))
print("The kurtosis of the returns is: ", scipy.stats.kurtosis(returns[:-1].values))
plt.hist(returns[:-1], bins = 20)
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.show()
('The JB test p-value is: ', 8.6122250241313796e-22)
('We reject the hypothesis that the data are normally distributed ', True)
('The skewness of the returns is: ', 0.38138558143920764)
('The kurtosis of the returns is: ', 4.231909703399142)
The low p-value of the test leads us to reject the null hypothesis that the returns are normally distributed. This is due to the high kurtosis (normal distributions have a kurtosis of 3).
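One detail worth noting when reproducing these numbers: scipy.stats.kurtosis reports excess kurtosis (Fisher's definition) by default, so a normal sample gives a value near 0 rather than 3; passing fisher=False returns the raw (Pearson) kurtosis that is directly comparable to 3. A quick check:

```python
import numpy as np
from scipy.stats import kurtosis

normal_sample = np.random.normal(0, 1, 100000)
print(kurtosis(normal_sample))                # excess kurtosis, close to 0
print(kurtosis(normal_sample, fisher=False))  # raw kurtosis, close to 3
```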
We will proceed from here assuming that the returns are normally distributed so that we can go through the steps of fitting a distribution. We calculate the sample mean and standard deviation of the series and see how a theoretical normal curve fits against the actual values.
# Take the sample mean and standard deviation of the returns
sample_mean = np.mean(returns[:-1])
sample_std_dev = np.std(returns[:-1])
print("Mean: ", sample_mean)
('Mean: ', -0.0004662534806121209)
x = np.linspace(-(sample_mean + 4 * sample_std_dev), (sample_mean + 4 * sample_std_dev), len(returns))
sample_distribution = ((1/(sample_std_dev * np.sqrt(2 * np.pi))) *  # the normal pdf normalization is 1/(σ·√(2π))
np.exp(-(x - sample_mean)*(x - sample_mean) / (2 * sample_std_dev * sample_std_dev)))
plt.hist(returns[:-1], bins = 20, normed=True)
plt.plot(x, sample_distribution)
plt.xlabel('Value')
plt.ylabel('Occurrences');
plt.show()
Our theoretical curve for the returns has a substantially lower peak than the actual values, which makes sense because the returns are not actually normally distributed. This is again related to kurtosis: the returns have a kurtosis value of around 5.29, while the kurtosis of the normal distribution is 3, and a higher kurtosis produces a higher peak.
A major reason why it is so difficult to model prices and returns is due to the underlying probability distributions. A lot of theories and frameworks in finance require that data be somehow related to the normal distribution. This is a major reason why the normal distribution seems to be so prevalent. However, it is exceedingly difficult to find real-world data that fits nicely into the assumptions of normality. When actually implementing a strategy, you should not assume that data follows a distribution that it demonstrably does not unless there is a very good reason for it.
Generally, when trying to fit a probability distribution to real-world values, we should have a particular distribution (or distributions) in mind. There are a variety of tests for different distributions that we can apply to see what might be the best fit. In addition, as more information becomes available, it will become necessary to update the sample mean and standard deviation or maybe even to find a different distribution to more accurately reflect the new information. The shape of the distribution will change accordingly.
|
2021-11-27 02:44:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7643745541572571, "perplexity": 850.4064943258115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00074.warc.gz"}
|
https://people.maths.bris.ac.uk/~matyd/GroupNames/320/D4s2D5sC4.html
|
## G = D4⋊2D5⋊C4, order 320 = 2^6·5
### 2nd semidirect product of D4⋊2D5 and C4 acting via C4/C2=C2
Series: Derived Chief Lower central Upper central
Derived series C1 — C20 — D4⋊2D5⋊C4
Chief series C1 — C5 — C10 — C20 — C2×C20 — C2×C4×D5 — C2×D4⋊2D5 — D4⋊2D5⋊C4
Lower central C5 — C10 — C20 — D4⋊2D5⋊C4
Upper central C1 — C22 — C2×C4 — D4⋊C4
Generators and relations for D4⋊2D5⋊C4
G = < a,b,c,d,e | a^4=b^2=c^5=d^2=e^4=1, bab=eae^-1=a^-1, ac=ca, ad=da, bc=cb, dbd=a^2b, ebe^-1=ab, dcd=c^-1, ce=ec, ede^-1=a^2d >
Subgroups: 590 in 158 conjugacy classes, 55 normal (37 characteristic)
C1, C2, C2, C4, C4, C22, C22, C5, C8, C2×C4, C2×C4, D4, D4, Q8, C23, D5, C10, C10, C42, C22⋊C4, C4⋊C4, C4⋊C4, C2×C8, C2×C8, C22×C4, C2×D4, C2×D4, C2×Q8, C4○D4, Dic5, Dic5, C20, C20, D10, D10, C2×C10, C2×C10, D4⋊C4, D4⋊C4, Q8⋊C4, C42⋊C2, C22×C8, C2×C4○D4, C52C8, C40, Dic10, Dic10, C4×D5, C2×Dic5, C2×Dic5, C5⋊D4, C2×C20, C2×C20, C5×D4, C5×D4, C22×D5, C22×C10, C23.24D4, C8×D5, C2×C52C8, C4×Dic5, C4⋊Dic5, D10⋊C4, C5×C4⋊C4, C2×C40, C2×Dic10, C2×C4×D5, D42D5, D42D5, C22×Dic5, C2×C5⋊D4, D4×C10, C10.Q16, C20.44D4, D4⋊Dic5, C5×D4⋊C4, C4⋊C47D5, D5×C2×C8, C2×D42D5, D42D5⋊C4
Quotients: C1, C2, C4, C22, C2×C4, D4, C23, D5, C22⋊C4, C22×C4, C2×D4, D10, C2×C22⋊C4, C4○D8, C4×D5, C22×D5, C23.24D4, C2×C4×D5, D4×D5, D5×C22⋊C4, D83D5, SD163D5, D42D5⋊C4
Smallest permutation representation of D4⋊2D5⋊C4
On 160 points
Generators in S160
(1 16 6 11)(2 17 7 12)(3 18 8 13)(4 19 9 14)(5 20 10 15)(21 36 26 31)(22 37 27 32)(23 38 28 33)(24 39 29 34)(25 40 30 35)(41 56 46 51)(42 57 47 52)(43 58 48 53)(44 59 49 54)(45 60 50 55)(61 76 66 71)(62 77 67 72)(63 78 68 73)(64 79 69 74)(65 80 70 75)(81 91 86 96)(82 92 87 97)(83 93 88 98)(84 94 89 99)(85 95 90 100)(101 111 106 116)(102 112 107 117)(103 113 108 118)(104 114 109 119)(105 115 110 120)(121 131 126 136)(122 132 127 137)(123 133 128 138)(124 134 129 139)(125 135 130 140)(141 151 146 156)(142 152 147 157)(143 153 148 158)(144 154 149 159)(145 155 150 160)
(1 81)(2 82)(3 83)(4 84)(5 85)(6 86)(7 87)(8 88)(9 89)(10 90)(11 91)(12 92)(13 93)(14 94)(15 95)(16 96)(17 97)(18 98)(19 99)(20 100)(21 101)(22 102)(23 103)(24 104)(25 105)(26 106)(27 107)(28 108)(29 109)(30 110)(31 111)(32 112)(33 113)(34 114)(35 115)(36 116)(37 117)(38 118)(39 119)(40 120)(41 121)(42 122)(43 123)(44 124)(45 125)(46 126)(47 127)(48 128)(49 129)(50 130)(51 131)(52 132)(53 133)(54 134)(55 135)(56 136)(57 137)(58 138)(59 139)(60 140)(61 141)(62 142)(63 143)(64 144)(65 145)(66 146)(67 147)(68 148)(69 149)(70 150)(71 151)(72 152)(73 153)(74 154)(75 155)(76 156)(77 157)(78 158)(79 159)(80 160)
(1 2 3 4 5)(6 7 8 9 10)(11 12 13 14 15)(16 17 18 19 20)(21 22 23 24 25)(26 27 28 29 30)(31 32 33 34 35)(36 37 38 39 40)(41 42 43 44 45)(46 47 48 49 50)(51 52 53 54 55)(56 57 58 59 60)(61 62 63 64 65)(66 67 68 69 70)(71 72 73 74 75)(76 77 78 79 80)(81 82 83 84 85)(86 87 88 89 90)(91 92 93 94 95)(96 97 98 99 100)(101 102 103 104 105)(106 107 108 109 110)(111 112 113 114 115)(116 117 118 119 120)(121 122 123 124 125)(126 127 128 129 130)(131 132 133 134 135)(136 137 138 139 140)(141 142 143 144 145)(146 147 148 149 150)(151 152 153 154 155)(156 157 158 159 160)
(1 35)(2 34)(3 33)(4 32)(5 31)(6 40)(7 39)(8 38)(9 37)(10 36)(11 30)(12 29)(13 28)(14 27)(15 26)(16 25)(17 24)(18 23)(19 22)(20 21)(41 75)(42 74)(43 73)(44 72)(45 71)(46 80)(47 79)(48 78)(49 77)(50 76)(51 70)(52 69)(53 68)(54 67)(55 66)(56 65)(57 64)(58 63)(59 62)(60 61)(81 120)(82 119)(83 118)(84 117)(85 116)(86 115)(87 114)(88 113)(89 112)(90 111)(91 105)(92 104)(93 103)(94 102)(95 101)(96 110)(97 109)(98 108)(99 107)(100 106)(121 160)(122 159)(123 158)(124 157)(125 156)(126 155)(127 154)(128 153)(129 152)(130 151)(131 145)(132 144)(133 143)(134 142)(135 141)(136 150)(137 149)(138 148)(139 147)(140 146)
(1 151 31 121)(2 152 32 122)(3 153 33 123)(4 154 34 124)(5 155 35 125)(6 156 36 126)(7 157 37 127)(8 158 38 128)(9 159 39 129)(10 160 40 130)(11 146 26 131)(12 147 27 132)(13 148 28 133)(14 149 29 134)(15 150 30 135)(16 141 21 136)(17 142 22 137)(18 143 23 138)(19 144 24 139)(20 145 25 140)(41 96 71 101)(42 97 72 102)(43 98 73 103)(44 99 74 104)(45 100 75 105)(46 91 76 106)(47 92 77 107)(48 93 78 108)(49 94 79 109)(50 95 80 110)(51 81 66 111)(52 82 67 112)(53 83 68 113)(54 84 69 114)(55 85 70 115)(56 86 61 116)(57 87 62 117)(58 88 63 118)(59 89 64 119)(60 90 65 120)
G:=sub<Sym(160)| (1,16,6,11)(2,17,7,12)(3,18,8,13)(4,19,9,14)(5,20,10,15)(21,36,26,31)(22,37,27,32)(23,38,28,33)(24,39,29,34)(25,40,30,35)(41,56,46,51)(42,57,47,52)(43,58,48,53)(44,59,49,54)(45,60,50,55)(61,76,66,71)(62,77,67,72)(63,78,68,73)(64,79,69,74)(65,80,70,75)(81,91,86,96)(82,92,87,97)(83,93,88,98)(84,94,89,99)(85,95,90,100)(101,111,106,116)(102,112,107,117)(103,113,108,118)(104,114,109,119)(105,115,110,120)(121,131,126,136)(122,132,127,137)(123,133,128,138)(124,134,129,139)(125,135,130,140)(141,151,146,156)(142,152,147,157)(143,153,148,158)(144,154,149,159)(145,155,150,160), (1,81)(2,82)(3,83)(4,84)(5,85)(6,86)(7,87)(8,88)(9,89)(10,90)(11,91)(12,92)(13,93)(14,94)(15,95)(16,96)(17,97)(18,98)(19,99)(20,100)(21,101)(22,102)(23,103)(24,104)(25,105)(26,106)(27,107)(28,108)(29,109)(30,110)(31,111)(32,112)(33,113)(34,114)(35,115)(36,116)(37,117)(38,118)(39,119)(40,120)(41,121)(42,122)(43,123)(44,124)(45,125)(46,126)(47,127)(48,128)(49,129)(50,130)(51,131)(52,132)(53,133)(54,134)(55,135)(56,136)(57,137)(58,138)(59,139)(60,140)(61,141)(62,142)(63,143)(64,144)(65,145)(66,146)(67,147)(68,148)(69,149)(70,150)(71,151)(72,152)(73,153)(74,154)(75,155)(76,156)(77,157)(78,158)(79,159)(80,160), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80)(81,82,83,84,85)(86,87,88,89,90)(91,92,93,94,95)(96,97,98,99,100)(101,102,103,104,105)(106,107,108,109,110)(111,112,113,114,115)(116,117,118,119,120)(121,122,123,124,125)(126,127,128,129,130)(131,132,133,134,135)(136,137,138,139,140)(141,142,143,144,145)(146,147,148,149,150)(151,152,153,154,155)(156,157,158,159,160), (1,35)(2,34)(3,33)(4,32)(5,31)(6,40)(7,39)(8,38)(9,37)(10,36)(11,30)(12,29)(13,28)(14,27)(15,26)(16,25)(17,24)(18,23)(19,22)(20,21)(41,75)(42,74)(43,73)(44,72)(45,71)(46,80)(47,79)(48,78)(49,77)(50,76)(51,70)(52,69)(53,68)(54,67)(55,66)(56,65)(57,64)(58,63)(59,62)(60,61)(81,120)(82,119)(83,118)(84,117)(85,116)(86,115)(87,114)(88,113)(89,112)(90,111)(91,105)(92,104)(93,103)(94,102)(95,101)(96,110)(97,109)(98,108)(99,107)(100,106)(121,160)(122,159)(123,158)(124,157)(125,156)(126,155)(127,154)(128,153)(129,152)(130,151)(131,145)(132,144)(133,143)(134,142)(135,141)(136,150)(137,149)(138,148)(139,147)(140,146), (1,151,31,121)(2,152,32,122)(3,153,33,123)(4,154,34,124)(5,155,35,125)(6,156,36,126)(7,157,37,127)(8,158,38,128)(9,159,39,129)(10,160,40,130)(11,146,26,131)(12,147,27,132)(13,148,28,133)(14,149,29,134)(15,150,30,135)(16,141,21,136)(17,142,22,137)(18,143,23,138)(19,144,24,139)(20,145,25,140)(41,96,71,101)(42,97,72,102)(43,98,73,103)(44,99,74,104)(45,100,75,105)(46,91,76,106)(47,92,77,107)(48,93,78,108)(49,94,79,109)(50,95,80,110)(51,81,66,111)(52,82,67,112)(53,83,68,113)(54,84,69,114)(55,85,70,115)(56,86,61,116)(57,87,62,117)(58,88,63,118)(59,89,64,119)(60,90,65,120)>;
G:=Group( (1,16,6,11)(2,17,7,12)(3,18,8,13)(4,19,9,14)(5,20,10,15)(21,36,26,31)(22,37,27,32)(23,38,28,33)(24,39,29,34)(25,40,30,35)(41,56,46,51)(42,57,47,52)(43,58,48,53)(44,59,49,54)(45,60,50,55)(61,76,66,71)(62,77,67,72)(63,78,68,73)(64,79,69,74)(65,80,70,75)(81,91,86,96)(82,92,87,97)(83,93,88,98)(84,94,89,99)(85,95,90,100)(101,111,106,116)(102,112,107,117)(103,113,108,118)(104,114,109,119)(105,115,110,120)(121,131,126,136)(122,132,127,137)(123,133,128,138)(124,134,129,139)(125,135,130,140)(141,151,146,156)(142,152,147,157)(143,153,148,158)(144,154,149,159)(145,155,150,160), (1,81)(2,82)(3,83)(4,84)(5,85)(6,86)(7,87)(8,88)(9,89)(10,90)(11,91)(12,92)(13,93)(14,94)(15,95)(16,96)(17,97)(18,98)(19,99)(20,100)(21,101)(22,102)(23,103)(24,104)(25,105)(26,106)(27,107)(28,108)(29,109)(30,110)(31,111)(32,112)(33,113)(34,114)(35,115)(36,116)(37,117)(38,118)(39,119)(40,120)(41,121)(42,122)(43,123)(44,124)(45,125)(46,126)(47,127)(48,128)(49,129)(50,130)(51,131)(52,132)(53,133)(54,134)(55,135)(56,136)(57,137)(58,138)(59,139)(60,140)(61,141)(62,142)(63,143)(64,144)(65,145)(66,146)(67,147)(68,148)(69,149)(70,150)(71,151)(72,152)(73,153)(74,154)(75,155)(76,156)(77,157)(78,158)(79,159)(80,160), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80)(81,82,83,84,85)(86,87,88,89,90)(91,92,93,94,95)(96,97,98,99,100)(101,102,103,104,105)(106,107,108,109,110)(111,112,113,114,115)(116,117,118,119,120)(121,122,123,124,125)(126,127,128,129,130)(131,132,133,134,135)(136,137,138,139,140)(141,142,143,144,145)(146,147,148,149,150)(151,152,153,154,155)(156,157,158,159,160), (1,35)(2,34)(3,33)(4,32)(5,31)(6,40)(7,39)(8,38)(9,37)(10,36)(11,30)(12,29)(13,28)(14,27)(15,26)(16,25)(17,24)(18,23)(19,22)(20,21)(41,75)(42,74)(43,73)(44,72)(45,71)(46,80)(47,79)(48,78)(49,77)(50,76)(51,70)(52,69)(53,68)(54,67)(55,66)(56,65)(57,64)(58,63)(59,62)(60,61)(81,120)(82,119)(83,118)(84,117)(85,116)(86,115)(87,114)(88,113)(89,112)(90,111)(91,105)(92,104)(93,103)(94,102)(95,101)(96,110)(97,109)(98,108)(99,107)(100,106)(121,160)(122,159)(123,158)(124,157)(125,156)(126,155)(127,154)(128,153)(129,152)(130,151)(131,145)(132,144)(133,143)(134,142)(135,141)(136,150)(137,149)(138,148)(139,147)(140,146), (1,151,31,121)(2,152,32,122)(3,153,33,123)(4,154,34,124)(5,155,35,125)(6,156,36,126)(7,157,37,127)(8,158,38,128)(9,159,39,129)(10,160,40,130)(11,146,26,131)(12,147,27,132)(13,148,28,133)(14,149,29,134)(15,150,30,135)(16,141,21,136)(17,142,22,137)(18,143,23,138)(19,144,24,139)(20,145,25,140)(41,96,71,101)(42,97,72,102)(43,98,73,103)(44,99,74,104)(45,100,75,105)(46,91,76,106)(47,92,77,107)(48,93,78,108)(49,94,79,109)(50,95,80,110)(51,81,66,111)(52,82,67,112)(53,83,68,113)(54,84,69,114)(55,85,70,115)(56,86,61,116)(57,87,62,117)(58,88,63,118)(59,89,64,119)(60,90,65,120) );
G=PermutationGroup([[(1,16,6,11),(2,17,7,12),(3,18,8,13),(4,19,9,14),(5,20,10,15),(21,36,26,31),(22,37,27,32),(23,38,28,33),(24,39,29,34),(25,40,30,35),(41,56,46,51),(42,57,47,52),(43,58,48,53),(44,59,49,54),(45,60,50,55),(61,76,66,71),(62,77,67,72),(63,78,68,73),(64,79,69,74),(65,80,70,75),(81,91,86,96),(82,92,87,97),(83,93,88,98),(84,94,89,99),(85,95,90,100),(101,111,106,116),(102,112,107,117),(103,113,108,118),(104,114,109,119),(105,115,110,120),(121,131,126,136),(122,132,127,137),(123,133,128,138),(124,134,129,139),(125,135,130,140),(141,151,146,156),(142,152,147,157),(143,153,148,158),(144,154,149,159),(145,155,150,160)], [(1,81),(2,82),(3,83),(4,84),(5,85),(6,86),(7,87),(8,88),(9,89),(10,90),(11,91),(12,92),(13,93),(14,94),(15,95),(16,96),(17,97),(18,98),(19,99),(20,100),(21,101),(22,102),(23,103),(24,104),(25,105),(26,106),(27,107),(28,108),(29,109),(30,110),(31,111),(32,112),(33,113),(34,114),(35,115),(36,116),(37,117),(38,118),(39,119),(40,120),(41,121),(42,122),(43,123),(44,124),(45,125),(46,126),(47,127),(48,128),(49,129),(50,130),(51,131),(52,132),(53,133),(54,134),(55,135),(56,136),(57,137),(58,138),(59,139),(60,140),(61,141),(62,142),(63,143),(64,144),(65,145),(66,146),(67,147),(68,148),(69,149),(70,150),(71,151),(72,152),(73,153),(74,154),(75,155),(76,156),(77,157),(78,158),(79,159),(80,160)], [(1,2,3,4,5),(6,7,8,9,10),(11,12,13,14,15),(16,17,18,19,20),(21,22,23,24,25),(26,27,28,29,30),(31,32,33,34,35),(36,37,38,39,40),(41,42,43,44,45),(46,47,48,49,50),(51,52,53,54,55),(56,57,58,59,60),(61,62,63,64,65),(66,67,68,69,70),(71,72,73,74,75),(76,77,78,79,80),(81,82,83,84,85),(86,87,88,89,90),(91,92,93,94,95),(96,97,98,99,100),(101,102,103,104,105),(106,107,108,109,110),(111,112,113,114,115),(116,117,118,119,120),(121,122,123,124,125),(126,127,128,129,130),(131,132,133,134,135),(136,137,138,139,140),(141,142,143,144,145),(146,147,148,149,150),(151,152,153,154,155),(156,157,158,159,160)], [(1,35),(2,34),(3,33),(4,32),(5,31),(6,40),(7,39),(8,38),(9,37),(10,36),(11,30),(12,29),(13,28),(14,27),(15,26),(16,25),(17,24),(18,23),(19,22),(20,21),(41,75),(42,74),(43,73),(44,72),(45,71),(46,80),(47,79),(48,78),(49,77),(50,76),(51,70),(52,69),(53,68),(54,67),(55,66),(56,65),(57,64),(58,63),(59,62),(60,61),(81,120),(82,119),(83,118),(84,117),(85,116),(86,115),(87,114),(88,113),(89,112),(90,111),(91,105),(92,104),(93,103),(94,102),(95,101),(96,110),(97,109),(98,108),(99,107),(100,106),(121,160),(122,159),(123,158),(124,157),(125,156),(126,155),(127,154),(128,153),(129,152),(130,151),(131,145),(132,144),(133,143),(134,142),(135,141),(136,150),(137,149),(138,148),(139,147),(140,146)], [(1,151,31,121),(2,152,32,122),(3,153,33,123),(4,154,34,124),(5,155,35,125),(6,156,36,126),(7,157,37,127),(8,158,38,128),(9,159,39,129),(10,160,40,130),(11,146,26,131),(12,147,27,132),(13,148,28,133),(14,149,29,134),(15,150,30,135),(16,141,21,136),(17,142,22,137),(18,143,23,138),(19,144,24,139),(20,145,25,140),(41,96,71,101),(42,97,72,102),(43,98,73,103),(44,99,74,104),(45,100,75,105),(46,91,76,106),(47,92,77,107),(48,93,78,108),(49,94,79,109),(50,95,80,110),(51,81,66,111),(52,82,67,112),(53,83,68,113),(54,84,69,114),(55,85,70,115),(56,86,61,116),(57,87,62,117),(58,88,63,118),(59,89,64,119),(60,90,65,120)]])
56 conjugacy classes
class: 1 2A 2B 2C 2D 2E 2F 2G 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L 5A 5B 8A 8B 8C 8D 8E 8F 8G 8H 10A ··· 10F 10G 10H 10I 10J 20A 20B 20C 20D 20E 20F 20G 20H 40A ··· 40H
order: 1 2 2 2 2 2 2 2 4 4 4 4 4 4 4 4 4 4 4 4 5 5 8 8 8 8 8 8 8 8 10 ··· 10 10 10 10 10 20 20 20 20 20 20 20 20 40 ··· 40
size: 1 1 1 1 4 4 10 10 2 2 4 4 5 5 5 5 20 20 20 20 2 2 2 2 2 2 10 10 10 10 2 ··· 2 8 8 8 8 4 4 4 4 8 8 8 8 4 ··· 4
56 irreducible representations
dim: 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 4 4 4 4
type: + + + + + + + + + + + + + + + + + -
image: C1 C2 C2 C2 C2 C2 C2 C2 C4 D4 D4 D4 D5 D10 D10 D10 C4○D8 C4×D5 D4×D5 D4×D5 D8⋊3D5 SD16⋊3D5
kernel: D4⋊2D5⋊C4 C10.Q16 C20.44D4 D4⋊Dic5 C5×D4⋊C4 C4⋊C4⋊7D5 D5×C2×C8 C2×D4⋊2D5 D4⋊2D5 C4×D5 C2×Dic5 C22×D5 D4⋊C4 C4⋊C4 C2×C8 C2×D4 C10 D4 C4 C22 C2 C2
# reps: 1 1 1 1 1 1 1 1 8 2 1 1 2 2 2 2 8 8 2 2 4 4
Matrix representation of D4⋊2D5⋊C4 in GL4(𝔽41) generated by
32  2  0  0
 0  9  0  0
 0  0  1  0
 0  0  0  1
,
 1  0  0  0
 9 40  0  0
 0  0 40  0
 0  0  0 40
,
 1  0  0  0
 0  1  0  0
 0  0  0 40
 0  0  1  6
,
40 23  0  0
 0  1  0  0
 0  0  6  1
 0  0  6 35
,
38 11  0  0
14  3  0  0
 0  0  9  0
 0  0  0  9
G:=sub<GL(4,GF(41))| [32,0,0,0,2,9,0,0,0,0,1,0,0,0,0,1],[1,9,0,0,0,40,0,0,0,0,40,0,0,0,0,40],[1,0,0,0,0,1,0,0,0,0,0,1,0,0,40,6],[40,0,0,0,23,1,0,0,0,0,6,6,0,0,1,35],[38,14,0,0,11,3,0,0,0,0,9,0,0,0,0,9] >;
D4⋊2D5⋊C4 in GAP, Magma, Sage, TeX
D_4\rtimes_2D_5\rtimes C_4
% in TeX
G:=Group("D4:2D5:C4");
// GroupNames label
G:=SmallGroup(320,399);
// by ID
G=gap.SmallGroup(320,399);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-5,477,219,58,570,136,851,102,12550]);
// Polycyclic
G:=Group<a,b,c,d,e|a^4=b^2=c^5=d^2=e^4=1,b*a*b=e*a*e^-1=a^-1,a*c=c*a,a*d=d*a,b*c=c*b,d*b*d=a^2*b,e*b*e^-1=a*b,d*c*d=c^-1,c*e=e*c,e*d*e^-1=a^2*d>;
// generators/relations
|
2021-05-08 04:09:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999502897262573, "perplexity": 14324.346960715777}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00294.warc.gz"}
|
https://fusion2019.org/gqiz83fj/4cb6e3-how-to-find-the-measure-of-an-angle-with-equations
|
Detailed Answer Key. Construct an angle that measures … For people who are having problems with certain parts of geometry, this video will offer you advice on how to find a missing angle on the outside of a triangle when you are provided with the measurements of the other two angles in the triangle. Answer: You can find the angle B from the arccos of 0.75 and then use the fact that the three angles add up to 180 to find the remaining angle. The solution given in the book is correct. For this type of triangle, we must use The Law of Cosines first to calculate the third side of the triangle; then we can use The Law of Sines to find one of the other two angles, and finally use Angles of a Triangle to find the last angle. m∠ ZXY = 4x + 7° m∠ ZXY = 4(12°) + 7° An arc is a segment of a circle around the circumference. An angle is a … The measure of an angle can be expressed using a number of units, but the most often used are radians and degrees. Thus, x = 35 o. -2). A= 4x + 2y B= 9y - 2 C= 10x + 10 The assignment calls for setting it up as a system of equations and solving the problem. Step 4: If necessary, substitute in for the variable and find the angle measure. Now measure the angle that is formed by the extension line you just made and the second side of the original angle you want to measure. The measure of the angle is twenty-nine times greater than its supplement. You are also able to measure an arc in linear units and degrees and use the correct symbol, mAB⌢ (where A and B are the two points on the circle), to show arc length. We have moved all content for this concept to for better organization. 10. Click here to jump to the two degrees, minutes, seconds calculators near the bottom of this page. the same magnitude) are said to be equal or congruent.An angle is defined by its measure and is not dependent upon the lengths of the sides of the angle (e.g. The outcome of the equations (and the calculators based on them) may differ from the data given by a LED or spotlight manufacturer, or from what you measure with a Lux meter, for several reasons. Order an Essay Check Prices. Solving linear equations using cross multiplication method. If we cut across a delicious, fresh pizza, we have two halves, and each half is an arc measuring 180°. An angle is measured in either degrees or radians. LOGIN TO VIEW ANSWER. Some people find setting up word problems with two variables easier than setting them up with just one variable. Radians are the ratios between the length of an arc and its radius. For each side, select the trigonometric function that has the unknown side as either the numerator or the denominator. Step 5: … Find a tutor locally or online. We, of course, know that the sum total of these four angles has to be 360 degrees. Use an algebraic equation to find the measures of the two angles described below. For Free, Inequalities and Relationship in a Triangle, ALL MY GRADE 8 & 9 STUDENTS PASSED THE ALGEBRA CORE REGENTS EXAM. Problem 2 : Find the measure of ∠ZXY. So the formula for this particular pizza slice is: An arc angle's measurement is shown as mAB⌢ where A and B are the two points on the circle creating the arc. 
To find the diagonal of a rectangle formula, you can divide a rectangle into two congruent right triangles, i.e., triangles with one angle of 90°.Each triangle will have sides of length l and w and a hypotenuse of length d.You can use the Pythagorean theorem to estimate the diagonal of a rectangle, which can be expressed with the following formula: You are flying an F-117A fully equipped, which means that your aircraft weighs 52,500 pounds. What is the measure of an angle, if three is subtracted from twice the supplement and the result is 297 degrees? Question 264823: Directions: Use an algebraic equation to find the measure of the angle labeled x. Refer to the triangle above, assuming that a, b, and c are known values. x^\circ x∘. When two angles are known, work out the third using Angles of a Triangle Add to 180°. We can rewrite the previous equation to be. The measure of angle ABC is 36 degrees. Begin by letting x represent the degree measure of the angle’s supplement. (4x-85)^\circ (4x−85)∘. If you add up the angles, you get 90 + 90 + 90 + 90 = 360. There are several ways to measure the size of an angle. There are a number of equations used to find the central angle, or you can use the Central Angle Theorem to find the relationship between the central angle and other angles. 21. TutorsOnSpot.com. If you measured 7, … The measure of an exterior angle (our w) of a triangle equals to the sum of the measures of the two remote interior angles (our x and y) of the triangle. Find the measure of each angle whose degree is represented with variables. Back Trigonometry Realms Mathematics Contents Index Home. Use an algebraic equation to find the measures of the two angles described below. If two angles are complementary, their sum = 90 degrees. Geometry. To be able to calculate an arc measure, you need to understand angle measurements in both degrees and radians. There are multiple different equations for calculating the area of a triangle, dependent on what information is known. Given a right triangle, the length of one side, and the measure of one acute angle, find the remaining sides. A degree is the 360 th part of a full rotation. Find the measures of each angle if the measure of Angle BAC is 10x + 16 and the measure of Angle CAD is 7x + 8. Local and online. Radian measure is another way. 20. Just like regular numbers, angles can be added to obtain a sum, perhaps for the purpose of determining the measure of an unknown angle. An angle is measured in either degrees or radians. Start here or give us a call: (312) 646-6365, © 2005 - 2021 Wyzant, Inc. - All Rights Reserved, a Question an obtuse angle; 127° a right angle; 90° How to measure an angle with a protractor: Place the midpoint of the protractor on the VERTEX of the angle. Exercises. The m means measurement, and the short curved line over the AB⌢ indicates we are referring to the arc. Measure the length of the adjacent side from the vertex to the point where it intersects with the opposite side. So here, we can say x is one of our angles, and y is the complement. The chord's length will always be shorter than the arc's length. The picture below illustrates the relationship between the radius, and the central angle in radians. The center of the clock (the point in which the two hands meet) is called the vertex of the angle. Step 3. Step 1 The two sides we know are O pposite (300) and A djacent (400). Let's try two example problems. Problem 3 : Find the measure of ∠JML. Round the answer to the nearest tenth. 
One important distinction between arc length and arc angle is that, for two circles of different diameters, same-angle sectors from each circle will not have the same arc length. You can also measure the circumference, or distance around, a circle. Learn faster with a math tutor. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. If you take less than the full length around a circle, bounded by two radii, you have an arc. We can use this equation, along with the other two equations given, to form this system of equations: x + y + z = 180. y = 2z. How To: Use a protractor to measure an angle How To: Do a rotation in Geometry How To: Find an angle when there are pairs of parallel lines How To: Understand the properties of a square in Geometry How To: Find the equations of parallel and perpendicular lines Find the angle of elevation of the plane from point A on the ground. The formula is $$S = r \theta$$ where s represents the arc length, $$S = r \theta$$ represents the central angle in radians and r is the length of the radius. Since three interior angles are represented by variables (2 are Y and 1 is X) and the exterior is (3x+15). Try The Law of Sinesbefore the The Law of Cosinesas it is easier to use. Problems that lend themselves to this technique are those such as 2sin 2 5x = 1 and . If you have the diameter, you can also use πd where d = diameter. If the measure of an angle is larger than 90 degrees, but smaller than 180 degrees, the angle is called an obtuse angle. So the sum of angles and degrees. How to Label Angles. Get a free answer to a quick problem. Let's try an example where our arc length is 3 cm, and our radius is 4 cm as seen in our illustration: Start with our formula, and plug in everything we know: Now we can convert 34 radians into degrees by multiplying by 180 dividing by π. Equivalence angle pairs. Patrick W. Therefore, the shape’s angles add up to 360-degrees even if there are no right angles. Find the measure of angle A. Show equations and all work that leads to your answer. Learn to understand the angle measures of quadrilaterals. Now, let's draw a line parallel to side a c that passes through P o i n t b (which is also where you find ∠ b).. That new parallel line created two new angles on either side of ∠ b.We will label these two angles ∠ z and ∠ w from left to right. That form an angle with the vertex in point B. Explanation: . 1-to-1 tailored lessons, flexible scheduling. One way is to use units of degrees. So degrees and radians are related by the following equations: 360 ° = 2 π r a d i a n s See Solving "SAS" Triangles . After working your way through this lesson and video, you will learn to: Get better grades with tutoring from top-rated private tutors. Example 2: Solving Algebraic Equations in Angle Relationships. Name what we are looking for. Solving one step equations. The two points derived from the central angle (the angle of the two radii emerging from the center point). Angle Measurement: Degrees, Minutes, Seconds. Point B is at (-2, -2) and C (1. The arc length is the fractional amount of the circumference of the circle. We need to solve this to find the value of . The sum of the measures of complementary angles is 90°. Really clear math lessons (pre-algebra, algebra, precalculus), cool math games, online graphing calculators, geometry art, fractals, polyhedra, parents and teachers areas too. 
Remember -- the sum of the degree measures of angles in any triangle equals 180 degrees. First decide which acute angle you would like to solve for, as this will determine which side is opposite your angle of interest. When you cut up a circular pizza, the crust gets divided into arcs. z = 0.5x - 30 Then divide each angle measure by a. Given the lengths of all three sides of any triangle, each angle can be calculated using the following equation. A good way to start thinking about the size and degree-measure of angles is by picturing an entire pizza — that’s 360° of pizza. Step 2: Set up the equation. So here, we can say x is one of our angles, and y is the complement. Log On Ad: Over 600 Algebra Word Problems at edhelper.com The angle is the amount of turn between each arm. Write and solve an equation to find the missing angle measures. The angle made by a line and a circle is the angle made by that line and the tangent to the circle at their intersection. Isosceles triangle has two angles with equal measure (lets call them Y) The degree measure of each triangles three interior angles and an exterior angle (3x+15) and one angle is X. Examples ∠ABD and ∠CBD form a linear pair and are also supplementary angles, where ∠1 + ∠2 = 180 degrees. Systems of linear equations are very useful for solving applications. EXERCISE 1 Express the angle radians in (a) decimal form and (b) DMS form. Recognize angle measure as additive. ; Two angles that share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles. Want to see the math tutors near you? Write and solve an equation to find x. Using the Tangent Function to Find the Angle of a Right Triangle (The Lesson) The tangent function relates a given angle to the opposite side and adjacent side of a right triangle.. Parts of an Angle. Place the end of your ruler at the vertex of the angle. We can rewrite the previous equation to be (y + 12) + y = 90. Below is a picture of triangle ABC, where angle A = 60 degrees, angle B = 50 degrees and angle C = … The equation is 2x + x = 180. That’s plus 102 plus 116 plus 78. Walk students through the steps: Step 1: Identify the angle relationship. Find the measures of the angles. Show equations and all work that leads to your answer. answered 10/11/19. In order to find the measure of a single interior angle of a regular polygon (a polygon with sides of equal length and angles of equal measure) with n sides, we calculate the sum interior anglesor $$(\red n-2) \cdot 180$$ and then divide that sum by the number of sides or $$\red n$$. Squares and rectangles have four right angles. Back Trigonometry Realms Mathematics Contents Index Home. Solution: We use Equation 1, , with R=75 inches and , to obtain Here are some more exercises in the use of the rules given in Equations 1,2, and 3. Translate into a system of equations. The larger angle is twelve less than five times the smaller angle: The system is: Step 5. SSA. Because, we know that the measure of a straight angle is 180 degrees, so a linear pair of angles must also add up to 180 degrees. Because the sum of the measures of the angles in any triangle must be 180 degrees, we know that x + y + z = 180. Arc length changes with the radius or diameter of the circle (or pizza). Now that you have eaten your way through this lesson, you can identify and define an arc and distinguish between major arcs and minor arcs. You can use the coordinate plane to measure the length of a line segment. 
For other angle measures, see the following list and figure: Get better grades with tutoring from top-rated professional tutors. Get help fast. Distinguish between major arcs and minor arcs, Measure an arc in linear units and degrees. The known side will in turn be the denominator or the numerator. Line up one side of the angle with the zero line of the protractor (where you see the number 0). Once you have the measure of the second angle, add that number to 180. Inscribed Angle Theorem. The corner point of an angle is called the vertex. Problem 1 : Find the measure of ∠ EHF. An inscribed angle is an angle whose vertex is on a circle and whose sides contain chords of a circle. That curved piece of the circle and the interior space is called a sector, like a slice of pizza. The measure of the larger angle is 12 degrees more than three times the smaller angle. If we make three additional cuts in one side only (so we cut the half first into two quarters and then each quarter into two eighths), we have one side of the pizza with one big, 180° arc and the other side of the pizza with four, 45° arcs like this: The half of the pizza that is one giant slice is a major arc since it measures 180° (or more). 2x + x = 180 combine like terms to get: 3x = 180 divide both sides of this equation by 3 to get: Since, both angles and are adjacent to angle --find the measurement of one of these two angles by: . Write it out as: x + y = 90. How to find the diagonal of a rectangle? When solving a trig equation of the form ax = f – 1 (k) where you want the solution to be all the angles within one complete rotation, write out all the solutions within the number of complete rotations that k represents. The measures of the three angles are x, y, and z. Click here to jump to the two degrees, minutes, seconds calculators near the bottom of this page. Choose 1 answer: Choose 1 answer: (Choice A) A. x + ( 4 x − 85) = 180. x + (4x - 85) = 180 x+(4x−85)= 180. The arc is the fraction of the circle's circumference that lies between the two points on the circle. Most questions answered within 4 hours. For example, the complement of a 60 Degree angle is a 30 Degree angle because together they will equal 90 degrees. To be able to calculate an arc measure, you need to understand angle measurements in both degrees and radians.
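The complementary-angle setup mentioned above (x + y = 90, rewritten as (y + 12) + y = 90) can be solved directly: 2y + 12 = 90, so 2y = 78 and y = 39; the other angle is then y + 12 = 51, giving measures of 39° and 51°.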
|
2021-05-15 20:28:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6895236372947693, "perplexity": 365.9862549782128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.52/warc/CC-MAIN-20210515192444-20210515222444-00244.warc.gz"}
|
https://rupc19.kattis.com/problems/rupc19.emergencyexits
|
# Emergency Exits
An emergency exit.
The university is set to undergo a comprehensive quality inspection next month. The set of requirements that will be checked are known in advance, and the university has been going through the list, making sure everything is in order.
Under the “Fire Safety” section there is a requirement concerning emergency exits that they are having a hard time assessing. It states that it must be possible to reach an emergency exit from any location within the university building in a reasonable time.
You have been asked to help with assessing the current state of these emergency exits. The university has provided you with a graph representation of the building, where each location in the building is represented as a vertex, and each pathway is represented as a weighted directed edge from one location to another. The weight of an edge represents the time in seconds required to travel along that pathway. Note that each pathway can only be traveled in one direction.
Given at which locations an emergency exit is present, determine the maximum time required to reach the closest emergency exit from any location in the building.
## Input
The input consists of:
• One line with three integers $n$, $m$ and $k$ ($1 \le k \le n \le 2\cdot 10^5$, $0 \le m \le 2\cdot 10^5$), the number of locations, pathways and emergency exits.
• One line with $k$ integers, the distinct locations of the emergency exits.
• $m$ lines, the $i$th of which contains three integers $u_ i$, $v_ i$ and $s_ i$ ($1 \le u_ i,v_ i \le n$, $0 \le s_ i \le 10^6$, $u_ i \neq v_ i$), representing a unidirectional pathway from location $u_ i$ to location $v_ i$ that takes $s_ i$ seconds to travel along.
No two pathways have the same source and destination.
## Output
Output the maximum time, in seconds, required to reach the closest emergency exit from any location in the building. If it is not possible to reach an emergency exit from every location in the building, output “danger”.
Sample Input 1:
4 5 2
3 4
1 2 4
1 3 11
2 4 3
4 2 2
4 3 20

Sample Output 1:
7
Sample Input 2:
4 5 1
4
1 3 10
2 3 2
2 4 5
3 1 10
4 2 1

Sample Output 2:
danger
CPU Time limit 2 seconds
Memory limit 1024 MB
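One standard way to meet these limits is a multi-source Dijkstra run backwards from the exits: reverse every edge, seed the priority queue with all exit locations at distance 0, and report the largest resulting distance (or “danger” if some location is unreachable). A minimal Python sketch of this approach, assuming the input format above:

```python
import sys
import heapq

def solve():
    data = sys.stdin.read().split()
    idx = 0
    n, m, k = int(data[idx]), int(data[idx + 1]), int(data[idx + 2]); idx += 3
    exits = [int(data[idx + i]) for i in range(k)]; idx += k

    # reverse the edges: reaching an exit quickly forwards is the same as
    # being reached quickly from an exit on the reversed graph
    radj = [[] for _ in range(n + 1)]
    for _ in range(m):
        u, v, s = int(data[idx]), int(data[idx + 1]), int(data[idx + 2]); idx += 3
        radj[v].append((u, s))

    INF = float('inf')
    dist = [INF] * (n + 1)
    pq = []
    for e in exits:                      # multi-source Dijkstra seeded with every exit
        dist[e] = 0
        heapq.heappush(pq, (0, e))
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for u, s in radj[v]:
            if d + s < dist[u]:
                dist[u] = d + s
                heapq.heappush(pq, (d + s, u))

    worst = max(dist[1:n + 1])
    print('danger' if worst == INF else worst)

solve()
```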
|
2022-06-25 19:17:11
|
https://www.groundai.com/project/information-extraction-tool-text2alm-from-narratives-to-action-language-system-descriptions/
|
# Information Extraction Tool text2alm: From Narratives to Action Language ALM System Descriptions
We would like to thank Parvathi Chundi, Nicholas Hippen, Brian Hodges, Joseph Meyer, Gang Ling, and Ryan Schuetzler for their valuable feedback. We appreciate the insights from Michael Gelfond, Daniela Inclezan, Edward Wertz, and Yuanlin Zhang on their work on language ALM, the CoreALMLib library, and system calm.
Craig Olson Yuliya Lierler University of Nebraska Omaha
6001 Dodge St, Omaha, NE 68182, USA
###### Abstract
In this work we design a narrative understanding tool text2alm. This tool uses an action language to perform inferences on complex interactions of events described in narratives. The methodology used to implement the text2alm system was originally outlined by Lierler, Inclezan, and Gelfond [14] via a manual process of converting a narrative to an ALM model. It relies on a conglomeration of resources and techniques from two distinct fields of artificial intelligence, namely, natural language processing and knowledge representation and reasoning. The effectiveness of system text2alm is measured by its ability to correctly answer questions from the bAbI tasks published by Facebook Research in 2015. This tool matched or exceeded the performance of state-of-the-art machine learning methods in six of the seven tested tasks. We also illustrate that the text2alm approach generalizes to a broader spectrum of narratives.
## 1 Introduction
The field of Information Extraction (IE) is concerned with gathering snippets of meaning from text and storing the derived data in structured, machine interpretable form. Consider a sentence
BBDO South in Atlanta, which handles corporate advertising for Georgia-Pacific, will assume additional duties for brands like Angel Soft, said Ken Haldin, a spokesman for Georgia-Pacific from Atlanta.
A sample IE system that focuses on identifying organizations and their corporate locations may extract the following predicates from this sentence:
locatedIn(BBDOSouth,Atlanta) locatedIn(GeorgiaPacific,Atlanta)
These predicates can then be stored either in a relational database or a logic program, and queried accordingly by well-known methods in computer science. Thus, IE allows us to turn unstructured data present in text into structured data easily accessible for automated querying.
In this paper, we focus on an IE system that is capable of processing simple narratives with action verbs, in particular, verbs that express physical acts such as go, give, and put. Consider a sample narrative that we refer to as the JS discourse:
John traveled to the hallway. (1)
Sandra journeyed to the hallway. (2)
The actions travel and journey in the narrative describe changes to the narrative’s environment, and can be coupled with the reader’s commonsense knowledge to form and alter the reader’s mental picture for the narrative. For example, after reading sentence (1), a human knows that John is the subject of the sentence and traveled is an action verb describing an action performed by John. A human also knows that traveled describes the act of motion, and specifically that John’s location changes from an arbitrary initial location to a new destination, the hallway. Lierler et al. [14] outline a methodology for constructing a Question Answering (QA) system by utilizing IE techniques. Their methodology focuses on performing inferences using the complex interactions of events in narratives. Their process utilizes an action language ALM [9] and an extension of the VerbNet lexicon [20, 12]. Language ALM enables a system to structure knowledge regarding complex interactions of events and implicit background knowledge in a straightforward and modularized manner. The knowledge represented in ALM is processed by means of logic programming under answer set semantics and can be used to derive inferences about a given text. The proposed methodology in [14] assumes the extension of the VerbNet lexicon with interpretable semantic annotations in ALM. The VerbNet lexicon groups English verbs into classes, allowing us to infer that such verbs as travel and journey practically refer to the same class of events.
The processes described in [14] are exemplified via two sample narratives processed manually. The authors translated those narratives to ALM programs by hand and wrote the supporting modules to capture knowledge as needed. To produce ALM system descriptions for considered narratives, the method by Lierler et al. [14] utilizes NLP resources, such as semantic role labeler lth [10], parser and co-reference resolution tools of coreNLP [16], and lexical resources PropBank [21] and SemLink [4]. Ling [15] used these resources to automate parts of the method in the text2drs system. In particular, text2drs extracts entities, events, and their relations from a given action-based narrative. A narrative understanding system developed within this work, text2alm, utilizes text2drs and automates the remainder of the method outlined in [14]. When considering the JS discourse as an example, system text2alm produces a set of facts in the spirit of the following:
move(john,hallway,0)   move(sandra,hallway,1)   (3)
loc_in(john,hallway,1)   loc_in(john,hallway,2)   loc_in(sandra,hallway,2)   (4)
where 0, 1, and 2 are time points associated with occurrences of described actions in the JS discourse. Intuitively, time point 0 corresponds to a time prior to utterance of sentence (1). Time point 1 corresponds to a time upon the completion of the event described in (1). Facts in (3) and (4) allow us to provide grounds for answering questions related to the JS discourse such as:
Question: Is John inside the hallway at the end of the story (time 2)?
Question: Who is in the hallway at the end of the story?
Both questions are grounded by the facts in (4), such as loc_in(john,hallway,2) and loc_in(sandra,hallway,2).
We note that modern NLP tools and resources prove to be sufficient to extract facts (3) given the JS discourse. Yet, inferring facts such as (4) requires complex reasoning about specific actions present in a given discourse and modeling such commonsense knowledge as the inertia axiom (stating that things normally stay as they are) [13]. System text2alm combines the advances in NLP and knowledge representation and reasoning (KRR) to tackle the complexities of converting narratives such as the JS discourse into a structured form such as facts in (3-4).
The effectiveness of system text2alm is measured by its ability to answer questions from the bAbI tasks [23]. These tasks were proposed by Facebook Research in 2015 as a benchmark for evaluating basic capabilities of QA systems in twenty categories. Each of the twenty bAbI QA tasks is composed of narratives and questions, where 1000 questions are given in a training set and 1000 questions are given in a testing set. We extend the information extraction component of text2alm with a specialized QA processing module to tackle seven of the bAbI tasks containing narratives with action verbs. Tool text2alm matched or exceeded the performance of modern machine learning methods in six of these tasks. We also illustrate that the text2alm approach generalizes to a broader spectrum of narratives than present in bAbI.
We start the paper with a review of relevant tools and resources stemming from the NLP and KRR communities. We then proceed to describe the architecture of the text2alm system implemented in this work. We conclude by providing evaluation data on the system.
## 2 Background
NLP Resource VerbNet: VerbNet is a domain-independent English verb lexicon organized into a hierarchical set of verb classes [20, 12]. The verb classes aim to achieve syntactic and semantic coherence between members of a class. Each class is characterized by a set of verbs and their thematic roles. For example, the verb run is a member of the VerbNet class run-51.3.2. This class is characterized by
• 96 members including verbs such as bolt, frolic, scamper, and weave,
• four thematic roles, namely, theme, initial location, trajectory and destination,
• two subbranches: run-51.3.2-1 and run-51.3.2-2. For instance, run-51.3.2-2 has members gallop, skip, and strut, and has additional thematic roles agent, result, and source.
Dynamic Domains, Transition Diagrams, and Action Language ALM: Action languages are formal KRR languages that provide convenient syntactic constructs to represent knowledge about dynamic domains. The knowledge is compiled into a transition diagram, where nodes correspond to possible states of a considered dynamic domain and edges correspond to actions/events whose occurrences signal transitions in the dynamic system. The JS discourse exemplifies a narrative modeling a dynamic domain with three entities (John, Sandra, hallway) and four actions, specifically:
ajin – John travels into the hallway
ajout – John travels out of the hallway
asin – Sandra travels into the hallway
asout – Sandra travels out of the hallway
Scenarios of a dynamic domain correspond to trajectories in the domain’s transition diagram. Trajectories are sequences of alternating states and actions. A trajectory captures the sequence of events, starting with the initial state associated with time point 0. Each edge is associated with the time point incrementing by 1.
We illustrate the syntax and semantics of ALM using the JS discourse dynamic domain by first defining an ALM system description and then an ALM history for this discourse. In language ALM, a dynamic domain is described via a system description that captures a transition diagram specifying the behavior of a given domain. An ALM system description consists of a theory and a structure. A theory is comprised of a hierarchy of modules, where a module represents a unit of general knowledge describing relevant sorts, properties, and the effects of actions. The structure declares instances of entities and actions of the domain. Figure 1 illustrates these concepts with the formalization of the JS discourse domain.
The JS discourse theory uses a single module to represent the knowledge relevant to the domain. The module declares the sorts (agents, points, move) and the property (loc_in) to represent entities and attributes of the domain. Actions utilize attributes to define the roles of participating entities. For instance, destination is an attribute of move that denotes the final location of the mover. Here we ask a reader to draw a parallel between the notions of an attribute and a VerbNet thematic role.
The JS discourse theory also defines two types of axioms, dynamic causal laws and executability conditions, to represent commonsense knowledge associated with a move action. The dynamic causal law states that if a move action occurs with a given actor and destination, then the actor’s location becomes that of the destination. The executability conditions restrict an action from occurring if the action is an instance of move, where the actor and actor’s location are defined, but either (i) the actor’s location is not equal to the origin of the move event or (ii) the actor’s location is already the destination.
An ALM structure in Figure 1 defines the entities and actions from the JS discourse. For example, it states that john and sandra are agents. Also, action ajin is declared as an instance of move where john is the actor and hallway is the destination.
An ALM system description can be coupled with a history. A history is a particular scenario described by observations about the values of properties and occurring events. In the case of narratives, a history describes the sequence of events by stating occurrences of specific actions at given time points. For instance, the JS discourse history contains the events
• John moves to the hallway at the beginning of the story (action ajin occurs at time 0) and
• Sandra moves to the hallway at the next point of the story (action asin occurs at time 1).
The following history is appended to the end of the system description in Figure 1 to form an ALM program for the JS discourse. We note that happened is a keyword that captures the occurrence of actions.
history
  happened(ajin, 0).
  happened(asin, 1).
An ALM Solver calm: System calm is an ALM solver developed at Texas Tech University by Wertz, Chandrasekan, and Zhang [22]. It uses an ALM program to produce a ”model” for an encoded dynamic domain. The engine for system calm (i) constructs a logic program under stable model/answer set semantics [7], whose answer sets/solutions are in one-to-one correspondence with the models of the program, and (ii) uses an answer set solver sparc [2] for finding these models. In this manner, calm processes the knowledge represented by an ALM program to enable reasoning capabilities. The program in Figure 1 follows the calm syntax. However, system calm requires two additional components for this program to be executable. The user must specify (i) the computational task and (ii) the max time point considered.
In our work we utilize the fact that system calm can solve the task of temporal projection, which is the process of determining the effects of a given sequence of actions executed from a given initial situation (which may not be fully determined). In the case of a narrative, the initial situation is often unknown, whereas the sequence of actions is provided by the discourse. Inferring the effects of actions allows us to answer questions about the narrative’s domain. We insert the following statement in the program prior to the history to perform temporal projection:
temporal projection
Additionally, calm requires the max number of steps to be stated. Intuitively, we see this number as an upper bound on the ”length” of considered trajectories. This information denotes the final state’s time point in temporal projection problems. We insert the following line in the program to define the max steps for the JS discourse program:
max steps 3
For the case of the temporal projection task, a model of an ALM program is a trajectory in the transition system captured by the program that is ”compatible” with the provided history. A compatible model corresponds to an answer set computed by calm. For the JS discourse program, calm computes a model that includes the following expressions:
happened(ajin, 0), happened(asin, 1),
loc_in(john, hallway, 1), loc_in(sandra, hallway, 2), loc_in(john, hallway, 2)
Knowledge Base CoreALMLib: The CoreALMLib is an ALM library of generic commonsense knowledge for modeling dynamic domains developed by Inclezan [8]. The library’s foundation is the Component Library or CLib [3], which is a collection of general, reusable, and interrelated components of knowledge. CLib was populated with knowledge stemming from linguistic and ontological resources, such as VerbNet, WordNet, FrameNet, a thesaurus, and an English dictionary. The CoreALMLib was formed by translating CLib into ALM to obtain descriptions of 123 action classes grouped into 43 reusable modules. The modules are organized into a hierarchical structure, and contain action classes and axioms to support commonsense reasoning. An example of one such axiom from the motion module is provided in Figure 2. This axiom states that if a move action occurs where O is the object moving and D is a spatial entity and the destination, then the location of O becomes D.
## 3 System text2alm Architecture
Lierler, Inclezan, and Gelfond [14] outline a methodology for designing IE/QA systems to make inferences based on complex interactions of events in narratives. This methodology is exemplified with two sample narratives completed manually by the authors. System text2alm automates this process. Figure 3 presents the architecture of the system. It implements four main tasks/processes:
1. text2drs Processing – Entity, Event, and Relation Extraction
2. drs2alm Processing – Creation of ALM Program
3. calm Processing – Model Generation and Interpretation
4. QA Processing
Figure 3 denotes each process by its own column. Ovals identify inputs and outputs. Systems or resources are represented with white, grey, and black rectangles. White rectangles denote existing, unmodified resources. Grey rectangles are used for existing, but modified resources. Black rectangles signify newly developed subsystems. The first three processes form the core of text2alm, seen as an IE system. The QA Processing component is specific to the bAbI QA benchmark that we use to illustrate the validity of the approach advocated by text2alm. The system’s source code is available at https://github.com/cdolson19/Text2ALM.
### 3.1 text2drs Processing
The method by Lierler et al. [14] utilizes NLP resources, such as semantic role labeler lth [10], parsing and coreference resolution tools of coreNLP [16], and lexical resources PropBank [21] and SemLink [4] to produce ALM system descriptions for considered narratives. System text2drs [15] was developed with these resources to deliver a tool that extracts entities, events, and their relations from given narratives. The text2drs tool formed the starting point in the development of text2alm due to its ability to extract basic entity and relational information from a narrative. The output of the text2drs system is called a discourse representation structure, or DRS [11]. A DRS captures key information present in discourse in a structured form. For example, Figure 4 presents the DRS for the JS discourse.
This DRS states that there are three entities and two events that take part in the JS narrative. The DRS assigns names, or referents, to the entities and the events. For instance, one entity referent denotes John, and one event referent denotes an event of the VerbNet class run-51.3.2-1. The theme (which is one of the thematic roles associated with run-51.3.2-1) of this event is the entity John, and its destination is the entity hallway. This event occurs at time point 0, while the other event occurs at time point 1. We refer an interested reader to the work by Ling [15] for the details of the text2drs component. Within this project, text2drs was modified to accommodate VerbNet v3.3 (in place of VerbNet v2), which provides broader coverage of verbs.
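For illustration only, a DRS of this kind can be pictured as a small relational structure; the referent names r1–r3 and e1–e2 below are hypothetical stand-ins and are not taken from the paper's Figure 4, and the actual text2drs output uses its own format.

```python
# A hypothetical, simplified rendering of the JS discourse DRS as plain Python
# data; entities and events get referents, events carry a VerbNet class, a time
# point, and a mapping from thematic roles to entity referents.
drs = {
    "entities": {"r1": "John", "r2": "hallway", "r3": "Sandra"},
    "events": {
        "e1": {"verbnet_class": "run-51.3.2-1", "time": 0,
               "roles": {"theme": "r1", "destination": "r2"}},
        "e2": {"verbnet_class": "run-51.3.2-1", "time": 1,
               "roles": {"theme": "r3", "destination": "r2"}},
    },
}

# Example query: who moves where, in story order.
for eid, ev in sorted(drs["events"].items(), key=lambda kv: kv[1]["time"]):
    mover = drs["entities"][ev["roles"]["theme"]]
    dest = drs["entities"][ev["roles"]["destination"]]
    print(f"{eid}: {mover} -> {dest} at time {ev['time']}")
```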
### 3.2 drs2alm Processing
The drs2alm subsystem is concerned with combining commonsense knowledge related to events in a discourse with the information from the DRS generated by text2drs. The goal of this process is to produce an ALM program consisting of a system description and a history for the scenario described by the narrative. The system description is composed of a theory containing relevant commonsense knowledge and a structure that is unique for a given narrative. Since the structure is specific to a given narrative, it is created using the information from the narrative’s DRS. Meanwhile, the theory represents the commonsense knowledge associated with the narrative’s actions. Thus, the theory depends on a general, reusable knowledge base pertaining to actions. The CoreALMLib knowledge base was modified to form CoreCALMLib to fit this need of the text2alm system. We organize this section by (1) explaining how CoreCALMLib was obtained and (2) providing details on how a narrative’s ALM program is generated.
Library CoreCALMLib: To obtain the CoreCALMLib knowledge base, the following modifications to the CoreALMLib were made:
1. Syntactic adjustments
2. Property extractions
3. VerbNet extensions
4. Axiom changes
First, syntactic adjustments were implemented to make the library compatible with the calm syntax. Second, we observed that the CoreALMLib has instances where properties (fluents) with the same name are declared in multiple modules. Yet semantically, these properties are assumed to be the same across all modules. We found this approach counter-intuitive from the point of knowledge-base design, thus we extracted all fluent declarations from CoreALMLib modules and created new modules whose purpose was to declare fluents. These modules were organized by their properties and grouped similar properties together. The original CoreALMLib modules now import the necessary properties as needed. Regarding VerbNet extensions, CoreALMLib was further modified by adding a module for every VerbNet class we observed in the bAbI QA task training sets. We discuss these training sets in detail in Section 4. In particular, 52 of VerbNet’s 274 classes were formalized with modules in CoreCALMLib. Each VerbNet module defines a sort for that verb class that inherits from one of the 123 action classes stemming from CoreALMLib. Specifically, we utilize 15 action classes formalized in CoreALMLib, stemming from 9 of its total 43 modules. Thematic roles from the VerbNet lexicon are then mapped via state constraints to the attributes associated with actions already used by the CoreALMLib library. These VerbNet modules are stored in a CoreCALMLib sub-library that we call vn_class_library. Lastly, we modified and added axioms into some CoreALMLib modules after identifying pieces of knowledge that were not represented within the original library. When not considering fluent extractions, fluents were altered or added to only four modules from the original CoreALMLib. This supports the hypothesis that CoreALMLib can provide an effective baseline for commonsense reasoning about actions. All modifications to the CoreALMLib to form CoreCALMLib are explained further in [19].
Program Generation: The drs2alm processing step generates an ALM program for a given discourse by combining the information in the narrative’s DRS and the CoreCALMLib library. We first examine the theory in the program’s system description. We start by identifying the general knowledge associated with the narrative’s domain by importing the VerbNet modules from CoreCALMLib for all VerbNet classes associated with the narrative. These provide the commonsense knowledge backbone for the actions in the narrative. Then, we define a new module unique to the narrative. This module declares entities from the narrative as new sorts inheriting from base CoreCALMLib sorts. We chose to declare the narrative’s entities as new sorts to provide more flexibility to define additional, unique attributes associated with the entities if the need arises. However, to declare these new sorts we must identify the CoreCALMLib parent sort to inherit from. We rely on the VerbNet thematic roles associated with an entity to make this selection. We grouped VerbNet thematic roles into four parent sorts of CoreCALMLib by reviewing the thematic roles associated with the VerbNet classes in the training sets and attempting to map these to the most similar sorts defined by the original CoreALMLib. Figure 5 presents the groupings. If an entity is associated with roles from different categories, we use a prioritized sort order, where the order relation is transitive and the left argument has a higher priority than the right one.
We now turn our attention to the process of generating the structure and history for the ALM program. The structure declares the specific entities and events from the narrative. Entity IDs from a given narrative’s DRS are defined as instances of the corresponding entity sorts from the theory. Events are also declared as instances of their associated VerbNet class sorts, and the entities related to events are listed as attributes of these events. The history states the order and time points at which the narrative’s events happened. We extract this information from the arguments expressed in the DRS. To exemplify the described process, Figure 6 presents the ALM program output by the drs2alm Processing stage applied to the JS discourse DRS in Figure 4. Note that the theory in Figure 6 imports the VerbNet module for run-51.3.2-1 from the vn_class_library. The two events in the JS discourse were identified as members of the VerbNet class run-51.3.2-1. Thus, the module associated with this class is imported to retrieve the knowledge relevant to run events in the JS discourse domain.
### 3.3 calm and QA Processing
In the calm Processing performed by text2alm, the calm system is invoked on a given narrative’s ALM program that was generated by the drs2alm Processing stage. The calm system computes a model via logic programming under answer set semantics. We then perform post-processing on this model to make its content more readable for a human by replacing all entity IDs with their names from the narrative. For instance, given the program in Figure 6, the output of the calm Processing will include the expressions:
loc_in(John,hallway,1), loc_in(John,hallway,2), loc_in(Sandra,hallway,2).
We note that no other loc_in fluents will be present in the output.
A model derived by the calm system contains facts about the entities and events from the narrative supplemented with basic commonsense knowledge associated with the events. We use a subset of the bAbI QA tasks to test the text2alm system’s IE effectiveness and implement QA capabilities within the sphinx subsystem (see Figure 3). It utilizes regular expressions to identify the kind of question that is being asked and then queries the model for relevant information to derive an answer. The sphinx system is specific to the bAbI QA tasks and is not a general-purpose question answering component.
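For illustration, the sketch below shows the kind of regex-based question matching and model querying described above; the fluent tuples, question patterns, and helper names here are hypothetical assumptions and do not come from the actual sphinx code.

```python
import re

# Hypothetical post-processed model: fluents extracted from the calm answer set.
model = {("loc_in", "John", "hallway", 2), ("loc_in", "Sandra", "hallway", 2)}
FINAL_TIME = 2  # last time point of the story

def answer(question: str) -> str:
    # "Where is X?" -> report X's location at the final time point.
    m = re.match(r"Where is (\w+)\?", question)
    if m:
        person = m.group(1)
        for name, who, place, t in model:
            if name == "loc_in" and who == person and t == FINAL_TIME:
                return place
        return "unknown"
    # "Is X in the Y?" -> yes/no membership check against the fluents.
    m = re.match(r"Is (\w+) in the (\w+)\?", question)
    if m:
        person, place = m.groups()
        return "yes" if ("loc_in", person, place, FINAL_TIME) in model else "no"
    return "unsupported question"

print(answer("Where is Sandra?"))         # hallway
print(answer("Is John in the hallway?"))  # yes
```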
Additional information on the components of system text2alm is given in [19].
## 4 text2alm Evaluation
#### Related Work:
Many modern QA systems predominately rely on machine learning techniques. However, there has recently been more work related to the design of QA systems combining advances of NLP and KRR. The text2alm system is a representative of the latter approach. Other approaches include the work by Clark, Dalvi, and Tandon [5] and Mitra and Baral [18]. Mitra and Baral [18] use a training dataset to learn the knowledge relevant to the action verbs mentioned in the dataset. They posted nearly perfect test results on the bAbI tasks. However, this approach doesn’t scale to narratives that utilize other action verbs which are not present in the training set, including synonymous verbs. For example, if their system is trained on bAbI training data that contains verb travel it will process the JS discourse correctly. Yet, if we alter the JS discourse by exchanging travel with a synonymous word stroll, their system will fail to perform inferences on this altered narrative (note that stroll does not occur in the bAbI training set). We address this limitation in the text2alm system because the system does not rely upon the training narratives for the commonsense knowledge. If the verbs occurring in narratives belong to VerbNet classes whose semantics have been captured within CoreCALMLib then text2alm is normally able to process them properly.
Another relevant QA approach is the work by Clark, Dalvi, and Tandon [5]. This approach uses VerbNet to build a knowledge base containing rules of preconditions and effects of actions, utilizing the semantic annotations that VerbNet provides for its classes. In our work, we can view the ALM modules associated with VerbNet classes as machine interpretable alternatives to these annotations. However, Clark et al. [5] use the first and most basic action language strips [6] for inference. The strips language allows more limited capabilities than language ALM in modeling complex interactions between events.
The bAbI dataset enables us to compare text2alm’s IE/QA ability with other modern approaches designed for this task. The left hand side of Figure 8 compares the accuracy of the text2alm system with the machine learning approach AM+NG+NL MemNN described by Weston et al. [23]. In that work, the authors compared results from 8 machine learning approaches on bAbI tasks and the AM+NG+NL MemNN (Memory Network) method performed best almost across the board. There were two exceptions among the seven tasks that we consider. For the Task 7-Counting the AM+N-GRAMS MemNN algorithm was reported to obtain a higher accuracy of 86%. Similarly, for the Task 8-Lists/Sets the AM+NONLINEAR MemNN algorithm was reported to obtain accuracy of 94%. Figure 8 also presents the details on the Inductive Rule Learning and Reasoning (IRLR) approach by [18]. We cannot compare text2alm performance with the methodology by [5] because their system is not available and it has not been evaluated using the bAbI tasks.
System text2alm matches the Memory Network approach by Weston et al. [23] at 100% accuracy in tasks 1, 2, 3, and 6 and performs better on tasks 7 and 8. When compared to the methodology by Mitra and Baral [18], the text2alm system matches the results for tasks 1, 2, 3, 6, and 8, but is outperformed in tasks 5 and 7.
The results of the text2alm system were comparable to the industry-leading results with one outlier, namely, task 5. We investigated the reason. It turns out that the testing set frequently contained a phrase of the form:
Entity1 handed the Object to Entity2. e.g., Fred handed the football to Bill.
The text2alm system failed to properly process such phrases because the semantic role labeler lth, a subcomponent of the text2drs system, incorrectly annotated the sentence. In particular, lth consistently considered a reading in the spirit of the following: Fred handed Bill’s football away. This annotation error prevents text2drs from adding a crucial event argument to the DRS stating that Entity2 plays the thematic role of destination in the phrase. Consequently, the text2alm system does not realize that possession of the object was passed from Entity1 to Entity2.
## 5 Conclusion and Future Work
Lierler, Inclezan, and Gelfond [14] outline a methodology for designing IE/QA systems to make inferences based on complex interactions of events in narratives. To explore the feasibility of this methodology, we built the text2alm system to take an action-based narrative as input and output a model encoding facts about the given narrative. We tested the system over tasks 1, 2, 3, 5, 6, 7, and 8 from the bAbI QA dataset [23]. System text2alm matched or outperformed the results of modern machine learning methods in all of these tasks except task 5. It also matched the results of another KRR approach [18] in tasks 1, 2, 3, 6, and 8, but did not perform as well in tasks 5 and 7. However, our approach adjusts well to narratives with a more diverse lexicon. Additionally, the ability of the CoreCALMLib to represent the interactions of events in the bAbI narratives serves as a proof of usefulness of the original CoreALMLib endeavor.
We conclude our work by listing future research directions in three areas: (i) expanding narrative processing capabilities, (ii) expanding QA ability, and (iii) exploring additional reasoning tasks.
The bAbI QA tasks provided narratives with simple sentence structures to evaluate the effectiveness of information extraction by system text2alm. Future work includes expanding the narrative processing capabilities of system text2alm, as well as reducing the impact of semantic role labeling errors. We need to enhance the text2drs subsystem’s capabilities in order to provide more detailed IE on narratives. Also, so far we have provided annotations via the CoreCALMLib library for twenty-two classes of VerbNet; in the future we intend to cover all VerbNet classes.
Questions in the bAbI QA tasks follow pre-specified formats. Therefore, system text2alm’s QA ability relies on simple regular expression matching. Further research is required on representing generic questions and answers before using the system’s IE abilities in other applications. Additionally, our approach should be tested on more advanced QA datasets, such as ProPara [17]. Conducting tests on the ProPara dataset would enable us to compare the results of text2alm to the approach by [5].
Finally, we will build on text2alm’s reasoning abilities. For example, the calm model may sometimes not contain atoms that could be argued as reasonable. For example, given a narrative The monkey is in the tree. The monkey grabs the banana., the calm model will contain fluents stating that the monkey’s location is the tree at time point 1, the monkey is holding the banana at time point 2, and the banana’s location is the tree at time point 2. However, it is also natural to infer that the banana’s location is the tree when the monkey grabs it (time point 1). Yet, that requires reasoning that goes beyond temporal projection.
|
2020-05-28 00:29:16
|
https://www.tutorialspoint.com/which-of-the-following-form-an-ap-justify-your-answer-1-1-1-1-ldots
|
# Which of the following form an AP? Justify your answer.$-1,-1,-1,-1, \ldots$
To do:
We have to check whether the given sequences are in AP.
Solution:
(i) In the given sequence,
$a_1=-1, a_2=-1, a_3=-1, a_4=-1$
$a_2-a_1=-1-(-1)=-1+1=0$
$a_3-a_2=-1-(-1)=-1+1=0$
$a_4-a_3=-1-(-1)=-1+1=0$
Here,
$a_2 - a_1 = a_3 - a_2=a_4-a_3$
Therefore, the given sequence is an AP.
(ii) In the given sequence,
$a_1=0, a_2=2, a_3=0, a_4=2$
$a_2-a_1=2-0=2$
$a_3-a_2=0-2=-2$
$a_4-a_3=2-0=2$
Here,
$a_2 - a_1 ≠ a_3 - a_2$
Therefore, the given sequence is not an AP.
(iii) In the given sequence,
$a_1=1, a_2=1, a_3=2, a_4=2$
$a_2-a_1=1-1=0$
$a_3-a_2=2-1=1$
$a_4-a_3=2-2=0$
Here,
$a_2 - a_1 ≠ a_3 - a_2$
Therefore, the given sequence is not an AP.
(iv) In the given sequence,
$a_1=11, a_2=22, a_3=33$
$a_2-a_1=22-11=11$
$a_3-a_2=33-22=11$
Here,
$a_2 - a_1 = a_3 - a_2$
Therefore, the given sequence is an AP.
(v) In the given sequence,
$a_1=\frac{1}{2}, a_2=\frac{1}{3}, a_3=\frac{1}{4}$
$a_2-a_1=\frac{1}{3}-\frac{1}{2}=\frac{2-3}{6}=\frac{-1}{6}$
$a_3-a_2=\frac{1}{4}-\frac{1}{3}=\frac{3-4}{12}=\frac{-1}{12}$
Here,
$a_2 - a_1 ≠ a_3 - a_2$
Therefore, the given sequence is not an AP.
(vi) In the given sequence,
$a_1=2, a_2=2^2, a_3=2^3$
$a_2-a_1=2^2-2=4-2=2$
$a_3-a_2=2^3-2^2=8-4=4$
Here,
$a_2 - a_1 ≠ a_3 - a_2$
Therefore, the given sequence is not an AP.
(vii) In the given sequence,
$a_1=\sqrt3, a_2=\sqrt{12}=\sqrt{4\times3}=2\sqrt3, a_3=\sqrt{27}=\sqrt{9\times3}=3\sqrt{3}, a_4=\sqrt{48}=\sqrt{16\times3}=4\sqrt{3}$
$a_2-a_1=2\sqrt3-\sqrt3=\sqrt3$
$a_3-a_2=3\sqrt3-2\sqrt3=\sqrt3$
$a_4-a_3=4\sqrt3-3\sqrt3=\sqrt3$
Here,
$a_2 - a_1 = a_3 - a_2=a_4 - a_3$
Therefore, the given sequence is an AP.
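As a quick programmatic cross-check (not part of the textbook solution), a sequence is an AP exactly when all consecutive differences agree; the small Python sketch below verifies a few of the sequences above.

```python
from fractions import Fraction
from math import sqrt

def is_ap(seq):
    """Return True if all consecutive differences of seq are (numerically) equal."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return all(abs(d - diffs[0]) < 1e-9 for d in diffs)

print(is_ap([-1, -1, -1, -1]))                                   # True  (i)
print(is_ap([0, 2, 0, 2]))                                       # False (ii)
print(is_ap([Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]))   # False (v)
print(is_ap([sqrt(3), sqrt(12), sqrt(27), sqrt(48)]))            # True  (vii)
```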
|
2022-12-01 17:54:54
|
https://math.stackexchange.com/questions/1615821/evaluate-the-double-integral-by-changing-to-polar-coordinates-for-x2y2-leq4
|
# Evaluate the double integral by changing to polar coordinates for $x^2+y^2\leq4$
Evaluate the double integral $\iint_D \sqrt{4-x^2-y^2} \, dx \, dy$, where $D = \{(x,y):x^2+y^2\leq4, y\geq0\}$, by changing to polar coordinates $r, \phi$.
So am I right in thinking the limits would be $0$ and $4$ for $x$ and $y$?
Converting the integral would be
\begin{align} & \int_0^4 \int_0^4 \sqrt{4-x^2-y^2} \, dx \, dy = \iint_D \sqrt{4-r^2\cos^2\phi-r^2\sin^2\phi} \ |r| \, dx \, dy \\[10pt] = {} & \iint_D \sqrt{4-r^2} \, |r| \, dx \, dy \end{align}
I am unsure how to change the coordinates?
• are you sure about the end points of the first integral? it should be from $0$ to $2$ – corcia candy Jan 17 '16 at 17:40
• and if you change coordinates, then you should change $dxdy$ to polar coordinates $drd\theta$ – corcia candy Jan 17 '16 at 17:41
• The integral is over the upper semicircular region of radius $2$ centered at the origin. Therefore it is equal to $\int_0^2 dr\ r\sqrt{4-r^2}\int_0^\pi d\phi=8 \pi/3$. The original limits of integration should be $$\int_{-2}^2 dx\int_0^{\sqrt{4-x^2}}dy\ .$$ – Pierpaolo Vivo Jan 17 '16 at 17:58
If you think of $z=\sqrt{4-x^2 -y^2}$ with $0\leq y$, it describes half of the upper hemisphere of radius $2$, and the double integral equals the volume $$\int_{-2}^2 \int_0^{\sqrt{4-x^2}} \int_0^{\sqrt{4-x^2 - z^2}} \, dy \, dz \, dx$$
Converting to polar coordinates in the double integral (note: since there is symmetry, we can work with $z \geq 0$ instead of $y \geq 0$): $$\int_0^{\pi} \int_0^2 \sqrt{4-r^2}\,r \, dr \, d\theta = \frac{8\pi}{3}$$
The equation of a circle centered at the origin is $x^2+y^2=r^2$.
Since you have $x^2+y^2=4$,
the radius of your circle is $2$.
So the integral becomes
$$\iint_D \sqrt{4-x^2-y^2} \, dx \, dy = \int_0^\pi \, d\theta \int_0^2\:r\sqrt{4-r^2}\ dr$$
$\theta$ runs from $0$ to $\pi$ because you have only the upper half of the circle. $\sqrt{4-r^2}$ gets multiplied by $r$ because you need to take into account that $dx\,dy = r\,dr\,d\theta$.
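As a numerical sanity check (not part of the original answers, and assuming NumPy/SciPy are available), evaluating the radial integral by quadrature and multiplying by the angular factor $\pi$ reproduces the closed-form value $8\pi/3$ mentioned in the comments.

```python
import numpy as np
from scipy.integrate import quad

# Inner radial integral: int_0^2 r*sqrt(4 - r^2) dr = 8/3
radial, _ = quad(lambda r: r * np.sqrt(4.0 - r**2), 0.0, 2.0)

# The angular integral over the upper half-disk contributes a factor of pi.
value = np.pi * radial

print(value, 8.0 * np.pi / 3.0)  # both are approximately 8.377580
```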
|
2020-05-28 07:56:59
|
https://cs.stackexchange.com/questions/19686/is-regex-golf-np-complete
|
# Is regex golf NP-Complete?
As seen in this recent XKCD strip and this recent blog post from Peter Norvig (and a Slashdot story featuring the latter), "regex golf" (which might better be called the regular expression separation problem) is the puzzle of defining the shortest possible regular expression that accepts every word in set A and no word in set B. Norvig's post includes an algorithm for generating a reasonably short candidate, and he notes that his approach involves solving an NP-complete Set Cover problem, but he's also careful to point out that his approach doesn't consider every possible regular expression, and of course his isn't necessarily the only algorithm, so his solutions aren't guaranteed to be optimal, and it's also possible that some other assuredly polynomial-time algorithm could find equivalent or better solutions.
For concreteness' sake and to avoid having to solve the optimization question, I think the most natural formulation of Regular Expression Separation would be:
Given two (finite) sets $A$ and $B$ of strings over some alphabet $\Sigma$, is there a regular expression of length $\leq k$ that accepts every string in $A$ and rejects every string in $B$?
Is anything known about the complexity of this particular separation problem? (Note that since I've specified $A$ and $B$ as finite sets of strings, the natural notion of size for the problem is the total lengths of all strings in $A$ and $B$; this swamps any contribution from $k$). It seems highly likely to me that it is NP-complete (and in fact, I would expect the reduction to be to some sort of cover problem) but a few searches haven't turned up anything particularly useful.
• Is it even in NP? Given a regular expression, how do you check whether a word is in the described language in polynomial time? The standard approach -- transform to NFA, then DFA and check -- takes exponential time in $k$ (?). – Raphael Jan 13 '14 at 9:38
• should be PSPACE-complete; see (Gramlich, Schnitger, Minimizing NFAs and Regular Expressions, 2005) at ggramlich.github.io/Publications/approximationSTACS05Pres.pdf and citeseerx.ist.psu.edu/viewdoc/… (PS: I'm posting this as a comment, because an answer should explain why, but I don't have time to do so at the moment; perhaps someone else can use the reference and explain how it works) Jan 13 '14 at 12:09
• For regular expressions as understood in TCS, the problem is in NP (A certificate of polynomial size and verifiable in polynomial time would be the regular expression itself). It (probably) isn't in NP if we use e.g. PCREs for regular expressions, because even testing membership is NP-hard (perl.plover.com/NPC/NPC-3SAT.html). Jan 13 '14 at 12:20
• @MikeB.: And how exactly do you check in polynomial time? Did you see the comment by @Raphael? Jan 13 '14 at 12:43
• (1) You can run a deterministic algorithm in P to test membership of NFAs (start at start-state, and remember all the states you can be in after consuming a symbol of the word. Reach the end, check if you reached at least one final state.) (2) It depends on the definition of "regular expression" - do we use the one of computer scientists, or the one of programmers? Do we allow only regular languages, or (a subset of) context sensitive languages (hence PCREs)? Jan 13 '14 at 15:48
Assuming the TCS-variant of regex, the problem is indeed NP-complete.
We assume that our regexes contain
• letters from $\Sigma$, matching themselves,
• $+$, denoting union,
• $\cdot$, denoting concatenation,
• $*$, denoting Kleene-Star,
• $\lambda$, matching the empty string
and nothing else. The length of a regex is defined as the number of characters from $\Sigma$. As in the comic strip, we consider a regex to match a word if it matches a substring of the word. (Changing any of these assumptions should only influence the complexity of the construction below, but not the general result.)
That it is in NP is straightforward, as explained in the comments (verify a candidate-RE by translating it into an NFA and running that on all words from $A$ and $B$).
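To make the membership test concrete, here is a small sketch of the standard on-the-fly NFA simulation mentioned in the comments (not code from the answer; the NFA encoding below is an assumption): tracking the set of reachable states takes time polynomial in the sizes of the NFA and the word.

```python
def eps_closure(states, eps):
    """All states reachable from `states` via epsilon transitions."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nfa_accepts(word, start, finals, delta, eps):
    """delta maps (state, symbol) -> states; eps maps state -> states."""
    current = eps_closure({start}, eps)
    for ch in word:
        nxt = set()
        for q in current:
            nxt.update(delta.get((q, ch), ()))
        current = eps_closure(nxt, eps)
    return bool(current & finals)

# Toy NFA for the regex (a+b)*b : accepts words over {a,b} ending in b.
delta = {(0, "a"): {0}, (0, "b"): {0, 1}}
print(nfa_accepts("aab", 0, {1}, delta, {}))  # True
print(nfa_accepts("aba", 0, {1}, delta, {}))  # False
```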
In order to show NP-hardness, we reduce Set cover:
Given a universe $U$ and a collection $C$ of subsets of $U$, is there a set $C' \subseteq C$ of size $\leq k$ so that $\bigcup_{S \in C'} S = U$?
We translate an input for Set cover into one for regex golf as follows:
• $\Sigma$ contains one character for each subset in $C$ and one additional character (denoted $x$ in the following).
• $A$ contains one word for each element $e$ of $U$. The word consists of exactly the characters representing subsets in $C$ that contain $e$ (in arbitrary order).
• $B$ contains the single word $x$.
• $k$ is simply carried over.
This reduction is obviously in P and equivalence is also quite simple to see:
• If $c_1, \ldots, c_k$ is a solution for the set cover instance, the regex $c_1 + \cdots + c_k$ is a solution to regex golf.
• A regex matching the empty subword would match $x$. Thus, any regex solving the golf problem has to contain at least one letter from each of the words in $A$. Hence, if the golf instance is solvable, there is a set of at most $k$ letters from $\Sigma$ so that each word in $A$ is covered by this set of letters. By construction, the corresponding set of subsets from $C$ is a solution to the set cover instance.
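To make the reduction concrete, here is a small hypothetical Python sketch (not part of the original answer) that builds the regex-golf instance $(\Sigma, A, B, k)$ from a set-cover instance and forms the union regex corresponding to a cover.

```python
def setcover_to_regexgolf(universe, subsets, k):
    """subsets: dict mapping a subset name to the set of elements it contains."""
    pool = list("abcdefghijklmnopqrstuvwyz")   # note: 'x' is reserved for B
    letters = dict(zip(subsets, pool))          # one letter per subset
    # A: one word per element, listing the letters of the subsets containing it.
    A = ["".join(letters[name] for name in subsets if e in subsets[name])
         for e in universe]
    B = ["x"]                                   # the single word to reject
    return letters, A, B, k

def cover_to_regex(cover, letters):
    # Union of the letters of the chosen subsets; matches (as a substring)
    # every word in A whose element is covered, and never matches 'x'.
    return "+".join(letters[name] for name in cover)

# Tiny example: U = {1,2,3}, C = {S1:{1,2}, S2:{2,3}, S3:{3}}, k = 2.
letters, A, B, k = setcover_to_regexgolf(
    {1, 2, 3}, {"S1": {1, 2}, "S2": {2, 3}, "S3": {3}}, 2)
print(A, B)                                   # e.g. ['a', 'ab', 'bc'] ['x']
print(cover_to_regex(["S1", "S2"], letters))  # a+b
```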
• Very nice, let me add 2 points, for completeness: (1) As an additional assumption regarding problem specification, $A$ and $B$ must be finite sets (and all elements are enumerated explicitly?) (2) The RE-candidate's size is in $O(n)$, since $a_1+a_2+\cdots$ with $a_i\in A$ is a valid candidate with size in $O(n)$, so for every larger $k$ the answer is trivially true. Jan 22 '14 at 9:50
• @Mike B.: (1): Finiteness of $A$ and $B$ is given in the question. In complexity theory, exhaustive listing is the default way of representing finite sets. (2) is indeed a required argument, if one wants to make the "in NP" part rigorous. Jan 22 '14 at 11:39
|
2021-11-28 09:08:05
|